
Most recent forum messages

older | 1 | .... | 5409 | 5410 | (Page 5411) | 5412 | 5413 | 5414 | newer


    Hi folks,


    I am using vSphere 5.5 and SRM 5.8 Array based replication.

    I understand that I should be able to just add a virtual hard disk to one of my SRM-protected VMs, and it should be picked up by array-based replication, etc.


    Does anyone know whether there would be a problem doing this while an SRM recovery plan that includes this VM is active?

    Should I clean up the test first?






    Great article. It would be nice to understand why it happened to Alessandro and not to me; it could be that I was starting from 6.0 and he from 5.5. Anyway, the cleanup steps indicated resolve it... all the better.






    Have a look in the /var/log/vmkernel.log of when the hosts dropped from vCenter.

    Are you seeing a ton of Reservation Conflicts?

    # grep -i conflict /var/log/vmkernel.log

    (or if it happened a while ago and logs have rolled over):

    # zcat /var/run/log/vmkernel.* | grep -i conflict


    Basically, when storage is pulled in a non-graceful manner it can knock out hostd on the hosts, and thus they disconnect from vCenter.

    You can manually reset the LUN reservation from any of the hosts that normally have access to it using the steps outlined here:

    (you only have to do it from one host but might have to do it for multiple LUNs which can be identified from the vmkernel logging or dmesg)
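As a rough sketch of that LUN identification step, assuming you have saved the relevant logging to a file (the log line and naa identifier below are invented for illustration, not taken from a real host):

```shell
# Invented vmkernel.log excerpt; the naa identifier is made up for the example.
cat > vmkernel-sample.log <<'EOF'
2017-03-27T12:00:01.000Z cpu4:33290)ScsiDeviceIO: SCSI reservation conflict on device naa.600508b1001c6e7b
EOF

# List the unique device identifiers involved in reservation conflicts
grep -io 'naa\.[0-9a-f]*' vmkernel-sample.log | sort -u
```

Each identifier this prints is a LUN you may need to reset the reservation on.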




    -o- If you found this comment useful or an answer, please select 'Answer' and/or click the 'Helpful' button; please ask follow-up questions if you have any -o-


    If both datastores are shared with the host, you can do a hot Storage vMotion; otherwise, shut down the VM and migrate it to datastore 5.


    I've had a little time to troubleshoot this further.


    After researching "[VpxdDatastore::UrlToDSPath] Received a non-url" I ran across this kb article.

    Deploying multiple virtual machines in VMware vCenter Server 5.x and 6.0.x from the same template fails with the error:


    While this is not an exact match for my issue, it does reference the same error message. I looked at my templates and they all have -ctk.vmdk files because they were created from virtual machines with snapshots. While this makes sense from the perspective that the templates are essentially clones of the existing VMs, it doesn't make sense for a template to have a -ctk file, since templates are supposed to be static copies.

    So maybe the presence of the -ctk file is causing this exception... easy enough to test. I created a new virtual machine (no OS loaded), confirmed that no -ctk.vmdk file existed for it, and cloned it to a template. I then confirmed that the new template didn't have a -ctk.vmdk file, updated my one-liner from the original post to reference the new template, and it failed again. Digging into the vpxd.log file, I found the same exception message "[VpxdDatastore::UrlToDSPath] Received a non-url".

    So I still don't have an answer, but I thought it was worth mentioning to prevent duplicated effort. Thanks as always for your time and effort.



    New-VM -Name TestTemplate -VMHost (Get-VMHost -Datastore esmc4_fucs1_f1c1_incoming_01 | select -First 1) -Datastore "esmc4_fucs1_f1c1_incoming_01" -Template (Get-Template "NewTemplate")



    New-VM : 3/27/2017 2:48:10 PM    New-VM        The operation for the entity "NewTemplate" failed with the following message: "Could not complete network copy for file /vmfs/volumes/03195b0e-e6997962/NewTemplate/NewTemplate.vmdk"   

    At line:1 char:1
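For anyone retracing this, the exception is easy to count and locate in a saved copy of vpxd.log; the log line below is reconstructed from the error above purely for illustration:

```shell
# Reconstructed vpxd.log line (illustrative only, not a real log capture)
cat > vpxd-sample.log <<'EOF'
2017-03-27T14:48:10.123Z info vpxd[02928] [VpxdDatastore::UrlToDSPath] Received a non-url: /vmfs/volumes/03195b0e-e6997962/NewTemplate/NewTemplate.vmdk
EOF

# Count and locate the exception in the log
grep -c 'Received a non-url' vpxd-sample.log
grep -n 'UrlToDSPath' vpxd-sample.log
```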

  • 03/27/17--12:15: Re: How to find original CPU
  • Do any of the VMs still have a vmware*.log file with a time stamp prior to the CPU replacement in their datastore folder?




    These are .jpg files, not vmware.log files,

    but you can try this KB:




    EVC mode mismatch causes virtual machine migration issues (2014835) | VMware KB


    Not sure what happened with this recent release, but NAT (vmnet8) is not functioning the way it did in prior versions.


    Win 7 Pro 64-bit SP1

    VMware Player 12.5.2

    UBUNTU 14.04 VM

    Network set to NAT

    Norton 360


    Telnet and ping work to the IP address assigned by vmnet8.


    After updating to VMware Player 12.5.4 (nothing else changed)

    ping and telnet no longer work to the assigned IP address


    A Wireshark capture indicates that this IP address is unknown.


    Bridged mode still causes a blue screen crash when attempting to reboot a VM (this is a known issue with other versions of Player on Windows 7).


    Turned off Norton to ensure there wasn't a new block put in place by the Security rules ... no change


    Created a backup image of my current HD, then reloaded a backup HD image containing 12.5.2 and compared all the settings for:

    vmnet8 ... appear to be the same

    vmnet1 ... appear to be the same

    vmnetcfg.exe ... appear to be the same

    player vm network settings ... appear to be the same


    restored my current hd image

    Found an original download of 12.5.2 ... uninstalled 12.5.4 and installed 12.5.2 ... ping and telnet work again


    So is vmnet8 broken in 12.5.4 ... or ... is there some new setting we need to be aware of ... or ... something I may have missed?


    Kind of makes one want to turn off updates when it takes 2 days to restore functionality


    Hello jbax


    If they are in reality still thick-provisioned (proportionalCapacity = 100), they will not benefit.

    I have seen this occur even when it says that the Default vSAN Storage Policy has been applied (and even more often when MARVIN policies are involved).


    To start:

    Identifying and fixing Thick Objects:


    All commands run via RVC at the cluster level:

    List disks (so that you can focus on disks with a high % of RESERVED space, which indicates Objects that are indeed thick; copy + paste this info for later):

    #vsan.disks_stats .


    Check policy applied to all VMs:

    #spbm.check_compliance ./resourcePool/vms/*

    (Any Objects that say 'Unknown' here I would immediately be suspicious of. You can look at their disks via Edit Settings on the VM in the Web Client: click the drop-down for the disk in question and it will show you what Storage Policy it is using. If it says 'Datastore Default', this is NOT the same as the vSAN Default Storage Policy, and I bet it will be thick.)


    Display disk layout for all objects:

    #vsan.vm_object_info ./resourcePool/vms/*

    (Copy-paste the output into Notepad and look for any Objects with "proportionalCapacity = 100", or save it to a .txt file and use grep on a host.)


    If you have a ton of Objects/disks that have "proportionalCapacity = 100", create a new Storage Policy identical to the policy they are using (except with a new name) and apply it to the objects. Then go back to step one to see if the 'Reserved' space on the disks has decreased; if it has, we are on the right track.
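As a concrete sketch of that grep step, assuming you have saved the vsan.vm_object_info output to a .txt file (the fragment below is mocked up, not real RVC output):

```shell
# Mocked-up fragment standing in for saved `vsan.vm_object_info` output
cat > objects.txt <<'EOF'
VM vm1:
  Disk backing: [vsanDatastore] abc/vm1.vmdk
    proportionalCapacity = 100
VM vm2:
  Disk backing: [vsanDatastore] def/vm2.vmdk
    proportionalCapacity = 0
EOF

# Show each thick object together with the disk line just above it
grep -B1 'proportionalCapacity = 100' objects.txt
```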








    We have a vSAN cluster with 5 hosts and FTT initially set to 2, and stripe of 1.


    I have temporarily set FTT to 1 using the command below, which VMware support told me to use, because I wasn't able to put the host with the failed drive into maintenance mode:

    vsan.cluster_set_default_policy . '(("hostFailuresToTolerate" i1) ("forceProvisioning" i1))'


    Well, even at FTT=1 it is not working; I'm still getting the same error:

    Failed to enter maintenance mode in the current VSAN data migration mode due to insufficient nodes or disks in the cluster...


    I have a few questions here.


    1- I restarted the host and looked in the LSI MegaRAID controller, and the disk is not present there,

        so I want to replace it.

        I need to decommission the failed disk from the disk group, right?


    2- Following VMware's steps, which are to delete the disk from the group, I get the following message when I try to delete it:

        Action not available when Deduplication and compression is enabled on cluster.


    3- So what are the correct steps to replace a failed drive on a 5-host cluster that was initially set to FTT=2?




    I just upgraded to PowerCLI 6.5 and this script cannot find "VMware.VimAutomation.Core",

    which generates the error: "Cannot load the VMware Module. Is the PowerCLI installed?"


    PowerCLI is installed and I am able to run it from the command line. I installed it from the VMware site.


    Any Ideas?


    Since the new Kali 2 (64-bit) replaced GCC 5.4 with GCC 6, the new VMware Workstation 12.5.4 is not starting, raising the error "A compatible version of gcc was not found." Kali has GCC 6.0. I tried all the previous patches on the VM and none of them solve the problem. Downgrading GCC is not possible, as it breaks the core system.

    The log shows:


    : No usable gcc found. Boo!



    If you know the fix, please share.


    I tried many solutions that I found relating to the previous release, but none of them worked. I spent a week searching/trying/failing before posting here.


    The following fix didn't work: VMware Workstation 12.5.2 build-4638234 does not compile on Fedora 25 4.9.3-200.fc25.x86_64 #1 SMP Fri Jan 13 01:01:13 UTC 2017


    The previous recommendation was a cache of at least 10% of the workload size. Recently the recommendation has shifted more toward the workload IO mix (100% WR vs. 70/30 RD/WR).


    See this VMW blog for more info  Designing vSAN Disk groups - All Flash Cache Ratio Update - Virtual Blocks


    Just to make sure we're dealing with a PowerCLI issue, and not a more general vSphere issue, can you create a VM from that template via the Web Client?


    Hello Jonathan,


    Is the disk gone from the disk-group?

    Via Web Client: Cluster > Configure (assuming 6.5; it is 'Manage' in 6.0) > vSAN > Disk Management

    Via SSH to host that the Disk-group is mounted on:

    #esxcli vsan storage list
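If you would rather check from saved output than eyeball it, something like this works; the excerpt below is invented, and real `esxcli vsan storage list` output has many more fields per device:

```shell
# Invented excerpt standing in for saved `esxcli vsan storage list` output
cat > vsan-storage.txt <<'EOF'
naa.5000c5008f0a0001
   Is SSD: false
   In CMMDS: true
naa.5000c5008f0a0002
   Is SSD: false
   In CMMDS: false
EOF

# A device still listed by the host but no longer in CMMDS is the likely failed disk
grep -B2 'In CMMDS: false' vsan-storage.txt | head -n1
```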


    If it is gone, and rebooting the host doesn't cause ANY VMs to become inaccessible or go offline, then powering the host down and replacing the disk should be okay, but I can't be sure of how it will rebuild (see next point).


    Otherwise it looks like you will have to evacuate the entire disk group that this failed disk is located in to do this more cleanly, since Deduplication and Compression works on a per-disk-group basis:


    How to Manage Disks in a Cluster with Deduplication and Compression

    ■ You cannot remove a single disk from a disk group. You must remove the entire disk group to make modifications.


    Any chance you could PM me the SR number? (I work there.)






    With 6.5R1 it became a lot easier; you can try





    We are using ESXi 6.0, and when we run the VM it does not connect to the physical CD drive of the host.


    We have tried every option, and tested the CD in a desktop to see if it would boot, but it still will not connect.


    We found some docs that pointed us to a summary tab or hardware panel with a button for the CD drive, but we are unable to find a panel with those names or a CD drive button anywhere.


    Any help is appreciated.




    *Supporting documentation for 6.0, to show it is the same as in later versions


    Adding or Removing Disks when Deduplication and Compression Is Enabled

    ■ Deduplication and compression is implemented at a disk group level. You cannot remove a capacity disk from the cluster with enabled deduplication and compression. You must remove the entire disk group.


    Then I would suggest you open an SR for this issue; it does indeed look like a bug to me.
