I was tasked with migrating VMs from Hyper-V 2012 R2 to VMware's ESXi 5.5 without the luxury of an additional box to install ESXi on and simply migrate them over with VMware Converter. So this was an offline tear-down\migration of the original Hyper-V server.
I read a few posts about going from vhdx format to vmdk for offline VMs. Seemed easy enough. Use the Hyper-V server's PowerShell to go from vhdx to vhd, like this:
Convert-VHD -Path "path to your vhdx" -DestinationPath "path to save your converted vhd"
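For example, with a hypothetical VM whose disk lives under D:\Hyper-V (your paths and file names will obviously differ):
Convert-VHD -Path "D:\Hyper-V\WebSrv01.vhdx" -DestinationPath "D:\Export\WebSrv01.vhd"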
Then I used StarWind's V2V Converter (which you can download for free) to take the VHD to VMDK:
http://www.starwindsoftware.com/converter
So I have the VMs all set for ESXi, shuffled off and converted. I attach the disks to the newly created machines and get the dreaded:
Failed to start the virtual machine.
Module DevicePowerOn power on failed.
Unable to create virtual SCSI device for scsi0:0, '/vmfs/volumes/50f8922d-eb60e350-2100-6c626d42c9ce/SSD08004.VMAD01.LOCAL__C_Drive-s001.vmdk'
Failed to open disk scsi0:0: Unsupported or invalid disk type 7. Ensure that the disk has been imported
So what to do now? What I should have done to begin with: download and use the newest version of vCenter Converter Standalone! Just converting to VMDK with StarWind is not good enough. There are format differences between the hosted products (Workstation, Player) and Infrastructure products like ESX and ESXi, and the disks must be converted properly.
With my VMs installed on the ESXi server but still refusing to start, I used Converter to go from
VMware Infrastructure ------> VMware Player 6.0 (use the "Not pre-allocated" option to keep your disks thin-provisioned if you want)
and then back.....
VMware Workstation or other VMware virtual machine (vmx file) ------> ESXi host (same thing: choose "thin provisioned disk" in the destination options if you want)
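As an aside, if you're comfortable enabling SSH on the ESXi host, vmkfstools can clone a hosted-format vmdk into a proper VMFS disk in one step. This is just a sketch with hypothetical datastore paths (the Converter round trip above is the route I actually took):
vmkfstools -i /vmfs/volumes/datastore1/MyVM/MyVM.vmdk /vmfs/volumes/datastore1/MyVM/MyVM-esx.vmdk -d thin
Then point the VM at the newly cloned disk.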
Also, with Server 2012\Win 8 and above, make sure to boot from EFI and not BIOS... and I'm back in business!
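If you'd rather verify that at the config-file level, the firmware type is a single line in the VM's .vmx (BIOS is what you get when the line is absent). Shown here as a sketch:
firmware = "efi"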
Another snafu that happens every time, especially with Linux-based VMs: the virtual NIC hardware the OS sees changes, since the NIC's MAC address obviously changes when the VM is re-imported\moved over to a new system like this. A lot of dependent services (Asterisk, etc.) refer to the specific NIC name in their configs... and will break if it changes. So your OS may have been using eth0, and now that you've moved the VM, that NIC is apparently gone and eth1 is active... OR no NIC at all is active when you issue ifconfig at the shell.
You may need to assign an IP to your box to get connectivity. Here are the pertinent Linux commands (there's a short example sequence after the list):
ifconfig -a                                        # view all interfaces
ifconfig eth1 up                                   # enable an interface
ifconfig eth1 192.168.0.xx netmask 255.255.255.0   # set a static IP
sudo dhclient -v                                   # run the DHCP client verbosely (shows lease info)
dhclient -v -r                                     # release any leased addresses from the interfaces
dhclient eth1                                      # enable DHCP on an interface
route add default gw 192.168.x.x eth1              # add a default gateway
route -v                                           # show active routes
yum install epel-release                           # install the Extra Packages for Enterprise Linux repo
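Putting those together, a typical recovery on the new eth1 looks something like this (the addresses are placeholders for whatever your network uses):
ifconfig eth1 up
ifconfig eth1 192.168.0.50 netmask 255.255.255.0
route add default gw 192.168.0.1 eth1
ifconfig eth1                                      # confirm the address took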
And the file that maps the MAC address to the NIC name is found here (at least on CentOS):
/etc/udev/rules.d/70-persistent-net.rules
You'll need to take note of your current active NIC's MAC address and change the name on that entry to match the previously named NIC that worked before (eth0 in this example). Use the dreaded vi editor from the shell, or WinSCP in... Webmin... whatever you prefer. See the entry example below. There will most likely be two or more entries: one for the old NIC and one for the new active one. You can also safely delete the old MAC's entry, since it's a "tombstoned" device.
# PCI device 0x8086:0x100f (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:34:0f:38", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
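After editing the rule, double-check which MAC belongs to the live NIC so you keep the right entry, then reboot so udev re-reads the rules and the rename sticks. Something like this, assuming a CentOS 6-era box:
ip link show                                       # note the MAC of the live NIC
reboot                                             # simplest way to make sure the rename takes effect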
So that's it... hopefully this post saves somebody a little aggravation and time!