Importing existing VMFS datastores

So, one of my VMware homelab servers has decided that its built-in storage ports are the work of the devil and it will no longer behave with the disks attached to them.

Handily enough, I had a spare IBM M1015 HBA lying around and was able to stuff that in and reconnect the storage to it.

So how to add the datastores back into VMware?

Thankfully that’s nice and easy.

SSH onto the VMware server and get it to list mountable volumes that are not currently mounted:

>esxcfg-volume -l

Scanning for VMFS-3/VMFS-5 host activity (512 bytes/HB, 2048 HBs).
VMFS UUID/label: 555ccf8d-1e83b310-841f-d050996fdfcc/MainStore
Can mount: Yes
Can resignature: Yes
Extent name: naa.50014ee2af9fd52b:1 range: 0 - 610303 (MB)

Scanning for VMFS-6 host activity (4096 bytes/HB, 1024 HBs).
VMFS UUID/label: 58fa7c2e-4e0fb8d8-ceaf-d050996fdfcc/Datastore1
Can mount: Yes
Can resignature: Yes
Extent name: naa.5000cca249d38d3d:1 range: 0 - 3815423 (MB)

So, we can see the two lab datastores on the two disks.
Let's get those mounted and then we can get this host back into action.

> esxcfg-volume -M 555ccf8d-1e83b310-841f-d050996fdfcc
Mounting volume 555ccf8d-1e83b310-841f-d050996fdfcc
> esxcfg-volume -M 58fa7c2e-4e0fb8d8-ceaf-d050996fdfcc
Mounting volume 58fa7c2e-4e0fb8d8-ceaf-d050996fdfcc
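If a host comes back with several unresolved volumes, the listing and the mounting can be glued together. This is just a sketch (the `parse_uuids` helper is my own, and the sed pattern assumes the exact output format shown above); `-M` mounts persistently across reboots, while `-m` would mount only until the next reboot:

```shell
# Sketch: persistently mount every unresolved VMFS volume that
# `esxcfg-volume -l` reports. Run on the ESXi host itself.

parse_uuids() {
  # Pull the bare UUID out of lines like:
  #   VMFS UUID/label: 555ccf8d-1e83b310-841f-d050996fdfcc/MainStore
  sed -n 's#^VMFS UUID/label: \([^/]*\)/.*#\1#p'
}

# Guarded so the sketch is a no-op off the ESXi host.
if command -v esxcfg-volume >/dev/null 2>&1; then
  esxcfg-volume -l | parse_uuids | while read -r uuid; do
    echo "Persistently mounting $uuid"
    esxcfg-volume -M "$uuid"   # -M = persistent, -m = until next reboot
  done
fi
```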

That should be them back in action. Let's check.

>esxcli storage filesystem list
Mount Point                                        Volume Name  UUID                                 Mounted  Type    Size           Free
-------------------------------------------------  -----------  -----------------------------------  -------  ------  -------------  -------------
/vmfs/volumes/f16a5c2f-024e8e4b NFS f16a5c2f-024e8e4b true NFS 15857184727040 6747773542400
/vmfs/volumes/555ccf8d-1e83b310-841f-d050996fdfcc MainStore 555ccf8d-1e83b310-841f-d050996fdfcc true VMFS-5 639950127104 423815544832
/vmfs/volumes/58fa7c2e-4e0fb8d8-ceaf-d050996fdfcc Datastore1 58fa7c2e-4e0fb8d8-ceaf-d050996fdfcc true VMFS-6 4000762036224 3874391851008
/vmfs/volumes/9d5d25e5-72a247b9-fc58-db0a033ab4bf 9d5d25e5-72a247b9-fc58-db0a033ab4bf true vfat 261853184 107995136
/vmfs/volumes/4ea4f4e0-ce3c1495-0118-faa5b7f0c306 4ea4f4e0-ce3c1495-0118-faa5b7f0c306 true vfat 261853184 110907392
/vmfs/volumes/555b82b5-b0de9b80-885e-f46d043c4066 555b82b5-b0de9b80-885e-f46d043c4066 true vfat 299712512 83746816

As we can see above, the datastores MainStore and Datastore1 are back in action and we should be able to use the VMs on them.

My VMs were in perfect condition, and I was able to get my homelab Cloudera Hadoop cluster back up and running, then upgrade it to CDH 5.12.0 to play with.

The VMware 5.5 u3 update waltz in esxcli minor

So update 3 for 5.5 is out and you want to deploy it with esxcli. Follow these steps and you will be updated in no time.

0) esxcli vm process list, to make sure the host is empty of running VMs.

1) vim-cmd /hostsvc/maintenance_mode_enter

2) esxcli network firewall ruleset set -e true -r httpClient

3) ~ # esxcli software sources profile list -d | grep "ESXi-5.5.0-201509"
ESXi-5.5.0-20150902001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150901001s-standard  VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150901001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150902001-no-tools   VMware, Inc.  PartnerSupported

If this shows nothing or errors out, then the chances are that you are behind a proxy for HTTP traffic.
So use the proxy option to pass that to the esxcli command.
esxcli software sources profile list -d --proxy <proxy URL> | grep "ESXi-5.5.0-201509"

4) ~ # esxcli software profile update -d -p ESXi-5.5.0-20150902001-standard

Like step 3, if this fails, then do the proxy shuffle and you should be able to pull down the patch and update.

5) reboot.

6) vim-cmd /hostsvc/maintenance_mode_exit
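Strung together, the whole waltz looks roughly like this sketch. The depot URL is an assumption on my part (the standard VMware online depot); point it at your own depot if you mirror patches locally, and the PROXY value shown in the comment is a made-up example:

```shell
# Sketch of steps 0-5 in one go, run on the ESXi host itself.
# DEPOT is assumed to be the standard VMware online depot URL.
DEPOT="https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml"
PROFILE="ESXi-5.5.0-20150902001-standard"
PROXY=""   # e.g. "--proxy http://proxy.example.com:3128" (hypothetical proxy)

# Guarded so the sketch is a no-op off the ESXi host.
if command -v esxcli >/dev/null 2>&1; then
  esxcli vm process list                                    # 0) host should be empty
  vim-cmd /hostsvc/maintenance_mode_enter                   # 1) enter maintenance mode
  esxcli network firewall ruleset set -e true -r httpClient # 2) allow outbound HTTP
  esxcli software sources profile list -d "$DEPOT" $PROXY \
    | grep "ESXi-5.5.0-201509"                              # 3) find the profile
  esxcli software profile update -d "$DEPOT" -p "$PROFILE" $PROXY  # 4) update
  reboot                                                    # 5) step 6 once it's back
fi
```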

And you're done; you should see this when you query the version.

~ # esxcli system version get
Product: VMware ESXi
Version: 5.5.0
Build: Releasebuild-3029944
Update: 3

Thank the makefile for small mercies

RHEL/CentOS 6.7 is out, and at last the issues with Anaconda crashing, due to crufty Intel gfx drivers in PXE boot, are resolved.

I'm still not taking xdriver=vesa rdblacklist=nouveau out of my kickstart after all the crap I've been through with that driver.

Little hint for the PXE boot kernel, Red Hat: it should only support basic VESA for xdriver. We don't need Anaconda to look pretty and hi-res; it just needs to be functional, and when you stuff the kernel with crufty drivers, it is most certainly not.
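For anyone who wants the same belt and braces, the relevant pxelinux.cfg entry looks roughly like this (a sketch: the label, file names, and kickstart URL are made up; the xdriver and rdblacklist arguments are the ones above):

```
default centos67
label centos67
  kernel vmlinuz
  append initrd=initrd.img ks=http://ks.example/ks.cfg xdriver=vesa rdblacklist=nouveau
```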

Role models are important

So you have a quad core Mac mini with 16GB of RAM and you think to yourself “that would make a nice little VMware box, but I want to keep the OSX install on it and keep using it.”

Well, that's not a problem at all. Just bung VMware 5.5u2 on a USB stick and use the old Option/Alt key at power-on to access the EFI boot menu, then select the USB stick to boot from it (optionally use the Ctrl key to set that as the default boot option). Then boot VMware and set it up to your tastes.

Once that's done and you have added a datastore (NFS ones are not supported for this), you are ready to make the pRDM (physical Raw Device Mapping) file, so you can add the local disk to the Apple OSX VM we will create later.

Follow the steps at

For me the process looked like :-

Log in to the Mac mini ESX server with SSH (you did enable SSH in the troubleshooting menu, right?)

(FYI, the volume "Lunny" that is referenced below is an iSCSI LUN attached from my NAS. This is also key to getting the following part to work, as it adds a form of persistent storage that VMware will work with.)

cd /dev/disks
/dev/disks # ls -l | grep APPLE
-rw——-    1 root     root     1000204886016 May 22 11:39 t10.ATA_____APPLE_HDD_HTS461001A9A662_____________________J80808AAAAAAAB

/dev/disks # vmkfstools -z /vmfs/devices/disks/t10.ATA_____APPLE_HDD_HTS461001A9A662_____________________J80808AAAAAAAB /vmfs/volumes/Lunny/MAC1TBRDS.vmdk
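The disk hunt and the mapping can be rolled into one sketch. The `find_apple_disk` helper is my own, the awk parsing assumes `ls -l` output like the listing above, and the `vmkfstools -q` at the end just queries the mapping file back as a sanity check:

```shell
# Sketch: locate the local Apple disk and create a physical RDM
# mapping file for it on the "Lunny" datastore. Run on the ESXi host.

find_apple_disk() {
  awk '/APPLE/ {print $NF}'   # last field of ls -l is the device name
}

# Guarded so the sketch is a no-op off the ESXi host.
if command -v vmkfstools >/dev/null 2>&1; then
  disk=$(ls -l /dev/disks | find_apple_disk)
  vmkfstools -z "/vmfs/devices/disks/$disk" /vmfs/volumes/Lunny/MAC1TBRDS.vmdk
  vmkfstools -q /vmfs/volumes/Lunny/MAC1TBRDS.vmdk   # sanity-check the mapping
fi
```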

Now this mapping file is ready to use, so let's create our VM. Log in to VMware and create a new custom VM: set the guest OS type to Other and the version to 10.7, set the CPU to 2 cores, set memory to 4GB, and once you're at the select-a-disk section, use the "use an existing virtual disk" option so you can select that pRDM vmdk file.

Finish up the VM creation, review your settings, then finalise it. Now you have an OSX VM that will boot from your local disk and load your existing OSX install from that.

Congratulations, you have re-purposed that Mac mini and it now has a new role as a VMware server.

Mine is running my OSX VM and also my vCenter VM. Adding a Thunderbolt Ethernet port and trunking that together with the built-in port makes the Mac mini even more potent in the home lab.

The Devil is in the details

In case any of you were wondering, my lab setup is this :-

Asus RT-N66U router running Merlin's firmware.

Cisco SG300-28. This is the main switch for the lab, as its IOS-lite feature set and low cost make it compelling. It being an L3 managed switch is damn useful if you want to implement VLANs and trunk network ports to servers and storage.

Late 2012 quad core Mac mini (with Thunderbolt 1GbE dongle), upgraded to 16GB of RAM and running VMware 5.5u2. This boots VMware from USB and then gets its datastores over the network.

Whitebox home-built server powered by an i7-3930K 6 core CPU and 64GB of RAM. This is the beast box, as it's water-cooled and overclocked. Any VMs that need serious amounts of per-core speed get moved to it.

Whitebox home-built eight core Avoton server with 32GB of RAM (ASRock C2750D4I motherboard). This boots VMware from USB and then gets its datastores over the network. This server board can go to 64GB of RAM, but the 16GB sticks are a little pricey. What I love about this server is that it's utterly quiet and so can be left on 24/7 without annoying the family 😉

Synology 1515+ NAS. This provides the core storage for the VMware lab (and a lot of the VMs that run in it make use of various NFS exports from it). The VMs are contained on iSCSI and NFS (with the Synology plugin for VAAI support) datastores.

What I love about this NAS is that it's quiet, expandable, upgradeable, easy to use and yet powerful under the hood. Best of all, it now supports Docker, and aside from having to be damn cautious when running 3rd party Docker images due to Docker's *maturing* security, I can run a lot of handy build software in lightweight Docker containers on the NAS. Like SVN, Bugzilla, Jenkins, GitLab, etc.

There’s no MUG in VMUG Advantage

The VMUG Advantage sub from VMware is awesome. For only $200 you get access to loads of VMware's products to play with and learn from.

At the moment the sub gets you access to :-

VMware vCenter Server™ 5 Standalone for vSphere 5
VMware vSphere®  with Operations Management™ Enterprise Plus
VMware vCloud Suite® Standard
VMware vRealize Operations Insight™
VMware vRealize Operations™ 6 Enterprise
VMware vRealize Log Insight™
VMware Horizon™ Advanced Edition
VMware vRealize™ Operations for Horizon®
VMware Virtual SAN™

And they add new and updated programs to it from time to time, so check out for more details.

I recently subbed to it so I could get my updated home lab set up with VMware 5.5u2, keep my skills fresh, and continue learning how to get the most out of their software.

My lab now consists of :-

A quad core Mac mini (with 16GB of RAM and two network ports, one built-in and one Thunderbolt)
An eight core Avoton server (with 32GB of RAM; ASRock C2750D4I motherboard)
And an Intel Core i7-3930K 6 core powered whitebox server with 64GB of RAM.

Storage is provided by a Synology NAS and a FreeBSD ZFS box; they provide iSCSI and NFS datastores for my VMware lab.

CRIU and Docker (and probably lxc as well)

I’m looking forward to grabbing CRIU and playing with it, as I’m looking to implement a solution to allow live migration of containers in our Linux environment.

Now as you know, Docker and lxc are already awesome, but those together with live migration is some weapons grade awesomesauce right there.
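The basic cycle I'm expecting to build on looks roughly like this (a sketch only: CRIU needs root and a capable kernel, and TARGET_PID is a placeholder for the process you want to freeze):

```shell
# Sketch of a CRIU checkpoint/restore cycle; live migration is
# essentially dump here, copy the image directory, restore there.
# TARGET_PID is hypothetical and must be set by you.
if [ -n "${TARGET_PID:-}" ] && command -v criu >/dev/null 2>&1; then
  mkdir -p /tmp/ckpt
  criu dump -t "$TARGET_PID" -D /tmp/ckpt --shell-job    # freeze and dump state
  # ...copy /tmp/ckpt to the destination host here...
  criu restore -D /tmp/ckpt --shell-job                  # resume from the images
fi
```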

I’ll be updating the site with a how-to once I have a solution in place.