Review of ESXi 5 for virtualization

In my discussion of my home-based server, I mentioned that setting up an ESXi 5.0 server was one potential solution for my requirements.

I first heard about VMware back in 1998 or 1999 when I read a story about them in the Wall Street Journal. I sensed immediately that they were onto something big. It turns out the company has executed well on bringing virtualization into the mainstream.

I've been an on-and-off user of VMware products for the past 10 years or so, ranging from VMware Player to ESXi, so I was moderately familiar with the VMware user experience.

Installation

Unlike Windows' Hyper-V and Linux's KVM, which more or less run as services within a general-purpose server, ESXi really "takes over" your server. ESXi is a bare-metal hypervisor with a very small footprint, and I was able to write the ESXi .iso onto a bootable USB stick using UNetbootin.

As with Windows Server 2008 R2, licensing was a bit of a hassle. My understanding is that ESXi 5.0 is free (as in beer) for servers with up to 8 GB of RAM. I had a license key that I could have installed, but I couldn't figure out how to apply it. The software did give me 60 days of evaluation time, though, which was sufficient.

To actually use the ESXi server, you'll need to install the vSphere client on a separate client machine and connect to the ESXi server. This went without incident.
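Incidentally, the vSphere client is just a front end to the ESXi management API, and that API can also be scripted. If I had wanted to poke at the server programmatically, a minimal connection sketch using the pyVmomi Python bindings would look something like this (the hostname and credentials below are placeholders, not my actual setup):

# Minimal sketch: connect to an ESXi host through the vSphere API with pyVmomi.
# The hostname, user, and password are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # ESXi ships with a self-signed certificate
si = SmartConnect(host="esxi-microserver.local", user="root",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    print("Connected to:", content.about.fullName)
finally:
    Disconnect(si)

The later snippets in this post build on the "si" connection object from this one.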

ESXi Storage

My HP MicroServer currently has three small (250 GB) drives. Two of the drives were holdovers from Windows client machines, and I was wiping the primary one (which came with the MicroServer) and installing the OSes onto it.

So I was a little surprised to be presented with a single datastore that appeared to be a JBOD (Just a Bunch of Disks -- RAID 0?) spanning my three disks.

For my immediate purposes, this didn't make me too happy. To keep my testing consistent, I wanted to create VMs the same way I had created them under Windows Server 2008 R2. In other words, I wanted to choose exactly where each virtual disk was placed.

I suppose I could have manually reset the storage ESXi was using, but I also wanted to see what the "default" behavior was, so I took what VMware gave me.

Since I used to work for a premier storage company where VMware environments were one of our biggest applications, I can appreciate that ESXi has very sophisticated and capable storage options. Still, for my small endeavor, the default storage setup left me a little unsatisfied.
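For what it's worth, the way ESXi lays a datastore across physical disks is visible through the same API the vSphere client uses. Here's a rough pyVmomi sketch that lists each datastore and the disk extents behind it, reusing the connection from the earlier snippet (the names and sizes it prints are whatever your host reports):

# Sketch: list each datastore and the physical disk extents backing it.
# "si" is the ServiceInstance from the connection sketch above.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    print("%s: %.0f GB capacity, %.0f GB free"
          % (s.name, s.capacity / 2**30, s.freeSpace / 2**30))
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        for extent in ds.info.vmfs.extent:
            print("  backed by disk:", extent.diskName)

Running something like this would at least have confirmed which of the three physical drives had been pulled into that one datastore.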

VM creation

VM creation was pretty easy -- once I resolved an issue with booting installation .iso files. You create guest VMs through the vSphere client, and you use the client to point the VM at a bootable .iso. However, I had a "chicken-and-egg" problem with mounting the .iso: you can't mount the .iso until the VM has started, but if you start the VM without the .iso, you get a "No bootable media found" error.

The trick is to not mount the .iso, start the VM, and hit ESC (I believe) during the brief window while the VM is bootstrapping and you can indicate how to boot. Then, with booting suspended, you can add the .iso and boot from the "CD". See here for another explanation of this:

http://www.petri.co.il/forums/showthread.php?t=26501
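Another way around this, at least in principle, would be to upload the .iso to the datastore first and attach it to the VM's virtual CD-ROM before the first power-on. I didn't go down that road, but a rough pyVmomi sketch would look something like the following (the VM name and ISO path are made up for illustration):

# Untested sketch: point an existing VM's CD-ROM at an ISO already uploaded to
# a datastore and mark it "connect at power on", so the first boot finds media.
# "si" is the ServiceInstance from the earlier connection sketch; the VM name
# and ISO path below are illustrative.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "ubuntu-test")

# Find the VM's existing virtual CD-ROM device and repoint its backing.
cdrom = next(d for d in vm.config.hardware.device
             if isinstance(d, vim.vm.device.VirtualCdrom))
cdrom.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(
    fileName="[datastore1] iso/ubuntu-11.10-server-amd64.iso")
cdrom.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
    startConnected=True, allowGuestControl=True, connected=False)

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=cdrom)
vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))

With the CD set to connect at power-on, the freshly created VM should pick up the installer on its first boot, no ESC timing required.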

Cloning

I didn't see an especially easy or obvious way to clone VMs. VirtualBox and KVM (via virt-manager), by contrast, have "Clone..." menu items. So with ESXi, to clone an image, I exported it as an OVF and then re-imported it with "Deploy OVF Template".

Unfortunately, this meant a lot of network traffic, at least in my setup. I didn't have much of an "addressable datastore" on the ESXi server (large disks, a SAN, etc.), so I wound up exporting the OVF to my client machine and then re-importing it from the client back to the MicroServer. Each direction took about 20 minutes on my slow network. Again, I think ESXi can be very capable when working with sophisticated datastores, but it isn't really suited to my simple environment.
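In hindsight, one server-side alternative might have been to copy the VM's virtual disk directly on the host through the API's virtual disk manager and then build a new VM around the copy, so the data never leaves the MicroServer. I didn't try it, but the core call would look roughly like this (the datastore paths are made up):

# Untested sketch: copy a virtual disk entirely on the ESXi host, avoiding the
# round trip through the client machine. "si" is the connection from earlier;
# the paths are illustrative, and a new VM would still need to be registered
# around the copied disk afterwards.
content = si.RetrieveContent()
content.virtualDiskManager.CopyVirtualDisk_Task(
    sourceName="[datastore1] ubuntu-test/ubuntu-test.vmdk",
    destName="[datastore1] ubuntu-clone/ubuntu-clone.vmdk",
    force=False)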

vSphere client

Practically, to manage an ESXi server (at least in my small setup), you need to use the vSphere client. The client is quite sophisticated and offers an array of ways to manage your server and the VMs it houses.

However, I found the client put quite a drag on my client computer: Windows Task Manager reported the vSphere client consistently using about 75 percent CPU. During the testing, I kept consoles open on up to two VMs, and all that rendering on the client side may have been the cause.

Performance

I'll provide the detailed performance numbers separately, but relative to Hyper-V and KVM, I found ESXi generally had slightly better performance on CPU-intensive tasks and middling performance on I/O tasks.


There is 1 Comment

Thanks. I will stick with xenserver for now. For me, I basically need a digital playground where I can't do too much damage to the system as a whole.