For the past 18 months or so, I had been playing around with Sun’s VirtualBox on my primary machine. It allowed me to test out various Linux flavors and work with open source tools not necessarily available for the host OS, Windows XP. The host hardware was sufficient to run two virtual machines at a time, but the 32-bit OS limited me to just 3.2GB of RAM. This meant the PC was running near capacity much of the time, and that just didn’t suit me. Over time, I only fired up a session when needed.
I had toyed around with the idea of building a dedicated server to run VirtualBox on. There were essentially two downsides to this:
At my office, we use VMware’s ESXi to host a number of our critical servers. It is an extremely stable, high-quality product that handles everything we throw at it. Once I realized that ESXi itself was free, had an extremely light footprint, AND my knowledge would transfer from work to home and back, this option jumped to the top of my list. Some amount of Googling later, the decision was finalized: we would go with the VMware product.
Now the fun would begin: the search for compatible, cost-effective hardware.
Anyone who knows me very well knows I am very frugal with my spending. This is especially true with regard to my technology purchases. My hobby website, ArcadeCab, has always generated enough proceeds from affiliate sales and traffic to support itself plus fund all my PC-related purchases. (My wife knows this and never questions any NewEgg boxes that sporadically show up on the porch.) I decided that $600 would be the upper limit for this machine, and hoped to keep it well under that. Now that there was a budget, I could begin the hunt.
My general requirements were: two hard drives to start with, 4GB RAM with room to expand, gigabit Ethernet, quad-core processor, and a case that would comfortably hold, and cool, 6-8 drives. Even though I was starting with just two drives, more would get added during the machine’s (hopefully) 5-7 year lifespan.
Let me be direct: ESXi does not run on just any hardware, and it is especially picky about Ethernet hardware. Fortunately there are resources to help you in your search, such as VMware’s own compatibility page and the Ultimate ESX Whitebox site.
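If you already have candidate hardware on hand, one way to check it against those lists is to boot a Linux live CD and pull each device’s PCI vendor:device ID, which is what the compatibility and whitebox sites key off. A minimal sketch, assuming the usual `lspci -nn` output format (the Realtek line below is a made-up example, not output from my board):

```shell
# On a Linux live CD, `lspci -nn` prints each device with a [vendor:device] ID.
# A hypothetical output line for an onboard Realtek NIC:
line='02:00.0 Ethernet controller [0200]: Realtek RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168]'

# Pull out the [xxxx:xxxx] pair; that ID is what you search for on the
# HCL / whitebox lists before buying (or returning) hardware.
id=$(echo "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]')
echo "$id"
```

Searching the lists by that numeric ID is more reliable than searching by marketing name, since many retail cards ship with different chips under the same model name.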
My requirements, combined with the $600 limit, meant that I could not spend more than $150 on a CPU. This ruled out the majority of the Intel chips. My primary machine has the Intel Q6600 in it, which I love, but that older chip still sells for more than $150. All the other machines I’ve built over the years have had AMD chips, and they have always served me well, so I focused on AMD’s quad-core family. All the AMD quad-cores include AMD-V hardware virtualization support (mandatory for this application), so that made things easier, too.
My final selection was the AMD Athlon II X4 630 Propus. This chip runs at 2.8GHz x 4 cores and sold for a very low $99. Part of the reason it was that much less expensive was its lack of an L3 cache. This initially concerned me until research led me to understand the performance hit for my server would be minimal, and likely will never be noticeable. CPU-wise, this machine will be underutilized; the hard drive spindle count will be much more of the bottleneck.
Another plus of using an AMD chip is that the stock coolers are perfectly adequate if you aren’t overclocking (I’m not). With an Intel chip, a third-party cooler might have added another $25 to the cost.
The motherboard was the single most complicated item to decide upon. Different chipsets played well with ESXi to varying degrees. The most-recommended options tended to be Intel motherboards, which didn’t help. It also seemed that the better the compatibility, the higher the price. Eventually I found one with integrated video (a huge cost savings, and very helpful in limiting both power usage and heat generation) that worked well with ESXi: the GIGABYTE GA-MA785GM-US2H, which supported up to 16GB of RAM, natively supported five SATA drives, and handled all the AMD quad-cores, with some room to grow. It also happened to be a Micro ATX board, which tends to keep things a little cooler… and less expensive. ($80)
I had used G.Skill in my existing PC, and it worked just fine, so I decided to ‘stick’ with it. Rather than pay the outrageous premium for a single 4GB stick, I purchased a pair of 2GB PC6400 sticks, leaving room to add 4GB more. ($85)
I have two Western Digital Caviar Blue WD6400AAKS 640GB drives in my primary machine and absolutely LOVE them. Very fast, cool running, and quiet. So my primary drive in the new box would be one of these ($70). The second drive I wanted to be a 1TB drive and briefly considered a green drive. My research turned up that spindle count was more important than drive speed. But then I spotted the SAMSUNG Spinpoint F3. I have an F2 in my backup machine and it has performed very well, so I expected nothing less from the F3. It has met my expectations. ($75)
A little comparison shopping on NewEgg led me to the COOLER MASTER ELITE 335 RC-335-KKN1-GP Black case. Plenty of internal space, and very nice airflow. I have been extremely happy with it. ($50)
My primary machine uses an Antec EarthWatts power supply, and I have appreciated both how efficient it is and how quiet it is. So I decided to go with another 380-watt version for $45. Now, before you jump on me for not going with a much beefier 600-watt or similar supply, let me just say that with a more efficient power supply, you can get by with a “smaller” wattage. I have a Kill A Watt and know that my primary Q6600 quad, with video card and LCD panel, uses a paltry 187 watts running Quake. I have since measured the new server: it generally sips just 98(!) watts while running three virtual machines plus the OpenFiler appliance.
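For anyone doing the same sizing math, the headroom and running-cost numbers are easy to sanity-check from a Kill A Watt reading. A quick sketch (the $0.10/kWh electricity rate is my assumption for illustration; substitute your own utility’s rate):

```shell
watts=98     # measured draw from the Kill A Watt
supply=380   # PSU rating in watts

# Load as a percentage of the supply's rating -- plenty of headroom.
echo "load: $(( watts * 100 / supply ))% of rated capacity"

# Energy per year at a constant draw, and cost at an assumed rate.
kwh_year=$(( watts * 24 * 365 / 1000 ))   # ~858 kWh/year
cents_per_kwh=10                          # assumed rate; adjust for your utility
echo "annual cost: ~\$$(( kwh_year * cents_per_kwh / 100 ))"
```

At 98 watts the 380-watt supply is loaded to roughly a quarter of its rating, which is exactly the regime where an efficient unit makes a bigger difference than a bigger one.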
Although the motherboard had integrated Ethernet, I knew its Realtek chip was not compatible with ESXi. The Intel PWLA8391GT PRO/1000 GT PCI network adapter was frequently mentioned as reliable and ESXi-approved, so I went with that for $28.
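Once ESXi is up, running `esxcfg-nics -l` on the console lists the NICs the hypervisor recognized and which driver claimed each one; a PRO/1000 GT should appear bound to the e1000 driver. A sketch that parses one hypothetical line of that output (the PCI address and MAC below are made up for illustration):

```shell
# One hypothetical line of `esxcfg-nics -l` output for the PRO/1000 GT:
nic='vmnic0  02:00.00  e1000  Up  1000Mbps  Full  00:1b:21:aa:bb:cc  1500  Intel Corporation PRO/1000 GT Desktop Adapter'

# The third column is the driver that claimed the card. A NIC that is
# bound to a driver here (rather than absent from the list entirely)
# is one ESXi can actually use.
driver=$(echo "$nic" | awk '{print $3}')
echo "$driver"
```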
Altogether the system cost $540, comfortably under the $600 maximum. In all honesty, though, the remaining sixty dollars ended up being spent upgrading the entire network to gigabit, cabling included, so we might as well say the project cost $600. There is no use having gigabit on the two quads if the switches in between aren’t.
Nothing unusual about the system build. I did have to go into the BIOS (see below) to make a few changes, but everything else was a normal build.
I made three BIOS changes to the Gigabyte board, all mandatory and gleaned from my research. They were:
The only minor issue I ran into when installing ESXi (which I’ll detail in my next post) was that the motherboard turned out to be unable to boot from a USB thumbdrive. The original intention was to install to a 4GB thumbdrive and boot from it; that way I could limit the damage if the boot drive became corrupted. But it was not to be. After a number of hours of trial and error, and a fair amount of web research, I resigned myself to installing ESXi on the 640GB drive and crossing my fingers. I might install a small boot drive down the road and shift the other drives down one slot, but right now everything works fine. Who knows, maybe someone reading this will point out something I missed and USB booting will turn out to be possible after all.
I’ve run on this hardware for the past five months. Even counting the inability to boot from USB, I haven’t encountered any issue that makes me reconsider my hardware choices. The system is quieter than the AMD XP1600 single-core with four hard drives that sits next to it, and runs cooler. Power usage is also lower (98 watts at relative idle versus 133).
In one of my next posts, I plan to detail the installation of ESXi onto the server, some gotchas, and some information that might save you some time and pain.