PassMark Hard Drive Ratings

August 7, 2010

Hard drive ratings
As an addendum to the PassMark CPU Benchmark post, I decided to go back and look up the ratings of my home drives currently in service, to see how they compare to one another. Not wanting to spend too much time locating all the drive models, I only included the drives ordered through Newegg (purchase history is so useful!). PassMark has a nice look-up page to make hunting for your specific model(s) a quick process.

PassMark Hard Drive Benchmarks

The results, shown below, were interesting.

Model                          Capacity (GB)   Drive Rating
Western Digital WD800JB                   80            303
Seagate ST3160023A                       160            310
Seagate ST3320620A                       320            314
Seagate ST3300622A                       300            332
Western Digital WD3200JB                 320            338
Seagate ST3320620A                       320            394
Western Digital WD6400AAKS               640            626
Samsung F2 HD103SI (green)             1,000            553
Samsung F3 HD103SJ                     1,000            872

I have always been pleased with the speed, quietness, and cool running of the WD640 Blue drives. I have two running in my primary machine and one in the ESXi server, and I have never heard them over the case fans. But I was pleasantly surprised at how much better the Samsung F3 performed: 872 versus 626 is a significant difference of nearly 40%. I had already more or less decided that the next drive added to the ESXi server would be another F3, and this only bolstered that decision. The SSDs that are all the rage these days for boot drives started at about that point and worked their way up to scores around 1,700. Of course, their capacity is much smaller, and they cost much, much more than the $70 I spent on the F3.
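If you want to sanity-check that comparison yourself, here is a quick Python sketch using the two ratings from the table above (the variable names are just my own shorthand):

    # Compare two PassMark drive ratings from the table above.
    wd640_rating = 626   # Western Digital WD6400AAKS
    f3_rating = 872      # Samsung F3 HD103SJ

    improvement = (f3_rating - wd640_rating) / wd640_rating * 100
    print(f"The F3 scores {improvement:.1f}% higher than the WD640")  # ~39.3%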

Temperature Readings
While we are talking hard drives, I'll also mention the temperature readings of the non-ESXi drives. I used System Information for Windows by Gabriel Topala, an excellent application that will get a future post dedicated to it. The two Western Digital 640s in my primary box read 95° and 96° Fahrenheit respectively. The backup machine houses three Seagates and the Samsung green drive. The Seagates read between 107° and 109°. The green drive, not surprisingly, was a cooler 86° despite being within one slot of the other drives.
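Since drive temperature specs are usually quoted in Celsius, here is a trivial Python conversion of the readings above:

    # Convert the Fahrenheit readings above to Celsius, the unit
    # most drive temperature specs are quoted in.
    readings_f = {
        "WD640 drives": (95, 96),
        "Seagates": (107, 109),
        "Samsung F2 green": (86, 86),
    }
    for drives, (low, high) in readings_f.items():
        low_c, high_c = ((t - 32) * 5 / 9 for t in (low, high))
        print(f"{drives}: {low_c:.0f}-{high_c:.0f} C")

The Seagates sit in the low 40s Celsius, while the green drive idles around 30°C.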


Upgrading my home network to gigabit

August 3, 2010

This time around I wanted to briefly discuss my network upgrade from 100Mbps wired to 1000Mbps (gigabit), and the direct wiring of more of the machines. This was a little project from this spring.

Upgrading my home network
I had been running a basic wireless-G router with 100Mbps wired ports for a number of years. It served its purpose admirably. It was always easier to just go wireless than to try to run a cable. Our now-retired TiVo had a wireless adapter, and the arcade cabinet and laptops were wireless. The only PCs that were wired were the two under my desk. Speed was rarely an issue, though, as file transfers were infrequent and web surfing doesn't require much bandwidth. If I needed a faster connection for the laptop (moving files back and forth), a spare Cat5e cable plugged into the router worked just fine.

Once I decided to build the ESXi box, I knew the time was right to upgrade the network. Traffic would be heavier, and faster network speeds would be helpful. Besides that, the primary PC already had a gigabit NIC, and the ESXi box would have one, too. Keeping the status quo would have been like racing two Corvettes down a one-lane dirt road. A major bottleneck.

I wished to keep the total expenditure for the project under $80, a purely arbitrary number. This put some limitations on what I could use.

I initially debated moving to a new wireless-N router with gigabit ports, but quickly reconsidered. Not only did I not have any wireless-N devices, but such a router cost more than the $80 budget would allow.

After a bit of time watching SlickDeals, I ordered two inexpensive gigabit switches: the D-Link DGS-2205 ($22 after rebate) and the TRENDnet TEG-S50G 10/100/1000Mbps GREENnet switch ($15 after rebate). Each had a rebate limited to one per household, so that is my excuse for the two different manufacturers. A StarTech ST1000BT32 1-port PCI 10/100/1000 32-bit gigabit Ethernet adapter ($14), ordered at the same time as the ESXi parts, replaced the slower NIC in the backup machine. An order from MonoPrice ($21) yielded a box of goodies with Cat5e cables in a variety of lengths and colors, ranging from 12″ to 75′. I knew it would be a lot easier to trace any wiring in the basement ceiling if the cables were not all basic black. I stayed with Cat5e rather than moving to pricier Cat6.

In the end, the entire upgrade cost just $72, less than the cost of a wireless-N router alone. I was under budget, which always feels good. I also have a few extra Cat5e cables on hand now, if needed.

The Move
At that time, I had the backup machine under my work desk. It had begun to limit the leg room, and the kids were starting to bump into it more frequently while sitting with me. My finished basement is a large square, with a utility closet in the middle making it more of a donut. We use the left side for the office, and the other side contains a half-wall of sturdy, completely filled shelves. The top shelf of one unit ended up being EXACTLY the right height to house the backup machine and ESXi box. So two storage boxes were relocated from the spot, and the backup machine was moved in. This necessitated running a cable from the router at one end of the basement, up and across the drop ceiling, and down above the shelf to the PCs' location. This single line would connect to a switch to allow for multiple devices.

APC UPS
The PC required an uninterruptible power supply (UPS) in case of short power outages or general power fluctuations. I moved my second 330W APC UPS from beside the desk to a shelf beneath the newly relocated PC. I have owned a number of APC UPSs over the years and they have never let me down. I did try a Belkin about seven years ago and it didn't even compare; brownouts would still power off the PC. With APC's products, I have never had that issue. They last between four and five years for me. When it is time for a replacement, the new one goes with the newest PC, and that machine's UPS gets moved to the open spot. The spent battery gets taken to Lowe's and dropped into their recycling box; the case goes into the garbage.


The switches

The TRENDnet switch was placed on my desk, beneath the router. The primary PC was plugged into it, as was the wireless router. The cable that ran to the other side of the room was also plugged into a port.

On the shelf across the room, the D-Link switch was placed between the two PCs. One-foot cables connected the machines to the switch. The cable from the other side of the room found a port, too. The remaining two ports would be used for the arcade cabinet and the Blu-ray player, both on the main level. Both switches were plugged into their respective UPSs. A diagram of the network, which might make all of this clearer, is further down this post.

Running the cables
As previously mentioned, the basement has a drop ceiling. This made the cable runs a lot easier than they might have been otherwise. I selected a brightly colored 50′ cable and ran it from the server-side switch up to beneath the arcade cabinet (in the main level's front room). A 5/8″ spade bit from above quickly made a hole through the carpet and floor. I pushed a straightened coat hanger down through the floor, attached the Cat5e cable to the lower end of the hanger with black tape, then pulled the hanger back up through the floor, dragging the cable with it. I plugged the cable into the cabinet's PC and moved on to the other runs.

The run to the Blu-ray player in the family room was a different colored 50′ cable. I decided to also string a 75′ cable along with it to the far wall of that room for future expansion. I just left it coiled up beneath the floor (the family room is above a crawl space), with electrical tape covering the plug. It was not plugged into the switch.

The Finished Network
Below is the layout of the main, wired network.

Home Network Layout

Speeds
Before testing, I upgraded the NIC drivers in my primary PC and the backup machine. After that was completed, I tested the network throughput by moving around some 13GB home video files. For large files such as those, the gigabit portion of the network runs at 220-240Mbps. Batches of smaller files showed a slightly reduced rate. Definitely a step up from the 70-75Mbps I get from the 100Mbps NIC in the arcade cabinet.
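If you want to run a similar test yourself, a minimal Python sketch like this one will do: it copies a large file to another machine and reports the effective rate. The paths are hypothetical; point them at a real local file and a share on the far side of the switch.

    import os
    import shutil
    import time

    # Hypothetical paths: a large local file and a mapped network share.
    src = r"C:\videos\home_movie.avi"
    dst = r"Z:\incoming\home_movie.avi"

    start = time.time()
    shutil.copyfile(src, dst)
    elapsed = time.time() - start

    megabits = os.path.getsize(src) * 8 / 1e6
    print(f"Effective throughput: {megabits / elapsed:.0f} Mbps")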

I have been quite happy with the increase in speed over the network. It was well worth the limited expenditure. If I had to do one thing differently, it might have been to buy an 8-port switch rather than the second five-port. Other than that, I have no regrets.


Installing ESXi without destroying the datastore

July 31, 2010

Installing ESXi without destroying the datastore
I was perusing some ESXi links when I discovered an interesting thread in the VMware Community Discussion forums. The post discusses how you might install ESXi onto an existing drive that contains a datastore, without destroying the datastore. I could see this being a potential lifesaver down the road. My ESXi box's boot drive contains both the installation and a datastore, so it is quite applicable to my situation.


PassMark CPU Benchmarks

July 28, 2010

CPU Comparisons
While researching CPUs for the ESXi server, one of the sites I hit for information was the PassMark site. It has a wealth of data comparing almost any CPU, video card, or hard drive available… or previously available. They claim over 200,000 CPUs and 200,000 video cards benchmarked. Having this comparative data was very helpful when narrowing down the various CPU options. My primary PC, sporting a Q6600, showed up with a score of 2,981. The AMD 630 I eventually chose scored a slightly higher 3,180. This gave me a good sense that the new CPU would perform at least as well as the Intel chip I was so happy with.

Intel Q6600 versus the AMD 630

This also made me curious as to how my other desktop processors compared.  I have AMD’s XP 1800+ and XP1400+ in my other two machines, with scores of 335 and 290 respectively.  These scores are about one-tenth of my new processors’.  (Of course, the new chips also have four times the cores so it is not really a fair fight.)

Other CPU comparisons for my desktops

Something that caught my eye on this chart was the Intel Atom processor (at the extreme bottom of the picture above).  It shows a meager 316.  It is amusing to me that netbooks are being powered by chips no more powerful than ones I purchased seven and eight years ago.  But I certainly know you can squeeze plenty of performance out of these older chips, provided you understand their limitations.

If you are in the market for a new processor or video card, you might want to keep the PassMark site in mind.


My ESXi box: Part 2- ESXi installation

July 25, 2010

Review
In part one of the ESXi build I covered researching compatible hardware for the VMware machine. The post ended with an assembled PC. This post will cover the installation of ESXi 4.0 on the machine, the vSphere client, and the creation of a datastore.

Format the Hard drive(s)
Something that I forgot to mention during the hardware post is that I ended up needing to format the new drives as EXT3 rather than leaving them with the pre-formatted NTFS. When I initially tried to install ESXi, no drives were found, despite there being two physical drives present. I formatted the drives by simply booting an Ubuntu Live CD that was lying around and using GParted. I initially split the first drive into two partitions (old habits die hard), but the ESXi installation ended up undoing that. Just format the drive(s) as one partition of EXT3 or EXT4; VMware will recognize it then and reformat it the way it wants.

Update Drivers/Download software
Now that you have the hardware assembled, this would be the time to update any drivers. I updated the Intel NIC's driver on my Gigabyte motherboard.

Once you've updated the drivers, you'll need the ESXi ISO image. Visit the VMware site and register for your free download and license key. Write down the license key from the top of the download page (in two places, as it is important), and then download the ISO. You might as well save yourself some time and download the vSphere client while you are there, too. Burn the ESXi ISO to CD, choosing a slower burn rate to ensure a good read.

The Installation
Change the PC’s BIOS, if needed, to boot from the newly created CD. Slap the ESXi install CD into the drawer, and restart the machine. Once it boots, you’ll see the following screen:

Press ENTER to begin the install, then F11 to accept the EULA. You'll then be prompted to choose which drive you want to install ESXi on. I selected the first. The install will create some logical partitions behind the scenes and leave the rest of the drive for a VM datastore. Press ENTER, then F11 to start the install. When it has finished, remove the CD and reboot the machine. I kept the boot order the same for the time being (CD, then HD). ESXi boots up quickly.

Press F2 to Customize the System. The first thing I did was create a nice long password. I also configured the management network and set a static IP address.

SSH
You will need to enable SSH on the server.

    1. Press Alt-F1 at the console. (Note: you will not see your typing on this screen; just trust that it is there.)
    2. Type "unsupported" and press ENTER. (This is not echoed to the screen.)
    3. Enter the root password.
    4. Run "vi /etc/inetd.conf".
    5. Delete the "#" in front of the ssh line (i for insert, backspace over the #, then ESC, colon, wq, ENTER).
    6. Run "services.sh restart". (Note: if that doesn't work, find the PID with "ps aux | grep inetd" and then "kill -9 <PID>".)

Relocate physical server
At this point, I no longer needed direct keyboard access to the new server, so I powered the machine down. After disconnecting the keyboard and mouse, I moved the machine across the basement and placed it up on the shelf next to the backup PC that was humming away up there. I connected the power to the APC UPS and ran a short Cat5e cable from the server to the gigabit switch located right there between the two PCs. I was then able to power up the machine and go back across the room to administer it from my Windows box.

vSphere Client
In order to administer the ESXi server, you will need to install the vSphere client on a Windows box on your network. It requires .NET 3.5, so you'll need to grab that, too, if you don't have at least that release installed. I installed the vSphere client on my primary PC. The install took a while, so be forewarned; it might be best to go grab something to eat while it does its thing.

Once the client has installed, you can fire it up. You'll get a logon prompt. Enter the IP address of your ESXi host (the static address from earlier), the username "root", and your password.

The first time you start up the client, you will likely receive a security warning.  Click the checkbox, then click Ignore.

Once the client is open, you will see this screen:

Now is a great time to enter your ESXi license key.  Click the Configuration tab, then the Licensed Features link on the left side, under Software.  Click Edit for the license source, then click Use Serial Number.  Type in the license key you were given at the VMware download page, and follow the additional prompts to disable the ESX Server evaluation.

Create datastore(s)
You will need to create a datastore to house your virtual machines and to hold your installation ISOs.  You'll be using ISO files for machine installations, as loading from an ISO is so much faster than from a physical CD drive.

Select the “Configuration” tab from within the vSphere Client, and then click the “Create New Datastore” link to start the “Add Storage” wizard.

Leave the default "Disk/LUN" option selected. If you have more than one disk in your server, select which one you wish to use. If you only have one disk, or have selected the boot disk you installed ESXi on, you will see there are already a number of partitions taking up about 1GB of space. That is the ESXi installation, so you don't want to overwrite it! Select the "Use Free Space" option and click Next.

Enter a suitable name for your datastore (I used the unimaginative DataStore1 for mine), then click Next. On the next screen you'll have to choose the block size for your datastore.

Choosing a block size when creating a VMFS datastore
I want to take a moment to discuss block sizes and how they impact the maximum size of a virtual disk. When you create a datastore, you specify the block size for that datastore. Once the datastore is created, there is no way to go back and change the block size without reformatting the datastore (and losing any data on it). These are your choices:

    1MB block size – 256GB maximum file size
    2MB block size – 512GB maximum file size
    4MB block size – 1024GB maximum file size
    8MB block size – 2048GB maximum file size

I chose to stick with the default 1MB for both of my datastores. The next physical drive I add will likely be formatted with 4MB blocks. I ran into an issue needing more than 256GB of storage for my OpenFiler installation (fortunately I could just add multiple 256GB virtual disks to a single pool) and don't want to hit a similar wall down the road. These maximums are good to keep in mind when you add a new datastore.
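The pattern in that list is linear: each step up in block size doubles the cap, at 256GB of maximum file size per 1MB of block size. Here is a small Python sketch of my own (not anything from VMware) for picking the smallest block size that can hold a planned virtual disk:

    # VMFS block-size choices (MB) and the rule from the list above:
    # maximum file size in GB = block size in MB * 256.
    BLOCK_SIZES_MB = [1, 2, 4, 8]

    def max_file_size_gb(block_size_mb):
        return block_size_mb * 256

    def smallest_block_for(vdisk_gb):
        """Smallest block size whose file-size cap fits the virtual disk."""
        for bs in BLOCK_SIZES_MB:
            if max_file_size_gb(bs) >= vdisk_gb:
                return bs
        raise ValueError("disk exceeds the 2048GB VMFS file-size limit")

    # A 300GB OpenFiler disk would have needed at least 2MB blocks.
    print(smallest_block_for(300))  # -> 2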

Click Next. Check your settings. If all is well, click Finish and wait for ESXi to do its thing. Eventually the new datastore will appear in the host summary.

Network Settings
There are many ways the network can be set up. A production machine would have at least two physical NICs and would use VLANs. My home network does not require that redundancy, so I left the settings at their defaults.

Network Time Configuration
You'll want your server to have accurate time, so we'll set that up.  Click Time Configuration.

Then click Properties.

Click Options.  Select NTP Settings on the left, then click Add.  Type in your time server, or just use Microsoft's.

Click OK.  Checkmark the Restart… option, and then press OK until you are back at the main configuration screen.  You're now set on time.
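If you want to confirm that your chosen time server is reachable before pointing ESXi at it, a bare-bones SNTP query from any machine on the network will tell you. This Python sketch is my own quick check, not part of the ESXi setup; the server name is just an example.

    import socket
    import struct
    import time

    # Seconds between the NTP epoch (1900) and the Unix epoch (1970).
    NTP_EPOCH_OFFSET = 2208988800

    def query_sntp(server="pool.ntp.org", timeout=5):
        """Send a minimal SNTP client request; return the server's Unix time."""
        packet = b"\x1b" + 47 * b"\x00"  # LI=0, version=3, mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(48)
        # The Transmit Timestamp (seconds) lives at bytes 40-43 of the reply.
        seconds = struct.unpack("!I", data[40:44])[0]
        return seconds - NTP_EPOCH_OFFSET

    print("NTP server time:", time.ctime(query_sntp()))
    print("Local time     :", time.ctime())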

Veeam FastSCP
You can browse datastores through the vSphere client and move files to and from the server that way. I've found that to be the simplest method (no extra software to worry about), but some ESX users have found it slower than they would like. There is a free product that my office uses, and that I have installed at home as well: Veeam FastSCP. I don't have much experience with it, but it would be another option if you encounter slow transfers.

Backup your ESXi configuration
Now that you have your ESXi server set up, it would be wise to back up its configuration. In the event the ESXi host dies, you can quickly recover the configuration from the file you created. I cannot personally vouch for its effectiveness, but it cannot hurt to have this safety net in place. Download the Backup Configurator, install it, run it, and follow the prompts to create a file of your settings. Save this file somewhere safe (not on the ESXi box, obviously). It will be extremely small.

That's about it for how I installed ESXi and set up the vSphere client.  I have greatly enjoyed having the server.  Having a dedicated machine available for running virtual machines is a liberating experience when you are used to VirtualBox on your primary PC.  I keep the vSphere client running all the time and can pop into a running machine's console at any moment.  When done, I just close the console and the VM continues to run, consuming no resources on my PC.  It's very nice.  The geek in me enjoys it.


My ESXi box: Part 1- The Hardware

July 20, 2010

Overview
For the past 18 months or so, I had been playing around with Sun's VirtualBox on my primary machine. It allowed me to test out various Linux flavors and to work with open source tools not necessarily available for the host OS, Windows XP. The host hardware was sufficient to run two virtual machines at a time, but the 32-bit OS limited me to just 3.2GB of RAM. This meant the PC was running near capacity a lot of the time, and that just didn't suit me. Over time, I only fired up a session when needed.

I had toyed around with the idea of building a dedicated server to run VirtualBox on. There were essentially two downsides to this:

    1) VBox needs a host OS to run on. I would have run it on top of an Ubuntu release, which wasn't an issue unto itself. The problem was that this would have required a monitor, keyboard, and mouse hooked up to the PC, which was to sit across the basement from my workspace.
    2) Oracle. Given its history, I'm not entirely confident VirtualBox will remain a free product.

At my office, we use VMware's ESXi to host a number of our critical servers. It is an extremely stable, high quality product that handles everything we throw at it. Once I realized that ESXi itself was free, had an extremely light footprint, AND my knowledge would transfer between work and home, this option jumped to the top of my list. Some amount of Googling later, the decision was finalized: we would go with the VMware product.

Now, the fun would begin. The search for compatible, and cost effective, hardware.

The Search
Anyone who knows me very well knows I am very frugal with my spending. This is especially true with regard to my technology purchases. My hobby website, ArcadeCab, has always generated enough proceeds from affiliate sales and traffic to support itself plus fund all my PC-related purchases. (My wife knows this and never questions any Newegg boxes that sporadically show up on the porch.) I decided that $600 would be the upper limit for this machine, and hoped to keep it well under that. Now that there was a budget, I could begin the hunt.

My general requirements were: two hard drives to start with, 4GB RAM with room to expand, gigabit Ethernet, a quad-core processor, and a case that would comfortably hold, and cool, 6-8 drives. Even though I was starting with just two drives, more would get added during the machine's (hopefully) 5-7 year lifespan.

Let me be direct: ESXi does not run on just any hardware. It is especially picky about Ethernet hardware. Fortunately there are resources to help you in your search, such as VMware's own compatibility page and the Ultimate ESX Whitebox site.

CPU
My requirements, combined with the $600 limit, meant that I could not spend more than $150 on a CPU. This ruled out the majority of the Intel chips. My primary machine has the Intel Q6600 in it, which I love, but that older chip still sells for more than $150. All the other machines I've built over the years have had AMD chips, and they have always served me well, so I focused on AMD's quad-core family. I believe all the AMD chips had virtualization support (mandatory for this application), so that made things easier, too.

My final selection was the AMD Athlon II X4 630 Propus. This chip runs four cores at 2.8GHz and sold for a very low $99. Part of the reason it was that much less expensive was its lack of an L3 cache. This initially concerned me, until research led me to understand that the performance hit for my server would be minimal and likely never noticeable. CPU-wise, this machine will be underutilized; the hard drive spindle count will be much more of a bottleneck.

Another plus with an AMD chip is that the stock cooler is plenty fine to use if you aren't overclocking (I'm not). With an Intel chip, a cooler might have added another $25 to the cost.

The Motherboard
This was the single most complicated item to decide upon. Different chipsets play well with ESXi to varying degrees. The most recommended options tended to be Intel motherboards, which didn't help. It also seemed that the better the compatibility, the higher the price. Eventually I found one with integrated video (a huge cost savings, as well as very helpful in limiting both power usage and heat generation) that worked well with ESXi. I selected the GIGABYTE GA-MA785GM-US2H, which supports up to 16GB RAM, natively supports five SATA drives, and handles all the AMD quad-cores, with some room to grow. It is also a Micro ATX board, which tends to keep things a little cooler… and less expensive. ($80)

Memory
I have used G.Skill before in my existing PC, and it has worked just fine, so I decided to 'stick' with it. Rather than paying the outrageous premium for a single 4GB stick, I purchased a pair of 2GB PC6400 sticks. I still have room to add 4GB more. ($85)

Hard drives
I have two Western Digital Caviar Blue WD6400AAKS 640GB drives in my primary machine and absolutely LOVE them. Very fast, cool running, and quiet. So the primary drive in the new box would be one of these ($70). For the second drive I wanted 1TB, and briefly considered a green drive, but my research turned up that spindle count was more important than drive speed. Then I spotted the Samsung Spinpoint F3. I have an F2 in my backup machine and it has performed very well, so I expected nothing less from the F3. It has met my expectations. ($75)

Case
A little comparison shopping on Newegg led me to the COOLER MASTER ELITE 335 RC-335-KKN1-GP black case. Plenty of internal space, and very nice airflow. I have been extremely happy with it. ($50)

Power Supply
My primary machine uses an Antec EarthWatts power supply, and I have appreciated both how efficient and how quiet it is. So I decided to go with another 380-watt version for $45. Now, before you jump on me for not going with a much beefier 600-watt or similar supply, let me just say that with a more efficient power supply, you can get by with a "smaller" wattage. I have a Kill A Watt meter and know that my primary Q6600 quad, with video card and LCD panel, uses a paltry 187 watts while running Quake. I have since measured the new server, and it generally sips just 98(!) watts while running three virtual machines plus the OpenFiler appliance.
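To put that draw in perspective, here is a quick Python estimate of what an always-on machine costs in electricity per year. The $0.11/kWh rate is purely an assumption; plug in your own utility's rate:

    # Rough yearly electricity cost for an always-on machine.
    RATE_PER_KWH = 0.11  # assumed rate in dollars; check your own bill

    def annual_cost(watts, rate=RATE_PER_KWH):
        kwh_per_year = watts * 24 * 365 / 1000
        return kwh_per_year * rate

    for label, watts in [("ESXi server", 98), ("Q6600 box under load", 187)]:
        print(f"{label} at {watts}W: ${annual_cost(watts):.0f}/year")

At that assumed rate, the 98-watt server costs about $94 a year to run, so efficiency is worth real money on a machine that never sleeps.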

Ethernet card
Although the motherboard had integrated Ethernet, I knew its Realtek chip was not compatible with ESXi. The Intel PWLA8391GT PRO/1000 GT PCI network adapter was often mentioned as very reliable and ESXi-approved, so I went with that for $28.

Total
Altogether the system cost $540, comfortably under the $600 maximum. In all honesty, though, the remaining sixty dollars ended up being spent upgrading the entire network to gigabit, cabling included, so we might as well say the project cost $600. No use having gigabit NICs on the two quads if the switches in between are not gigabit.

Assembly
Nothing unusual with the system build. I did have to go into the BIOS (see below) to make a few changes, but everything else was a normal build.

BIOS changes
I made three BIOS changes to the Gigabyte board, all mandatory changes gleaned from my research:

    OnChip SATA Type = AHCI
    Onboard LAN = Disabled
    Virtualization = Enabled

Hindsight
The only minor issue I ran into when installing ESXi (which I'll detail in my next post) is that the motherboard ended up not being able to boot from a USB thumbdrive. The original intention was to install to a 4GB thumbdrive and boot from it; that way I could limit the damage if the boot drive ever became corrupted. But this was not to be. After a number of hours of trial and error, and a fair amount of web research, I resigned myself to installing ESXi on the 640GB drive and crossing my fingers. I might install a small boot drive down the road and shift the other drives down one slot, but right now everything works fine. Who knows, maybe someone who reads this will point out something I missed, and the board can boot from USB after all.

I've run with this hardware for the past five months. Even including the inability to boot from USB, I've not encountered any issue that makes me reconsider any of my hardware choices. The system is quieter than the AMD XP1600 single-core with four hard drives that sits next to it, and runs cooler. Power usage is also lower (98 watts at relative idle versus 133).

In one of my next posts, I plan to detail the installation of ESXi onto the server, some gotchas, and some information that might save you some time and pain.
