Writing Good Error Messages

I received this little note from my Mac today.

[Image: mac_low_battery_warning.png]

Despite interrupting my work, this made me feel all warm and fuzzy inside, because it satisfies my general criteria for displaying error messages to users.

  1. A graphical severity indicator is given so I know whether or not to care.
  2. It provides a succinct, human-readable description of the issue. (No “ERROR CODE: 23DD8” crap which is meaningless to the user.)
  3. An immediate, resolvable course of action is given to the user. Providing this makes the user feel empowered and accomplished for acting. Neglecting this makes the user concerned and irritated.
  4. A description of future symptoms is given for when/if the user does not take the suggested course of action. This gives the user reason to do what you’re asking.
  5. It shut up about the issue when I clicked OK and let the failure happen like it told me it would. When I noticed my mouse wasn’t responding I immediately remembered why.
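
Just to make the criteria concrete, here’s a rough sketch of how I’d compose a message that hits all five. The function and wording are mine, not Apple’s..

#!/bin/bash
# Hypothetical sketch: a warning assembled from the five criteria above.
low_battery_warning() {
  local severity="Warning"                               # 1. severity indicator
  local problem="Your wireless mouse battery is low."    # 2. succinct, human-readable description
  local action="Replace the battery soon."               # 3. immediate, resolvable course of action
  local consequence="Otherwise the mouse will stop responding."  # 4. future symptom
  echo "[$severity] $problem $action $consequence"
  # 5. say it once and get out of the way -- no nagging loop
}
low_battery_warning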

The dialog is in stark contrast to this nifty gem constantly pooping out of my Solaris kernel..

[Image: 08-21-07_1354.jpg]

“Pin widgit 27 is EAPD capable.”

WTF??? What the heck is a “pin widgit” and why do I care if it’s “EAPD capable”? Is this even a bad thing? Do I need to do something here? What happens if I ignore this, which I most definitely will since I have no clue what it’s talking about? Why does it tell me this every time I start the machine?

Criteria failure on all counts. Bad computer!

Xserve w/Leopard Server (Mac OS X 10.5), First Impressions

[Image: picture-4.png]

We just picked up a refurbished 2.66GHz quad-core Xeon from Apple, which we’ll be using for internal infrastructure. (We’re in the process of migrating from a mix of Solaris and Linux.) After about 8 hours of learning the ins and outs of Leopard Server over the weekend, we had the box running Open Directory (Kerberos and OpenLDAP), DNS, AFP, SMB, FTP, domain account and machine management, mobile home directories, MySQL, Software Update, Xgrid controller, Wikis, Blogs, iCal and VPN services, all tightly integrated with single sign-on (via Kerberos) into a sexy 1U package.

  • Xserve (refurbished discount, direct from Apple): ~$3K
  • 3 x 750GB Disks (Newegg): ~$450
  • 2 x Apple Drive Module (direct from Apple): ~$380
  • 2 x 2GB FB-DIMM RAM (Crucial): ~$300
  • Infrastructural sanity: priceless. (…or ~$4.5K after tax and random small stuff)

That’s some serious value considering how much of a PITA setting all this up can be in Linux (or whatever) without vendor support, and far cheaper than paying a Systems Administrator in the long run. The Server Admin and Workgroup Manager tools are pretty freakin’ usable, too, relative to the internal complexity of the system. I’m a happy camper for now… let’s see if it lasts.
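
Nearly everything the GUI tools do is scriptable, too. A quick sketch using the bundled serveradmin CLI (service names from memory, so treat them as approximate)..

sudo serveradmin list             # enumerate every service the box knows about
sudo serveradmin status afp       # check whether file sharing is up
sudo serveradmin start vpn        # bring a service online
sudo serveradmin settings dns     # dump a service's current configuration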

Sun Introduces Non-Native (Linux) Zone Support

The “What’s New” document for Solaris 10 8/07 states the update “..includes the tools necessary to install a CentOS 3.5 to 3.8 or Red Hat Enterprise Linux 3.5 to 3.8 inside a non-global zone. Machines running the Solaris OS in either 32-bit or 64-bit mode can execute 32-bit Linux applications.” Additionally, “DTrace can now be used in a non-global zone..” and ZFS gets even cooler. w00t.
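
I haven’t kicked the tires yet, but judging from the docs the lx-branded zone dance should look roughly like this (the zone name and media path are made up)..

# configure an lx-branded zone from the bundled SUNWlx template
zonecfg -z centos <<EOF
create -t SUNWlx
set zonepath=/zones/centos
commit
EOF
# install from the Linux distro media, then boot and log in
zoneadm -z centos install -d /path/to/centos-media
zoneadm -z centos boot
zlogin centos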

Solaris 10 x64: Get Your Sound Card Working

When I built the $1000, 2TB file server, the on-board nVidia HD audio support of the Asus M2NPV-VM motherboard didn’t work out of the box. It’s a trivial fix using the recently open sourced OSS drivers from 4Front Technologies. (I also tested a Creative Audigy 2 PCI sound card using these drivers under Solaris 10 11/06, and the playback worked just as well.)

  1. Download the Solaris x64 .pkg.
  2. `pkgadd -d oss-solaris-v4.0-1004-i386.pkg` (as root)
  3. `osstest`. You should hear the sweet sweet sound of.. umm.. sound, and the following printed on the console..

bash-3.00$ osstest
Sound subsystem and version: OSS 4.0 (build 1004/200707062145) (0x00040002)
Platform: SunOS/i86pc 5.10 Generic_118855-33

*** Scanning sound adapter #-1 ***
/dev/oss/hdaudio0/pcm0 (audio engine 0): nVidia HD Audio play-front output
Note! Device is in use (by PID 0/VMIX) but will try anyway
- Performing audio playback test...
OK OK OK
/dev/oss/hdaudio0/pcm1 (audio engine 1): nVidia HD Audio play-side output
- Performing audio playback test...
OK OK OK
/dev/oss/hdaudio0/pcm2 (audio engine 2): nVidia HD Audio play-center/LFE output
- Performing audio playback test...
OK OK OK
/dev/oss/hdaudio0/spdout0 (audio engine 3): nVidia HD Audio spdif output
- Performing audio playback test...
OK OK OK
/dev/oss/hdaudio0/pcmin0 (audio engine 4): nVidia HD Audio record input
- Skipping input only device

*** All tests completed OK ***

OpenSolaris ZFS vs. Linux ext3 RAID5

Preston Says: I asked Dan McClary for a big favor recently: use his general UNIX knowledge and graduate-level statistics voodoo to produce a report highlighting differences in performance characteristics between OpenSolaris ZFS and Linux RAID5 on a common, COTS hardware platform. The following analysis is his work, reformatted to fit your screen. You may download the PDF, HTML, graphs and original TeX source here.

A Brief Comparison of I/O Performance for RAIDZ1 and RAID-5 Filesystems
Dan McClary
June 28, 2007

Introduction

The following is a description of results obtained benchmarking I/O performance for two OS/filesystem combinations running identical hardware. The hardware used in the tests is as follows:

  • Motherboard: Asus M2NPV-VM.
  • CPU: AMD Athlon 64 X2 4800+ Dual Core Processor. 2.5GHz, 2x512KB, 1GHz Bus
  • Memory: 4 x 1GB via OCZ OCZ2G8002GK DDR2-800 PC2-6400
  • Drives: 4 x 500GB Western Digital Caviar SE 16 WD5000AAKS 7200RPM 16MB Cache SATA 3.0Gb/s

The Linux/RAID-5 combination uses a stock Ubuntu Server Edition installation, running kernel 2.6.19-generic, with RAID-5 configured via mdadm and formatted ext3. The Solaris/RAID-Z1 configuration is a stock installation of Solaris Express Developer Edition, with zpool managing the ZFS-formatted RAID-Z1 drives. Block size for all relevant tests is 4096 bytes.
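
The exact setup commands are not given in this summary; a plausible reconstruction, with hypothetical device names, would be:

# Linux: 4-disk RAID-5 via mdadm, formatted ext3 with a 4096-byte block size
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]
mkfs.ext3 -b 4096 /dev/md0

# Solaris: 4-disk RAID-Z1 pool via zpool
zpool create tank raidz1 c1t0d0 c1t1d0 c2t0d0 c2t1d0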

Basic I/O testing is conducted using bonnie++ (version 1.03a), tiobench (version 0.3.3-5), and a series of BASH-scripted operations. Tests focus on I/O throughput and CPU usage, both for operations on files much larger than available memory and for very large numbers of operations on small files. All figures, unless otherwise noted, chart mean performance, with 2% deviation for large-file operations and 5% for small-file operations. These bounds exceed the 95% confidence interval, implying the reported differences are highly significant.
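
The benchmark invocations are likewise not reproduced here; a plausible bonnie++ invocation, given the file counts and sizes reported below (flags are a guess, not taken from the study), is:

# -s 15680: 15,680MB large-file test; -n 100:8192:4096: 102,400 files of 4-8KB
bonnie++ -d /mnt/bench -s 15680 -n 100:8192:4096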

Large-File Operations

In dealing with sequential reads and writes, particularly of large files, the Solaris/RAID-Z1 configuration displays much higher throughput than the Ubuntu/RAID-5 combination. Latency and CPU usage, however, appear to be higher than in the Ubuntu configuration. The reasons for these disparities are not determinable from the tests conducted, though one might venture that the management algorithm used by ZFS and each system’s caching policies may play a part.

[Image: picture-2.png]

Figures 1, 2, and 3 summarize large-file writing performance in the bonnie++ suite. In large writes, Solaris-ZFS displays marginally higher throughput and occasionally lower CPU usage. However, the disparities are not great enough to make a strict recommendation based solely on large-file writing performance.

[Images: picture-4.png through picture-9.png]

Figures 4 and 5 illustrate throughput and CPU usage while reading large files in the bonnie++ suite. Generally, results are consistent between platforms, with the Ubuntu configuration showing a slight edge when reading 15,680MB files (though with an associated drop in CPU efficiency).

tiobench results for random reads and writes, given in Tables 3 and 4, show the Ubuntu/RAID-5 configuration displaying both higher throughput and greater CPU efficiency. However, these results seem somewhat questionable given the results in §3.

[Images: picture-10.png, picture-11.png]

Small-File Operations

In examining the performance of both configurations on small files, both in the bonnie++ suite and from shell-executed commands, the most obvious statement that can be made is that the Solaris configuration displays greater CPU usage. This, though, may not be indicative of poor performance. Instead, it may be the result of aggressive caching or other kernel-level policies. A more detailed study would be required to determine both the causes and effects of this result. In each test, 102,400 files of either 4KB or 8KB were created.

[Images: picture-12.png, picture-13.png]

Figures 6(a)-6(c) and 7(a)-7(c) illustrate bonnie++ performance for both configurations. In contrast to the tiobench results, the Solaris configuration generally displays slightly higher throughput (on the order of 1-2MB/s) than its counterpart. However, as previously noted, CPU usage is much higher.

Finally, Tables 5 and 6 list times measured with the standard Unix time command during command execution. In these results, there are some surprises. The Ubuntu configuration performs somewhat faster when executing a large write (using the command dd). However, the Solaris configuration is much faster when dealing with 100,000 sequential 8KB files. For reference, all file creation is done via dd, copying by cp, and deletion by rm; a sketch of these operations follows the tables.

[Images: picture-14.png, picture-15.png]
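
The scripts themselves were not published with this summary, but the shell-level tests presumably reduce to something like the following sketch (paths and sizes hypothetical):

mkdir -p /mnt/bench/small
# large sequential write via dd
time dd if=/dev/zero of=/mnt/bench/big bs=1048576 count=16000
# create 100,000 sequential 8KB files, then copy and delete them
time sh -c 'i=0; while [ $i -lt 100000 ]; do
  dd if=/dev/zero of=/mnt/bench/small/f$i bs=8192 count=1 2>/dev/null
  i=$((i+1))
done'
time cp -r /mnt/bench/small /mnt/bench/small.copy
time rm -rf /mnt/bench/small /mnt/bench/small.copy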

Conclusions

Few overarching conclusions can be drawn from the limited results of this study. Certainly, there are situations in which the Solaris/RAID-Z1 configuration appears to outperform the Ubuntu/RAID-5 configuration. Many questions remain regarding the large discrepancy in CPU usage for small-file operations. Likewise, the Ubuntu/RAID-5 configuration appears to perform slightly better in certain situations, though not overwhelmingly so. At best, under these default configurations, one can say that overall the Solaris configuration performs no worse, and might perform better under live operating conditions. The latter, though, is largely speculation.

Indeed, from the analyst’s point of view, both configurations show reasonable performance. The desire to deploy either configuration in an enterprise setting suggests that significant-factor studies and robust parameter designs be conducted on, if not both candidates, whichever is most likely to be deployed. These studies would provide insight into why the discrepancies in the current study exist and, more importantly, help achieve optimized performance in the presence of significant uncontrollable factors (e.g., variable request load).

Preston Says: Thanks for the outstanding work, Dan!

The $1,000 (USD), 2TB OpenSolaris File Server

Here’s how to score a sweet OpenSolaris-compatible 2TB file server for $1000 (USD). I’m running Solaris Express Developer Edition on it with a ZFS RAIDZ1 file system.

  • Motherboard: Asus M2NPV-VM. A great, inexpensive mATX board w/4 SATA ports, 2 IDE, PCIe, GigE, built-in nVidia graphics w/DVI and VGA outs, FireWire, and tons of USB love. Oh, and the official nVidia driver works like a charm. Sweet! (The only thing that hasn’t run out-of-the-box is the built-in audio controller. Oh well.) $100.
  • CPU: AMD Athlon 64 X2 4800+ Dual Core Processor. 2.5GHz, 2x512KB, 1GHz Bus. $130.
  • Memory: 4x1GB via OCZ OCZ2G8002GK DDR2-800 PC2-6400 kits. $175 total (after MIR).
  • Drives: 4x500GB Western Digital Caviar SE 16 WD5000AAKS 7200RPM 16MB Cache SATA 3.0Gb/s for the ZFS data volume. $450. (Old, small, slow and cheap PATA disks reused for the system volume. FMV ~$25.)
  • Optical Drive: Reused old typical IDE DVD-ROM. FMV ~$20.
  • Case: Reused typical ATX tower w/450 watt PSU. FMV ~$80.
  • Peripherals: Reused optical mouse and keyboard. FMV ~$20.

Total: $1,000. Nice!
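
And the payoff once it boots.. carving the four data disks into the RAIDZ1 pool is a one-liner (pool and device names are mine):

zpool create tank raidz1 c1t0d0 c1t1d0 c2t0d0 c2t1d0
zpool status tank       # confirm the pool is online across all four disks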