Keeping Multiple Development Machines Synchronized with Homeboy

Most of us developer types have at least two machines we use routinely, and managing that can be a chore. Specifically, I usually want to do the following every time I sit down at a machine to hack on something:

  • Keep config files like .bash_profile and .gitconfig synchronized. (This requires scripting since Dropbox ignores hidden files.)
  • Patch all OS-level libraries using the native package management system on OSX, Ubuntu and CentOS.
  • Update my database daemons and other system services.
  • Upgrade development libraries.
  • Run `git pull origin master` in each of my project clones.

Doing all this every time I hopped to a different machine was a chore, so I wrote a few BASH scripts to help. More importantly, I recently packaged them into a public GitHub repository and released it! Homeboy is a set of small, plain BASH scripts. After following the simple installation instructions, you just run homeboy every morning:

$ homeboy

This will run the updates you specify, though I’ve only included the stuff I use regularly: namely brew (OSX), rvm (OSX and Linux), apt (Ubuntu), yum (Red Hat/CentOS), etc. I ask that you submit pull requests to add support for updating Perl, Python, MacPorts, etc. The synchronization mechanism works by zipping the specified list of files into a .zip archive in a synchronized directory managed by Dropbox, SugarSync, etc. “Pushing” your current set of files to Dropbox is done via:

$ homeboy-push

After pushing, the next time `homeboy` is run on any configured machine, the .zip file will be unzipped into your home directory. It’s really not complicated, but it saves you from making the same change a bunch of times across different machines and platforms, and from the subtle differences that creep in when you do.
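
At its core the mechanism is just a zip/unzip pair. A minimal sketch of the idea (illustrative only, not Homeboy’s actual source; the file list and archive path are hypothetical):

# Push: zip the configured dotfiles into a Dropbox-managed directory.
FILES=".bash_profile .gitconfig"
ARCHIVE="$HOME/Dropbox/homeboy/dotfiles.zip"
mkdir -p "$(dirname "$ARCHIVE")"
(cd "$HOME" && zip -q "$ARCHIVE" $FILES)

# Pull: on another machine, unzip the archive into the home directory.
unzip -o -q "$ARCHIVE" -d "$HOME"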

When using the git options, homeboy assumes you have a single directory where all your clones are kept, such as ~/Developer/git. Every subdirectory that looks like a git clone will have `git pull origin master` run inside it.
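
Under the hood that’s little more than a loop; roughly (again a sketch, not the exact script):

# Run 'git pull origin master' in every clone under the git directory.
for dir in "$HOME"/Developer/git/*/; do
  [ -d "$dir/.git" ] && (cd "$dir" && git pull origin master)
done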

Pretty silly stuff, right? But hey, all I do is run `homeboy` every morning I plan on doing development work on a machine, and sip on a cup of coffee while everything is brought up to date. 🙂

Please help test and submit pull requests!


Ubuntu 12.10 to 13.04 Server Upgrade Error

Are you Googling around trying to figure out why you’re getting this error when trying to `do-release-upgrade` your Ubuntu 12.10 system, even though you’re pretty much up to date?

root@mia:~# do-release-upgrade
Checking for a new Ubuntu release
Traceback (most recent call last):
File "/usr/bin/do-release-upgrade", line 145, in <module>
fetcher.run_options += ["--mode=%s" % options.mode,
AttributeError: type object 'DistUpgradeFetcherCore' has no attribute 'run_options'
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 137, in apport_excepthook
os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o640), 'wb') as f:
OSError: [Errno 2] No such file or directory: '/var/crash/_usr_bin_do-release-upgrade.0.crash'

Original exception was:
Traceback (most recent call last):
File "/usr/bin/do-release-upgrade", line 145, in <module>
fetcher.run_options += ["--mode=%s" % options.mode,
AttributeError: type object 'DistUpgradeFetcherCore' has no attribute 'run_options'
root@mia:~#

I just burned a couple hours trying to figure out why I only got this error on one specific server. It turns out that it’s a bug in the installed version of the ubuntu-release-upgrader-core package, and you must be pulling updates from a “quantal-updates” repository to get the fixed package when you `apt-get upgrade`. To fix this, edit your /etc/apt/sources.list and make sure you have the following two lines:

deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates main restricted
deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates main restricted

After making sure you have these lines, get yourself updated again and re-try the release upgrade script:

apt-get update
apt-get upgrade
do-release-upgrade

…and you should be rollin’ towards Raring!
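
If you want to double-check that you actually picked up the fixed package before re-trying the release upgrade, apt-cache will show the installed version and which repository it came from:

apt-cache policy ubuntu-release-upgrader-core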


OpenStack Diablo on Ubuntu 11.10 w/MySQL

If you’re installing the OpenStack cloud computing platform “Diablo” release on Ubuntu 11.10 server using the official guide and are using MySQL, don’t forget to:

sudo apt-get install python-mysqldb

The “glance” package (and probably others) is configured to use SQLite by default and does not automatically install this dependency. Unfortunately, the November 2011 version of the manual also fails to mention this, and the log file (/var/log/glance/registry.log) prints a variety of “INFO [sqlalchemy.engine.base.Engine…” messages but no warnings or errors. I assume a corresponding caveat applies to PostgreSQL as well, but I haven’t confirmed it.
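
After installing the driver, you’ll also want glance pointed at MySQL rather than SQLite. If I recall the Diablo configuration correctly (double-check the option name against your installed version; the credentials below are placeholders), the relevant line in /etc/glance/glance-registry.conf looks something like:

sql_connection = mysql://glance:YOUR_PASSWORD@localhost/glance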


Upgrading From Ruby 1.9.1 to Ruby 1.9.2

I’ve spent half the day so far mired in the furious, stressful process of upgrading a handful of Ubuntu Linux 10.04 and Mac OS X Snow Leopard systems from Ruby 1.9.1 to Ruby 1.9.2. I haven’t even gotten to the Rails 3.0.0 stuff yet: just the baseline Ruby installation. I’ve gone through the upgrade process on both types of systems so far and the base issues have been the same. Here’s a common issue that many people are running into:

preston$ gem1.9

/opt/local/lib/ruby1.9/site_ruby/1.9.1/rubygems/source_index.rb:68:in `installed_spec_directories': undefined method `path' for Gem:Module (NoMethodError)
from /opt/local/lib/ruby1.9/site_ruby/1.9.1/rubygems/source_index.rb:58:in `from_installed_gems'
from /opt/local/lib/ruby1.9/site_ruby/1.9.1/rubygems.rb:883:in `source_index'
from /opt/local/lib/ruby1.9/site_ruby/1.9.1/rubygems/gem_path_searcher.rb:81:in `init_gemspecs'
from /opt/local/lib/ruby1.9/site_ruby/1.9.1/rubygems/gem_path_searcher.rb:13:in `initialize'
from /opt/local/lib/ruby1.9/site_ruby/1.9.1/rubygems.rb:841:in `new'
from /opt/local/lib/ruby1.9/site_ruby/1.9.1/rubygems.rb:841:in `block in searcher'
from <internal:prelude>:10:in `synchronize'
<…and so on…>

Assuming you’re upgrading from a previous Ruby installation, note that the “site_ruby” directories are no longer used, and will eff up your 1.9.2 installation if you fail to delete them after the install. On OS X, run:

sudo rm -rf /opt/local/lib/ruby1.9/site_ruby/

On Ubuntu Linux 10.04, run:

rm -rf /usr/local/lib/ruby/site_ruby/

…to correct this issue. Also note that you may see errors such as this:

root@li92-132:~# rake --version

/usr/local/lib/ruby/1.9.1/rubygems.rb:340:in `bin_path': can't find executable rake for rake-0.8.7 (Gem::Exception)

from /usr/local/bin/rake:19:in `<main>'

…despite having a rake gem installed. Apparently 1.9.2 comes with a version of rake internally, but is unable to find it for some reason relating to the rake.gemspec file. Remove the file to fix this issue. On Ubuntu Linux 10.04, run:

rm /usr/local/lib/ruby/gems/1.9.1/specifications/rake.gemspec

Notice the “1.9.1” part of the path. Yeah.. it’s weird. But for compatibility reasons your 1.9.2 installation will continue to use a path with 1.9.1 in it. To quote the Ruby 1.9.2 FAQ page:
The standard library is installed in /usr/local/lib/ruby/1.9.1
This version number is “library compatibility version”. Ruby 1.9.2 is mostly compatible with the 1.9.1, so its library is installed in the directory.
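
You can see this compatibility version straight from the interpreter itself; on a stock 1.9.2 build, this quick sanity check should print “1.9.1”:

ruby -e 'puts RbConfig::CONFIG["ruby_version"]'
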
I’m sure there’s a wonderful technical reason for this, but it’s still misleading and confusing as hell. I ended up manually deleting a bunch of stuff I shouldn’t have because I thought I was innocently “cleaning up” after the old version. Whatever. Additional suggestions:
  • Just to keep things clean, you may also want to remove your old Ruby 1.8.x builds. (I recommend doing so unless you have older apps that haven’t moved to 1.9.x yet.)
  • Phusion Passenger seems to work fine on Ubuntu 10.04 with the latest version of Apache 2 as of this writing, though don’t forget to recompile, reinstall, reconfigure and restart apache2 when you do so.
  • Check if you still need rack v1.0.1 installed (for older Rails apps) before nuking everything. 🙁
I need a beer!

3D Desktop Ruby Applications On Linux

Over the past year I’ve put out a few working demos of how to develop full 3D, OpenGL-based OS X applications using Ruby. Most of the comments I’ve received have been positive, but I think the high learning overhead has been the prime limiting factor in adoption. I also decided to focus exclusively on Mac OS X, further limiting the potential audience.

I was pleased to learn that Martin “monkstone” Prout has successfully run the code contained within my Starfield.app (basically a folder of code on an OS X system that looks and behaves like an .exe does on Windows) on a Kubuntu Linux system. I haven’t personally tried to replicate this myself due to a lack of time, but you can read how monkstone did it on his blog.


The $1K CD/DVD/LightScribe Replicator: The DIY Guide To Manufacturing Your Own Discs For Less Than $1 Each

This do-it-yourself replicator features eight Lite-On CD/DVD burners. By flipping the disc over you can burn images onto the top using the drive lasers.

I’ve slowly updated components of The $1K Home Studio over the last few years, but have never had a low-cost, DIY solution for disc replication. After playing with external CD burners and evaluating various proprietary hardware options such as the Aleratec auto-flip burner and MicroBoard tower replicators, amongst many others, I decided that the current commercial solutions are nice, but most definitely overpriced. So I decided to develop my own solution. This custom-built behemoth is built from common off-the-shelf (COTS) hardware from Fry’s Electronics and inexpensive commercial software. It costs less to own than commercially branded replicators, and also functions as a normal desktop computer since it runs Windows 7 and Linux. (I took care to also buy a Gigabyte-brand motherboard that supposedly supports the OSx86 (“hackintosh”) project, but have had little success with the installation.)

Hardware

  • Intel i5 750 64-bit CPU. (Features 4 cores.)
  • 4GB RAM.
  • 8 x (yes, eight) Lite-On CD/DVD 5.25″ SATA burner drives.
  • Gigabyte motherboard with lots of SATA ports.
  • Add-on SATA card. (Most motherboards won’t have enough connectors, especially if you have 8 x burners plus 4 x hard drives. 🙂 )
  • Big-ass power supply. (The first one I bought wouldn’t even boot the thing. I put in a monster and everything started working.)

Software

The point of all these burners is to burn to all of them simultaneously, but Windows 7 and OS X cannot do this out of the box. Only a small subset of CD/DVD burning software on the market supports parallel burning, and some only seem to support multiple burners for specific types of burns. What’s worked best for me so far is…

  • Nero Multimedia Suite 10 for concurrent audio and data burning with multiple burners. You don’t have a lot of easy-to-use alternatives here, and I’ve also noticed a few glitches with Nero. Keep your eye out for sales here and you can pick up a copy dirt cheap.
  • Acoustica CD/DVD Label Maker for concurrent LightScribe replication across multiple burners. Again, not a lot of options here. The free software from LightScribe.com does not support multiple burners, though some vendor-specific bundles seem to. (LaCie’s LightScribe software in particular appears to support simultaneous LightScribe burns, and they also have a Mac version. I would have gone with a Mac-based solution, but 8 x USB 2.0 drives probably would not work so well.)

CDs burned with LightScribe technology. Discs come in many different colors.

I decided to create all my replicated discs using LightScribe technology. This allows me to flip LightScribe CD-Rs upside-down in the burner and use the laser to burn custom graphics onto the top of the disc. I also made the command decision to use COTS CD sleeves instead of CD jewel cases or slimline cases. The plastic ones are more expensive, always crack, and are pretty much useless from the start since most people seem to rip their CDs nowadays anyway. Sleeves protect the disc, come in many colors, are far less expensive, even cheaper in bulk, and perhaps best of all can be printed on directly with an ordinary laser or ink jet printer.

The system runs Windows and Ubuntu. Additional drives are interchanged using hot-swap SATA drive modules.

System Pros

  • Inexpensive initial fixed cost of hardware parts and software licenses.
  • Inexpensive variable cost per disc since LightScribe labeling uses the drive laser instead of ink. There are no costly consumables to replace. (Ordinary LightScribe media purchased in bulk works great.)
  • Quick data, audio and LightScribe replication using 8 concurrent burners.
  • Doable by anyone capable of building a PC, given a little time.
  • Functions beautifully as a normal desktop computer.

System Cons

  • Not completely automated like some commercial units because disc loading, unloading and flipping (if using LightScribe) is a manual process.
  • Still uses CD-Rs. These are not the same as commercially pressed mass media discs, but a lot cheaper.
  • (This one is only applicable to audio.) I’ve yet to find inexpensive parallel burning software that can handle DDP images. (The standard in “Red Book” audio CD mastering.)
  • Since LightScribe labeling uses the drive laser instead of ink, disc labels are grayscale only. (Note: You have a lot of options in disc color, though, so it’s not a big deal. Just use your creativity.)

Replication Process Overview

Label four empty CD pancakes to manage the assembly line replication process. If you don't you'll get your disc piles confused!

My primary purpose for this buildout is to replicate audio CDs as quickly as possible for Sonic Binge Records: the awesome music production company. In particular, I need to quickly replicate a pancake’s worth (usually 25-50) of audio CDs as inexpensively as possible. After much trial and error with the process, this is what I’ve found works best.

  1. Create final CD master image. (For me that’s using WaveBurner on a Mac. For replication purposes it doesn’t really matter as long as the master is good.)
  2. Take four empty CD pancake containers and label them “Blank”, “Burned”, “Labeled”, and “Ready” to create an assembly line process. You can of course save these for future jobs.
  3. Use Nero Burning ROM to replicate batches of 8 at a time. When they’re done, be sure to put them in the “Burned” stack so you don’t get burned discs confused with “Blank” discs.
  4. While they’re burning, create a square grayscale graphic for LightScribe burning. (Free label creator software is available, though anything like Photoshop works too. I usually use a combination of Photoshop and Acoustica.)
  5. Use Acoustica to label batches of 8 at a time. Each batch will take a while. Full-disc burns seem to take around 30 minutes per batch: much longer than the data/audio side of a standard CD-R. Move discs to the “Ready” pile when they’re done. (Note: The “Labeled” pile is for discs that have been LightScribe labeled but not burned with data or audio. You can end up in this situation when using multiple computers to do burning.)
  6. While they’re burning, use your favorite document application to design your printed CD sleeves. I’ve started buying color variety packs in bulk packs of 300 to keep options high and costs down.
  7. Bulk print the entire order of sleeves in a single run. As long as you can set the size of the feeder tray, your existing feeder should work fine. (CAUTION: remember that the “window” is made of plastic, and can melt if exposed to heat. Think twice before trying your laser printer. 🙂 )
  8. Take discs from your “Ready” pile (as they finish getting labeled) and slip them into sleeves to create the final product, suitable for general distribution. The lasered imaging adds a great, distinctive touch, and of course you can get as creative as you want with the sleeves, too.
  9. Done! (aka beer time.)

Costs

  • Fixed: ~$1K for the machine build, with about $400 of that just for the burners. I reused/repurposed parts from old junker machines where I could, and could have saved some money by buying online. I was in a rush and just went to the store.
  • Variable: Roughly $0.40 – $1.00 per disc, depending on the disc quality, packaging, ink etc. you decide to use for each project. (All things considered, the $0.40 version looks pretty decent!)

Closing Thoughts

If you’re a musician without computer skills I would not recommend attempting this project, but if you feel fairly comfortable putting together machines, it’s honestly not that hard. It’s just a PC, after all. (Disclaimer: I do have a degree in Computer Science and Engineering, so my perspective of “not that hard” may be a bit skewed.)

I hope you’ve found this rough how-to guide both inspirational and informative. It’s very useful to have a replication machine handy, and if you’re actively working with people on projects intended for distribution it’s a great investment!

Please use this comments section for all your general comments and questions and I’d be happy to address them. Thanks for reading!


Redmine w/OS X OpenLDAP, Parallels Server and JumpBox

OpenRain used a slew of crappy Trac sites for issue tracking until we switched to Redmine several days ago. The decision came because..

  • Redmine can authenticate off LDAP with trivial configuration.
  • Redmine has multi-project support out-of-the-box.
  • Redmine has some nifty Gantt chart and calendaring schwag and is generally better.
  • Parallels Server (for OS X) is finally available.
  • JumpBox has a beta Redmine VM image available.

If you’ve got an existing LDAP infrastructure, the whole shebang shouldn’t take more than an hour or two to set up.

  1. Install Parallels Server on your OS X Leopard server.
  2. Download the Redmine JumpBox. Generate a new MAC address and boot it. Do the one-page configuration thingy in your browser.
  3. Log into Redmine and create a new “Authentication Mode” set to LDAP. If you’re using the default OpenLDAP schema that ships with Leopard server, enter the attributes like so (redmine.png; see the sketch after this list).
  4. All your users should now be able to log into your Redmine JumpBox using their LDAP credentials! You’ll have to set up your projects, ACLs etc. within Redmine, but that’s some pretty hot shizzle to get running in such a small timeframe.
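
In case that screenshot doesn’t load, the attribute mapping for Leopard’s default OpenLDAP schema looks roughly like this (host and base DN are placeholders for your own domain):

Host: ldap.example.com
Port: 389
Base DN: cn=users,dc=example,dc=com
Login attribute: uid
Firstname attribute: givenName
Lastname attribute: sn
Email attribute: mail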

Mad props to Redmine, Parallels, JumpBox and Apple for further simplifying my business.


Sun Introduces Non-Native (Linux) Zone Support

The “What’s New” document for Solaris 10 8/07 states the update “..includes the tools necessary to install a CentOS 3.5 to 3.8 or Red Hat Enterprise Linux 3.5 to 3.8 inside a non-global zone. Machines running the Solaris OS in either 32-bit or 64-bit mode can execute 32-bit Linux applications.” Additionally, “DTrace can now be used in a non-global zone..” and ZFS gets even cooler. w00t.


OpenSolaris ZFS vs. Linux ext3 RAID5

Preston Says: I asked Dan McClary for a big favor recently: use his general UNIX knowledge and graduate-level statistics voodoo to produce a report highlighting performance characteristic differences between OpenSolaris ZFS and Linux RAID5 on a common, COTS hardware platform. The following analysis is his work, reformatted to fit your screen. You may download the PDF, HTML, graphs and original TeX source here.

A Brief Comparison of I/O Performance for RAIDZ1 and RAID-5 Filesystems
Dan McClary
June 28, 2007

Introduction

The following is a description of results obtained benchmarking I/O performance for two OS/filesystem combinations running identical hardware. The hardware used in the tests is as follows:

  • Motherboard: Asus M2NPV-VM.
  • CPU: AMD Athlon 64 X2 4800+ Dual Core Processor. 2.5GHz, 2 x 512KB, 1GHz Bus
  • Memory: 4 x 1GB via OCZ OCZ2G8002GK DDR2-800 PC2-6400
  • Drives: 4 x 500GB Western Digital Caviar SE 16 WD5000AAKS 7200RPM 16MB Cache SATA 3.0Gb/s

The Linux/RAID-5 combination uses a stock Ubuntu Server Edition installation, running kernel 2.6.19-generic, with RAID-5 configured via mdadm and formatted ext3. The Solaris/RAID-Z1 configuration is a stock installation of Solaris Developer Express Edition with zpool managing the zfs-formatted RAID-Z1 drives. Block size for all relevant tests is 4096 bytes.

Basic I/O testing is conducted using bonnie++ (version 1.03a), tiobench (version 0.3.3-5), and a series of BASH-scripted operations. Tests focus on I/O throughput and CPU usage for operations either much larger than available memory, or for very large numbers of operations on small files. All figures, unless otherwise noted, chart mean performance with 2% deviation for large-file operations and 5% for small-file operations. These bounds well exceed the 95% confidence interval, implying a range of high significance.
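
For reference, a bonnie++ invocation consistent with the parameters reported below (a plausible reconstruction, not necessarily the exact command line used) would look like:

bonnie++ -d /mnt/test -s 15680 -n 100:8192:4096 -u nobody

Here -s 15680 requests 15,680MB working files, and -n 100:8192:4096 creates 100 x 1024 = 102,400 files of between 4KB and 8KB.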

Large-File Operations

In dealing with sequential reads and writes, particularly of large files, the Solaris/RAID-Z1 configuration displays much higher throughput than the Ubuntu/RAID-5 combination. Latency and CPU usage, however, appear to be higher than in the Ubuntu configuration. The reasons for these disparities are not determinable from the tests conducted, though one might venture that the management algorithm used by ZFS and each system’s caching policies may play a part.

picture-2.png

Figures 1, 2, and 3 summarize large-file writing performance in the bonnie++ suite. In large writes, Solaris-ZFS displays marginally higher throughput and occasionally lower CPU usage. However, the disparities are not great enough to make a strict recommendation based solely on large-file writing performance.

picture-4.png

picture-5.png

picture-6.png

picture-7.png

picture-8.png

picture-9.png

Figures 4 and 5 illustrate throughput and CPU usage while reading large files in the bonnie++ suite. Generally, results are consistent between platforms, with the Ubuntu configuration showing a slight edge when reading 15,680MB files (though with an associated drop in CPU efficiency).

tiobench results for random reads and writes given in Tables 3 and 4 show the Ubuntu/RAID-5 configuration displaying both higher throughput and greater CPU efficiency. However, these results seem somewhat questionable given the results in §3.

picture-10.png

picture-11.png

Small-File Operations

In examining the performance of both configurations on small files, both in the bonnie++ suite and from shell-executed commands, the most obvious statement that can be made is that the Solaris configuration displays greater CPU usage. This, though, may not be indicative of poor performance. Instead, it may be the result of an aggressive caching or other kernel-level policies. A more detailed study would be required to determine both the causes and effects of this result. In each test, 102,400 files of either 8 or 4KB were created.

picture-12.png

picture-13.png

Figures 6(a)-6(c) and 7(a)-7(c) illustrate bonnie++ performance for both configurations. In contrast to the tiobench results, the Solaris configuration generally displays slightly higher throughput (on the order of 1-2MB/s) than its counterpart. However, as previously noted, CPU usage is much higher.

Finally, Tables 5-6 list measured times as given by the standard Unix command time when measuring command execution. In these results, there are some surprises. The Ubuntu configuration performs somewhat faster when executing a large write (using the command dd). However, the Solaris configuration is much faster when dealing with 100,000 sequential 8KB files. For reference, all file creation is done via dd, copying by cp and deletion by rm.
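
A sketch of what those shell-scripted operations plausibly look like (reconstructed from the description above, not the original test script):

# Create 102,400 8KB files via dd, then time a bulk copy and removal.
mkdir -p smallfiles
time ( for i in $(seq 1 102400); do dd if=/dev/zero of=smallfiles/f$i bs=8k count=1 2>/dev/null; done )
time cp -r smallfiles smallfiles-copy
time rm -rf smallfiles smallfiles-copy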

picture-14.png

picture-15.png

Conclusions

Few overarching conclusions can be drawn from the limited results of this study. Certainly, there are situations in which the Solaris/RAID-Z1 configuration appears to outperform the Ubuntu/RAID-5 configuration. Many questions remain regarding the large discrepancy in CPU usage for small-file operations. Likewise, the Ubuntu/RAID-5 configuration appears to perform slightly better in certain situations, though not overwhelmingly so. At best, under these default configurations, one can say that overall the Solaris configuration performs no worse, and that it might perform better under live operating conditions. The latter, though, is largely speculation.

Indeed, from the analyst’s point of view, both configurations show reasonable performance. The desire to deploy either configuration in an enterprise setting suggests that significant-factor studies and robust parameter designs be conducted on, if not both candidates, whichever is most likely to be deployed. These studies would provide insight into why the discrepancies in the current study exist, and more importantly, achieve optimized performance in the presence of significant uncontrollable factors (e.g. variable request-load).

Preston Says: Thanks for the outstanding work, Dan!


20 Steps To Xen: A Quick Xen Tutorial For Ubuntu 7.04 x64 Server

This is an abbreviated and simplified version of a more official document. Run these either as root or with the “sudo” command..

  1. apt-get install ubuntu-xen-desktop-amd64 to install a new Xen kernel and various other tools. Apparently we’re supposed to use the “server” version instead, but it didn’t show up in the repository. Oh well.
  2. apt-get install debootstrap to install the “debootstrap” system bootstrapping tool for you.
  3. reboot into the new kernel.
  4. xm list to make sure Domain-0 shows up. Domain-0 represents the host system.
  5. Edit /etc/xen/xend-config.sxp and uncomment “(network-script network-bridge)”. Also comment out “(network-script network-dummy)”.
  6. xend restart to restart the xen daemon.
  7. mkdir -p /xen/slave1 to create a mountpoint for the slave system disk.
  8. dd if=/dev/zero of=/xen/slave1.ext3 bs=1M count=512 to create a 512MB “disk” as a normal file.
  9. mkfs.ext3 /xen/slave1.ext3 to create a file system in said empty file.
  10. mount -o loop -t ext3 /xen/slave1.ext3 /xen/slave1 to manually mount the new filesystem to its mount point.
  11. debootstrap --arch amd64 edgy /xen/slave1 to install a bare-bones edgy system onto the new file system.
  12. cp -a /lib/modules/2.6.19-4-generic-amd64/ /xen/slave1/lib/modules/ to install the host’s kernel modules into the new system.
  13. Edit /xen/slave1/etc/network/interfaces. It should look similar to..
    auto lo
    iface lo inet loopback
    auto eth0
    iface eth0 inet static
    gateway 192.168.1.110
    address 192.168.1.111
    netmask 255.255.255.0
    .. 

    where “gateway” is the host machine’s IP address, and “address” is a unique IP address for the slave machine.

  14. Update /xen/slave1/etc/hostname with whatever you want its host name to be.
  15. Update /xen/slave1/etc/hosts with all your IP addresses.
  16. Update /xen/slave1/etc/fstab to mount stuff on boot, like so..
    proc /proc proc defaults 0 0
    /dev/hda1 / ext3 defaults,errors=remount-ro 0 1
  17. umount /xen/slave1 to unmount the file system.
  18. Create /etc/xen/edgy-guest.cfg to configure the host to start the guest, like so..

    kernel = "/boot/vmlinuz-2.6.19-4-generic-amd64"
    ramdisk = "/boot/initrd.img-2.6.19-4-generic-amd64"
    builder = 'linux'
    memory = 512
    name = "edgy-slave1"
    vcpus = 1
    vif = [ 'bridge=xenbr0' ]
    disk = [ 'file:/xen/slave1.ext3,ioemu:hda1,w' ]
    root = "/dev/hda1 ro"
  19. xm create edgy-guest.cfg to create and start the new domain.
  20. xm console edgy-slave1 to establish a console to the new domain. (Hit CTRL+] to exit.)

You should now be able to log in via the xm console as root, and ping the guest on 192.168.1.111 (or whatever its IP address is). w00t!
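
When you’re done, the same xm tool handles the rest of the domain lifecycle:

xm shutdown edgy-slave1 to gracefully stop the guest.
xm destroy edgy-slave1 to hard-stop it if it hangs.
xm list to confirm it’s gone.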