(me@ram.org)
This document describes how I set up my Linux computing clusters for the high-performance computing I need for my research.
Use the information below at your own risk. I disclaim all responsibility for anything you may do after reading this HOWTO. The latest version of this HOWTO will always be available at http://www.ram.org/computing/linux/linux_cluster.html.
Unlike other documentation that talks about setting up clusters in a general way, this is a specific description of how our lab is set up, covering not only the compute aspects but also the desktop, laptop, and public server aspects. This is done mainly for local use, but I put it up on the web since I received several e-mail messages based on my newsgroup query requesting the same information. Its main use as it stands is as a report on what kind of hardware works well with Linux and what kind doesn't.
This section covers the hardware choices I've made. Unless noted, assume that everything works really well.
Hardware installation is also fairly straightforward unless otherwise noted, with most of the details covered by the manuals.
32 machines each have the following setup:
1 external server with the following setup:
4 desktops with the following setup:
2 desktops with the following setup:
2 desktops with the following setup:
Backup:
Monitors:
We use KVM switches with a cheap monitor to connect up and "look" at all the machines:
While this is a nice solution, I think it's somewhat unnecessary. What we really need is a small hand-held monitor that can plug into the back of a PC (operated with a stylus, like the Palm). I don't plan to buy more monitor switches/KVM cables.
Networking is important:
Our vendor is Hard Drives Northwest (http://www.hdnw.com). For each compute node in our cluster (containing two processors), we paid about $1500, including taxes. Generally, our goal is to keep the cost of each node below $2000 (which is what our desktop machines cost).
Specifically, we use the 2.2.17-14 kernel based on the KRUD 7.0 distribution. We use our own software for parallelising applications but have experimented with PVM and MPI. In my view, the overhead for these pre-packaged programs is too high.
Linux is freely copiable.
This section describes disk partitioning strategies; a sketch of how one of these layouts might be created non-interactively follows the lists below.
farm/cluster machines:
hda1 - swap (2 * RAM)
hda2 - / (remaining disk space)
hdb1 - /maxa (total disk)
desktops (without windows):
hda1 - swap (2 * RAM)
hda2 - / (4 GB)
hda3 - /home (remaining disk space)
hdb1 - /maxa (total disk)
hdd1 - /maxb (total disk)
desktops (with windows):
hda1 - /win (total disk)
hdb1 - swap (2 * RAM)
hdb2 - / (4 GB)
hdb3 - /home (remaining disk space)
hdd1 - /maxa (total disk)
laptops (single disk):
hda1 - /win (half the total disk size)
hda2 - swap (2 * RAM)
hda3 - / (4 GB)
hda4 - /home (remaining disk space)
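As a rough illustration, the farm/cluster layout above could be created non-interactively with sfdisk and mke2fs along the following lines. The 512 MB RAM figure (so swap = 2 * RAM = 1024 MB) and the device names are assumptions for the sake of the example; adjust them to your hardware.

# Sketch only: hda gets swap (2 * RAM, here 1024 MB) and / on the rest.
sfdisk -uM /dev/hda << EOF
,1024,S
,,L
EOF

# hdb gets a single partition spanning the whole disk for /maxa.
sfdisk -uM /dev/hdb << EOF
,,L
EOF

mkswap /dev/hda1
mke2fs /dev/hda2
mke2fs /dev/hdb1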
Install a minimal set of packages for the farm. Users are allowed to configure desktops as they wish.
I believe in having a completely distributed system. This means each machine contains a copy of the operating system. Installing the OS on each machine manually is cumbersome. To streamline this process, I first set up and install one machine exactly the way I want it. I then create a tarred and gzipped file of the entire system and place it on a CD-ROM, which I then clone onto each machine in my cluster.
The command I use to create the tar file is as follows:
tar -czvlps --same-owner --atime-preserve -f /maxa/slash.tgz /
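For reference, the single-letter flags here mean: -c create an archive, -z filter it through gzip, -v be verbose, -l stay on the local file system (so /proc and any network mounts are not archived; newer GNU tar versions spell this --one-file-system), -p preserve permissions, and -s process names in the same order as the archive; --same-owner and --atime-preserve keep ownership and access times intact.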
I have a script called go that takes a hostname and IP address as its arguments, untars the slash.tgz file on the CD-ROM, and replaces the hostname and IP address in the appropriate locations. A version of the go script and the input files for it can be accessed at http://www.ram.org/computing/linux/linux_cluster/. This script will have to be edited based on your cluster design.
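I haven't reproduced the real go script here, but a minimal sketch of the idea might look like the following. It assumes a Red Hat style layout (which is what KRUD is), the new root file system mounted on /mnt/disk, and the CD-ROM mounted on /mnt/cdrom; the paths and file locations are assumptions for illustration, not the actual script.

#!/bin/sh
# go <hostname> <ip-address> -- illustrative sketch only
HOST=$1
IP=$2

# Unpack the master image onto the freshly partitioned disk.
tar -xzvps --same-owner --atime-preserve -f /mnt/cdrom/slash.tgz -C /mnt/disk

# Stamp in the per-machine identity (Red Hat style locations).
NET=/mnt/disk/etc/sysconfig/network
IFCFG=/mnt/disk/etc/sysconfig/network-scripts/ifcfg-eth0
sed "s/^HOSTNAME=.*/HOSTNAME=$HOST/" $NET > $NET.new && mv $NET.new $NET
sed "s/^IPADDR=.*/IPADDR=$IP/" $IFCFG > $IFCFG.new && mv $IFCFG.new $IFCFG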
To make this work, I also use Tom's Root Boot package (http://www.toms.net/rb/) to boot the machine and clone the system. The go script can be placed on a CD-ROM or on the floppy containing Tom's Root Boot package (you need to delete a few programs from this package, since the floppy disk is stretched to capacity).
More conveniently, you could burn a bootable CD-ROM containing Tom's Root Boot package, including the go script, and the tgz file containing the system you wish to clone. You can also edit Tom's Root Boot's init scripts so that it directly executes the go script (you will still have to set IP addresses if you don't use DHCP).
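For illustration, mastering such a bootable CD-ROM might look roughly like this with mkisofs and cdrecord. Note that El Torito floppy emulation expects a standard-sized boot image (e.g. 1.44 MB or 2.88 MB), and the cdrecord device address below is an assumption; check yours with cdrecord -scanbus.

# Put a boot-floppy image, the go script, and the system image in one tree.
mkdir cd_dir
dd if=/dev/fd0 of=cd_dir/boot.img   # image of the boot floppy
cp go slash.tgz cd_dir/

# Master a bootable ISO and burn it.
mkisofs -b boot.img -c boot.catalog -o clone.iso cd_dir
cdrecord dev=0,0,0 speed=4 clone.iso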
Thus you can develop a system where all you have to do is insert a CD-ROM, turn on the machine, have a cup of coffee (or a can of Coke), and come back to see a full clone. You then repeat this process for as many machines as you have. This procedure has worked extremely well for me, and if you have someone else actually doing the work (of inserting and removing CD-ROMs), then it's ideal.
If you have DHCP set up, then you don't need to reset the IP address, and that part can be removed from the go script.
DHCP has the advantage that you don't muck around with IP addresses at all, provided the DHCP server is configured appropriately. It has the disadvantage that it relies on a centralised server (and, like I said, I tend to distribute things as much as possible). Also, linking hardware ethernet addresses to IP addresses can be inconvenient if you wish to replace machines or change hostnames routinely.
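For example, with ISC dhcpd, tying a hardware ethernet address to a fixed IP address and hostname looks roughly like this (the names and addresses below are made up):

# Excerpt from /etc/dhcpd.conf -- one such entry per cluster node.
host node01 {
    hardware ethernet 00:50:04:aa:bb:01;
    fixed-address 192.168.1.101;
    option host-name "node01";
}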
This section is still being developed as the usage on my cluster evolves, but so far we tend to write our own sets of message passing routines to communicate between processes on different machines.
Many applications, particularly in the computational genomics areas, are massively and trivially parallelisable, meaning that perfect distribution can be achieved by spreading tasks equally across the machines (for example, when analysing a whole genome using a single gene technique, each processor can work on one gene at a time independent of all the other processors).
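As a trivial sketch of this kind of distribution, you could deal the genes out to the nodes round-robin and start one job per machine. The analyse program, file names, and node names below are placeholders, and the work directory is assumed to be visible to each node (copied over or shared):

#!/bin/sh
# Deal the genes out to 32 nodes, round-robin.
i=0
while read gene; do
    i=$(( i % 32 + 1 ))
    echo "$gene" >> work/node$i.list
done < genes.txt

# Start one analysis per node and wait for them all to finish.
n=1
while [ $n -le 32 ]; do
    rsh node$n "analyse < work/node$n.list" &
    n=$(( n + 1 ))
done
wait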
So far we have not found the need to use a professional queueing system, but obviously that is highly dependent on the type of applications you wish to run.
The following people have been helpful in getting this HOWTO done:
The following documents may prove useful to you; they are links to sources that make use of high-performance computing clusters: