Linux Clustering with openMosix

The hardest thing is to go to sleep at night, when there are so many urgent things needing to be done. A huge gap exists between what we know is possible with today's machines and what we have so far been able to finish.

-Donald E. Knuth

Introduction

Supercomputer is a generic term for a computer that performs far better than an ordinary computer. Clustering technologies allow two or more networked systems (called "nodes") to combine their computing resources. Software is an integral part of any cluster. Support for clustering can be built directly into the operating system, or it may sit above the operating system at the application level, often in user space. The primary drawback of the second approach is that it requires specially designed software, written with explicit PVM (Parallel Virtual Machine) or MPI (Message Passing Interface) support. When clustering support is part of the operating system, all nodes in the cluster need identical or nearly identical kernels; this is called a single system image (SSI). There is then no need to change applications or even link them with any special library. openMosix is a typical example of an SSI system. The simplest approach is a symmetric cluster, in which each node can also function as an individual computer. A typical setup is shown in Figure 1.

Figure 1: A typical symmetric cluster setup.

Overview of openMosix

The openMosix project originated as a fork of the earlier MOSIX (Multicomputer Operating System for Unix) project. MOSIX started in 1981 at the Hebrew University of Jerusalem as a research project. It was originally developed on BSD systems and was ported to Linux in 1999. In 2002 Moshe Bar, the MOSIX project co-manager, started the openMosix project after the MOSIX project lead opted for a non-GPL license. The openMosix project was officially closed on March 1, 2008; source code and mail archives continue to be available from SourceForge. The original MOSIX project is still quite active under the direction of Amnon Barak, and MOSIX Version 2 (MOSIX2) is a viable alternative that can be obtained at no cost for educational purposes. The openMosix software itself includes both a set of kernel patches and support tools. The patches extend the kernel to provide support for moving processes among machines in the cluster; process migration is typically totally transparent to the user.

Cluster Planning

The project is aimed at building an openMosix cluster and demonstrating its capabilities. The following methodology has been adopted.

  • Installing a basic cluster requires at least two networked machines. The project uses two computers connected using Fast Ethernet (100 Mbps).
  • The project uses the Red Hat Linux 9 distribution. Its default kernel is 2.4.20-8, which is close to the 2.4.26 kernel used with openMosix later in the project. Installation instructions can be found in [7].
  • openMosix is considered stable on the Linux 2.4.x kernel for the x86 architecture. The port to the Linux 2.6 kernel remained in the alpha stage. The LinuxPMI project is continuing development of the former openMosix code but has not yet released stable patches for the 2.6 series.
  • The 2.4 series of the Linux kernel does not support SATA drives, so installing Red Hat Linux 9 requires PCs with PATA IDE drives. The beta version, openmosix-kernel-2.6.15-openmosixbeta.i686, also has issues with some SATA hardware. For the 2.6 kernel series, MOSIX2 is not freely available over the Internet.
  • An openMosix-enabled live CD, named bccd-2.2.1c14-bloat, has been downloaded and checked. BCCD was developed by Paul Gray as an educational tool to facilitate instruction of parallel computing aspects and paradigms. It uses openMosix-2.4.26 and runs as a non-destructive overlay on top of the current hardware.

Installing Binary openMosix Packages

  • Binary and source RPMs are available from the SourceForge site. Because SMP-capable processors were available in the VLSI Lab, openmosix-kernel-smp-2.4.24-openmosix2.i686.rpm has been used. As an alternative, openmosix-kernel-2.4.26-openmosix1.i686.rpm has also been installed. After downloading, run rpm -ivh package.rpm as root to install.
  • The kernel has been installed in the /boot directory and an appropriate entry has been added to the GRUB menu; both steps are sketched below.
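The session below sketches the binary installation. The GRUB entry is illustrative: the exact kernel image and initrd file names should be checked against what the RPM actually places in /boot.

[root@tux root]# rpm -ivh openmosix-kernel-smp-2.4.24-openmosix2.i686.rpm
[root@tux root]# ls /boot | grep openmosix

A matching entry in /boot/grub/grub.conf (file names assumed):

title openMosix (2.4.24-openmosix2smp)
        root (hd0,0)
        kernel /vmlinuz-2.4.24-openmosix2smp ro root=LABEL=/
        initrd /initrd-2.4.24-openmosix2smp.img

Making this entry the GRUB default causes the machine to boot into the openMosix kernel automatically.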

Installing openMosix by Recompiling

Despite its large code base (over seven million lines of code), the Linux kernel is the most flexible operating system that has ever been created. By customizing the kernel for some specific environment, it is possible to create something that is both smaller and faster than the kernel provided by most Linux distributions [6].

  • The set of openMosix patches for the 2.4.26 kernel version was downloaded from the SourceForge site. The matching kernel source was downloaded, copied to /usr/src, and compiled. The command session is given below.

[root@tux root]# cd /usr/src/
[root@tux src]# tar xjvf linux-2.4.26.tar.bz2
[root@tux src]# cd linux-2.4.26
[root@tux linux-2.4.26]# cat openMosix-2.4.26-1.bz2 | bzip2 -d | patch -p1 -l
patching file arch/i386/config.in
patching file arch/i386/defconfig
patching file arch/i386/kernel/entry.S
...
patching file net/sunrpc/sched.c
patching file net/sunrpc/svc.c
patching file openMosix_MAINTAINERS

  • The next step is to create the appropriate configuration file. The output of the openMosix menus produced by the make menuconfig command is shown in Figure 2. Configuration parameters are arranged in groups by functionality.

Figure 2: The openMosix menu produced by make menuconfig.

  • Alternatively, make xconfig can be used, which requires the X Window System and Tcl/Tk libraries. The main window is shown in Figure 3. The openMosix menu window is shown in Figure 4.
  • After configuration, it is time to make the kernel, using the make dep, make clean, make bzImage, make modules, and make modules_install commands (the full sequence is sketched after this list). These commands take a while and produce a lot of output, which has been omitted here. Further details can be found in [4], [6], and [7]. The minimum options are shown in Figures 2 and 3.

Figure 3: The make xconfig main window.

Figure 4: The openMosix menu window under make xconfig.

  • As currently installed, the next reboot will give the option of starting openMosix, but it will not be the default kernel.
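The build-and-install sequence is sketched below for the patched 2.4.26 tree. The file names used when copying the image into /boot are illustrative choices; they must simply match the corresponding GRUB entry.

[root@tux linux-2.4.26]# make dep              # resolve source dependencies
[root@tux linux-2.4.26]# make clean            # remove stale object files
[root@tux linux-2.4.26]# make bzImage          # build the compressed kernel image
[root@tux linux-2.4.26]# make modules          # build loadable modules
[root@tux linux-2.4.26]# make modules_install  # install modules under /lib/modules
[root@tux linux-2.4.26]# cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.26-openmosix1
[root@tux linux-2.4.26]# cp System.map /boot/System.map-2.4.26-openmosix1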

Configuring openMosix

While the installation takes care of the steps that can be automated, a few changes have to be made manually to get openMosix running. These are very straightforward and are given below.

  • openMosix uses UDP ports in the 5000-5700 range, UDP port 5428, and TCP ports 723 and 4660. It will also need to allow any other related traffic, such as NFS or SSH traffic. The firewall was configured to allow all such traffic (an iptables sketch is given after this list).
  • The openMosix userland tools are available at SourceForge site. The openmosix-tools-0.3.6-2 has been installed. These are command line tools for managing and monitoring the openMosix cluster.The openmosixview-1.5 has also been installed. It is a GUI frontend toopenmosix-tools mentioned above.
  • openMosix needs to know about the other machines in the cluster. For small, static clusters, it is easiest to edit the /etc/hosts file on each cluster node. A typical example is shown below.

127.0.0.1    localhost
192.168.1.1  om1
192.168.1.2  om2

  • The configuration for /etc/openmosix.map is shown below. For a simple cluster, this file can be very short. In its simplest form it has one entry for each machine, where each entry consists of three fields: a unique node number (starting at 1) for each machine, the machine's IP address, and a 1 indicating that the entry covers a single machine.

1  192.168.1.1  1
2  192.168.1.2  1

  • It is also possible to have a single entry for a range of machines that have contiguous IP addresses. In that case, the first two fields are the same as before, i.e. the node number and the IP address of the first machine, while the third field is the number of machines in the range. The address can be an IP number or a device name from the /etc/hosts file. For example, the following single entry describes both machines in this cluster.

1  192.168.1.1  2
  • There is also a configuration file, /etc/openmosix/openmosix.config. This file is heavily commented, so it should be clear what might need to be changed, if anything. It can be ignored for most small clusters that use a map file.
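As an illustration of the firewall configuration mentioned in the first item above, the following iptables rules open the required openMosix ports. The cluster subnet 192.168.1.0/24 is an assumption taken from the map file, and NFS or SSH traffic would be allowed with analogous rules.

[root@tux root]# iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 5000:5700 -j ACCEPT
[root@tux root]# iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 5428 -j ACCEPT
[root@tux root]# iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 723 -j ACCEPT
[root@tux root]# iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 4660 -j ACCEPT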

openMosix Up and Running

After configuration, all nodes need to be up and running openMosix. The steps are described below.

  • The setpe command can be used to manually configure a node. As root, use /sbin/setpe -w -f /etc/openmosix.map to start openMosix with a specific configuration file (a short session is sketched after this list).
  • The openMosixView extends the basic functionality of the user tools while providing a spiffy X-based GUI. However, the basic user tools must be installed for openMosixView to work. openMosixView is actually seven applications that can be invoked from the main administration application.
  • Once installed, we are basically ready to run. The main application window is shown in Figure 5. This view displays information for each of the two nodes in the cluster. The first column displays the node's status by node number; the background colour is green if the node is available or red if it is unavailable. The second column, buttons with IP numbers, allows individual systems to be configured.
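A minimal start-up session is sketched below. The init script name and the mosctl diagnostics are assumed from the openmosix-tools package and may vary with the tools version.

[root@tux root]# /sbin/setpe -w -f /etc/openmosix.map    # load the node map and start openMosix
[root@tux root]# /etc/init.d/openmosix start             # alternative: the openmosix-tools init script
[root@tux root]# mosctl status                           # report this node's openMosix status
[root@tux root]# mosmon                                  # text-based load monitor for all nodes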

Figure 5: The openMosixView main application window.

Testing openMosix

The openMosix cluster was put to the test using a CPU stress test. openMosixView provides a number of additional tools. These include a 3D process viewer (3dmosmon), a data collection daemon (openMosixcollector), an analyzer (openMosixanalyzer), an application for viewing process history (openMosixHistory), and a migration monitor and controller (openMosixmigmon) that supports drag-and-drop control of process migration. Figure 6 shows a pictorial view of openMosixmigmon.

Figure 6: The openMosixmigmon migration monitor and controller.
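The CPU stress test itself can be as simple as a shell loop that forks CPU-bound processes; the sketch below uses awk purely as an illustrative CPU burner. openMosix migrates the resulting processes to the less loaded node without any change to the code.

[root@tux root]# for i in 1 2 3 4; do awk 'BEGIN { for (i = 0; i < 100000000; i++) x += i }' & done
[root@tux root]# mosmon    # watch the load spread across both nodes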

  • Figures 7 through 9 show the output of openMosixanalyzer and openMosixHistory for the tested load.

Figure 7

Figure 8

Figure 9

Conclusion

openMosix is a powerful solution for intelligently distributing work across a cluster of Linux machines. In best-case scenarios, openMosix scales almost linearly with the CPU horsepower of the cluster, and it has a very low remote execution overhead to boot. To develop a cluster on the 2.6 kernel series, the unfinished patches from the LinuxPMI project can be used. A fully functional Linux workstation or cluster node can easily run without hard drives, CD-ROMs, or floppies, which saves administration time and maintenance; using a Preboot eXecution Environment (PXE) capable NIC, a diskless client can be built using the Trivial File Transfer Protocol (TFTP). Moreover, a heterogeneous cluster with coLinux and openMosix can be built that allows Windows machines to run an openMosix-enabled Linux kernel, with the Windows machines acting as cluster agents [3].

openMosix has a lot to recommend it. Not having to change your application code is probably the biggest advantage. As a control mechanism, it provides both transparency to the casual user and a high degree of control for the more experienced user. With precompiled kernels, setup is very straightforward and goes quickly.

Resources

[1] Daniel Robbins, "Advantages of openMosix on IBM xSeries", three-part series, available online.

[2] Kris Buytaert, "The openMosix HOWTO", available online.

[3] Mulyadi Santosa and Andreas Schaefer, "Build a heterogeneous cluster with coLinux and openMosix", available online.

[4] Joseph D. Sloan, "High Performance Linux Clusters with OSCAR, Rocks, OpenMosix, and MPI", O'Reilly Media, 2004.

[5] Charles Bookman, "Linux Clustering: Building and Maintaining Linux Clusters", New Riders Publishing, 2002.

[6] Greg Kroah-Hartman, "Linux Kernel in a Nutshell", O'Reilly Media, 2007. Available online.

[7] Michael Jang, "Mastering Red Hat Linux 9", Sybex Inc., USA, 2002.
