[Pipet Devel] Linux Clusters vs SMP

Lapointe, David David.Lapointe at umassmed.edu
Mon Dec 6 13:21:40 EST 1999


There is an interesting discussion about SMP vs. Beowulf going on in
bionet.software.

Here's an attachment (I missed the first article, by David Mathog):

 <<smpbeowulf.txt>> 

David Lapointe, Ph.D.
Research Computing Manager
6-5141
"What we obtain too cheap, we esteem too lightly." - T. Paine

-------------- next part --------------
From: wrp at alpha0.bioch.virginia.edu (William R. Pearson)
Subject: Re: SMP vs. Beowulf?
Newsgroups: bionet.software
Date: 02 Dec 1999 15:18:22 -0500
Organization: University of Virginia


We have not looked into SMP vs Beowulf exhaustively, but we have quite
a bit of experience.

(1) SMP is far easier to configure and run than PVM (or MPI or
    others). You just run the program; if it's threaded for SMP, it runs
    faster.  SMP programs are also much easier to develop and debug
    (a minimal sketch of the threaded approach follows after point 3).

(2) Our current PVM implementation is not as CPU efficient as spawning
    a bunch of threaded fasta33_t runs when the algorithm is fast.
    For Smith-Waterman, which is compute bound, they are equally
    efficient.  In line with point (1), I think it is easier to
    improve the performance of an SMP program.  I don't think this is
    an inherent shortcoming of PVM; it reflects the fact that our PVM
    implementation (and very primitive scheduling system) was built
    when machines and interconnections were much slower.

(3) However, we have not yet found a version of Linux Pthreads that
    works 100% of the time.  With the kernel and C libraries that we
    use, we see failures which are almost certainly caused by Linux
    Pthreads.  (We never see them in any other environment, and we
    don't see them unthreaded.)  Linux PVM is very reliable.
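
To make point (1) concrete, here is a minimal POSIX-threads sketch of the
threaded SMP approach: the database is split into equal slices and each
thread scans one slice on the same machine.  The thread count, database
size, and search_chunk() routine are placeholders for illustration, not
code from fasta33_t.

/* Threaded SMP sketch: divide the work among NTHREADS worker threads.
 * Compile with: cc -O2 smp_sketch.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4      /* assumption: a 4-CPU SMP box */
#define DB_SIZE  100000 /* assumption: number of database entries */

struct slice { int first, last; long hits; };

/* Placeholder for the real per-entry comparison work. */
static long search_chunk(int first, int last)
{
    long hits = 0;
    for (int i = first; i < last; i++)
        hits += (i % 97 == 0);          /* stand-in for scoring one entry */
    return hits;
}

static void *worker(void *arg)
{
    struct slice *s = arg;
    s->hits = search_chunk(s->first, s->last);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice part[NTHREADS];
    long total = 0;

    for (int t = 0; t < NTHREADS; t++) {
        part[t].first = t * DB_SIZE / NTHREADS;
        part[t].last  = (t + 1) * DB_SIZE / NTHREADS;
        pthread_create(&tid[t], NULL, worker, &part[t]);
    }
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += part[t].hits;
    }
    printf("total hits: %ld\n", total);
    return 0;
}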

So we use both.  We use PVM for genome-vs-genome Smith-Waterman
searches, and we use SMP threaded versions for our WWW
server. Starting up PVM (or any other system that spawns large numbers
of jobs on other machines) has a high overhead, which isn't worth the
cost when the search will be done in a few minutes - we don't see
nearly as much overhead with SMP machines.  But large SMP machines are
considerably more expensive.  A cost-effective solution is a WWW
server that sends its searches to a bank of 1-CPU or 2-CPU machines.

Bill Pearson

############
From: Tim Cutts <timc at chiark.greenend.org.uk>
Subject: Re: SMP vs. Beowulf?
Newsgroups: bionet.software
Date: 03 Dec 1999 11:30:28 +0000 (GMT)
Organization: Linux Unlimited

William R. Pearson <wrp at alpha0.bioch.virginia.edu> wrote:
>
>We have not looked into SMP vs Beowulf exhaustively, but we have quite
>a bit of experience.
>
>(1) SMP is far easier to configure and run than PVM (or MPI or
>    others). You just run the program; if it's threaded for SMP, it runs
>    faster.  SMP programs are also much easier to develop and debug.

There are a couple of points to make here.  1)  MPI is far more
efficient than PVM.  No-one should be using PVM these days.  2) MPI is
more flexible than threads in that an MPI version of a program can still
be run on an SMP machine, as well as on a distributed network.
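
To illustrate the second point, here is a minimal, hypothetical MPI program
in C: the same binary runs on one SMP machine or across a farm, and only
the mpirun command line changes.  The work loop and reduction are
placeholders rather than BLAST or FASTA code, and the hostfile option name
varies between MPI implementations.

/* Build and run (exact commands depend on the MPI installation):
 *   mpicc mpi_sketch.c -o mpi_sketch
 *   mpirun -np 4 ./mpi_sketch                  # 4 ranks on one SMP machine
 *   mpirun -np 4 -hostfile nodes ./mpi_sketch  # 4 ranks spread over a farm
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    long local = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank handles an equal share of 1,000,000 dummy work items. */
    for (long i = rank; i < 1000000; i += size)
        local += (i % 97 == 0);               /* stand-in for real scoring */

    /* Combine the per-rank counts on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total across %d ranks: %ld\n", size, total);

    MPI_Finalize();
    return 0;
}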

Programs like BLAST and FASTA have a problem in that their I/O
requirements are large, and this can be a real performance problem on a
distributed network.

For example, you could think of implementing your parallel program by
giving each MPI process part of the database to work on.  The problem
there is that you have a large overhead in getting the database to the
processor.  Ethernet is too slow, and will destroy any performance gain
from the parallel code.
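
To see where that overhead lands, here is a sketch of that approach with
made-up sizes and no real search code: rank 0 holds the whole database and
scatters one chunk to every MPI rank before any searching starts, and the
single MPI_Scatter call is where the network transfer cost is paid.

/* Partitioned-database sketch: distribute equal chunks of a 64 MB
 * "database" (hypothetical size) from rank 0 to all ranks.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define DB_BYTES (64 * 1024 * 1024)   /* assumption: 64 MB database */

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = DB_BYTES / size;
    char *db = NULL;
    char *my_part = malloc(chunk);

    if (rank == 0)
        db = calloc(DB_BYTES, 1);     /* stand-in for reading the real file */

    /* Ship one chunk to every rank -- over Ethernet this transfer is the
     * overhead described above. */
    MPI_Scatter(db, chunk, MPI_CHAR, my_part, chunk, MPI_CHAR,
                0, MPI_COMM_WORLD);

    /* ... each rank would now search my_part against the query ... */

    if (rank == 0)
        printf("scattered %d bytes to each of %d ranks\n", chunk, size);

    free(my_part);
    free(db);
    MPI_Finalize();
    return 0;
}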

A better solution, easier to implement, and probably more useful for
most purposes, is a workstation farm in which each node holds a local
copy of all the target databases and runs a normal single-threaded
blast.  For large scale work, you typically want to blast lots of
sequences against several databases, so such coarse-grained
parallelisation is fine.  You just need some way of distributing the
blast jobs to your farm.  You can either do this with some fairly
trivial perl scripting, or you can use a more flexible commercial
offering.  I can highly recommend Platform Computing's LSF package.
It's expensive, but it is extremely good at managing workstation farms,
in particular at cycle stealing from machines when they're idle.
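
For the do-it-yourself route, the sketch below (written in C here rather
than the Perl mentioned above) shows the basic idea: fork one child per
farm node and have each child push its share of the query files to that
node with rsh.  The node names, file paths and blastall command line are
invented for the example; a real script would also check exit codes and
gather up the output files.

/* Trivial farm dispatcher sketch: round-robin the queries over the nodes,
 * one child process per node, one remote blast run at a time per node.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static const char *nodes[]   = { "farm01", "farm02", "farm03", "farm04" };
static const char *queries[] = { "q001.fa", "q002.fa", "q003.fa",
                                 "q004.fa", "q005.fa", "q006.fa" };

int main(void)
{
    int nnodes   = sizeof(nodes)   / sizeof(nodes[0]);
    int nqueries = sizeof(queries) / sizeof(queries[0]);

    for (int n = 0; n < nnodes; n++) {
        if (fork() == 0) {                    /* one child per node */
            for (int q = n; q < nqueries; q += nnodes) {
                char cmd[512];
                snprintf(cmd, sizeof(cmd),
                         "rsh %s blastall -p blastp -d /local/db/nr "
                         "-i /shared/%s -o /shared/%s.out",
                         nodes[n], queries[q], queries[q]);
                system(cmd);                  /* run one search, then the next */
            }
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                    /* wait for every node to finish */
        ;
    return 0;
}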

Using LSF at the University of Cambridge, I got 100% CPU utilisation on
a 20-workstation farm.  These were interactive workstations too; people
doing NMR spectrum assignment at the workstations weren't even aware
that their machines were also performing highly CPU-intensive analysis
jobs in the background.  Efficient use of the workstations like this
ultimately saved money, since the group realised it no longer needed to
buy further machines.

Tim.

##########
From: Piotr Kozbial <piotrk at ibb.waw.pl>
Subject: Re: SMP vs. Beowulf?
Newsgroups: bionet.software
Date: Sat, 04 Dec 1999 15:11:40 +0100
Organization: http://news.icm.edu.pl/
Reply-To: piotrk-NO at SPAMM-ibb.waw.pl

There are other kinds of Linux clusters. You can read the discussion
"Choosing the Right Cluster System" at
http://slashdot.org/article.pl?sid=99/11/12/0354238

For example (posted by SEWilco):

Beowulf is one of a family of parallel programming API tools. Programs
must use the API to accomplish parallel programming. 
http://cesdis.gsfc.nasa.gov/linux/beowulf/beowulf.html
           
SCI is fast hardware with support for distributed shared memory,
messaging, and data transfers. Again, if you don't use the API, there is
no gain.
http://nicewww.cern.ch/~hmuller/sci.htm
            
DIPC is distributed System V IPC. Programs which use the IPC API can be
converted to DIPC easily, for example just by adding the DIPC flag to
the IPC call (a small sketch follows below).
http://wallybox.cei.net/dipc/dipc.html
            
MOSIX is the most general-purpose. Processes are scattered across a
cluster automatically without having to modify the programs. No API
needed other than usual Unix-level process use. Allows parallel
execution of any program, although full use requires a parallel program
design.
http://www.cnds.jhu.edu/mirrors/mosix/
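
As a concrete illustration of the DIPC item above, here is a tiny System V
shared-memory program with the DIPC flag OR'ed into the shmget() flags,
which is the kind of one-flag conversion described.  The flag name IPC_DIPC
and the fallback value below are assumptions here (check the DIPC
documentation for the real definitions), and the program only behaves
cluster-wide on a DIPC-patched kernel; on a stock kernel it is just
ordinary SysV shared memory.

/* DIPC-style shared memory sketch (assumptions noted above). */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef IPC_DIPC
#define IPC_DIPC 00010000   /* assumption: flag value from the DIPC headers */
#endif

int main(void)
{
    key_t key = 1234;                   /* agreed-on key, as with plain SysV */
    int id = shmget(key, 4096, IPC_CREAT | IPC_DIPC | 0666);
    if (id < 0) { perror("shmget"); return 1; }

    char *mem = shmat(id, NULL, 0);     /* attach exactly as usual */
    if (mem == (char *) -1) { perror("shmat"); return 1; }

    strcpy(mem, "with DIPC, visible across the cluster");
    printf("%s\n", mem);

    shmdt(mem);
    shmctl(id, IPC_RMID, NULL);         /* clean up the segment */
    return 0;
}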

