[Bioclusters] Request for discussions-How to build a biocluster Part 2 (hardware)

Sylvain Foisy bioclusters@bioinformatics.org
Thu, 2 May 2002 13:56:04 -0400


Hi,

So, let's start. A reminder: this is coming from a total newbie at this 
BioCluster stuff. It is also meant to serve as the seed of a tutorial 
site for building the beast.

THE HARDWARE

-> CPU: I have seen some debate on this list about PIII vs. Athlon as 
CPUs, especially about power requirements and heat output. Since our 
cluster is to be small (at first; we do have some ambitions!), it has to 
give us the most bang for the buck. To minimize heat output problems, we 
are inclined to choose processors that are not pushing the current speed 
limit. We are looking at dual-processor nodes and a head node in the 
1.2-1.5 GHz range. Which would be the more reasonable choice:

	a) PIII/512kb cache (Tualatin) at 1.4 GHz
	b) Athlon MP at 1.2-1.5GHz

-> RAM: We are inclined to load the maximum our selected motherboards 
(Tyan) support: 4 GB. DDR-RAM of course. Not much debate here.

-> HD space: If I read the list archive right, space is needed so that 
parts of GenBank (or any other DB) can reside on each node for BLAST to 
do its thing. The choice is either the built-in ATA-100 interface with 
40 GB 7.2K drives or an optional SCSI interface with a 36 GB 10K drive 
(our selected options right now). Since (unless I am mistaken) BLAST can 
be slowed down by I/O issues, I would be more willing to buy SCSI even 
if it is more expensive. Another question: how much space would the head 
node need?
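For the per-node disk question above, a back-of-the-envelope sizing helper may be useful. All figures here are illustrative assumptions (database size, index overhead factor, scratch space), not measured values — measure formatdb's actual overhead on your own data:

```python
# Rough per-node disk sizing when a database is partitioned across nodes.
# Every number below is an illustrative assumption, not a measured value.

def per_node_gb(db_size_gb, n_nodes, index_overhead=1.3, scratch_gb=5.0):
    """Space each node needs for its slice of the database.

    index_overhead: BLAST-formatted index files take space on top of the
    raw sequence data (the 1.3 factor is a placeholder; measure it).
    scratch_gb: OS, temporary files, and result staging on each node.
    """
    return db_size_gb / n_nodes * index_overhead + scratch_gb

# Example: a hypothetical 60 GB database split over 8 nodes
print(round(per_node_gb(60, 8), 1))  # -> roughly 14.8 GB per node
```

Even with generous assumptions, the slices fit easily on either drive option, which suggests the 40 GB vs. 36 GB difference matters less than the I/O speed difference.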

-> Mass storage: we are thinking about a rackmounted NAS solution with 
about 360-480 GB of space. I think that this is the most cost-effective 
method and an easily expandable one too. Does anybody have experience 
with a particular NAS?

-> Interface card: Yeah, Myrinet would be great. Back to Earth: we are 
thinking either 100Base-T or Gigabit Ethernet (most probably the 
former). Can we work with the motherboard's built-in NIC, or should we 
get PCI NIC cards? What features would a NIC need to be a useful 
solution (for example: net boot of nodes from the head)? Evidently, the 
head is going to need two cards.
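On the net-boot point: the usual requirement is a NIC with a PXE-capable boot ROM on the nodes, plus DHCP and TFTP services on the head. A minimal sketch of the DHCP side, assuming ISC dhcpd on the head node — all addresses and filenames below are placeholders, not a tested configuration:

```
# /etc/dhcpd.conf fragment on the head node (example addresses only)
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.10 192.168.1.100;   # addresses handed to compute nodes
  next-server 192.168.1.1;            # head node, running a TFTP server
  filename "pxelinux.0";              # PXE boot loader fetched over TFTP
}
```

So when comparing onboard NICs against PCI cards, one concrete feature to check in the board specs is whether the NIC's boot ROM supports PXE.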

-> Networking gear: all of this will be linked to the head via a 
rackmounted switch. I have heard great things about the Cisco stuff. How 
about 3Com? Other choices out there?

-> Enclosures: all rackmounted in 2U enclosures, except for the head, 
which is going to be a tower enclosure with a DVD/CD-RW drive and 
possibly a DLT tape drive too. We intend to start with 8 nodes and plan 
to build up to 32 or 48 enclosures (a total of up to 96 processors).

-> Physical installations: we are no engineers, so what would we need in 
terms of AC and electrical power? We are planning to retrofit a former 
office into a server room. Anybody with similar experience?
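As a starting point for the AC and power question, here is a rough load estimate. The per-node wattage and overhead figures are guesses, not vendor numbers — check the actual specs of whatever nodes, switch, and NAS get chosen:

```python
# Back-of-the-envelope power and cooling estimate for the server room.
# Per-node wattage and overhead are assumptions, not vendor figures.

WATTS_PER_NODE = 250        # guess for a dual-CPU 2U node under load
BTU_PER_WATT = 3.412        # 1 watt dissipated = 3.412 BTU/h of heat

def room_load(n_nodes, overhead_watts=500):
    """Return (total watts, BTU/h), including head/switch/NAS overhead."""
    watts = n_nodes * WATTS_PER_NODE + overhead_watts
    return watts, watts * BTU_PER_WATT

watts, btu = room_load(8)
print(watts, round(btu))  # 8 nodes: 2500 W, about 8530 BTU/h
```

Under these assumptions, the initial 8-node setup draws on the order of 2.5 kW, and the AC has to remove essentially all of that as heat; scaling to 48 enclosures multiplies the figure several times over, which is worth planning the electrical circuits and cooling around from the start.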

This is open for helpful and constructive discussion.

Sylvain

++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Sylvain Foisy, Ph. D.
Manager
BIONEQ - Le Reseau quebecois de bioinformatique
Genome-Quebec
Tel.: (514) 343-6111 poste 5188
E-mail: foisys@medcn.umontreal.ca
++++++++++++++++++++++++++++++++++++++++++++++++++++++++