[Bioclusters] Random bits: GigE copper eval; biocluster pictures & IBM storage problem
Chris Dagdigian
bioclusters@bioinformatics.org
Mon, 15 Apr 2002 13:41:55 -0400
Hi folks,
Several unrelated bits today...
(1) Slashdot linked to a recent eval of gigabit-over-copper NICs,
which was great reading for me since it looks as though I'm going to
have to build a box capable of doing NAT and intrusion detection
between at least two (and possibly more) copper gigabit network links.
The slashdot story is at
http://slashdot.org/articles/02/04/14/2124257.shtml?tid=126
The actual review is at http://www.cs.uni.edu/~gray/gig-over-copper/
Any comments or experiences with fiber vs. copper GigE? My current
project is going to use lots of fiber and copper gig connections going
into an ExtremeNetworks Alpine 3808 switch so I may be in a position to
try some experiments over the next month or two.
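For the NAT piece, here's a minimal Linux/iptables sketch of what that box
would do (interface names are placeholders -- I'm assuming eth0 faces the
outside and eth1 the inside; substitute the real GigE interfaces):

```shell
# Turn on IP forwarding between the two interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade (NAT) all traffic leaving the outside interface
# (eth0/eth1 are placeholders for the actual gigabit links)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Let inside hosts initiate connections outward, but only allow
# established/related traffic back in
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

Intrusion detection would sit on top of this (e.g. a sniffer on the
forwarding interfaces); the rules above are just the NAT skeleton.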
(2) Harvard gave me permission to publicize the pictures I've been
taking during the hardware build process at the new Bauer Center for
Genomics Research (http://cgr.harvard.edu). They are just getting
started, and I've been involved in helping sort out the initial
research computing infrastructure, which boils down to: a 4TB NetApp
NAS + a 60-CPU Linux cluster running Platform LSF + a 360-tape
SAN-attached AIT tape library robot, plus a bunch of misc. support
systems. The datacenter is still under construction, so I've been
building this stuff in an office over the last two weeks.
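For the curious: once LSF is up on a cluster like that, day-to-day use is
the usual bsub routine. Something like the following (the queue name and
the blastall command line are placeholders for whatever the site actually
configures):

```shell
# Submit a single-CPU job to LSF; "normal" is a placeholder queue name
# and the blastall invocation is just an example workload
bsub -q normal -n 1 -o lsf.%J.out blastall -p blastp -d nr -i query.fa

# Check job status and see which hosts are accepting work
bjobs
bhosts
```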
Pictures from this effort along with pictures from the Vertex Pharma
VAMPIRE cluster and Steven Brenner's system at Berkeley are all online
at http://gw.sonsorol.net:8080/gallery/bioclusters
That site may not be super reliable as gw.sonsorol.net is hanging off a
cable modem instead of a true dedicated link.
(3) IBM storage problem
http://www.storage.ibm.com/hardsoft/products/fast200/fast200.htm
I'm currently trying to figure out why a fibre-channel FAStT200 storage
server from IBM is performing more slowly than the internal SCSI-based
ServeRAID disks. I'm looking for pointers that would help me sort this
issue out, as well as any info on what sort of real-world I/O
performance I should actually expect from a FAStT200 system with a
single RAID controller and a single fibre-channel connection.
The server in question is an IBM x340. It has three internal SCSI
disks in a RAID5 array and a QLogic 2200 FC HBA that connects it
directly to the FAStT200 storage controller. The FC array has a single
shelf with 10 drives, which have been split into two RAID5 volumes.
There is nothing redundant about it: single controller, single
connection to the host, and no switch or SAN fabric in the middle.
The central problem: running iozone and bonnie on both the SCSI and FC
volumes shows that the SCSI array is significantly faster than the
volumes mounted via the fibre-channel connection.
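For anyone who wants to reproduce the comparison, the runs look roughly
like this (mount points are placeholders; the test file should be at
least 2x physical RAM so the page cache doesn't skew the numbers):

```shell
# Sequential write (-i 0) and read (-i 1) with iozone on each volume,
# 2 GB file with 64 KB records; /scsi-vol and /fc-vol are placeholders
iozone -s 2g -r 64k -i 0 -i 1 -f /scsi-vol/iozone.tmp
iozone -s 2g -r 64k -i 0 -i 1 -f /fc-vol/iozone.tmp

# Same comparison with bonnie (file size given in MB)
bonnie -d /scsi-vol -s 2048
bonnie -d /fc-vol -s 2048
```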
Anyone seen anything similar?