Ivo, et al.

I just saw this link on the Alinka Clustering Newsletter and thought I'd drop my 2 cents in.

> > For IDE solution, 8 x 120GB is ideal because you can use 2 channels from
>
> I have one general question to the list: what are the pros and cons of
> SCSI versus IDE? Who of you uses IDE file servers or IDE RAID arrays?
> How big? How reliable? How fast? Any problems? Warnings? Comments?
>
> Ivo

There's a linux-ide-array mailing list at firstname.lastname@example.org where we discuss IDE RAID arrays quite a bit.

We here at the Sloan Digital Sky Survey have ~18.5TB of IDE RAID arrays on 14 machines. The largest array we've built is 1.68TB, but you can build larger ones using 160GB drives. Note that you can't have a single FS larger than 2TB - that's a limitation of the kernel. The CDF experiment here at Fermi just purchased 32TB of IDE RAID machines (15 machines at 2.2TB each).

The best write/read speeds we're getting are ~124MB/s and ~212MB/s, respectively, on a RAID50 array. These are block transfers measured with bonnie++ using 2GB file sizes (physical RAM is 890MB, so the test isn't just hitting the page cache). Some other "special" things needed to be done to achieve these speeds (see the links below).

With their latest firmware revision (7.4), the 3ware IDE RAID controllers have most of the same features that SCSI RAID controllers have, like background scrubbing, scheduled data integrity checking, etc. Of course, hot swap is a given.

We're using 2.4.18 with XFS support.

I wrote a technical note explaining what I did. It's slightly out of date, but it should give you a good idea of what I've done to get the results that I have:

http://home.fnal.gov/~yocum/storageServerTechnicalNote.html

The CDF people wrote up their own experiences:

http://mit.fnal.gov/~msn/cdf/caf/server_evaluation.html

Hope that helps.

Cheers,
Dan

--
Dan Yocum
Sloan Digital Sky Survey, Fermilab   630.840.6509
email@example.com, http://www.sdss.org
SDSS.  Mapping the Universe.
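
P.S. On the 2TB-per-FS ceiling: as I understand it (double-check me on this), it falls out of 32-bit sector addressing of 512-byte sectors in the 2.4 block layer. The arithmetic is just:

    # Back-of-envelope check of the 2TB block-device limit.
    # Assumes the limit is 32-bit sector indices over 512-byte sectors
    # (my understanding of the 2.4 block layer, not gospel).
    sector_size = 512            # bytes per sector
    max_sectors = 2 ** 32        # 32-bit sector index
    print(sector_size * max_sectors / 2 ** 40, "TiB")   # -> 2.0 TiB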
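
P.P.S. If you want a quick-and-dirty sanity check of sequential throughput before you set up bonnie++ properly, something like the sketch below works. The mount point is a placeholder, and the file size should be well above physical RAM so the page cache can't flatter the numbers (same reason we run bonnie++ with 2GB files on 890MB boxes). It's only a rough analogue of bonnie++'s block-transfer test, not a substitute for it.

    #!/usr/bin/env python
    # Rough sequential write/read timing on an array, in the spirit of
    # bonnie++'s block transfer tests.  Not a replacement for bonnie++.
    import os, time

    PATH  = "/raid/testfile"     # placeholder: a file on the array under test
    SIZE  = 2 * 1024 ** 3        # 2GB - keep this well above physical RAM
    BLOCK = 1024 * 1024          # 1MB per write/read
    buf = b"\0" * BLOCK

    # Sequential write, flushed to disk before stopping the clock.
    t0 = time.time()
    with open(PATH, "wb") as f:
        for _ in range(SIZE // BLOCK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    t1 = time.time()
    print("write: %.1f MB/s" % (SIZE / (t1 - t0) / 1e6))

    # Sequential read back.
    t0 = time.time()
    with open(PATH, "rb") as f:
        while f.read(BLOCK):
            pass
    t1 = time.time()
    print("read:  %.1f MB/s" % (SIZE / (t1 - t0) / 1e6))

    os.remove(PATH)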