[Bioclusters] 3ware 7850 RAID 5 performance

Vsevolod Ilyushchenko bioclusters@bioinformatics.org
Tue, 27 Aug 2002 10:37:33 -0400


Joe,

Thank you for your answers.

>   I went to the 3ware site and couldn't find the 7850.  I did find
> the 7500-8.  Is this the card?  I am looking at
> http://www.3ware.com/products/pdf/7500SelectionGuide7-26.pdf .  The
> parallel-ata sheet on http://www.3ware.com/products/parallel_ata.asp
> lists RAID 0, 1, 10, 5 and JBOD as the options.  RAID 5 will always
> be slower than RAID 1, and RAID 1 will always be slower than RAID 0.
> My guess is the numbers they are quoting are JBOD reads and writes.
> JBOD (aka Just a Bunch Of Disks) doesn't generally require the parity
> computation, data layout and other processing that limit the
> performance of RAID.

The 7850 is an older model number; the same card is now sold as the
7500-8.

BTW - what does JBOD mean in this context? How is it different from
RAID 0?

>   RAID on these systems is going to be limited to the speed of the
> slowest disk.  If the disk is in PIO mode rather than UDMA mode, then
> I could imagine you seeing that sort of write speed.

How would I check that?
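
(For a plain IDE disk I gather something like

	hdparm -i /dev/hda

reports the supported and currently selected PIO/UDMA modes, but the
3ware card presents its array as a SCSI disk, so I am not sure that
works here.)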

>   It is also possible, if you are using a journaling file system
> such as XFS and you are pointing your log at a single disk somewhere
> else, that the log disk is your bottleneck.

The filesystem is a simple ext3.

>   Which file system are you using?  What is the nature of your test
> (large block reads/writes), and specifically how are you testing? 

Testing was done with bonnie++.

> What is the machine the card is plugged into?

The machine has two 1.26 GHz CPUs and 2 GB of RAM, so the file size
used in bonnie++ testing was 4 GB.
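
(For reference, a bonnie++ run of that size looks something like

	bonnie++ -d /raid/tmp -s 4096 -u nobody

with -s 4096 giving the 4 GB file size; /raid/tmp here is just a
placeholder for the array's mount point.)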

>   What is the reported speed for
> 
> 	hdparm -tT /dev/raid_device
> 
> where /dev/raid_device is the device that appears to be your big single
> disk.  Are you using LVM?  Software RAID atop a JBOD? ???

hdparm's numbers are surprisingly high:

/dev/sda1:
  Timing buffer-cache reads:   128 MB in  0.49 seconds =261.22 MB/sec
  Timing buffered disk reads:  64 MB in  1.89 seconds = 33.86 MB/sec

No software RAID is used, just the card's RAID 5.
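
(The array appears as a single SCSI disk; if it is useful,

	cat /proc/scsi/scsi

should show the 3ware unit.)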

> If you run the following, how long does it take?
> 
> 	/usr/bin/time --verbose dd if=/dev/zero of=big bs=10240000 count=100
> 
> On my wimpy single spindle file system, this takes 42 wall clock
> seconds, and 7 system seconds.  This corresponds to a write speed of
> about 24.4 MB/s.

13 wall clock seconds and 7 system seconds.

However, writing a 4 GB file took 6 minutes total!!! (And only 30
system seconds.) Another data point: writing a 2 GB file took 1:52
total and 14 system seconds. Something is very wrong here.
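
In terms of throughput, if my arithmetic is right:

	1 GB in  13 s  =  ~79 MB/s  (presumably mostly the 2 GB of RAM)
	2 GB in 112 s  =  ~18 MB/s
	4 GB in 360 s  =  ~11 MB/s

So once the file no longer fits in memory, sustained writes drop to
around 11 MB/s.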

> If you run the following after creating the 1 GB file, how long does it
> take?
> 
> 	/usr/bin/time --verbose md5sum big
> 
> On the same wimpy single spindle file system, this takes 50 seconds for
> a read of about 20 MB/s. 

34 seconds for the 1 GB file, 2:26 for the 4 GB file. This scales
reasonably.
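
(That is roughly 30 MB/s for the 1 GB file and 28 MB/s for the 4 GB
file, in line with the hdparm buffered-read figure above, so reads
look healthy; the writes are the problem.)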

> Using hdparm, I find
> 
>     [landman@squash /work3]# hdparm -tT /dev/hdb
>     
>     /dev/hdb:
>      Timing buffer-cache reads:   128 MB in  0.71 seconds =180.28 MB/sec
>      Timing buffered disk reads:  64 MB in  2.75 seconds = 23.27 MB/sec
> 
> If you could report some of these, it might give us more of a clue.

Thanks,
Simon

-- 
Simon (Vsevolod Ilyushchenko)   simonf@cshl.edu
http://www.simonf.com          simonf@simonf.com

Even computers need analysts these days!
				("Spider-Man")