On Fri, 24 Sep 2004, James Cuff wrote:

>On Fri, 2004-09-24 at 14:32, James Cuff wrote:
>> On Fri, 2004-09-24 at 14:21, Dan Bolser wrote:
>> > On Fri, 24 Sep 2004, elijah wright wrote:
>> >
>> > Seeker 1...Seeker 3...Seeker 2...start 'em...done...done...done...
>> >               -------Sequential Output-------- ---Sequential Input-- --Random--
>> >               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
>> > Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  K/sec %CPU  /sec %CPU
>> >           100  4240 25.9  5437  2.1  4780  2.4 19595 100.1 566942 99.7 2023.0 10.1
>> >
>> > With the default file size (104857600 bytes). The CPU usage is
>> > 'amazingly high' during reads.
>>
>> I think your file size is too small - 100MB will fit in the UBC. Once
>> written, according to these figures you are seeing ca. 600MB/s reads for
>> getchar ops. The block ops must not be hitting the buffer cache
>> correctly, so they go back down to 2MB/s. This seems pretty poor even for
>
>Doh! That should have been 20MB/s for getchar and 566MB/s for blocks.
>Sorry - I said I had too much coffee :-) Anyway, the take-home message
>is still the same - make the file size at least 2x the amount of memory
>in your server.

I don't know why I just deleted the results from a much bigger file...
Perhaps too much caffeine has a lot to answer for (it is the biggest
single cause of anxiety in the UK).

I am re-running the tests on my local SCSI disk and on the NFS mount
(from a 'farm' machine)... I will post the results when they come in.

>
>Cheers,
>
>j.
>
>
>_______________________________________________
>Bioclusters maillist - Bioclusters@bioinformatics.org
>https://bioinformatics.org/mailman/listinfo/bioclusters
>
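As a quick sketch of the sizing rule from the thread (file size at least 2x
physical RAM so reads cannot be served from the buffer cache) - the `-s`
(file size in MB) and `-r` (RAM in MB) flags are bonnie++'s own; the RAM
figure and the `/scratch` test directory below are made-up examples:

```python
# Sketch of the "make the test file >= 2x RAM" rule from the thread.
# bonnie++'s -s flag sets the file size in MB and -r the machine's RAM
# in MB; the 512 MB server and /scratch path are hypothetical examples.

def bonnie_file_size_mb(ram_mb: int) -> int:
    """Smallest bonnie++ -s value that defeats the buffer cache:
    at least twice physical RAM, so reads must hit the disk."""
    return 2 * ram_mb

if __name__ == "__main__":
    ram_mb = 512  # example: a server with 512 MB of memory
    size_mb = bonnie_file_size_mb(ram_mb)
    # Suggested invocation for this machine:
    print(f"bonnie++ -s {size_mb} -r {ram_mb} -d /scratch")
```

With a 100 MB file on any reasonably sized server, the figures above show
the cache, not the disk - hence the ~566 MB/s block "reads".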