Angulo, David wrote:
> you say your point stands.  I say it does not.  Please compare the
> actual MTBF figures.

Hi David:

  The MTBF of the system is related to the MTBFs of all of its
components.  If the MTBFs of the disks are so large that the power
supplies, RAID card, or other components have lower MTBFs, the latter
will dominate the system MTBF.

  Take 5 of these units.  I am seeing MTBFs quoted at 10000 to 100000
hours for the enclosures.  For laughs, let's take 20000 hours.  There
are 8760 hours per year, so 5 of these units accumulate 43800
operational hours per year.  At a 20000 hour MTBF, that is 43800/20000,
or roughly 2.2 expected failures per year: about 1 enclosure failing
every 5 months or so.

  You can ameliorate some of this by building mirror images of these
units.  Then you need to worry about the other MTBFs, which might not
be so well documented.  Let's for the moment stipulate that the disks
themselves are infinitely reliable (they are not, but that is not the
point), with zero failure rate.  The other elements of the equation
are not as reliable and will fail.  Things like power supplies have
MTBFs ranging from 10000 through 100000 hours.  What are the MTBFs of
the cables, the USB2 ports, etc.?  Is there data on this?

  The issue at the end of the day is that what you don't expect is
usually what bites your data.  Limiting the maximum damage it can do
(N+1 supplies, multiple redundant nets, ...) before you can service it
is one of your few options.

Joe

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web  : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 734 786 8452 or +1 866 888 3112
cell : +1 734 612 4615
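
P.S. For the curious, here is a back-of-envelope Python sketch of the
arithmetic above.  It assumes independent components with constant
failure rates (the usual exponential model), so failure rates add in
series: 1/MTBF_sys = sum(1/MTBF_i).  The enclosure and power supply
figures are the illustrative numbers from this mail; the cable/port
figure is a pure guess, since that data is rarely published.

    HOURS_PER_YEAR = 8760

    def series_mtbf(component_mtbfs):
        # A system that fails when any one component fails:
        # failure rates add, so 1/MTBF_sys = sum(1/MTBF_i).
        return 1.0 / sum(1.0 / m for m in component_mtbfs)

    def expected_failures_per_year(mtbf_hours, n_units):
        # Expected failures per year across n identical units.
        return n_units * HOURS_PER_YEAR / mtbf_hours

    # 5 enclosures at the 20000 hour figure:
    print(expected_failures_per_year(20000, 5))  # ~2.19 per year

    # Even with "infinitely reliable" disks, the other parts
    # drag the enclosure MTBF below its best single component:
    print(series_mtbf([20000,     # power supply
                       100000,    # RAID card
                       500000]))  # cables/ports (guess)
    # ~16129 hours

Note how the combined MTBF (~16129 hours) is worse than the weakest
part alone; adding components in series only ever pulls it down.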