On Tue, 20 Apr 2010, Ted Unangst wrote:

It's not about writing too often, it's about the performance hit of doing
a read/modify/write when there are no free blocks.  Like the 4k sector
problem, but potentially even worse.

On the other hand, it depends on how much writing your server will do
in service.  If you aren't writing large files, you won't notice much
difference, and the benefit of ultra fast random access is more than
worth it.
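The read/modify/write penalty described above can be sketched with a toy model.  The block and write sizes here are illustrative assumptions, not measurements from any particular drive:

```python
# Hypothetical model of the flash read/modify/write penalty: when no
# pre-erased block is free, a small write forces the drive to read a
# whole erase block, merge the change, erase, and rewrite it all.
ERASE_BLOCK = 128 * 1024   # assumed flash erase-block size
WRITE_SIZE = 4 * 1024      # one small 4 KiB write

def bytes_physically_written(free_block_available: bool) -> int:
    """Bytes the drive must actually write to satisfy one small write."""
    if free_block_available:
        # Data lands in a pre-erased block: only the new bytes are written.
        return WRITE_SIZE
    # No free block: the entire erase block gets rewritten.
    return ERASE_BLOCK

amplification = bytes_physically_written(False) / bytes_physically_written(True)
print(amplification)  # 32.0 for this geometry
```

For this (assumed) geometry, every 4 KiB write costs 32x the I/O when the drive has run out of free blocks, which is why a write-heavy workload feels the cliff while a read-heavy one does not.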



Right now, the machines I am working on are mail gateways. They'll need to do frequent small writes as mail is shuffled between various queues. As long as we keep up with incoming mail, we're fine; this is less of an issue now that spamd turns away most connections before they submit any data for processing.

We were looking for a general answer, though, since the same strategy is used to deploy machines for other purposes (databases, web servers, routers, etc.). Any application that requires lots of storage will probably get a big disk (or, more likely, NFS to a big disk) specifically for that purpose.

Thanks for the answers, everyone.  I have some good ideas to look into.

Dan
