Hi Sue,

We recently switched to new database server hardware for our production
environment at the end of May 2014.  This included lots of changes, though:
faster CPUs with better cache, more memory (and faster too, 1866 MHz,
whee!), and also SSDs for our main storage.

So far, we're still testing the waters, and the extra RAM on the new
server allows us to keep the whole database in memory.  Other tests we've
done on reading from SSD storage show significant improvements, so if we
ever cross the threshold of not having enough memory and have to go to
disk, the SSDs should keep things nice and speedy.  For us, this was more
about future-proofing the new hardware with good resources from the
beginning.
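
For anyone curious, here's a rough sketch of how you might sanity-check
that the database still fits in RAM -- just compare pg_database_size()
against physical memory.  (Python with psycopg2 against PostgreSQL; the
connection string and the Linux-only /proc/meminfo read are assumptions,
so adjust for your setup.)

    # Rough sketch: does the whole database still fit in RAM?
    # Assumes PostgreSQL, psycopg2, and a Linux host (/proc/meminfo);
    # the connection string is a placeholder.
    import psycopg2

    conn = psycopg2.connect("dbname=evergreen user=postgres")
    cur = conn.cursor()
    cur.execute("SELECT pg_database_size(current_database())")
    db_bytes = cur.fetchone()[0]

    # MemTotal is reported in kB
    with open("/proc/meminfo") as f:
        mem_bytes = next(int(line.split()[1]) * 1024
                         for line in f if line.startswith("MemTotal"))

    print("database: %.1f GB, RAM: %.1f GB" % (db_bytes / 1e9, mem_bytes / 1e9))
    print("fits in memory" if db_bytes < mem_bytes else "will spill to disk")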

As an intermediate step, I've heard good suggestions about splitting up
disk storage on the database hardware and putting key things on SSD vs.
traditional drives -- like leaving the auditor tables on regular drives
but putting metabib (and its indexes) on SSD for faster reads.  That lets
people add SSDs to their existing hardware and move forward incrementally
instead of putting the whole database onto SSD at once.
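
If you go that route, the PostgreSQL piece is just tablespaces.  Here's a
rough sketch of what the moves might look like (driven from Python with
psycopg2; the tablespace path and the metabib object names are only
examples, so check your own schema, and remember SET TABLESPACE rewrites
the data files and takes a lock, so do it in a quiet window):

    # Rough sketch: carve out an SSD-backed tablespace and move one hot
    # metabib table plus its indexes onto it.  Paths and object names are
    # illustrative only.
    import psycopg2

    conn = psycopg2.connect("dbname=evergreen user=postgres")
    conn.autocommit = True   # CREATE TABLESPACE can't run inside a transaction
    cur = conn.cursor()

    # One-time setup: the directory must already exist on the SSD and be
    # owned by the postgres OS user.
    cur.execute("CREATE TABLESPACE ssd_space LOCATION '/mnt/ssd/pg_tblspc'")

    table = "metabib.keyword_field_entry"   # example table name
    cur.execute("ALTER TABLE %s SET TABLESPACE ssd_space" % table)

    # Move that table's indexes too, straight from the catalog.
    cur.execute("""SELECT schemaname, indexname FROM pg_indexes
                   WHERE schemaname = 'metabib'
                     AND tablename = 'keyword_field_entry'""")
    for schema, idx in cur.fetchall():
        # identifiers come from the catalog, so simple interpolation is
        # fine for a one-off admin script
        cur.execute('ALTER INDEX %s."%s" SET TABLESPACE ssd_space' % (schema, idx))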

The other thing we have to do soon is finish our analysis of the number
of reads/writes we're actually performing against our database hardware.
That'll allow us to track and project the potential longevity of the SSDs
(which can only absorb a limited volume of writes before wearing out).
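
For the write side of that analysis, here's a rough sketch of the kind of
projection I mean: pull the buffer-write counters out of pg_stat_bgwriter
and compare them with the drive's rated endurance.  (Again Python with
psycopg2; the connection string and the 600 TBW figure are placeholders,
and this undercounts real wear since it ignores WAL, temp files, and
write amplification.)

    # Rough sketch: estimate bytes written since stats were last reset and
    # project against the drive's rated endurance (TBW).  Uses the 9.x
    # pg_stat_bgwriter layout; 600 TBW is a made-up example -- check the
    # drive's datasheet.
    import psycopg2

    TBW_RATING_TB = 600.0    # hypothetical endurance rating, terabytes written
    BLOCK_SIZE = 8192        # PostgreSQL's default block size, bytes

    conn = psycopg2.connect("dbname=evergreen user=postgres")
    cur = conn.cursor()
    cur.execute("""SELECT buffers_checkpoint + buffers_clean + buffers_backend,
                          now() - stats_reset
                   FROM pg_stat_bgwriter""")
    buffers_written, elapsed = cur.fetchone()

    tb_written = buffers_written * BLOCK_SIZE / 1024.0 ** 4
    days = elapsed.total_seconds() / 86400.0
    print("~%.2f TB written in %.0f days (%.3f TB/day)"
          % (tb_written, days, tb_written / days))
    print("At that rate a %d TBW drive lasts roughly %.0f years"
          % (TBW_RATING_TB, TBW_RATING_TB / (tb_written / days) / 365.0))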

That all said, I'm still super happy so far with our choice to switch to
SSDs.  It can only get better from here.

-- Ben

PS:  Another short story -- We once built a test "server" (okay, it was a
laptop) with a consumer SSD (256 GB) and put our whole production database
on it.  The laptop had 2 CPUs and 16 GB of memory, and our DB is easily
100+ GB at present.  So it could not fit the entire DB into memory and had
to read directly off the disk.  Even with those limited resources and
running Evergreen locally, searches against localhost on the laptop were
at least 4 or 5 times faster than in our past production environment (on
traditional hardware).  So SSDs let us get by with limited memory without
being stuck with the slower speeds of traditional hard disks.  To me, this
was a good live proof of the potential performance when the bottleneck is
disk storage.

On Wed, Jul 23, 2014 at 12:20 PM, Sue Ciani <sci...@cwmars.org> wrote:

>  Has anyone implemented solid state drives on their database servers?  If
> so, what was your experience? Did it increase response time?  Did you put
> them only on your database server?
>
> Susan Ciani
>
> Systems & Networking Manager
>
> C/W MARS, INC
>
> 67 Millbrook Street
>
> Suite 201
>
> Worcester, MA 01605
>
> 508-755-3323 ext 18
>
> Fax: 508-755-3721



-- 
Benjamin Shum
Evergreen Systems Manager
Bibliomation, Inc.
24 Wooster Ave.
Waterbury, CT 06708
203-577-4070, ext. 113
