Thanks for the feedback guys. I'm looking forward to the day when we
upgrade to SSDs.
For future reference, the bonnie++ numbers I was referring to are:
Size: 63G
Sequential Output: 396505 K/sec (21% CPU)
Sequential Input:  401117 K/sec (21% CPU)
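
(A rough back-of-the-envelope reading of those numbers, for anyone following
along; the per-pair and per-drive splits simply assume the 4-drive RAID10
layout discussed further down, and 1024 K/sec is treated as 1 MB/s:)

# Rough interpretation of the bonnie++ aggregates quoted above.
seq_output_kps = 396505      # "Sequential Output", K/sec
seq_input_kps = 401117       # "Sequential Input",  K/sec

mb_out = seq_output_kps / 1024   # ~387 MB/s aggregate write
mb_in = seq_input_kps / 1024     # ~392 MB/s aggregate read

# 4 drives in RAID10 = 2 mirrored pairs: writes are striped across the
# 2 pairs, reads may or may not be spread across all 4 drives.
print(f"write: {mb_out:.0f} MB/s total, {mb_out / 2:.0f} MB/s per mirror pair")
print(f"read:  {mb_in:.0f} MB/s total, {mb_in / 2:.0f} MB/s if 2 drives serve reads, "
      f"{mb_in / 4:.0f} MB/s if all 4 do")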
Hi Dave,
Database disk performance has to take IOPS into account, and IMO IOPS matters
more than MB/s, since what counts is the disk subsystem’s ability to write lots
of little blocks (usually) rather than giant sequential globs, especially in
direct-attached storage (like yours, versus a SAN). Most db disk benchmarks
revolve around IOPS for that reason.
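
(To put rough numbers on the IOPS-vs-throughput point; both the ~150 random
IOPS per 10k SAS drive and the 8 kB I/O size are ballpark assumptions, not
measurements from this box:)

# Why random IOPS, not sequential MB/s, usually limits a database.
iops_per_drive = 150        # ballpark for a 10k RPM SAS drive
drives = 4                  # RAID10: 4 drives, 2 mirrored pairs
io_kb = 8                   # typical Postgres block size

read_iops = iops_per_drive * drives         # a read can be served by either copy
write_iops = iops_per_drive * drives // 2   # a write must hit both sides of a mirror

print(f"random read:  ~{read_iops} IOPS = {read_iops * io_kb / 1024:.1f} MB/s")
print(f"random write: ~{write_iops} IOPS = {write_iops * io_kb / 1024:.1f} MB/s")
# Compare with the ~390 MB/s sequential numbers above: random small-block
# I/O is two orders of magnitude slower, and that's what a database mostly does.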
On Sat, Mar 19, 2016 at 4:29 AM, Scott Marlowe wrote:
> Given the size of your bonnie test set and the fact that you're using
> RAID-10, the cache should make little or no difference. The RAID
> controller may or may not interleave reads between all four drives.
> Some do, some don't. It looks to
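
(A quick sketch of why the cache can't move these numbers much; the 1 GB
controller cache is an assumption about a typical PERC-class card, not Dave's
actual config:)

# Why the RAID controller cache barely matters for a 63G bonnie++ run.
cache_gb = 1.0        # assumed controller cache size
test_set_gb = 63.0    # bonnie++ test size, ~2x the 32 GB of RAM

print(f"cache covers {cache_gb / test_set_gb:.1%} of the data touched")
# Even perfect cache hits on that slice shift the aggregate throughput by
# only a percent or two, so the 396/401 MB/s figures reflect the disks.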
On Thu, Mar 17, 2016 at 2:45 PM, Dave Stibrany wrote:
> I'm pretty new to benchmarking hard disks and I'm looking for some advice on
> interpreting the results of some basic tests.
>
> The server is:
> - Dell PowerEdge R430
> - 1 x Intel Xeon E5-2620 2.4GHz
> - 32 GB RAM
> - 4 x 600GB 10k SAS Seagate drives
Hey Mike,
Thanks for the response. I think where I'm confused is that I thought the
vendor-specified MB/s was an estimate of sequential read/write speed.
Therefore, if you're in RAID10, you'd have 4x the sequential read speed and
2x the sequential write speed. Am I misunderstanding something?
Also, wh
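
(Putting numbers on the 4x-read / 2x-write expectation; the ~200 MB/s vendor
sequential rating per drive is an assumption, not taken from the actual spec
sheet:)

# Dave's intuition vs. the measured bonnie++ numbers.
per_drive_mbps = 200                  # assumed vendor sequential rating
expected_read = per_drive_mbps * 4    # if reads interleave across all 4 drives
expected_write = per_drive_mbps * 2   # writes limited to the 2 mirrored pairs

measured_read = 401117 / 1024         # bonnie++ Sequential Input
measured_write = 396505 / 1024        # bonnie++ Sequential Output

print(f"expected read ~{expected_read} MB/s, measured {measured_read:.0f} MB/s")
print(f"expected write ~{expected_write} MB/s, measured {measured_write:.0f} MB/s")
# Reads landing at roughly the same ~390 MB/s as writes is consistent with
# Scott's point that the controller may not be interleaving reads across
# all four drives.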
Sorry for the delay, long work day!
Ok, I THINK I understand where you’re going. Do it this way:
4 drives in RAID10 = 2 pairs of mirrored drives, i.e. effectively 2 drives’
worth of write throughput (the other 2 just hold the mirror copies). They are
sharing the 12Gbps SAS interface, but that speed is quite irrelevant…it’s just
a giant pipe
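
(And a last sanity check on the "giant pipe" point; the ~200 MB/s per-drive
figure is again an assumption:)

# The 12 Gbps SAS link is not the bottleneck here.
link_mb_s = 12_000 / 8      # ~1500 MB/s raw, ignoring protocol/encoding overhead
drives = 4
per_drive_mb_s = 200        # assumed sequential rate per 10k SAS drive

print(f"link capacity ~{link_mb_s:.0f} MB/s vs "
      f"{drives * per_drive_mb_s} MB/s if all four drives streamed flat out")
# The interface has plenty of headroom; throughput is bounded by the spindles
# and by how the RAID10 layout uses them, not by the SAS pipe.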