Greg Smith wrote:
On 05/09/2011 11:13 PM, Shaun Thomas wrote:
Take a look at /proc/sys/vm/dirty_ratio and
/proc/sys/vm/dirty_background_ratio if you have an older Linux
system, or /proc/sys/vm/dirty_bytes, and
/proc/sys/vm/dirty_background_bytes with a newer one.
On older systems for instance,
On May 9, 2011, at 4:50 PM, Merlin Moncure wrote:
hm, if it was me, I'd write a small C program that just jumped
directly on the device around and did random writes assuming it wasn't
formatted. For sequential read, just flush caches and dd the device
to /dev/null. Probably someone will suggest better tools though.
On Mon, May 9, 2011 at 10:32 PM, Chris Hoover wrote:
> So, does anyone have any suggestions/experiences in benchmarking storage
> when the storage is smaller than 2x memory?
Try writing a small python script (or C program) to mmap a large chunk
of memory, with MAP_LOCKED, this will keep it in RAM
On 2011-05-09 22:32, Chris Hoover wrote:
The issue we are running into is how do we benchmark this server,
specifically, how do we get valid benchmarks for the Fusion IO card?
Normally to eliminate the cache effect, you run iozone and other
benchmark suites at 2x the RAM. However, we can't
On 05/09/2011 11:13 PM, Shaun Thomas wrote:
Take a look at /proc/sys/vm/dirty_ratio and
/proc/sys/vm/dirty_background_ratio if you have an older Linux system,
or /proc/sys/vm/dirty_bytes, and /proc/sys/vm/dirty_background_bytes
with a newer one.
On older systems for instance, those are set to
> How many times was the kernel tested with this much memory, for example?
> (never??)
This is actually *extremely* relevant.
Take a look at /proc/sys/vm/dirty_ratio and /proc/sys/vm/dirty_background_ratio
if you have an older Linux system, or /proc/sys/vm/dirty_bytes, and
/proc/sys/vm/dirty_background_bytes with a newer one.
On Mon, 9 May 2011, David Boreham wrote:
On 5/9/2011 6:32 PM, Craig James wrote:
Maybe this is a dumb question, but why do you care? If you have 1TB RAM
and just a little more actual disk space, it seems like your database will
always be cached in memory anyway. If you "eliminate the cache
effect," won't the benchmark actually give you the wrong real-li
2011/5/9 Chris Hoover:
I've got a fun problem.
My employer just purchased some new db servers that are very large. The
specs on them are:
4 Intel X7550 CPUs (32 physical cores, HT turned off)
1 TB RAM
1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a RAID 10)
3 TB SAS array (48 15K 146 GB spindles)
On 05/09/2011 04:32 PM, Chris Hoover wrote:
So, does anyone have any suggestions/experiences in benchmarking
storage when the storage is smaller than 2x memory?
If you do the Linux trick to drop its caches already mentioned, you can
start a database test with zero information in memory. In t
On 5/9/2011 3:11 PM, Merlin Moncure wrote:
The problem with bonnie++ is that the results aren't valid, especially
the read tests. I think it refuses to even run unless you set special
switches.
I only care about writes ;)
But definitely, be careful with the tools. I tend to prefer small
prog
On Mon, May 9, 2011 at 3:59 PM, David Boreham wrote:
>
>> hm, if it was me, I'd write a small C program that just jumped
>> directly on the device around and did random writes assuming it wasn't
>> formatted. For sequential read, just flush caches and dd the device
>> to /dev/null. Probably someone will suggest better tools though.
On 05/09/2011 03:32 PM, Chris Hoover wrote:
So, does anyone have any suggestions/experiences in benchmarking storage
when the storage is smaller than 2x memory?
We had a similar problem when benching our FusionIO setup. What I did
was write a script that cleared out the Linux system cache bef
On May 9, 2011, at 1:32 PM, Chris Hoover wrote:
> 1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a raid 10)
Be careful here. What if the entire card hiccups, instead of just a device on
it? (We've had that happen to us before.) Depending on how you've done your
RAID 10, either all your parit
hm, if it was me, I'd write a small C program that just jumped
directly on the device around and did random writes assuming it wasn't
formatted. For sequential read, just flush caches and dd the device
to /dev/null. Probably someone will suggest better tools though.
I have a program I wrote ye
On Mon, May 9, 2011 at 3:32 PM, Chris Hoover wrote:
> I've got a fun problem.
> My employer just purchased some new db servers that are very large. The
> specs on them are:
> 4 Intel X7550 CPU's (32 physical cores, HT turned off)
> 1 TB Ram
> 1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a ra
I've got a fun problem.
My employer just purchased some new db servers that are very large. The
specs on them are:
4 Intel X7550 CPUs (32 physical cores, HT turned off)
1 TB RAM
1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a RAID 10)
3 TB SAS array (48 15K 146 GB spindles)
The issue we are