On Fri, Aug 20, 2010 at 06:53:44AM +0200, Sander wrote:
> Chris Mason wrote (ao):
> > On Fri, Aug 06, 2010 at 01:55:21PM +0200, Jens Axboe wrote:
> > > Also, I didn't see Chris mention this, but if you have a newer intel box
> > > you can use hw accelerated crc32c instead. For some reason my test box
Chris Mason wrote (ao):
> On Fri, Aug 06, 2010 at 01:55:21PM +0200, Jens Axboe wrote:
> > Also, I didn't see Chris mention this, but if you have a newer intel box
> > you can use hw accelerated crc32c instead. For some reason my test box
> > always loads crc32c and not crc32c-intel, so I need to do that manually.
On Mon, Aug 09, 2010 at 04:45:45PM +0200, Freek Dijkstra wrote:
> Hi all,
>
> Thanks a lot for the great feedback from before the weekend. Since one
> of my colleagues needed the machine, I could only do the tests today.
>
> In short: just installing 2.6.35 did make some difference, but I was
> mostly impressed with the speedup gained by the hardware acceleration of
> the crc32c checksumming.
Hi all,
Thanks a lot for the great feedback from before the weekend. Since one
of my colleagues needed the machine, I could only do the tests today.
In short: just installing 2.6.35 did make some difference, but I was
mostly impressed with the speedup gained by the hardware acceleration of
the crc32c checksumming.
On 08/08/2010 03:18 AM, Andi Kleen wrote:
> Jens Axboe writes:
>>
>> Also, I didn't see Chris mention this, but if you have a newer intel box
>> you can use hw accelerated crc32c instead. For some reason my test box
>> always loads crc32c and not crc32c-intel, so I need to do that manually.
>
>
Jens Axboe writes:
>
> Also, I didn't see Chris mention this, but if you have a newer intel box
> you can use hw accelerated crc32c instead. For some reason my test box
> always loads crc32c and not crc32c-intel, so I need to do that manually.
I have a patch for that, will post it later: autoloading crc32c-intel.
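Until such an autoload patch lands, the manual step Jens describes can be sketched as follows (a sketch, not from the thread; it assumes a kernel built with the crc32c-intel module and a CPU with SSE4.2, and the exact /proc/crypto layout varies by kernel version):

```shell
# Load the SSE4.2-accelerated crc32c driver explicitly; the generic
# crc32c module may already be registered, but the kernel crypto layer
# prefers the implementation with the higher priority.
modprobe crc32c-intel

# Inspect which crc32c implementations are registered and their
# priorities (crc32c-intel should show a higher priority value than
# the generic table-driven driver).
grep -B1 -A4 'crc32c' /proc/crypto
```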
On Fri, Aug 06, 2010 at 01:55:21PM +0200, Jens Axboe wrote:
> On 2010-08-05 16:51, Chris Mason wrote:
> > And then we need to setup a fio job file that hammers on all the ssds at
> > once. I'd have it use aio/dio and talk directly to the drives. I'd do
> > something like this for the fio job file
On 2010-08-05 16:51, Chris Mason wrote:
> And then we need to setup a fio job file that hammers on all the ssds at
> once. I'd have it use aio/dio and talk directly to the drives. I'd do
> something like this for the fio job file, but Jens Axboe is cc'd and he
> might make another suggestion on
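The job file itself is cut off in the archive, so here is a hedged reconstruction of the kind of file being described: one sequential-read job per SSD, asynchronous I/O with O_DIRECT so the page cache is bypassed. The device names, block size, and queue depth are placeholders, not values from the original mail.

```ini
; Illustrative fio job file: hammer several SSDs at once with
; direct (O_DIRECT) asynchronous sequential reads.
[global]
ioengine=libaio
direct=1
rw=read
bs=1M
iodepth=32
runtime=60
time_based

[ssd-sdd]
filename=/dev/sdd

[ssd-sde]
filename=/dev/sde
```

Running `fio jobfile.fio` then reports aggregate and per-device read bandwidth, which is the number being compared across filesystems in this thread.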
On Thu, Aug 05, 2010 at 11:21:06PM +0200, Freek Dijkstra wrote:
> Chris Mason wrote:
>
> > Basically we have two different things to tune. First the block layer
> > and then btrfs.
>
>
> > And then we need to setup a fio job file that hammers on all the ssds at
> > once. I'd have it use aio/dio and talk directly to the drives.
On 5 August 2010 22:21, Freek Dijkstra wrote:
> Chris, Daniel and Mathieu,
>
> Thanks for your constructive feedback!
>
>> On Thu, Aug 05, 2010 at 04:05:33PM +0200, Freek Dijkstra wrote:
>>>              ZFS            BtrFS
>>> 1 SSD        256 MiByte/s   256 MiByte/s
>>> 2 SSDs       505 MiByte/s   504 MiByte/s
>>> 3 SSDs       736 MiByte/s   756 MiByte/s
Chris, Daniel and Mathieu,
Thanks for your constructive feedback!
> On Thu, Aug 05, 2010 at 04:05:33PM +0200, Freek Dijkstra wrote:
>>               ZFS            BtrFS
>> 1 SSD         256 MiByte/s   256 MiByte/s
>> 2 SSDs        505 MiByte/s   504 MiByte/s
>> 3 SSDs        736 MiByte/s   756 MiByte/s
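As a quick sanity check on the figures above, a small script (not from the thread; the numbers are copied from the quoted table) computes how close each configuration comes to perfect linear scaling:

```python
# Aggregate read throughput in MiByte/s for n striped SSDs, as quoted
# in the table above.
zfs = {1: 256, 2: 505, 3: 736}
btrfs = {1: 256, 2: 504, 3: 756}

def efficiency(rates, n):
    """Measured throughput as a fraction of n times the 1-SSD rate."""
    return rates[n] / (n * rates[1])

for n in (1, 2, 3):
    print(f"{n} SSD(s): ZFS {efficiency(zfs, n):.1%}, "
          f"BtrFS {efficiency(btrfs, n):.1%}")
```

Both filesystems stay above 95% of linear scaling up to 3 SSDs, so any upper boundary in throughput must appear at higher disk counts.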
Hello,
freek.dijks...@sara.nl (Freek Dijkstra) writes:
> [...]
>
> Here are the exact settings:
> ~# mkfs.btrfs -d raid0 /dev/sdd /dev/sde /dev/sdf /dev/sdg \
> /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm \
> /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds
> nodes
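After a multi-device mkfs like the one above, mounting the array typically looks like this (a sketch, not from the original mail: the kernel needs a device scan so it can assemble all members, and `-o ssd` enables btrfs's SSD allocation heuristics; the mount point is a placeholder):

```shell
# Register all btrfs member devices with the kernel, then mount any
# one member; btrfs assembles the whole raid0 array from it.
btrfs device scan
mount -o ssd,noatime /dev/sdd /mnt/ssd-array
```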
On 5 August 2010 15:05, Freek Dijkstra wrote:
> Hi,
>
> We're interested in getting the highest possible read performance on a
> server. To that end, we have a high-end server with multiple solid state
> disks (SSDs). Since BtrFS outperformed other Linux filesystems, we chose
> that. Unfortunately
On Thu, Aug 05, 2010 at 04:05:33PM +0200, Freek Dijkstra wrote:
> Hi,
>
> We're interested in getting the highest possible read performance on a
> server. To that end, we have a high-end server with multiple solid state
> disks (SSDs). Since BtrFS outperformed other Linux filesystems, we chose
> that
Hi,
We're interested in getting the highest possible read performance on a
server. To that end, we have a high-end server with multiple solid state
disks (SSDs). Since BtrFS outperformed other Linux filesystems, we chose
that. Unfortunately, there seems to be an upper boundary in the
performance o