Re: Poor read performance on high-end server

2010-08-05 Thread Daniel J Blueman
On 5 August 2010 22:21, Freek Dijkstra wrote:
> Chris, Daniel and Mathieu,
>
> Thanks for your constructive feedback!
>
>> On Thu, Aug 05, 2010 at 04:05:33PM +0200, Freek Dijkstra wrote:
>>>              ZFS             BtrFS
>>> 1 SSD      256 MiByte/s     256 MiByte/s
>>> 2 SSDs     505 MiByte/s

Re: Poor read performance on high-end server

2010-08-05 Thread Freek Dijkstra
Chris, Daniel and Mathieu,

Thanks for your constructive feedback!

> On Thu, Aug 05, 2010 at 04:05:33PM +0200, Freek Dijkstra wrote:
>>            ZFS             BtrFS
>> 1 SSD      256 MiByte/s    256 MiByte/s
>> 2 SSDs     505 MiByte/s    504 MiByte/s
>> 3 SSDs     736 MiByte/s    756 MiBy

Re: Poor read performance on high-end server

2010-08-05 Thread Mathieu Chouquet-Stringer
Hello, freek.dijks...@sara.nl (Freek Dijkstra) writes:
> [...]
>
> Here are the exact settings:
> ~# mkfs.btrfs -d raid0 /dev/sdd /dev/sde /dev/sdf /dev/sdg \
>    /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm \
>    /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds
> nodes
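For readers following the thread, the quoted command can be generalized as below. This is a minimal sketch, not the poster's exact setup beyond the quoted options; device names are examples that will differ per machine, and mkfs.btrfs destroys any existing data on the listed devices:

```shell
# Stripe data (raid0) across several devices in one btrfs filesystem.
# -d raid0 applies to data; metadata defaults to raid1 on multi-device setups.
# WARNING: this wipes the named devices.
mkfs.btrfs -d raid0 /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Mounting any one member device assembles the whole array.
mount /dev/sdd /mnt

# Inspect how the filesystem spans the devices.
btrfs filesystem show /mnt
```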

Re: wanted X found X-1 but you got X-2

2010-08-05 Thread adi
Hi Miao, ok, I'll try:
mkfs.ext3 /dev/sdaN »used it as root-file-system quite a while«
tune2fs -O extents,uninit_bg,dir_index /dev/sdaN (converted to ext4) »used it as root file system quite another while«
btrfs-convert /dev/sdaN (converted to btrfs) »used it as root filesystem with "-compress"
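The upgrade path described above can be sketched as a command sequence. This is a hedged outline rather than the poster's exact history: /dev/sdaN is the placeholder from the message, btrfs-convert must be run on an unmounted filesystem, and a backup is strongly advised before conversion:

```shell
# 1. Start from ext3 (destroys existing data on the partition).
mkfs.ext3 /dev/sdaN

# 2. Enable ext4 features in place on the existing filesystem.
tune2fs -O extents,uninit_bg,dir_index /dev/sdaN
e2fsck -fD /dev/sdaN   # fsck is required after changing feature flags

# 3. Convert the unmounted ext4 filesystem to btrfs in place.
btrfs-convert /dev/sdaN

# 4. Mount with compression, as described in the report.
mount -o compress /dev/sdaN /mnt
```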

Re: Poor read performance on high-end server

2010-08-05 Thread Daniel J Blueman
On 5 August 2010 15:05, Freek Dijkstra wrote:
> Hi,
>
> We're interested in getting the highest possible read performance on a
> server. To that end, we have a high-end server with multiple solid state
> disks (SSDs). Since BtrFS outperformed other Linux filesystems, we chose
> that. Unfortunately

Re: Poor read performance on high-end server

2010-08-05 Thread Chris Mason
On Thu, Aug 05, 2010 at 04:05:33PM +0200, Freek Dijkstra wrote:
> Hi,
>
> We're interested in getting the highest possible read performance on a
> server. To that end, we have a high-end server with multiple solid state
> disks (SSDs). Since BtrFS outperformed other Linux filesystems, we chose
> t

Re: Raid0 with btrfs

2010-08-05 Thread Hubert Kario
On Thursday 05 August 2010 16:15:22 Leonidas Spyropoulos wrote:
> Hi all,
>
> I want to make a btrfs raid0 on 2 partitions of my PC.
> Until now I am using the mdadm tools to make a software raid of the 2
> partitions /dev/sde2, /dev/sdd2
> and then mkfs.ext4 the newly created /dev/md0 device.
> F

Raid0 with btrfs

2010-08-05 Thread Leonidas Spyropoulos
Hi all, I want to make a btrfs raid0 on 2 partitions of my PC. Until now I am using the mdadm tools to make a software raid of the 2 partitions /dev/sde2, /dev/sdd2, and then mkfs.ext4 the newly created /dev/md0 device. From a performance point of view, is it better to keep the configuration of mdadm
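The two setups being compared in this thread can be sketched side by side. Partition names are taken from the message; both variants destroy existing data on the partitions, so this is illustrative only:

```shell
# Variant A: mdadm raid0 with ext4 on top (the current setup).
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sde2 /dev/sdd2
mkfs.ext4 /dev/md0

# Variant B: let btrfs stripe natively, with no md layer in between.
mkfs.btrfs -d raid0 /dev/sde2 /dev/sdd2
```

Variant B removes one layer from the I/O path and lets btrfs see both devices directly, which is the trade-off the thread goes on to discuss.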

Poor read performance on high-end server

2010-08-05 Thread Freek Dijkstra
Hi, We're interested in getting the highest possible read performance on a server. To that end, we have a high-end server with multiple solid state disks (SSDs). Since BtrFS outperformed other Linux filesystems, we chose that. Unfortunately, there seems to be an upper boundary in the performance o
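Sequential read throughput of the kind quoted in this thread is commonly measured with a direct-I/O dd run. This is a generic sketch, not the poster's actual benchmark; the file path and block size are example values:

```shell
# Drop the page cache so the run measures the disks, not RAM.
sync
echo 3 > /proc/sys/vm/drop_caches

# Sequential read of a large pre-created test file, bypassing the page
# cache with O_DIRECT; dd prints the achieved throughput on completion.
dd if=/mnt/testfile of=/dev/null bs=1M iflag=direct
```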

[PATCH] btrfs: when mkfs.btrfs, add control to the multiple device which have mounted

2010-08-05 Thread Wang Shaoyan
If some devices were formatted as a fs and mounted to some directory, mkfs.btrfs does no check on any device except the first one. So even if a device is mounted, we can still mkfs it. I think when a device is in use, we should not be able to mkfs it. But there are some other problems, for example when we do offline fs
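The check this patch argues for — refusing to mkfs a device that is currently mounted — can be sketched in shell by consulting /proc/mounts. The helper name is mine and purely illustrative; the actual patch implements this inside mkfs.btrfs itself:

```shell
# Return success (0) if the given block device appears as a mount source
# in /proc/mounts, i.e. it is currently mounted somewhere.
is_mounted() {
    awk -v dev="$1" '$1 == dev { found = 1 } END { exit !found }' /proc/mounts
}

# Example guard before formatting (hypothetical device name):
# is_mounted /dev/sdb1 && { echo "refusing: /dev/sdb1 is mounted" >&2; exit 1; }
```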