On 5 August 2010 22:21, Freek Dijkstra wrote:
> Chris, Daniel and Mathieu,
>
> Thanks for your constructive feedback!
>
>> On Thu, Aug 05, 2010 at 04:05:33PM +0200, Freek Dijkstra wrote:
>>>          ZFS            BtrFS
>>> 1 SSD    256 MiByte/s   256 MiByte/s
>>> 2 SSDs   505 MiByte/s   504 MiByte/s
>>> 3 SSDs   736 MiByte/s   756 MiByte/s
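The thread does not say how these throughput figures were obtained; a minimal sequential-read sketch with dd along these lines would produce comparable numbers. The block size and the use of a temporary file (rather than the actual /dev/sdX devices) are assumptions made so the sketch is safe to run:

```shell
# Sequential-read throughput sketch (a guess at the methodology, not from the
# thread). A real measurement would read the raw device, e.g. if=/dev/sdd,
# and should drop the page cache first (echo 3 > /proc/sys/vm/drop_caches).
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null   # create a 64 MiB test file
sync
dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1       # read it back; dd prints the rate
rm -f "$f"
```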
Hello,
freek.dijks...@sara.nl (Freek Dijkstra) writes:
> [...]
>
> Here are the exact settings:
> ~# mkfs.btrfs -d raid0 /dev/sdd /dev/sde /dev/sdf /dev/sdg \
> /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm \
> /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds
> nodes
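A 16-device argument list like the one quoted above is easy to mistype; a small (purely illustrative) helper can build it from the drive letters used in the quoted command:

```shell
# Hypothetical convenience helper: assemble the device list sdd..sds for the
# quoted mkfs.btrfs -d raid0 invocation. The letters come from the command
# above; the helper itself is not part of the original message.
devs=""
for letter in d e f g h i j k l m n o p q r s; do
  devs="$devs /dev/sd$letter"
done
echo "mkfs.btrfs -d raid0$devs"
```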
Hi Miao,

OK, I'll try:
mkfs.ext3 /dev/sdaN
»used it as root file system quite a while«
tune2fs -O extents,uninit_bg,dir_index /dev/sdaN (converted to ext4)
»used it as root file system quite another while«
btrfs-convert /dev/sdaN (converted to btrfs)
»used it as root file system with "-compress"«
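The upgrade path described in that message, collected in one place. The commands are printed rather than executed here because they are destructive and need a real, unmounted partition; /dev/sdaN is the placeholder from the original message, and the e2fsck step is an addition (e2fsprogs expects a forced check after enabling uninit_bg), not something the message mentions:

```shell
# ext3 -> ext4 -> btrfs conversion sequence (sketch; /dev/sdaN is a placeholder)
steps=$(cat <<'EOF'
mkfs.ext3 /dev/sdaN                                # start with ext3
tune2fs -O extents,uninit_bg,dir_index /dev/sdaN   # enable ext4 features in place
e2fsck -f /dev/sdaN                                # added step: forced check after uninit_bg
btrfs-convert /dev/sdaN                            # convert the ext4 filesystem to btrfs
EOF
)
printf '%s\n' "$steps"
```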
On Thursday 05 August 2010 16:15:22, Leonidas Spyropoulos wrote:
> Hi all,
>
> I want to make a btrfs raid0 on 2 partitions of my PC.
> Until now I have been using the mdadm tools to make a software RAID of the
> 2 partitions /dev/sde2 and /dev/sdd2, and then mkfs.ext4 the newly created
> /dev/md0 device.
> From a performance point of view, is it better to keep the configuration of mdadm
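The two layouts being compared in that question, side by side. They are printed rather than executed here, since both would destroy any data on the partitions; the device names /dev/sde2 and /dev/sdd2 are taken from the message:

```shell
# Sketch of the two alternatives (commands shown, not run)
layouts=$(cat <<'EOF'
# current setup: stripe with mdadm, then ext4 on top
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sde2 /dev/sdd2
mkfs.ext4 /dev/md0
# alternative: btrfs striping built in, no md layer
mkfs.btrfs -d raid0 /dev/sde2 /dev/sdd2
EOF
)
printf '%s\n' "$layouts"
```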
Hi,

We're interested in getting the highest possible read performance on a
server. To that end, we have a high-end server with multiple solid-state
disks (SSDs). Since BtrFS outperformed other Linux filesystems, we chose
it. Unfortunately, there seems to be an upper boundary in the
performance o
If some devices were already formatted with a filesystem and mounted to
some directory, mkfs.btrfs checks no device except the first one. So even
if a device is mounted, we can still mkfs it. I think when a device is in
use, we should not be able to mkfs it.
But there are some other problems, for example when we do offline fs
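The check being proposed can be sketched in a few lines of shell: before running mkfs, refuse if the target device already appears as a mount source in /proc/mounts. This is an illustration of the behaviour, not btrfs-progs code, and /dev/sdz1 is a hypothetical device name:

```shell
# Sketch: refuse to mkfs a device that is currently mounted.
is_mounted() {
    # exit status 0 if $1 is listed as the source of a mount in /proc/mounts
    awk -v dev="$1" '$1 == dev { found = 1 } END { exit !found }' /proc/mounts
}

dev=/dev/sdz1   # hypothetical target device
if is_mounted "$dev"; then
    echo "refusing to mkfs $dev: it is mounted"
else
    echo "$dev is not mounted, proceeding"
fi
```

A real implementation would also need to catch devices held by other means (swap, md/LVM members), which is the "other problems" part of the message.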