On Sat, Jan 30, 2010 at 7:36 AM, jim owens wrote:
> So Josef Bacik has sent patches to btrfs and btrfs-progs that
> allow you to see raid-mode data and metadata adjusted values
> with btrfs-ctrl -i instead of using "df".
>
> These patches have not been merged yet so you will have to pull
> them an
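(a minimal sketch of what that usage might look like once the patches are applied; the "-i" flag is taken from jim's message above, while the /mnt argument and exact invocation are assumptions, not confirmed:)

# raid-adjusted data/metadata space info via the patched tool:
btrfs-ctrl -i /mnt
# versus the raw, unadjusted totals:
df -h /mnt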
On Sat, Feb 6, 2010 at 5:16 AM, Goffredo Baroncelli wrote:
>> anyone on when/why to use different RAID geometries for data & metadata?
>>
>
> I expected that the sizes of data and meta-data differ by several orders
> of magnitude, so I can choose a different trade-off between
> space/speed/reliability for each.
On Sat, Feb 6, 2010 at 5:10 AM, Daniel J Blueman wrote:
> These proc entries affect just array reconstruction, not general I/O
> performance/throughput, so they affect just an edge case of applications
> requiring maximum-latency/minimum-throughput guarantees.
although i'd first seen the perf hit at the
i've a 4-drive array connected via a PCIe SATA card.
per the OS (openSUSE) defaults, md RAID I/O performance was being limited by,
cat /proc/sys/dev/raid/speed_limit_min
1000
cat /proc/sys/dev/raid/speed_limit_max
20
changing,
echo "dev.raid.speed_limit_min=10" >> /etc/sysctl.c
anyone on when/why to use different RAID geometries for data & metadata?
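(coming back to the sysctl tuning above, a minimal sketch; the 50000 figure is purely illustrative, values are in KB/s:)

# raise the reconstruction/resync speed floor at runtime:
sysctl -w dev.raid.speed_limit_min=50000
# watch the effect on a running resync:
cat /proc/mdstat
# persist across reboots (standard sysctl config file):
echo "dev.raid.speed_limit_min=50000" >> /etc/sysctl.conf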
On Sun, Jan 24, 2010 at 8:38 AM, 0bo0 <0.bugs.onl...@gmail.com> wrote:
> hi
>
> On Sun, Jan 24, 2010 at 3:28 AM, RK wrote:
>> try this article "Linux Don't Need No Stinkin' Z
i created an array,
mkfs.btrfs -L TEST -m raid10 -d raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd
btrfs-show
Label: TEST uuid: 85aa9ac8-0089-4dd3-b8b2-3c0cbb96c924
Total devices 4 FS bytes used 28.00KB
devid 3 size 931.51GB used 2.01GB path /dev/sdc
On Fri, Jan 29, 2010 at 3:46 PM, jim owens wrote:
> but it is the only method
> that can remain accurate under the mixed raid modes possible
> on a per-file-basis in btrfs.
can you clarify, then, the intention/goal behind cmason's
"df is lying. The total bytes in the FS include all 4 drives. I
> For me, it looks as if 2.03GB is way smaller than 931.51GB (2 << 931), no?
> Everything seems to be fine here.
gagh! i "saw" TB, not GB. 8-/
> And regarding your original mail: it seems that df is still lying about the
> size of the btrfs fs, check
> http://www.mail-archive.com/linux-btrfs
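(for the archive, the arithmetic behind the "df is lying" remark: df reports the sum of all members, 4 x 931.51GB = 3726.04GB, i.e. roughly 3.6TB total, but with -d raid10 every block is mirrored, so only about half of that, ~1863GB, is actually writable.)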
On Mon, Jan 25, 2010 at 10:19 AM, Goffredo Baroncelli wrote:
>> /dev/sda /mnt btrfs device=/dev/sdb,device=/dev/sdc,device=/dev/sdd 1 2
>
> Yes; it works for me.
thanks for the confirmation.
verifying, with that in /etc/fstab, after boot i see,
mount | grep sda
/dev/sda
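(equivalently, the same thing at mount time rather than via fstab; a sketch, with /mnt assumed as the mount point:)

mount -t btrfs -o device=/dev/sdb,device=/dev/sdc,device=/dev/sdd /dev/sda /mnt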
On Sun, Jan 24, 2010 at 3:35 PM, Leszek Ciesielski wrote:
>> how would that, then, get handled for automount @ boot via fstab? i
>> guess that the scan needs to get done as well ...
>> --
>
> Please see this discussion:
> http://thread.gmane.org/gmane.comp.file-systems.btrfs/4126/focus=4187
Thanks
noticing from above
>> ... size 931.51GB used 2.03GB ...
'used' more than the 'size'?
more confused ...
hi
On Sun, Jan 24, 2010 at 12:02 AM, Goffredo Baroncelli wrote:
> On Sunday 24 January 2010, 0bo0 wrote:
>> after a simple reboot,
> ^^
> Have you done
>
> # btrfsctl -a
>
> before mounting the filesystem? This command scans all the block devices
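(putting that together, the sequence that should work after a reboot; /mnt is assumed:)

btrfsctl -a
mount /dev/sda /mnt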
hi
On Sun, Jan 24, 2010 at 3:28 AM, RK wrote:
> try this article "Linux Don't Need No Stinkin' ZFS: BTRFS Intro &
> Benchmarks"
> http://www.linux-mag.com/id/7308/3/
> , there is a benchmark table and speed analysis (very informative), but
> all the benchmarks are done with same -m and -d mkfs.bt
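(for illustration, mixed -m/-d geometries of the kind the article doesn't benchmark; these are meant as sketches of the syntax, and the device choices are arbitrary:)

# mirrored metadata, striped data:
mkfs.btrfs -m raid1 -d raid0 /dev/sda /dev/sdb
# redundant metadata, unreplicated data:
mkfs.btrfs -m raid10 -d single /dev/sda /dev/sdb /dev/sdc /dev/sdd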
after a simple reboot,
btrfs-show
Label: TEST uuid: 2ac85206-2d88-47d7-a1e7-a93d80b199f8
Total devices 4 FS bytes used 28.00KB
devid 1 size 931.51GB used 2.03GB path /dev/sda
devid 2 size 931.51GB
I created a btrfs RAID-10 array across 4 drives,
mkfs.btrfs -L TEST -m raid10 -d raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd
btrfs-show
Label: TEST uuid: 2ac85206-2d88-47d7-a1e7-a93d80b199f8
Total devices 4 FS bytes used 28.00KB
devid 1 size 931.51GB us