Roman Mamedov posted on Sun, 09 Feb 2014 04:10:50 +0600 as excerpted:
> If you need to perform a btrfs-specific operation, you can easily use
> the btrfs-specific tools to prepare for it, specifically use "btrfs fi
> df" which can provide every imaginable interpretation of free
> space esti
Johan Kröckel posted on Sat, 08 Feb 2014 12:09:46 +0100 as excerpted:
> Ok, I did nuke it now and created the fs again using the 3.12 kernel. So far
> so good. Runs fine.
> Finally, I know it's kind of off-topic, but can someone help me interpret
> this (I think this is the error in the SMART log which
Kernel 3.12.7, Python 2.7.6-5, Debian testing/unstable, bedup installed as per
pip install --user bedup
I tried installing the git version, but the error is the same.
Anyway, with the other bedup, I get:
gargamel:/mnt/dshelf2/backup# bedup show
Traceback (most recent call last):
File "/usr/loca
2014-02-08 23:46 GMT+08:00 Wang Shilong :
> From: Wang Shilong
>
> This reverts commit 41ce9970a8a6a362ae8df145f7a03d789e9ef9d2.
> Previously I was thinking we could use a readonly root's commit root
> safely, but that is not true: a readonly root may be COWed in the
> following cases.
>
> 1. snapshot s
On Feb 8, 2014, at 7:21 PM, Chris Murphy wrote:
> we don't have a top level switch for variable raid on a volume yet
This isn't good wording. We don't have a controllable way to set variable RAID
levels. The interrupted-convert model I'd consider not controllable.
Chris Murphy--
On Feb 8, 2014, at 6:55 PM, Roman Mamedov wrote:
>
> Not sure what exactly becomes problematic if a 2-device RAID1 tells the user
> they can store 1 TB of their data on it, and is no longer lying about the
> possibility of storing 2 TB on it, as it currently does.
>
> Two 1TB disks in RAID1.
OK but whil
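The RAID1 accounting being argued over here can be sketched in a few lines. This is my own illustration, not code from any poster: for btrfs RAID1, every extent is stored twice, so usable space is roughly half the raw total, capped so that every copy can land on a second device. Real allocation works in chunks and is more subtle, but the two-disk cases in this thread come out as the discussion expects.

```python
def raid1_usable(sizes):
    """Rough usable capacity of a btrfs RAID1 filesystem, in the same
    units as `sizes`. Each extent is stored twice, so usable space is
    about half the raw total -- but every copy needs a second device,
    so capacity on the largest device beyond the sum of the others is
    unusable. A simplification: real allocation happens in chunks."""
    total = sum(sizes)
    largest = max(sizes)
    return min(total // 2, total - largest)

# Two 1TB disks: df under the proposed patch would say ~1TB, not ~2TB.
print(raid1_usable([1000, 1000]))  # 1000
# A 1TB disk paired with a 500GB disk: only 500GB can be mirrored.
print(raid1_usable([1000, 500]))   # 500
```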
On Sun, 09 Feb 2014 00:17:29 +0100
Kai Krakow wrote:
> "Dear employees,
>
> Please keep in mind that when you run out of space on the fileserver
> '\\DepartmentC', when you free up space in the directory '\PublicStorage7'
> the free space you gain on '\StorageArchive' is only one third of the
On Sun, 09 Feb 2014 00:32:47 +0100
Kai Krakow wrote:
> When I started to use unix, df returned blocks, not bytes. Without your
> proposed patch, it does that right. With your patch, it does it wrong.
It returns total/used/available space that is usable/used/available by/for
user data. Whether t
On Feb 8, 2014, at 3:01 PM, Hendrik Friedel wrote:
> Hello,
>
>> Ok.
>> I think I do/did have some symptoms, but I cannot exclude other reasons:
>> - High load without high CPU usage (IO was the bottleneck)
>> - Just now: transfer from one directory to the other on the same
>> subvolume (from /
Roman Mamedov schrieb:
> UNIX 'df' and the 'statfs' call on the other hand should keep the behavior
> people are accustomed to relying on since the 1970s.
When I started to use unix, df returned blocks, not bytes. Without your
proposed patch, it does that right. With your patch, it does it wrong.
cwillu schrieb:
> Everyone who has actually looked at what the statfs syscall returns
> and how df (and everyone else) uses it, keep talking. Everyone else,
> go read that source code first.
>
> There is _no_ combination of values you can return in statfs which
> will not be grossly misleading
Roman Mamedov schrieb:
>> It should show the raw space available. Btrfs also supports compression
>> and doesn't try to be smart about how much compressed data would fit in
>> the free space of the drive. If one is using RAID1, it's supposed to fill
>> up with a rate of 2:1. If one is using compr
Everyone who has actually looked at what the statfs syscall returns
and how df (and everyone else) uses it, keep talking. Everyone else,
go read that source code first.
There is _no_ combination of values you can return in statfs which
will not be grossly misleading in some common scenario that s
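To make the statfs argument concrete, here is a minimal sketch (mine, not from any poster) of where df's columns come from, using Python's os.statvfs wrapper over the same syscall family. The kernel hands back opaque block counts plus a scale factor; what those counts *mean* on a multi-profile btrfs volume is exactly the debate in this thread.

```python
import os

# df's numbers come straight from statfs/statvfs: block counts scaled
# by the fragment size. There is no separate "bytes" interface -- the
# filesystem decides what the counts mean.
st = os.statvfs("/")

block = st.f_frsize              # bytes per reported block
total = st.f_blocks * block      # df "Size"
free  = st.f_bfree  * block      # free space, including root's reserve
avail = st.f_bavail * block      # df "Avail" (what non-root users get)
used  = total - free             # df "Used"

print(f"Size {total}  Used {used}  Avail {avail}")
```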
In case we do not refill, we can overwrite the cur pointer from prio_head
with one from the non-prioritized head, which looks like something that was
not intended.
This change makes us always take works from prio_head first, until it is
empty.
Signed-off-by: Stanislaw Gruszka
---
I found this by reading cod
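The intent of the fix can be shown with a toy model (illustrative names, not the kernel's actual structures): a worker must drain the prioritized list completely before touching the normal list, so a refill of the normal list can never displace a pending prioritized entry.

```python
from collections import deque

class WorkQueue:
    """Toy model of the two-list scheme the patch fixes: prioritized
    works are always taken first, until prio_head is empty."""
    def __init__(self):
        self.prio_head = deque()
        self.head = deque()

    def add(self, work, prio=False):
        (self.prio_head if prio else self.head).append(work)

    def next_work(self):
        # Take from prio_head first, until it is empty.
        if self.prio_head:
            return self.prio_head.popleft()
        if self.head:
            return self.head.popleft()
        return None

q = WorkQueue()
q.add("normal-1")
q.add("urgent-1", prio=True)
q.add("normal-2")
q.add("urgent-2", prio=True)
order = [q.next_work() for _ in range(4)]
print(order)  # prioritized works first, FIFO within each list
```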
On Sat, 08 Feb 2014 22:35:40 +0100
Kai Krakow wrote:
> Imagine the future: Btrfs supports different RAID levels per subvolume. We
> need to figure out where to place a new subvolume. I need raw numbers for
> it. Df won't tell me that now. Things become very difficult now.
If you need to perfor
Hello,
Ok.
I think I do/did have some symptoms, but I cannot exclude other reasons:
- High load without high CPU usage (IO was the bottleneck)
- Just now: transfer from one directory to the other on the same
subvolume (from /mnt/subvol/A/B to /mnt/subvol/A) I get 1.2 MB/s instead
of >60.
- For so
Martin Steigerwald schrieb:
> While I understand that there is *never* a guarantee that a given free
> space can really be allocated by a process because other processes can
> allocate space as well in the meantime, and while I understand that it's
> difficult to provide an exac
Chris Murphy schrieb:
>
> On Feb 6, 2014, at 11:08 PM, Roman Mamedov wrote:
>
>> And what
>> if I am accessing that partition on a server via a network CIFS/NFS share
>> and don't even *have a way to find out* any of that.
>
> That's the strongest argument. And if the user is using
> Explore
Hugo Mills schrieb:
> On Sat, Feb 08, 2014 at 05:33:10PM +0600, Roman Mamedov wrote:
>> On Fri, 07 Feb 2014 21:32:42 +0100
>> Kai Krakow wrote:
>>
>> > It should show the raw space available. Btrfs also supports compression
>> > and doesn't try to be smart about how much compressed data would f
Hello, David, Fengguang, Chris.
On Fri, Feb 07, 2014 at 01:13:06PM -0800, David Rientjes wrote:
> On Fri, 7 Feb 2014, Fengguang Wu wrote:
>
> > On Fri, Feb 07, 2014 at 02:13:59AM -0800, David Rientjes wrote:
> > > On Fri, 7 Feb 2014, Fengguang Wu wrote:
> > >
> > > > [1.625020] BTRFS: selfte
Hello,
I have a large file system that has been growing. We've resized it a
couple of times with the following approach:
lvextend -L +800G /dev/raid/virtual_machines
btrfs filesystem resize +800G /vms
I think the FS started out at 200G, we increased it by 200GB a time or
two, then by 80
Hi,
I added a 2nd device and 'btrfs balance' crashed (kernel oops) halfway
through; now I can only read the fs from a Rawhide live DVD, but even
that can't fix the fs (finish the balance, or remove the 2nd device to
try again). I'd be grateful for any advice on getting back to a working
btrfs filesy
On 02/07/2014 05:40 AM, Roman Mamedov wrote:
> On Thu, 06 Feb 2014 20:54:19 +0100
> Goffredo Baroncelli wrote:
>
[...]
Even though I am not entirely convinced, I updated Roman's PoC to take
into account all the RAID levels.
The test filesystem is composed of 7 51GB disks. Here are my "df" results
Test for a btrfs data corruption when using compressed files/extents.
In certain cases, it was possible for reads to return random data
(content from a previously used page) instead of zeroes. This also
caused partial updates to those regions that were supposed to be filled
with zeroes to save r
From: Wang Shilong
Btrfs send assumes a readonly root won't change; let's skip the readonly root.
Signed-off-by: Wang Shilong
---
fs/btrfs/inode.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 1af34d0..e8dfd83 100644
--- a/fs/btrfs/inode.c
+++
From: Wang Shilong
This reverts commit 41ce9970a8a6a362ae8df145f7a03d789e9ef9d2.
Previously I was thinking we could use a readonly root's commit root
safely, but that is not true: a readonly root may be COWed in the
following cases.
1. snapshot send root will cow source root.
2. balance, device operatio
When using a mix of compressed file extents and prealloc extents, it
is possible to fill a page of a file with random garbage data from
some unrelated previous use of the page, instead of a sequence of zeroes.
A simple sequence of steps to get into such a case, taken from the test
case I made for x
On 02/07/2014 05:40 AM, Roman Mamedov wrote:
> On Thu, 06 Feb 2014 20:54:19 +0100
> Goffredo Baroncelli wrote:
>
[...]
Even though I am not entirely convinced, I updated Roman's PoC to take
into account all the RAID levels.
I performed some tests with 7 48.8GB disks. Here are my "df" results
Pr
> If you disable CONFIG_BTRFS_FS_RUN_SANITY_TESTS, does it still crash?
Good idea! I've queued test jobs for that config. However, sorry, I'll
be offline for the next 2 days, so please expect some delays.
Thanks,
Fengguang
On Sat, Feb 08, 2014 at 05:33:10PM +0600, Roman Mamedov wrote:
> On Fri, 07 Feb 2014 21:32:42 +0100
> Kai Krakow wrote:
>
> > It should show the raw space available. Btrfs also supports compression and
> > doesn't try to be smart about how much compressed data would fit in the
> > free spa
On Fri, 07 Feb 2014 21:32:42 +0100
Kai Krakow wrote:
> It should show the raw space available. Btrfs also supports compression and
> doesn't try to be smart about how much compressed data would fit in the free
> space of the drive. If one is using RAID1, it's supposed to fill up with a
> rate
On Fri, 7 Feb 2014 12:08:12 +0600
Roman Mamedov wrote:
> > Earlier conventions would have stated Size ~900GB, and Avail ~900GB. But
> > that's not exactly true either, is it?
>
> Much better, and matching the user expectations of how RAID1 should behave,
> without a major "gotcha" blowing up in
Ok, I did nuke it now and created the fs again using the 3.12 kernel. So
far so good. Runs fine.
Finally, I know it's kind of off-topic, but can someone help me
interpret this (I think this is the error in the SMART log which
started the whole mess)?
Error 1 occurred at disk power-on lifetime: 2576 hour
Duncan <1i5t5.dun...@cox.net> schrieb:
[...]
Difficult to twist your mind around that, but well explained. ;-)
> A snapshot thus looks much like a crash in terms of NOCOW file integrity
> since the blocks of a NOCOW file are simply snapshotted in-place, and
> there's already no checksumming or fi
Thanks for the review, Dave!
Comments inline.
On 02/07/2014 11:49 PM, Dave Chinner wrote:
On Fri, Feb 07, 2014 at 06:14:45PM +0100, Koen De Wit wrote:
Tests Btrfs filesystems with all possible metadata block sizes, by
setting large extended attributes on files.
Signed-off-by: Koen De Wit
The
Tests Btrfs filesystems with all possible metadata block sizes, by
setting large extended attributes on files.
Signed-off-by: Koen De Wit
---
v1->v2:
- Fix indentation: 8 spaces instead of 4
- Move _scratch_unmount to end of loop, add _check_scratch_fs
- Sending failure messages of