Thanks for the review, Dave!
Comments inline.
On 02/07/2014 11:49 PM, Dave Chinner wrote:
On Fri, Feb 07, 2014 at 06:14:45PM +0100, Koen De Wit wrote:
Tests Btrfs filesystems with all possible metadata block sizes, by
setting large extended attributes on files.
Signed-off-by: Koen De Wit koen.de@oracle.com
---
v1-v2:
- Fix indentation: 8 spaces instead of 4
- Move _scratch_unmount to end of loop, add _check_scratch_fs
-
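The loop the patch describes can be sketched roughly as follows (a minimal sketch, not the actual xfstests code; the $SCRATCH_DEV/$SCRATCH_MNT names, the 4K page size, and the xattr value length are assumptions):

```shell
#!/bin/sh
# Enumerate the power-of-two nodesizes btrfs accepts, assuming a 4K page
# size (valid nodesizes run from the page size up to 64K).
sizes=""
n=4096
while [ "$n" -le 65536 ]; do
    sizes="$sizes $n"
    n=$((n * 2))
done
echo "nodesizes to test:$sizes"

# The destructive part only runs when a scratch device is configured
# (hypothetical stand-ins for the xfstests SCRATCH_DEV/SCRATCH_MNT setup).
if [ -b "${SCRATCH_DEV:-}" ]; then
    for nodesize in $sizes; do
        mkfs.btrfs -f -n "$nodesize" "$SCRATCH_DEV" >/dev/null
        mount "$SCRATCH_DEV" "$SCRATCH_MNT"
        touch "$SCRATCH_MNT/testfile"
        # A large xattr value, scaled to the metadata block size.
        setfattr -n user.large \
            -v "$(head -c $((nodesize / 2)) /dev/zero | tr '\0' 'x')" \
            "$SCRATCH_MNT/testfile"
        umount "$SCRATCH_MNT"       # matches the v2 change: unmount at loop end
        btrfs check "$SCRATCH_DEV" >/dev/null   # _check_scratch_fs equivalent
    done
fi
```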
Duncan 1i5t5.dun...@cox.net schrieb:
[...]
Difficult to twist your mind around that but well explained. ;-)
A snapshot thus looks much like a crash in terms of NOCOW file integrity
since the blocks of a NOCOW file are simply snapshotted in-place, and
there's already no checksumming or file
Ok, I did nuke it now and created the fs again using 3.12 kernel. So
far so good. Runs fine.
Finally, I know it's kind of off-topic, but can someone help me
interpret this (I think this is the error in the SMART log which
started the whole mess)?
Error 1 occurred at disk power-on lifetime: 2576
On Fri, 7 Feb 2014 12:08:12 +0600
Roman Mamedov r...@romanrm.net wrote:
Earlier conventions would have stated Size ~900GB, and Avail ~900GB. But
that's not exactly true either, is it?
Much better, and matching the user expectations of how RAID1 should behave,
without a major gotcha
On Fri, 07 Feb 2014 21:32:42 +0100
Kai Krakow hurikhan77+bt...@gmail.com wrote:
It should show the raw space available. Btrfs also supports compression and
doesn't try to be smart about how much compressed data would fit in the free
space of the drive. If one is using RAID1, it's supposed to
On Sat, Feb 08, 2014 at 05:33:10PM +0600, Roman Mamedov wrote:
On Fri, 07 Feb 2014 21:32:42 +0100
Kai Krakow hurikhan77+bt...@gmail.com wrote:
It should show the raw space available. Btrfs also supports compression and
doesn't try to be smart about how much compressed data would fit in
If you disable CONFIG_BTRFS_FS_RUN_SANITY_TESTS, does it still crash?
Good idea! I've queued test jobs for that config. However, sorry,
I'll be offline for the next 2 days, so please expect some delays.
Thanks,
Fengguang
On 02/07/2014 05:40 AM, Roman Mamedov wrote:
On Thu, 06 Feb 2014 20:54:19 +0100
Goffredo Baroncelli kreij...@libero.it wrote:
[...]
Even if I am not entirely convinced, I updated Roman's PoC in order
to take into account all the RAID levels.
I performed some tests with 7 48.8GB disks. Here my
When using a mix of compressed file extents and prealloc extents, it
is possible to fill a page of a file with random, garbage data from
some unrelated previous use of the page, instead of a sequence of zeroes.
A simple sequence of steps to get into such a case, taken from the test
case I made for
From: Wang Shilong wangsl.f...@cn.fujitsu.com
This reverts commit 41ce9970a8a6a362ae8df145f7a03d789e9ef9d2.
Previously I was thinking we could use a read-only root's commit root
safely, but that is not true; a read-only root may be COWed in the
following cases:
1. snapshot send root will COW the source
Test for a btrfs data corruption when using compressed files/extents.
In certain cases, it was possible for reads to return random data
(content from a previously used page) instead of zeroes. This also
caused partial updates to those regions that were supposed to be filled
with zeroes to save
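The mix of extents described above can be reproduced along these lines (a sketch only, with made-up offsets and a hypothetical $DEV; the actual test case's sequence of steps differs):

```shell
#!/bin/sh
# Hypothetical scratch device; nothing destructive runs unless it is set.
DEV=${DEV:-}
MNT=/mnt/test
if [ -b "$DEV" ]; then
    mkfs.btrfs -f "$DEV" >/dev/null
    mount -o compress "$DEV" "$MNT"
    # Write a compressible extent, then preallocate a region right after it.
    xfs_io -f -c "pwrite -S 0xaa 0 128k" -c "falloc 128k 128k" "$MNT/foo"
    sync
    echo 3 > /proc/sys/vm/drop_caches
    # On affected kernels, reading the preallocated range could return
    # stale page contents instead of zeroes:
    xfs_io -c "pread -v 128k 4k" "$MNT/foo"
    umount "$MNT"
else
    echo "no scratch device set; skipping reproducer"
fi
```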
On 02/07/2014 05:40 AM, Roman Mamedov wrote:
On Thu, 06 Feb 2014 20:54:19 +0100
Goffredo Baroncelli kreij...@libero.it wrote:
[...]
Even if I am not entirely convinced, I updated Roman's PoC in order
to take into account all the RAID levels.
The test filesystem is composed of 7 51GB disks.
Hi,
I added a 2nd device and 'btrfs balance' crashed (kernel oops) halfway
through, now I can only read the fs from a rawhide livedvd, but even
that can't fix the fs (finish balance, or remove 2nd device to try
again). I'd be grateful for any advice on getting back to a working
btrfs
Hello,
I have a large file system that has been growing. We've resized it a
couple of times with the following approach:
lvextend -L +800G /dev/raid/virtual_machines
btrfs filesystem resize +800G /vms
I think the FS started out at 200G, we increased it by 200GB a time or
two, then by
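The grow procedure quoted above, with the verification step spelled out (paths are the ones from the message; note that btrfs can also be told to use the whole device with 'max' instead of an explicit delta):

```shell
#!/bin/sh
# LV and mount point from the message above; run as root on the real system.
LV=/dev/raid/virtual_machines
MNT=/vms
if [ -b "$LV" ]; then
    lvextend -L +800G "$LV"
    # btrfs resizes online; note the argument is the mount point, not the LV.
    btrfs filesystem resize +800G "$MNT"
    # Or grow to fill whatever the LV now provides:
    # btrfs filesystem resize max "$MNT"
    btrfs filesystem show "$MNT"
else
    echo "dry run: would grow $LV and resize $MNT by +800G"
fi
```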
Hello, David, Fengguang, Chris.
On Fri, Feb 07, 2014 at 01:13:06PM -0800, David Rientjes wrote:
On Fri, 7 Feb 2014, Fengguang Wu wrote:
On Fri, Feb 07, 2014 at 02:13:59AM -0800, David Rientjes wrote:
On Fri, 7 Feb 2014, Fengguang Wu wrote:
[1.625020] BTRFS: selftest: Running
Hugo Mills h...@carfax.org.uk schrieb:
On Sat, Feb 08, 2014 at 05:33:10PM +0600, Roman Mamedov wrote:
On Fri, 07 Feb 2014 21:32:42 +0100
Kai Krakow hurikhan77+bt...@gmail.com wrote:
It should show the raw space available. Btrfs also supports compression
and doesn't try to be smart about
Chris Murphy li...@colorremedies.com schrieb:
On Feb 6, 2014, at 11:08 PM, Roman Mamedov r...@romanrm.net wrote:
And what
if I am accessing that partition on a server via a network CIFS/NFS share
and don't even *have a way to find out* any of that.
That's the strongest argument. And
Martin Steigerwald mar...@lichtvoll.de schrieb:
While I understand that there is *never* a guarantee that a given free
space can really be allocated by a process, because other processes can
allocate space as well in the meantime, and while I understand that it's
difficult to provide an accurate
Hello,
Ok.
I think I do/did have some symptoms, but I cannot exclude other reasons:
- High load without high CPU usage (I/O was the bottleneck)
- Just now: transferring from one directory to the other on the same
subvolume (from /mnt/subvol/A/B to /mnt/subvol/A) I get 1.2MB/s instead
of 60.
- For
On Sat, 08 Feb 2014 22:35:40 +0100
Kai Krakow hurikhan77+bt...@gmail.com wrote:
Imagine the future: Btrfs supports different RAID levels per subvolume. We
need to figure out where to place a new subvolume. I need raw numbers for
it. Df won't tell me that now. Things become very difficult
If we do not refill, we can overwrite the cur pointer from prio_head
with one from the non-prioritized head, which looks like something that
was not intended.
This change makes us always take works from prio_head first until it
is empty.
Signed-off-by: Stanislaw Gruszka stf...@wp.pl
---
I found this
Everyone who has actually looked at what the statfs syscall returns
and how df (and everyone else) uses it, keep talking. Everyone else,
go read that source code first.
There is _no_ combination of values you can return in statfs which
will not be grossly misleading in some common scenario that
Roman Mamedov r...@romanrm.net schrieb:
It should show the raw space available. Btrfs also supports compression
and doesn't try to be smart about how much compressed data would fit in
the free space of the drive. If one is using RAID1, it's supposed to fill
up with a rate of 2:1. If one is
cwillu cwi...@cwillu.com schrieb:
Everyone who has actually looked at what the statfs syscall returns
and how df (and everyone else) uses it, keep talking. Everyone else,
go read that source code first.
There is _no_ combination of values you can return in statfs which
will not be grossly
Roman Mamedov r...@romanrm.net schrieb:
UNIX 'df' and the 'statfs' call, on the other hand, should keep the
behavior people have been accustomed to rely on since the 1970s.
When I started to use unix, df returned blocks, not bytes. Without your
proposed patch, it does that right. With your patch, it
On Feb 8, 2014, at 3:01 PM, Hendrik Friedel hend...@friedels.name wrote:
Hello,
Ok.
I think I do/did have some symptoms, but I cannot exclude other reasons:
- High load without high CPU usage (I/O was the bottleneck)
- Just now: transferring from one directory to the other on the same
On Sun, 09 Feb 2014 00:32:47 +0100
Kai Krakow hurikhan77+bt...@gmail.com wrote:
When I started to use unix, df returned blocks, not bytes. Without your
proposed patch, it does that right. With your patch, it does it wrong.
It returns total/used/available space that is usable/used/available
On Sun, 09 Feb 2014 00:17:29 +0100
Kai Krakow hurikhan77+bt...@gmail.com wrote:
Dear employees,
Please keep in mind that when you run out of space on the fileserver
'\\DepartmentC', when you free up space in the directory '\PublicStorage7'
the free space you gain on '\StorageArchive' is
On Feb 8, 2014, at 6:55 PM, Roman Mamedov r...@romanrm.net wrote:
Not sure what exactly becomes problematic if a 2-device RAID1 tells the user
they can store 1 TB of their data on it, and is no longer lying about the
possibility of storing 2 TB on it, as it currently does.
Two 1TB disks in RAID1.
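The arithmetic behind that example, spelled out (illustrative numbers: two 1 TB devices, with RAID1 storing every extent twice):

```shell
#!/bin/sh
# Two 1 TB devices in btrfs RAID1: raw capacity is the sum of the devices,
# but every extent is written twice, so usable capacity is half the raw.
dev_bytes=$((1000 * 1000 * 1000 * 1000))   # one 1 TB device
raw_bytes=$((2 * dev_bytes))
usable_bytes=$((raw_bytes / 2))
echo "raw: $raw_bytes  usable: $usable_bytes"
```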
On Feb 8, 2014, at 7:21 PM, Chris Murphy li...@colorremedies.com wrote:
we don't have a top level switch for variable raid on a volume yet
This isn't good wording. We don't have a controllable way to set variable raid
levels. The interrupted convert model I'd consider not controllable.
2014-02-08 23:46 GMT+08:00 Wang Shilong wangshilong1...@gmail.com:
From: Wang Shilong wangsl.f...@cn.fujitsu.com
This reverts commit 41ce9970a8a6a362ae8df145f7a03d789e9ef9d2.
Previously I was thinking we could use a read-only root's commit root
safely, but that is not true; a read-only root may be
kernel 3.12.7, python 2.7.6-5, debian testing/unstable, bedup installed as per
pip install --user bedup
I tried installing the git version, but the error is the same:
Anyway, with the other bedup, I get:
gargamel:/mnt/dshelf2/backup# bedup show
Traceback (most recent call last):
File
Johan Kröckel posted on Sat, 08 Feb 2014 12:09:46 +0100 as excerpted:
Ok, I did nuke it now and created the fs again using 3.12 kernel. So far
so good. Runs fine.
Finally, I know it's kind of off-topic, but can someone help me interpret
this (I think this is the error in the SMART log which
Roman Mamedov posted on Sun, 09 Feb 2014 04:10:50 +0600 as excerpted:
If you need to perform a btrfs-specific operation, you can easily use
the btrfs-specific tools to prepare for it, specifically 'btrfs fi
df', which could provide every imaginable interpretation of free
space estimate