Dear All,
I was wondering what happened with the patch posted by Andrea Mazzoleni
back in February 2014 (this thread:
http://thread.gmane.org/gmane.linux.kernel/1654735).
Why wasn't it added to the code? Something missing/wrong?
In my opinion the posted patch is awesome and would enable a
'btrfs fi df' needs exactly one argument, the mount point,
and due to the introduction of human-readable options, some checks are no
longer valid; the new optind can point to the terminating NULL string.
For example, you can run 'btrfs fi df' without any argument, and it will
fail with ERROR: can't access
Can I downgrade the kernel from 3.17.1 to latest 3.10 if I have a
btrfs partition formatted and used on 3.17.1?
I mean, is there something that could go wrong with the fs if suddenly
I use an older kernel?
I want to downgrade because last night we had some 1200 oopses in one
hour on 3.17
'btrfs fi df' needs exactly one argument, the mount point,
but as of 3.17 we can run 'btrfs fi df' without any argument,
and it will fail with ERROR: can't access '%s', which means
the argument count check does not do what it should.
The bug is caused by manually modifying optind and using check_argc_max()
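The pattern at issue, verifying that exactly one positional argument remains after option parsing, can be sketched in shell. This is a hypothetical illustration using getopts; the names are invented and this is not the actual btrfs-progs code, which is C and counts arguments relative to optind:

```shell
# Hypothetical sketch: after option parsing, exactly one
# positional argument (the mount point) must remain.
check_one_arg() {
    OPTIND=1
    while getopts "b" opt "$@"; do :; done   # consume options like -b
    shift $((OPTIND - 1))                    # drop the parsed options
    if [ "$#" -ne 1 ]; then
        echo "ERROR: exactly one mount point required, got $#" >&2
        return 1
    fi
    echo "mount point: $1"
}

check_one_arg -b /mnt        # one argument remains: accepted
check_one_arg -b || true     # nothing remains: rejected with an error
```

Checking the count of what remains after option parsing (argc - optind in the C code), rather than the raw argument count, is what keeps the check valid once options are consumed.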
Move the logic from the snapshot creation ioctl into send. This avoids
doing the transaction commit if send isn't used, and ensures that if
a crash/reboot happens after the transaction commit that created the
snapshot and before the transaction commit that switched the commit
root, send will not
If right after starting the snapshot creation ioctl we perform a write against a
file followed by a truncate, with both operations increasing the file's size, we
can get a snapshot tree that reflects a state of the source subvolume's tree
where
the file truncation happened but the write operation
Regression test for a btrfs issue where if right after the snapshot
creation ioctl started, a file write followed by a file truncate
happened, with both operations increasing the file's size, the created
snapshot would capture an inconsistent state of the file system tree.
That state reflected the
David Sterba posted on Mon, 20 Oct 2014 18:34:03 +0200 as excerpted:
On Thu, Oct 16, 2014 at 01:33:37PM +0200, David Sterba wrote:
I'd like to make it default with the 3.17 release of btrfs-progs.
Please let me know if you have objections.
For the record, 3.17 will not change the defaults.
Goffredo Baroncelli posted on Mon, 20 Oct 2014 22:21:04 +0200 as
excerpted:
On 10/20/2014 07:37 PM, Robert White wrote:
On 10/18/2014 04:41 PM, Russell Coker wrote:
[...]
Also you said that you are using a 32bit user space copied from
another server under a 64bit kernel. Is the ls command a
On Tue, 21 Oct 2014, Zygo Blaxell zblax...@furryterror.org wrote:
On Mon, Oct 20, 2014 at 04:38:28AM +, Duncan wrote:
Russell Coker posted on Sat, 18 Oct 2014 14:54:19 +1100 as excerpted:
# find . -name *546
./1412233213.M638209P10546
# ls -l ./1412233213.M638209P10546
ls: cannot
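As an aside, the unquoted pattern in `find . -name *546` is expanded by the shell before find runs; if more than one file in the current directory matched, find would receive extra path arguments. A minimal illustration in a scratch directory (the file names here are invented, not the maildir entries from the thread):

```shell
# Assumed scratch directory for the demonstration.
mkdir -p /tmp/findglob-demo && cd /tmp/findglob-demo
touch 1412233213.a546 1412233213.b546

# Quoting the pattern passes it to find literally, so find does
# the matching itself instead of the shell expanding it first:
find . -name '*546' | sort
```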
On Tue, 21 Oct 2014 09:50:37 + (UTC)
Duncan 1i5t5.dun...@cox.net wrote:
(FWIW I wish that mount option would just go away as it would definitely
remove an invitation to a Russian roulette party with their data for the
unwary, but I suppose there's someone paying some bills somewhere that
I've just upgraded the Dom0 (NFS server) from 3.16.3 to 3.16.5 and it all
works.
Prior to upgrading the Dom0 I had the same problem occur with different file
names. All the names in question were truncated names of files that exist.
It seems that 3.16.3 has a bug with NFS serving files with
Ronny Egner posted on Tue, 21 Oct 2014 06:28:34 + as excerpted:
Dear All,
I was wondering what happened with the patch posted by Andrea Mazzoleni
back in February 2014 (this thread:
http://thread.gmane.org/gmane.linux.kernel/1654735).
Why wasn't it added to the code? Something
On 2014-10-21 05:29, Duncan wrote:
David Sterba posted on Mon, 20 Oct 2014 18:34:03 +0200 as excerpted:
On Thu, Oct 16, 2014 at 01:33:37PM +0200, David Sterba wrote:
I'd like to make it default with the 3.17 release of btrfs-progs.
Please let me know if you have objections.
For the record,
Cristian Falcas posted on Tue, 21 Oct 2014 11:13:48 +0300 as excerpted:
Can I downgrade the kernel from 3.17.1 to latest 3.10 if I have a btrfs
partition formatted and used on 3.17.1?
I mean, is there something that could go wrong with the fs if suddenly I
use an older kernel?
I want to
Roman Mamedov posted on Tue, 21 Oct 2014 16:16:11 +0600 as excerpted:
On Tue, 21 Oct 2014 09:50:37 + (UTC)
Duncan 1i5t5.dun...@cox.net wrote:
(FWIW I wish that mount option would just go away as it would
definitely remove an invitation to a Russian roulette party with their
data for
Russell Coker posted on Tue, 21 Oct 2014 21:13:29 +1100 as excerpted:
I don't know what
space_cache is about; is that something the kernel adds automatically?
Yes, space_cache is the default.
Apparently early in space_cache history you had to mount with space_cache
once, and the kernel
On 21/10/2014 2:02 μμ, Austin S Hemmelgarn wrote:
On 2014-10-21 05:29, Duncan wrote:
David Sterba posted on Mon, 20 Oct 2014 18:34:03 +0200 as excerpted:
On Thu, Oct 16, 2014 at 01:33:37PM +0200, David Sterba wrote:
I'd like to make it default with the 3.17 release of btrfs-progs.
Please let
Thank you for your answer.
I will reformat the disk with a 3.10 kernel in the meantime, because I
don't have any rpms for 3.16 now.
On Tue, Oct 21, 2014 at 2:26 PM, Duncan 1i5t5.dun...@cox.net wrote:
Cristian Falcas posted on Tue, 21 Oct 2014 11:13:48 +0300 as excerpted:
Can I downgrade the
Duncan posted on Tue, 21 Oct 2014 11:01:50 as excerpted:
Ronny Egner posted on Tue, 21 Oct 2014 06:28:34 + as excerpted:
Dear All,
I was wondering what happened with the patch posted by Andrea Mazzoleni
back in February 2014 (this thread:
On 10/21/2014 01:13 AM, Cristian Falcas wrote:
Can I downgrade the kernel from 3.17.1 to latest 3.10 if I have a
btrfs partition formatted and used on 3.17.1?
I went back from 3.17.0 to 3.16.3 when 3.17 acted flaky, and since then
gone up to 3.16.5 with nice results. 3.17.2 is, I think,
On 10/21/2014 03:13 AM, Russell Coker wrote:
On Tue, 21 Oct 2014, Robert White rwh...@pobox.com wrote:
What happens if you stop the Xen domain for the mail server and then
mount the disks into a native 64bit environment and then ls the file name?
The filesystem in question is NFS mounted from
On 10/21/2014 06:18 AM, Cristian Falcas wrote:
Thank you for your answer.
I will reformat the disk with a 3.10 kernel in the meantime, because I
don't have any rpms for 3.16 now.
Don't bother reformatting (yet). The on-disk layout is stable between
the releases. It should run fine and all
On 10/21/2014 06:18 AM, Cristian Falcas wrote:
Thank you for your answer.
I will reformat the disk with a 3.10 kernel in the meantime, because I
don't have any rpms for 3.16 now.
More concisely: Don't use 3.10 BTRFS for data you value. There is a
non-trivial chance that the problems you
On 10/21/2014 03:42 AM, Russell Coker wrote:
I've just upgraded the Dom0 (NFS server) from 3.16.3 to 3.16.5 and it all
works.
Prior to upgrading the Dom0 I had the same problem occur with different file
names. All the names in question were truncated names of files that exist.
It seems that
I will start investigating how can we build our own rpms from the 3.16
sources. Until then we are stuck with the ones from the official repos
or elrepo. Which means 3.10 is the latest for el6. We used this until
now and it seems we were lucky enough not to hit anything bad.
We upgraded to 3.17
On 2014-10-21 11:34, Cristian Falcas wrote:
I will start investigating how can we build our own rpms from the 3.16
sources. Until then we are stuck with the ones from the official repos
or elrepo. Which means 3.10 is the latest for el6. We used this until
now and it seems we were lucky enough to
On Oct 21, 2014, at 9:18 AM, Cristian Falcas cristi.fal...@gmail.com wrote:
Thank you for your answer.
I will reformat the disk with a 3.10 kernel in the meantime, because I
don't have any rpms for 3.16 now.
If you've formatted with features in common between 3.10 and 3.17, I don't
think
On Oct 21, 2014, at 11:34 AM, Cristian Falcas cristi.fal...@gmail.com wrote:
I will start investigating how can we build our own rpms from the 3.16
sources. Until then we are stuck with the ones from the official repos
or elrepo. Which means 3.10 is the latest for el6. We used this until
now
On Oct 21, 2014, at 12:19 PM, Chris Murphy li...@colorremedies.com wrote:
On Oct 21, 2014, at 11:34 AM, Cristian Falcas cristi.fal...@gmail.com wrote:
I will start investigating how can we build our own rpms from the 3.16
sources. Until then we are stuck with the ones from the official
On Tue, Oct 21, 2014 at 5:29 AM, Duncan 1i5t5.dun...@cox.net wrote:
David Sterba posted on Mon, 20 Oct 2014 18:34:03 +0200 as excerpted:
On Thu, Oct 16, 2014 at 01:33:37PM +0200, David Sterba wrote:
I'd like to make it default with the 3.17 release of btrfs-progs.
Please let me know if you
On 10/21/2014 11:50 AM, Duncan wrote:
Goffredo Baroncelli posted on Mon, 20 Oct 2014 22:21:04 +0200 as
excerpted:
[...]
Could this be related to the inode overflow in 32 bit system (see
inode_cache options) ? If so running a 64bit ls -i should work
Good point. Russell might just
When you say el6 you mean el7, right? The last kernel for el7 is
3.10.x.
But Red Hat lies a little with kernel version numbers. They say you have a
3.10 kernel, but I think they backport a lot from newer kernels.
Probably the btrfs of Red Hat el7 is not really the btrfs from 3.10; maybe
it is btrfs
I'm rebuilding now the 3.16.6 version from fedora for el6 (I had to
make some small modifications: remove the perl-carp dependency and a
compiler flag). And it's for el6, so we have only elrepo with a newer
kernel.
Is it safe to install the kernel without recompiling it first for the
new platform?
FYI - after a failed disk and replacing it I've run a balance; it took
almost 3 weeks to complete, for 120 GBs of data:
# time btrfs balance start -v /home
Dumping filters: flags 0x7, state 0x0, force is off
DATA (flags 0x0): balancing
METADATA (flags 0x0): balancing
SYSTEM (flags 0x0):
On Mon, Oct 20, 2014 at 6:12 PM, Greg KH gre...@linuxfoundation.org
wrote:
On Mon, Oct 20, 2014 at 01:22:22PM +0100, Filipe Manana wrote:
May I suggest porting the following commit to 3.14 too?
Regression test for a btrfs clone ioctl issue where races between
a clone operation and concurrent target file reads would result in
leaving stale data in the page cache. After the clone operation
finished, reading from the clone target file would return the old
and no longer valid data. This
On 21.10.2014 20:59, Tomasz Chmielewski wrote:
FYI - after a failed disk and replacing it I've run a balance; it took
almost 3 weeks to complete, for 120 GBs of data:
Looks normal to me. Last time I started a balance after adding 6th
device to my FS, it took 4 days to move 25GBs of data. Some
Hello,
I would like to ask if the balance time is related to the number of
snapshot or if this is related only to data (or both).
I currently have about 4TB of data and around 5k snapshots. I'm thinking
of going raid1 instead of single. From the numbers I see this seems
totally impossible
Hello,
the version 3.17 of btrfs-progs has been released.
on a system with 3-disk raid1 and 4- and 5-disk raid10 fs,
btrfs filesystem show now stalls for approx. half a minute after the
listing, just before the version information. During that time, it
often prints something like
[...]
page:ea00088aa1c0 count:4 mapcount:0 mapping:88009901e2d8 index:0x0
flags: 0x2ffc000806(error|referenced|private)
page dumped because: VM_BUG_ON_PAGE(!PageLocked(page))
[ cut here ]
kernel BUG at mm/filemap.c:747!
invalid opcode: [#1] PREEMPT SMP
Hello,
one more thing: I just overwrote part of one disk.
btrfs filesystem show could be more helpful diagnosing this:
# btrfs fi sh
Label: 'BTRFSROOT' uuid: d877125e-9b8d-47ea-b57b-7411292fd26c
Total devices 1 FS bytes used 2.91GiB
devid 1 size 29.44GiB used 5.04GiB
Any reproducer?
Thanks,
Qu
Original Message
Subject: [3.18rc1] btrfs triggering vm bug_on
From: Dave Jones da...@redhat.com
To: Linux Kernel linux-ker...@vger.kernel.org
Date: 22 Oct 2014 05:57
page:ea00088aa1c0 count:4 mapcount:0 mapping:88009901e2d8 index:0x0
flags:
On Wed, Oct 22, 2014 at 08:50:57AM +0800, Qu Wenruo wrote:
Any reproducer?
Thanks,
Qu
Original Message
Subject: [3.18rc1] btrfs triggering vm bug_on
From: Dave Jones da...@redhat.com
To: Linux Kernel linux-ker...@vger.kernel.org
Date: 22 Oct 2014 05:57
That's an unmanageably large and probably pointless number of snapshots
guys.
I mean 150 is a heck of a lot, and 5000 is almost unfathomable in terms
of possible usefulness.
Snapshots are cheap but they aren't free.
Each snapshot is effectively stapling down one version of your entire
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow p...@siedziba.pl wrote:
On 21.10.2014 20:59, Tomasz Chmielewski wrote:
FYI - after a failed disk and replacing it I've run a balance; it took
almost 3 weeks to complete, for 120 GBs of data:
Looks normal to me. Last time I started a balance after
It _looks_ like a hard out-of-ram event and not necessarily a filesystem
implementation problem.
One of the systems I work with has zero swap but still allows overcommit
in the VM. It would do things like this all the time back in development
(and I suspect it still does but the developers
Pre-Script :: this is not a 3.18 problem. I traced it out in 3.16.
On 10/21/2014 06:55 PM, Robert White wrote:
It _looks_ like a hard out-of-ram event and not necessarily a filesystem
implementation problem.
One of the systems I work with has zero swap but still allows overcommit
in the VM.
Rich Freeman posted on Tue, 21 Oct 2014 12:40:01 -0400 as excerpted:
On Tue, Oct 21, 2014 at 5:29 AM, Duncan 1i5t5.dun...@cox.net wrote:
David Sterba posted on Mon, 20 Oct 2014 18:34:03 +0200 as excerpted:
On Thu, Oct 16, 2014 at 01:33:37PM +0200, David Sterba wrote:
I'd like to make it
Chris Murphy posted on Tue, 21 Oct 2014 12:07:27 -0400 as excerpted:
One thing I wonder, if going back to kernel 3.14 (or even 3.10), which
btrfs-progs to use? Is it OK to use 3.17?
The goal is to have userspace entirely backward compatible (well, to the
last incompatible device format
On Tue, Oct 21, 2014 at 06:10:27PM -0700, Robert White wrote:
That's an unmanageably large and probably pointless number of
snapshots guys.
I mean 150 is a heck of a lot, and 5000 is almost unfathomable in
terms of possible usefulness.
Snapshots are cheap but they aren't free.
This could
Robert White posted on Tue, 21 Oct 2014 18:10:27 -0700 as excerpted:
Each snapshot is effectively stapling down one version of your entire
metadata tree, right? So imagine leaving tape spikes (little marks on
the floor to keep track of where something is so you can put it back)
for the last