Hi Tomasz,
On 2014/12/20 8:28, Tomasz Chmielewski wrote:
Got this BUG with 3.18.1 (pasted at the bottom of the email).
Below are all the actions from creating the fs to the BUG. I did not attempt to reproduce it.
I tried to reproduce this problem and have some questions.
# mkfs.btrfs /dev/vdb
Btrfs v3.17.
On Mon, Jan 5, 2015 at 6:59 AM, Austin S Hemmelgarn
wrote:
> Secondly, I would highly recommend not using ANY non-cluster-aware FS on top
> of a clustered block device like RBD
For my use-case, this is just a single server using the RBD device. No
clustering involved on the BTRFS side of things.
On 2015/01/06 21:54, Dongsheng Yang wrote:
> In qgroup_excl_accounting(), we need to WARN when
> qg->excl is less than what we want to free, for both the
> child and its parents. But currently, for the parent qgroup,
> the WARN_ON() is located after freeing qg->excl, so it
> warns even when we free it normally
Hi Yang,
On 2015/01/05 15:16, Dongsheng Yang wrote:
> Hi Josef and others,
>
> This patch set is about enhancing qgroup.
>
> [1/3]: fix a bug about qgroup leak when we exceed quota limit,
> It was reviewed by Josef.
> [2/3]: introduce a new accounter in qgroup to close a window where
>
Hi Naota,
On 2015/01/06 1:01, Naohiro Aota wrote:
> After submit_one_bio(), `bio' can go away. However, submit_extent_page()
> leaves `bio' referable if submit_one_bio() failed (e.g. -ENOMEM on OOM).
> This will cause an invalid paging request when submit_extent_page() is
> called next time.
>
> I repro
This test is motivated by an fsync issue discovered in btrfs.
The issue was that after fsyncing an inode that had its link count
decremented, with the new link count still greater than zero, the
fsync log replay left the inode's parent directory metadata
inconsistent - it had a wrong i_size whi
If we have an inode (file) with a link count greater than 1, remove
one of its hard links, fsync the inode, power fail/crash, and
then replay the fsync log on the next mount, we end up with the
parent directory's metadata inconsistent - its i_size still reflects
the deleted hard link. This pr
Very often our extent buffer's header generation doesn't match the current
transaction's id or it is also referenced by other trees (snapshots), so
we don't need the corresponding block group cache object. Therefore only
search for it if we are going to use it, so we avoid an unnecessary search
in
On Mon, Dec 29, 2014 at 10:32:00AM +0100, Martin Steigerwald wrote:
> Am Sonntag, 28. Dezember 2014, 21:07:05 schrieb Zygo Blaxell:
> > On Sat, Dec 27, 2014 at 08:23:59PM +0100, Martin Steigerwald wrote:
> > > My simple test case didn't trigger it, and I do not have another twice 160
> > > GiB avai
On Tue, Jan 06, 2015 at 11:43:22PM +1100, Chris Samuel wrote:
> On Tue, 6 Jan 2015 10:47:00 PM Chris Samuel wrote:
>
> > On Mon, 5 Jan 2015 06:21:52 PM Lennart Poettering wrote:
> >
> > > It should be easy to initialize it to the mtime when the inode is
> > > first created...
> >
> > This I agre
On Mon, Jan 05, 2015 at 06:21:52PM +0100, Lennart Poettering wrote:
> btrfs' btrfs_inode_item structure contains a field for the birth time
> of a file, .otime. This field could be quite useful, and I'd like to
> make use of it. I can query it with the BTRFS_IOC_TREE_SEARCH ioctl
> from userspace,
Hi,
BTRFS check on /dev/sdc1 reveals everything looks ok:
# btrfs check /dev/sdc1
Checking filesystem on /dev/sdc1
UUID: 26ed1033-429a-444f-97cc-ce8103db4c39
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
found 195515710524 bytes used err is 0
tota
Hi,
Try to mount with -o recovery with either kernel (newer is pretty much
always better). If that doesn't work, then you should try upgrading
btrfs-progs to 3.18 (released dozens of hours ago), running 'btrfs
check' on the volume, and reporting the results. I don't recommend using
the --repair option ju
Hi,
[32079.815291] BTRFS info (device sdd1): disk space caching is enabled
[32082.419524] BTRFS: sdd1 checksum verify failed on 588447744 wanted F90C810B found 6E0D3115 level 0
[32114.418433] BTRFS: sdd1 checksum verify failed on 588447744 wanted F90C810B found 6E0D3115 level 0
[32125.951446] BT
On Mon, Jan 05, 2015 at 05:03:29PM +0900, Satoru Takeuchi wrote:
> >> - failrec = (struct io_failure_record *)state->private;
> >> + failrec = (struct io_failure_record *)(unsigned
> >> long)state->private;
> >
> > We're always using the 'private' data to store a pointer to
> > '
In qgroup_excl_accounting(), we need to WARN when
qg->excl is less than what we want to free, for both the
child and its parents. But currently, for the parent qgroup,
the WARN_ON() is located after freeing qg->excl, so it
warns even when we free it normally.
This patch moves this WARN_ON() before free
On Tue, 6 Jan 2015 10:47:00 PM Chris Samuel wrote:
> On Mon, 5 Jan 2015 06:21:52 PM Lennart Poettering wrote:
>
> > It should be easy to initialize it to the mtime when the inode is
> > first created...
>
> This I agree with, well worth doing anyway.
>
> I'll see if I can knock up a patch.
Sad
On Tue, Jan 06, 2015 at 11:42:07AM +0100, Jiri Kosina wrote:
> On Mon, 5 Jan 2015, David Sterba wrote:
>
> > > Remove the function btrfs_reada_detach() that is not used anywhere.
> > >
> > > This was partially found by using a static code analysis program called
> > > cppcheck.
> > >
> > > Sign
On Mon, 5 Jan 2015 06:21:52 PM Lennart Poettering wrote:
> Is this on purpose, or simply an oversight?
The only hint I can see that it's deliberate is the comment in fs/btrfs/send.c
that says:
/* TODO Add otime support when the otime patches get into upstream */
However...
> It should be eas
On Mon, 5 Jan 2015, David Sterba wrote:
> > Remove the function btrfs_reada_detach() that is not used anywhere.
> >
> > This was partially found by using a static code analysis program called
> > cppcheck.
> >
> > Signed-off-by: Rickard Strandqvist
>
> No please, this function is part of publ
The newly introduced search_chunk_tree_for_fs_info() won't count devid 0
in fi_arg->num_devices, which will cause a buffer overflow since
get_device_info() will later fill di_args by devid.
This can be triggered by fstests/btrfs/069 and any operation that needs
to iterate over all the devices, like 'fi s
Original Message
Subject: Re: RFE: per-subvolume timestamp that is updated on every
change to a subvolume
From: Qu Wenruo
To: Lennart Poettering ,
Date: 2015-01-06 14:02
Original Message
Subject: RFE: per-subvolume timestamp that is updated on every ch