On Tue, Jun 4, 2013 at 4:32 PM, Dan Carpenter dan.carpen...@oracle.com wrote:
On Mon, Jun 03, 2013 at 09:58:17PM +0800, Peng Tao wrote:
int libcfs_kkuc_msg_put(struct file *filp, void *payload)
{
	struct kuc_hdr *kuch = (struct kuc_hdr *)payload;
+	ssize_t count = kuch->kuc_msglen;
On Tue, Jun 4, 2013 at 6:23 PM, Peng Tao bergw...@gmail.com wrote:
On Tue, Jun 4, 2013 at 4:32 PM, Dan Carpenter dan.carpen...@oracle.com
wrote:
On Mon, Jun 03, 2013 at 09:58:17PM +0800, Peng Tao wrote:
int libcfs_kkuc_msg_put(struct file *filp, void *payload)
{
	struct kuc_hdr *kuch
On Mon, Jun 03, 2013 at 03:55:55PM -0400, Jörn Engel wrote:
Actually, when I compare the two invocations, I prefer the
list_for_each_entry_del() variant over list_pop_entry().
while ((ref = list_pop_entry(prefs, struct __prelim_ref, list))) {
list_for_each_entry_del(ref,
Quoting Christoph Hellwig (2013-06-04 10:48:56)
On Mon, Jun 03, 2013 at 03:55:55PM -0400, Jörn Engel wrote:
Actually, when I compare the two invocations, I prefer the
list_for_each_entry_del() variant over list_pop_entry().
while ((ref = list_pop_entry(prefs, struct
Greetings all,
when testing drive failures, I occasionally hit the following hang:
# Block group is being cached-in by caching_thread()
# caching_thread() experiences an error, e.g., in btrfs_search_slot, because of drive failure:
	ret = btrfs_search_slot(NULL, extent_root, key, path, 0,
On 06/04/13 16:53, Chris Mason wrote:
Quoting Christoph Hellwig (2013-06-04 10:48:56)
On Mon, Jun 03, 2013 at 03:55:55PM -0400, Jörn Engel wrote:
Actually, when I compare the two invocations, I prefer the
list_for_each_entry_del() variant over list_pop_entry().
while ((ref =
On Tue, 4 June 2013 22:09:13 +0200, Arne Jansen wrote:
On 06/04/13 16:53, Chris Mason wrote:
Quoting Christoph Hellwig (2013-06-04 10:48:56)
On Mon, Jun 03, 2013 at 03:55:55PM -0400, Jörn Engel wrote:
Actually, when I compare the two invocations, I prefer the
list_for_each_entry_del()
A user reported that fsck was complaining about unresolved refs for some
snapshots. You can reproduce this by doing
mkfs.btrfs /dev/sdb
mount /dev/sdb /mnt
btrfs subvol snap /mnt/ /mnt/a
btrfs subvol snap /mnt/ /mnt/b
btrfs subvol del /mnt/a
umount /mnt
btrfsck /dev/sdb
and you'd get this
Hello,
how can I recover from running btrfsck --init-csum-tree on my 2TB
btrfs? Every attempt to read a file results in no checksum being
found, followed by a checksum mismatch which leads to the data block
being zeroed out (see label zeroit in inode.c). My current fix is
simply skipping the
Hi gang,
I finally sat down to fix that readdir hang that has been in the back
of my mind for a while. I *hope* that the fix is pretty simple: just
don't manufacture a fake f_pos, I *think* we can abuse f_version as an
indicator that we shouldn't return entries. Does this look reasonable?
We
The only time we need to advance f_pos is after we've successfully given
a result to userspace via filldir. This simplification gets rid of the
is_curr variable used to update f_pos for the delayed item readdir
entries.
Signed-off-by: Zach Brown z...@redhat.com
---
fs/btrfs/delayed-inode.c | 5
To work around bugs in userspace btrfs_real_readdir() sets f_pos to an
offset that will prevent any future entries from being returned once the
last entry is hit. Over time this supposedly impossible offset was
decreased from the initial U64_MAX to INT_MAX to appease 32bit
userspace.
I'd like to use the currently unused next/prev arguments to
__btrfs_lookup_delayed_item() in a future patch. I noticed that the
code could be simplified.
We don't need to use rb_next() or rb_prev() to walk back up the tree
once we've failed to find the key at a leaf. We can record the most
This just moves some duplicated code into a helper. I couldn't bring
myself to add another copy in an upcoming patch.
The delayed_root BUG() in __btrfs_remove_delayed_item() wasn't needed.
The pointer deref will oops later if its null.
And now the remaining BUG() is in one place! :)
Just call btrfs_put_delayed_items() for each list rather than having two
list arguments and duplicated code.
list_for_each_entry_safe() can handle an empty list.
We don't have to conditionally use and tear down the lists if we always
initialize them to be empty. They're only populated when
On every readdir call all the delayed items for the dir are put on a
private list with a held reference. If they're outside the f_pos values
that this readdir call ends up using they're just dropped and removed
from the list. We can make some tiny changes to cut down on this
overhead.
First,
Quoting Zach Brown (2013-06-04 18:17:54)
Hi gang,
I finally sat down to fix that readdir hang that has been in the back
of my mind for a while. I *hope* that the fix is pretty simple: just
don't manufacture a fake f_pos, I *think* we can abuse f_version as an
indicator that we shouldn't
On Tue, Jun 04, 2013 at 07:16:53PM -0400, Chris Mason wrote:
Quoting Zach Brown (2013-06-04 18:17:54)
Hi gang,
I finally sat down to fix that readdir hang that has been in the back
of my mind for a while. I *hope* that the fix is pretty simple: just
don't manufacture a fake f_pos, I
On Tue, 4 Jun 2013 15:17:55 -0700, Zach Brown wrote:
The only time we need to advance f_pos is after we've successfully given
a result to userspace via filldir. This simplification gets rid of the
is_curr variable used to update f_pos for the delayed item readdir
entries.
On Tue, 4 Jun 2013 16:26:57 -0700, Zach Brown wrote:
On Tue, Jun 04, 2013 at 07:16:53PM -0400, Chris Mason wrote:
Quoting Zach Brown (2013-06-04 18:17:54)
Hi gang,
I finally sat down to fix that readdir hang that has been in the back
of my mind for a while. I *hope* that the fix is
I have seen a lot of boilerplate code that either follows the pattern of
while (!list_empty(head)) {
	pos = list_entry(head->next, struct foo, list);
	list_del(&pos->list);
	...
}
or some variant thereof.
With this patch in, people can use
Signed-off-by: Joern Engel jo...@logfs.org
---
fs/btrfs/backref.c     | 15 +++
fs/btrfs/compression.c |  4 +---
fs/btrfs/disk-io.c     |  6 +-
fs/btrfs/extent-tree.c | 17 +++--
fs/btrfs/extent_io.c   |  8 ++--
fs/btrfs/inode.c       | 16
On Tue, 4 June 2013 14:44:35 -0400, Jörn Engel wrote:
Or while_list_drain?
Not sure if the silence is approval or lack of interest, but a new set
of patches is posted. By playing around with the implementation a
bit, I have actually found a variant that makes the object code
shrink. Not one
Hi-
I'm pretty reliably triggering the following bug after powercycling an
active btrfs + ceph workload and then trying to remount. Is this a known
issue?
sage
2013-06-04T18:54:28.532988-07:00 plana71 kernel: [   39.311120] ------------[ cut here ]------------