Hello Rodrigo,
On Wed, Jun 26, 2013 at 5:44 PM, Rodrigo Dias Cruz wrote:
> I had the very same problem some days ago.
>
> I have not yet found out how to fix the broken btrfs filesystem. However, I
> have been able to recover all my files from the filesystem and copy them to
> a brand new ext4 f
good catch.
Reviewed-by: Anand Jain
On 06/30/2013 07:51 PM, Filipe David Borba Manana wrote:
The uuid_unparse() call in btrfs_scan_one_device() was
a no-op.
Signed-off-by: Filipe David Borba Manana
---
 volumes.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/volumes.c b/volume
Shridhar Daithankar posted on Mon, 01 Jul 2013 08:20:19 +0530 as
excerpted:
> On Sunday, June 30, 2013 01:53:48 PM Garry T. Williams wrote:
[discussing fragmentation]
>
>> ~/.kde/share/apps/nepomuk/repository/main/data/virtuosobackend
>
> damn!
>
> # filefrag soprano-virtuoso.db soprano-virtuo
On Sun, Jun 30, 2013 at 02:01:17PM -0400, Josef Bacik wrote:
> On Sun, Jun 30, 2013 at 11:02:10PM +0800, Liu Bo wrote:
> > On Sun, Jun 30, 2013 at 07:22:00AM -0400, Josef Bacik wrote:
> > > On Sun, Jun 30, 2013 at 10:25:05AM +0200, Jan Schmidt wrote:
> > > > On 30.06.2013 05:17, Josef Bacik wrote:
On Sat, 2013-06-29 at 03:08 -0600, cwillu wrote:
>
> Not sure I entirely follow: mounting with -o degraded (not -o
> recovery) is how you're supposed to mount if there's a disk missing.
What I'm wondering about is why btrfsck segfaults, and why it won't say
which drive is supposedly "corrupt" in a
On Tue, Jun 04, 2013 at 06:17:54PM -0400, Zach Brown wrote:
> Hi gang,
>
> I finally sat down to fix that readdir hang that has been in the back
> of my mind for a while. I *hope* that the fix is pretty simple: just
> don't manufacture a fake f_pos, I *think* we can abuse f_version as an
> indica
Quoting Josef Bacik (2013-07-01 08:54:35)
> On Tue, Jun 04, 2013 at 06:17:54PM -0400, Zach Brown wrote:
> > Hi gang,
> >
> > I finally sat down to fix that readdir hang that has been in the back
> > of my mind for a while. I *hope* that the fix is pretty simple: just
> > don't manufacture a fake
On Sun, Jun 30, 2013 at 11:35 AM, Mitch Harder
wrote:
> There's been a parallel effort to incorporate a general set of lz4
> patches in the kernel.
>
> I see these patches are currently queued up in the linux-next tree, so
> we may see them in the 3.11 kernel.
>
> It looks like lz4 and lz4hc will
Let's try and be consistent with every other alloc()-type function.
Signed-off-by: Josef Bacik
---
fs/btrfs/ctree.c | 26 +++---
1 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 7921e1d..32e30ad 100644
--- a/fs/btrfs/
On Mon, Jul 01, 2013 at 09:30:43AM -0400, Josef Bacik wrote:
> Let's try and be consistent with every other alloc()-type function.
>
> Signed-off-by: Josef Bacik
Ignore this, I didn't compile it and I should have, and I'm just going to delete
this function anyway. Thanks,
Josef
--
To unsubscrib
For partial extents, snapshot-aware defrag does not work as expected,
since
a) we use the wrong logical offset to search for parents, which should be
disk_bytenr + extent_offset, not just disk_bytenr,
b) 'offset' returned by the backref walking just refers to key.offset, not
the 'offset' stor
This is actually from Zach Brown's idea.
Instead of a ulist built on array+rbtree, here we introduce a ulist built on
list+rbtree: memory is allocated dynamically, so we can get rid of the
re-allocation dance and the special care the inline array needed around
rbtree node insert/delete. It's straightforward and simple.
S
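The list+rbtree idea above can be sketched in userspace like this. This is a hedged simplification: an unbalanced BST stands in for the kernel rbtree, a singly linked list keeps insertion order, and the names (`unode`, `ulist_add`) are illustrative, not the kernel ulist API.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* One node carries both linkages: the list keeps insertion order for
 * iteration, the tree gives fast duplicate detection on add. */
struct unode {
	uint64_t val;
	struct unode *next;           /* list linkage (iteration order) */
	struct unode *l, *r;          /* tree linkage (lookup) */
};

struct ulist {
	struct unode *head, *tail;    /* iterate head -> tail */
	struct unode *root;           /* search tree root */
	size_t nnodes;
};

/* Returns 1 if val was added, 0 if already present, -1 on allocation
 * failure. Every node is allocated on demand: no inline array, so no
 * re-allocation dance and no fixups when the backing store moves. */
static int ulist_add(struct ulist *u, uint64_t val)
{
	struct unode **p = &u->root;

	while (*p) {
		if (val == (*p)->val)
			return 0;
		p = val < (*p)->val ? &(*p)->l : &(*p)->r;
	}

	struct unode *n = calloc(1, sizeof(*n));
	if (!n)
		return -1;
	n->val = val;
	*p = n;                        /* hook into the tree */

	if (u->tail)                   /* append to the list */
		u->tail->next = n;
	else
		u->head = n;
	u->tail = n;
	u->nnodes++;
	return 1;
}
```

Iteration walks `head` to `tail` in insertion order, while `ulist_add` dedups via the tree, which is the split of responsibilities the patch description is after.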
This testscript creates reflinks to files on different subvolumes,
overwrites the original files and reflinks, and moves reflinked files
between subvolumes.
Originally submitted as testcase 302, changes are made based on comments
from Eric: http://oss.sgi.com/archives/xfs/2013-03/msg00231.html
Hello Liu,
> This is actually from Zach Brown's idea.
>
> Instead of a ulist built on array+rbtree, here we introduce a ulist built on list+rbtree:
> memory is allocated dynamically, so we can get rid of the re-allocation dance
> and the special care the inline array needed around rbtree node insert/delete, so it's
> straig
On Sun, 30 Jun 2013 18:14:32 +0100, Miguel Negrão wrote:
[...]
> Now I was testing retrieving a snapshot back from the backup disk to the
> original partition:
>
> If I do
>
> btrfs subvolume delete @-backupOld
>
> and then do
>
> btrfs send
> -p /media/miguel/huge/backups/@/@-backupNew
> /me
Thanks Eric for reviewing and improving testcases btrfs/306, 309, 310
and 311!
I had just one comment: on line 70 the output was redirected to
$seqres.fll instead of $seqres.full. Corrected patch below.
# Check if creating a sparse copy ("reflink") of a file on btrfs
# fails as expected when
> > code. It's all lightly tested with xfstests but it wouldn't surprise
> > me if I missed something so review is appreciated.
*mmm, hmmm*
> One of these patches is making new entries not show up in readdir. This was
> discovered while running stress.sh overnight, it complained about files not
On Monday, July 01, 2013 09:10:41 AM Duncan wrote:
> > But in general, how to find out most fragmented files and folders?
> > mounting with autodefrag is a serious degradation...
>
> It is? AFAIK, all the autodefrag mount option does is scan files for
> fragmentation as they are written and queue a
On Sun, Jun 30, 2013 at 09:55:26AM +0200, Stefan Paletta wrote:
> This gives EXDEV for clone operations that btrfs could otherwise execute and
> with slight change of circumstances will actually execute fine.
>
> Imagine we have a btrfs on /dev/mapper/foobar with subvols /foo and /bar.
> Let’s als
Quoting Zach Brown (2013-07-01 12:10:02)
> > > code. It's all lightly tested with xfstests but it wouldn't surprise
> > > me if I missed something so review is appreciated.
>
> *mmm, hmmm*
>
> > One of these patches is making new entries not show up in readdir. This was
> > discovered while run
With this fix the lzo code behaves like the zlib code by returning an
error code when compression does not help reduce the size of the file.
This is currently not a bug, since the compressed size is checked again
in the calling method compress_file_range.
Signed-off-by: Stefan Agner
---
fs/bt
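The check the patch adds amounts to the following. This is a hedged, self-contained sketch (the function name `check_compressed_size` is illustrative); the actual kernel code sets the error inside the lzo compress path, mirroring zlib's use of -E2BIG.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* If the "compressed" output is not smaller than the input, report
 * failure (-E2BIG, as the zlib path does) instead of returning success
 * with a result that saved nothing. */
static int check_compressed_size(size_t in_len, size_t out_len)
{
	if (out_len >= in_len)
		return -E2BIG;   /* compression did not help */
	return 0;
}
```

The caller (compress_file_range in the kernel) already rechecks the size, which is why the mail notes this is a consistency fix rather than a bug fix.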
There is another bug in the tree mod log stuff in that we're calling
tree_mod_log_free_eb every single time a block is cow'ed. The problem with this
is that if this block is shared by multiple snapshots we will call this multiple
times per block, so if we go to rewind the mod log for this block we
On Mon, Jul 01, 2013 at 10:13:26PM +0800, Liu Bo wrote:
> For partial extents, snapshot-aware defrag does not work as expected,
> since
> a) we use the wrong logical offset to search for parents, which should be
> disk_bytenr + extent_offset, not just disk_bytenr,
> b) 'offset' returned by the b
Previously we held the tree mod lock when adding stuff because we use it to
check and see if we truly do want to track tree modifications. This is
admirable, but GFP_ATOMIC in a critical area that is going to get hit pretty
hard and often is not nice. So instead do our basic checks to see if we d
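The pattern described above, translated to a userspace sketch with a pthread mutex standing in for the tree mod log lock. All names here are illustrative; the kernel analogue of "allocate up front" is using GFP_NOFS before taking the lock instead of GFP_ATOMIC inside it.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct mod_entry {
	int seq;
	struct mod_entry *next;
};

static struct mod_entry *log_head;
static int log_seq;
static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;
static int tracking_enabled = 1;

static int log_insert(void)
{
	if (!tracking_enabled)          /* basic check, no lock needed */
		return 0;

	/* Allocate before taking the lock, so the critical section never
	 * needs an atomic-context allocation. */
	struct mod_entry *e = malloc(sizeof(*e));
	if (!e)
		return -1;

	pthread_mutex_lock(&log_lock);  /* short, allocation-free section */
	e->seq = ++log_seq;
	e->next = log_head;
	log_head = e;
	pthread_mutex_unlock(&log_lock);
	return 0;
}
```

The design point is simply that the hot locked region shrinks to a couple of pointer stores, and the allocation can sleep without holding anything.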
On Mon, Jul 01, 2013 at 05:19:21PM +0200, Koen De Wit wrote:
> This testscript creates reflinks to files on different subvolumes,
> overwrites the original files and reflinks, and moves reflinked files
> between subvolumes.
>
> Originally submitted as testcase 302, changes are made based on commen
On Mon, Jul 01, 2013 at 05:58:44PM +0200, Koen De Wit wrote:
> Thanks Eric for reviewing and improving testcases btrfs/306, 309, 310 and
> 311 !
>
> I had just one comment: on line 70 the output was redirected to $seqres.fll
> instead of $seqres.full. Corrected patch below.
>
> # Check if creatin
On Mon, Jul 01, 2013 at 10:14:44PM +0800, Liu Bo wrote:
> This is actually from Zach Brown's idea.
Thanks for giving this a try.
> Instead of ulist of array+rbtree, here we introduce ulist of list+rbtree,
> memory is dynamic allocation and we can get rid of memory re-allocation dance
> and specia
Sirs,
my file system, which has been slowing down recently, now goes read-only
after I try a defrag or other operations. I'm wondering whether this is the
result of a hardware failure, a btrfs problem, or some other issue. Output of dmesg:
[  127.750401] DR0: DR1: DR2: