On Fri, Apr 17, 2015 at 06:24:05AM +0000, sri wrote:
> Hi,
> I have the queries below. Could somebody help me understand them?
>
> 1)
> As per my understanding, the btrfs file system uses one chunk tree and
> one extent tree for the entire btrfs disk allocation.
>
> Is this correct?
Yes.
> In some article I read that in the future there will be more chunk
> trees/extent trees for a single btrfs. Is this true?
Dear all,
We know that one cannot mount multiple btrfs file systems which contain the
same UUID (for example, a snapshot).
Is it possible to make use of "mount namespaces" to achieve that? That is,
can btrfs file systems in different namespaces contain the same UUID?
Thanks
Mike
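A quick way to experiment with this, as a sketch (the device and mount
point are hypothetical; note that the kernel registers btrfs filesystems
by UUID in a single global list, independent of mount namespaces, so this
may well still fail):

  # as root: run a shell in a private mount namespace and try to mount
  # the same-UUID copy there
  unshare --mount sh -c 'mount /dev/sdb1 /mnt/test && ls /mnt/test'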
Hugo Mills <...@carfax.org.uk> writes:
>
> On Fri, Apr 17, 2015 at 06:24:05AM +0000, sri wrote:
> > Hi,
> > I have the queries below. Could somebody help me understand them?
> >
> > 1)
> > As per my understanding, the btrfs file system uses one chunk tree and
> > one extent tree for the entire btrfs disk allocation.
On 17/04/15 10:20, WangMike wrote:
Dear all,
We know that one cannot mount multiple btrfs file systems which contain the
same UUID (for example, a snapshot).
Is it possible to make use of "mount namespaces" to achieve that? That is,
can btrfs file systems in different namespaces contain the same UUID?
On 2015-04-16 14:48, Miguel Negrão wrote:
Hello,
I'm running a laptop, a MacBook Pro 8,2, with Ubuntu, on kernel
3.13.0-49-lowlatency. I have a USB enclosure containing two hard drives
(Icydock JBOD). Each hard drive runs its own btrfs file system, on top of
LUKS partitions. I back up one hard drive
On Fri, Apr 17, 2015 at 7:54 AM, Noah Massey wrote:
> On Thu, Apr 16, 2015 at 7:33 PM, Dan Merillat wrote:
>> The inode is already found, use the data and make restore friendlier.
>>
>> Signed-off-by: Dan Merillat
>> ---
>> cmds-restore.c | 12 ++++++++++++
>> 1 file changed, 12 insertions(+)
>
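For context, btrfs restore is typically run against a filesystem that no
longer mounts; a minimal usage sketch (the device and target directory
are hypothetical):

  # copy whatever can still be recovered from an unmountable filesystem
  btrfs restore /dev/sdb1 /mnt/recovery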
Looks like XFS v5 and ext4 checksums include the fs UUID throughout the
filesystem metadata. For XFS, changing the UUID has been disabled in
xfs_admin, whereas tune2fs supports changing it (which I'd think could
take quite a while). Btrfs supports it via seed device + device add +
removing the seed, which als
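The seed-device route mentioned above might look roughly like this (a
sketch; device names are hypothetical, and the data is migrated to a new
filesystem with a new UUID rather than relabelled in place):

  umount /mnt
  btrfstune -S 1 /dev/sda1            # mark the old filesystem as a seed
  mount /dev/sda1 /mnt                # a seed mounts read-only
  btrfs device add /dev/sdb1 /mnt     # sprout a new filesystem (new UUID)
  mount -o remount,rw /mnt
  btrfs device delete /dev/sda1 /mnt  # detach the seed; data moves to sdb1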
Resizing a filesystem image does not work as expected. This has been
confusing and can have bad consequences: people have reported
resizing the wrong filesystem.
Signed-off-by: David Sterba
---
Documentation/btrfs-filesystem.asciidoc | 9 +++--
cmds-filesystem.c |
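For reference, resize operates on a mounted filesystem, optionally per
device; a sketch (the mount point and devid are hypothetical):

  btrfs filesystem resize 1:+2G /mnt   # grow device 1 by 2 GiB
  btrfs filesystem resize 1:max /mnt   # grow device 1 to its full size

The path argument names the mount point of the filesystem to act on, not
an image file, which is exactly how people ended up resizing the wrong
filesystem.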
If we have concurrent fsync calls against files living in the same subvolume,
we have a time window where we don't add the collected ordered extents
to the running transaction's list of ordered extents and return success to
userspace. This can result in data loss if the ordered extents complete
We don't need to attach ordered extents that have completed to the current
transaction. Doing so only makes us hold memory for longer than necessary
and delays the iput of the inode until the transaction is committed (for
each created ordered extent we do an igrab and then schedule an asynchronous
On Fri, Apr 17, 2015 at 09:19:11AM +0000, Hugo Mills wrote:
> > In some article I read that in the future there will be more chunk
> > trees/extent trees for a single btrfs. Is this true?
>
> I recall, many moons ago, Chris saying that there probably wouldn't
> be.
More extent trees tied to a set of f
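For anyone who wants to look at these trees directly, btrfs-progs can
print them from an unmounted device; a sketch (the device name is
hypothetical; tree object IDs as in ctree.h):

  btrfs-debug-tree -t 2 /dev/sda1   # extent tree (BTRFS_EXTENT_TREE_OBJECTID)
  btrfs-debug-tree -t 3 /dev/sda1   # chunk tree (BTRFS_CHUNK_TREE_OBJECTID)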
On Tue, Apr 14, 2015 at 09:19:12AM -0400, Austin S Hemmelgarn wrote:
> On 2015-04-14 08:28, David Sterba wrote:
> > On Tue, Apr 14, 2015 at 01:44:32PM +0300, Lauri Võsandi wrote:
> >> This patch forces btrfs receive to issue chroot before
> >> parsing the btrfs stream to confine the process and
> >
We have this check in the kernel but not in userspace, which makes fsck fail
when we wouldn't have a problem in the kernel. This was meant to catch this
case because it really isn't good; unfortunately it will require a design
change to fix in the kernel, so in the meantime add this check so we can
On 04/17/2015 02:20 PM, Filipe Manana wrote:
If we have concurrent fsync calls against files living in the same subvolume,
we have a time window where we don't add the collected ordered extents
to the running transaction's list of ordered extents and return success to
userspace. This can result in data loss if the ordered extents complete
On Fri, Apr 17, 2015 at 7:26 PM, Josef Bacik wrote:
> On 04/17/2015 02:20 PM, Filipe Manana wrote:
>>
>> If we have concurrent fsync calls against files living in the same
>> subvolume,
>> we have a time window where we don't add the collected ordered extents
>> to the running transaction's list of ordered extents
Hi Austin,
On 17-04-2015 12:31, Austin S Hemmelgarn wrote:
>
> First, as mentioned in another reply to this, you should update your
> kernel. I don't think that the kernel is what is causing the issue, but
> it is an old kernel by BTRFS standards, and keeping up to date is
> important with a filesystem
I've been running some simple tests in a virtual machine with btrfs raid1 and I
found the background correction behaviour to be a bit surprising. I've set up a
raid1 and stored a big file along with its sha256sum on the volume. If I
manually corrupt one of the underlying devices and run a btrfs
On Fri, 17 Apr 2015 21:46:08 +0200, ivarun_ml wrote:
> But if I instead just read the file, then btrfs will still detect and
> correct the corruption, but the device stats are not updated, and the
> errors in the syslog have info-priority, making them much harder to
> notice. [..]
I don't know ab
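For reference, the two commands involved in the behaviour described
above; a sketch assuming the raid1 is mounted at /mnt (per the report,
the counters are updated by scrub but not by corrections made during
normal reads):

  btrfs scrub start -B /mnt   # -B stays in the foreground, prints a summary
  btrfs device stats /mnt     # per-device read/write/corruption counters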
On Thu, 2015-04-09 at 16:33 +0000, Hugo Mills wrote:
> btrfs sub find-new might be more helpful to you here. That will
> give you the list of changed files; then just feed that list to your
> existing bin-packing algorithm for working out what goes on which
> disks, and you're done.
hmm that s
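A usage sketch of the suggested command (the subvolume path and
generation number are hypothetical; the generation is one you recorded
after the previous run):

  # list files changed in /mnt/data since transaction generation 1234
  btrfs subvolume find-new /mnt/data 1234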
Hey.
I've seen that this has been asked several times before, and there are
stackoverflow/etc. questions on it, but none with a really good
answer.
How can I best copy one btrfs filesystem (with snapshots and subvolumes)
into another, especially with keeping the CoW/reflink status of all
files?
And ideally incrementally upgrade it later (again with all snapshots
and subvolumes)?
On Fri, 17 Apr 2015 11:08:44 PM Christoph Anton Mitterer wrote:
> How can I best copy one btrfs filesystem (with snapshots and subvolumes)
> into another, especially with keeping the CoW/reflink status of all
> files?
dd works. ;)
> And ideally incrementally upgrade it later (again with all snapshots
> and subvolumes)?
On Sat, 2015-04-18 at 04:24 +, Russell Coker wrote:
> dd works. ;)
> There are patches to rsync that make it work on block devices. Of course
> that will copy space occupied by deleted files too.
I think neither is quite the solution I was looking for.
I guess for dd this is obvious,
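For the record, the closest thing to this in btrfs's own tooling is
send/receive, which copies read-only snapshots and supports incremental
updates; a sketch assuming a destination btrfs mounted at /dst (how much
reflink sharing survives across separate send streams is a separate
question):

  # initial full copy: send a read-only snapshot
  btrfs subvolume snapshot -r /src/data /src/data.snap1
  btrfs send /src/data.snap1 | btrfs receive /dst

  # later: incremental update relative to the previous snapshot
  btrfs subvolume snapshot -r /src/data /src/data.snap2
  btrfs send -p /src/data.snap1 /src/data.snap2 | btrfs receive /dst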