with the changes that have happened since it was last updated
Signed-off-by: Anand Jain
---
INSTALL |6 ++
1 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/INSTALL b/INSTALL
index 8ead607..a86878a 100644
--- a/INSTALL
+++ b/INSTALL
@@ -12,7 +12,8 @@ complete:
modprobe
The send operation processes inodes by their ascending number, and assumes
that any rename/move operation can be successfully performed (sent to the
caller) once all previous inodes (those with a smaller inode number than the
one we're currently processing) were processed.
This is not true when an
reproducer:
mkfs.btrfs -f /dev/sdb &&\
mount /dev/sdb /btrfs &&\
btrfs dev add -f /dev/sdc /btrfs &&\
umount /btrfs &&\
wipefs -a /dev/sdc &&\
mount -o degraded /dev/sdb /btrfs
# the above mount fails, so retry read-only
mount -o degraded,ro /dev/sdb /btrfs
--
sysfs: cannot create duplicate filename '
Regression test for btrfs' incremental send feature:
1) Create several nested directories;
2) Create a read only snapshot;
3) Reverse the parent-child relationships of some of the deepest
directories, so that parents become children and children become parents;
4) Create another read only sn
The buffer size argument passed to snprintf must account for the
trailing null byte that snprintf adds, and snprintf returns a value
>= sizeof(buffer) when the string can't fit in the buffer.
Since our buffer has a size of 64 characters, and the maximum orphan
name we can generate is 63 characters w
Ok, I think I found the issue. The array was okay and mounted
successfully after running "btrfs device scan", and the samba issue
was unrelated and had something to do with winbind integration with AD
which the reboot cured after getting the btrfs store to mount.
It has always done this automatica
Hi
I suddenly got issues accessing a samba share and rebooted the server.
After the reboot the btrfs mount does not come up and prints the
following messages in dmesg:
[ 530.881802] btrfs: device fsid 9302fc8f-15c6-46e9-9217-951d7423927c
devid 4 transid 138602 /dev/sdt
[ 530.882734] btrfs: disk
Hello,
Yes. Here I mount the three subvolumes:
Does scrubbing the volume give any errors?
Last time I did (that was after I discovered the first errors in
btrfsck) scrub, it found no error. But I will re-check asap.
As to the error messages: I do not know how critical those are.
I usua
Hello again:
I think, I do/did have some symptoms, but I cannot exclude other reasons..
- High load without high CPU usage (I/O was the bottleneck)
- Just now: transferring from one directory to another on the same
subvolume (from /mnt/subvol/A/B to /mnt/subvol/A) I get 1.2MB/s instead
of > 60.
- For
When defragging a very large file, the cluster variable can wrap its 32-bit
signed int type and become negative, which eventually gets passed to
btrfs_force_ra() as a very large unsigned long value. On 32-bit platforms,
this eventually results in an Oops from the SLAB allocator.
Change the cluste
There are different values of "testing" and of "production" - in my
world, at least, they're not atomically defined categories. =)
On 01/21/2014 12:38 PM, Chris Murphy wrote:
It's for testing purposes. If you really want to commit a production
machine for testing a file system, and you're prepa
On Thu, Jan 16, 2014 at 10:32:38AM +0800, Miao Xie wrote:
> > Your fix makes sure that the deleted root will not get cleaned and stays
> > during the send. Only after it finishes it will be cleaned. Now, what if
> > send fails or is interrupted? There's no way to redo it. Yes the user
> > can be bl
David Sterba (2):
btrfs: sysfs: don't show reserved incompat feature
btrfs: sysfs: list the NO_HOLES feature
fs/btrfs/sysfs.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
--
1.7.9
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
Signed-off-by: David Sterba
---
fs/btrfs/sysfs.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
index 1a893860d66b..782374d8fd19 100644
--- a/fs/btrfs/sysfs.c
+++ b/fs/btrfs/sysfs.c
@@ -202,6 +202,7 @@ BTRFS_FEAT_ATTR_INCOMPAT(big_met
The COMPRESS_LZOv2 incompat feature is currently not implemented; the bit
is only reserved, so there is no point in listing it in sysfs.
Signed-off-by: David Sterba
---
fs/btrfs/sysfs.c |2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
index ba94b277e9
Thanks again for the added info; very helpful.
I want to keep playing around with btrfs RAID 5 and testing with it...
assuming I have a drive with bad blocks, or let's say some inconsistent parity
am I right in assuming that a) a btrfs scrub operation will not fix the stripes
with bad parity a
On Jan 21, 2014, at 10:18 AM, Jim Salter wrote:
> Would it be reasonably accurate to say "btrfs' RAID5 implementation is likely
> working well enough and safe enough if you are backing up regularly and are
> willing and able to restore from backup if necessary if a device failure goes
> horri
Would it be reasonably accurate to say "btrfs' RAID5 implementation is
likely working well enough and safe enough if you are backing up
regularly and are willing and able to restore from backup if necessary
if a device failure goes horribly wrong", then?
This is a reasonably serious question.
Graham Fleming posted on Tue, 21 Jan 2014 01:06:37 -0800 as excerpted:
> Thanks for all the info guys.
>
> I ran some tests on the latest 3.12.8 kernel. I set up 3 1GB files and
> attached them to /dev/loop{1..3} and created a BTRFS RAID 5 volume with
> them.
>
> I copied some data (from dev/ura
On 2014-01-21 11:52, Hugo Mills wrote:
> On Tue, Jan 21, 2014 at 07:25:43AM -0500, Austin S Hemmelgarn wrote:
>> On 2014-01-21 01:42, Sandy McArthur wrote:
>>> On Mon, Jan 20, 2014 at 7:20 AM, Austin S Hemmelgarn
>>> wrote:
On 2014-01-16 14:23, Toggenburger Lukas wrote:
> 3. Improvin
On Tue, Jan 21, 2014 at 07:25:43AM -0500, Austin S Hemmelgarn wrote:
> On 2014-01-21 01:42, Sandy McArthur wrote:
> > On Mon, Jan 20, 2014 at 7:20 AM, Austin S Hemmelgarn
> > wrote:
> >>
> >> On 2014-01-16 14:23, Toggenburger Lukas wrote:
> >>> 3. Improving subvolume handling regarding taking recu
On Tue, Jan 21, 2014 at 4:45 PM, Wang Shilong wrote:
>
> Hello Filipe,
>
>> On Sat, Jan 18, 2014 at 4:58 AM, Wang Shilong
>> wrote:
>>> Steps to reproduce:
>>> # mkfs.btrfs -f /dev/sda8
>>> # mount /dev/sda8 /mnt
>>> # btrfs sub snapshot -r /mnt /mnt/snap1
>>> # btrfs sub snapshot -r /mnt /mnt/sn
Hello Filipe,
> On Sat, Jan 18, 2014 at 4:58 AM, Wang Shilong
> wrote:
>> Steps to reproduce:
>> # mkfs.btrfs -f /dev/sda8
>> # mount /dev/sda8 /mnt
>> # btrfs sub snapshot -r /mnt /mnt/snap1
>> # btrfs sub snapshot -r /mnt /mnt/snap2
>> # btrfs send /mnt/snap1 -p /mnt/snap2 -f /mnt/1
>> # dmes
On Sat, Jan 18, 2014 at 4:58 AM, Wang Shilong
wrote:
> Steps to reproduce:
> # mkfs.btrfs -f /dev/sda8
> # mount /dev/sda8 /mnt
> # btrfs sub snapshot -r /mnt /mnt/snap1
> # btrfs sub snapshot -r /mnt /mnt/snap2
> # btrfs send /mnt/snap1 -p /mnt/snap2 -f /mnt/1
> # dmesg
>
> The problem is t
On Sat, Jan 18, 2014 at 12:58:00PM +0800, Wang Shilong wrote:
> Steps to reproduce:
> # mkfs.btrfs -f /dev/sda8
> # mount /dev/sda8 /mnt
> # btrfs sub snapshot -r /mnt /mnt/snap1
> # btrfs sub snapshot -r /mnt /mnt/snap2
> # btrfs send /mnt/snap1 -p /mnt/snap2 -f /mnt/1
> # dmesg
>
> The pro
Commit "Btrfs-progs: make send/receive compatible with older kernels"
adds code that will become deprecated; let's clearly mark it in the
sources.
CC: Stefan Behrens
CC: Wang Shilong
Signed-off-by: David Sterba
---
send-utils.c | 28
send-utils.h | 10 +
Hi all,
I just noticed a mismatch between statfs.f_bfree and statfs.f_bavail, i.e.
(squeeze)fslab2:~# ./statfs /data/fhgfs/storage1/
/data/fhgfs/storage1/: avail: 3162112 free: 801586610176
(with
uint64_t avail = statbuf.f_bavail * statbuf.f_bsize;
uint64_t free = statbuf.f_bf
Hello Kyle,
you will probably have to tell btrfs to use the complete devices.
Checkout "btrfs filesystem resize". There are cases when one does not
want to grow the used space on a replacement disk automatically, e.g.
when the disk is to be replaced with a smaller disk again, so this is
not a bug.
I have just recently replaced two 750 GB disks with two 1 TB disks. I
used the replace command to do so, aka "btrfs replace start /dev/sdX
/dev/sdY /mountpoint". Each replacement went smoothly, with no errors
reported. I'm positive that the source
On 2014-01-21 01:42, Sandy McArthur wrote:
> On Mon, Jan 20, 2014 at 7:20 AM, Austin S Hemmelgarn
> wrote:
>>
>> On 2014-01-16 14:23, Toggenburger Lukas wrote:
>>> 3. Improving subvolume handling regarding taking recursive snapshots (
>>> https://btrfs.wiki.kernel.org/index.php/Project_ideas#Take
This test script creates reflinks to files on different subvolumes,
overwrites the original files and reflinks, and moves reflinked files
between subvolumes.
Signed-off-by: Koen De Wit
---
Originally submitted as test 302, btrfs/316
diff --git a/tests/btrfs/030 b/tests/btrfs/030
new file mode 10
Thanks for all the info guys.
I ran some tests on the latest 3.12.8 kernel. I set up 3 1GB files and attached
them to /dev/loop{1..3} and created a BTRFS RAID 5 volume with them.
I copied some data (from dev/urandom) into two test files and got their MD5
sums and saved them to a text file.
I t