subvolume parent id and uuid mismatch (progs 4.0, kernel 4.0.2)

2015-06-12 Thread Suman C
Hi,

I have a raid1 filesystem:

Label: 'raid1pool'  uuid: b97d3055-c60c-4f10-aa1b-bf9554f043c9
Total devices 2 FS bytes used 320.00KiB
devid 1 size 2.00GiB used 437.50MiB path /dev/sdd
devid 2 size 2.00GiB used 417.50MiB path /dev/sde

It's mounted at /mnt2/raid1pool

# mount | grep raid1pool
/dev/sdd on /mnt2/raid1pool type btrfs (rw,relatime,space_cache)

I created a subvolume:

# btrfs subvol create /mnt2/raid1pool/share1
Create subvolume '/mnt2/raid1pool/share1'
# btrfs subvol list -u -p -q /mnt2/raid1pool/
ID 261 gen 27 parent 5 top level 5 parent_uuid - uuid 0e31fe3c-d35d-d346-9560-b29b82ff384a path share1

I created two snapshots (with different destinations) like so:

# mkdir -p /mnt2/raid1pool/.snapshots/share1
# btrfs subvol snapshot /mnt2/raid1pool/share1 /mnt2/raid1pool/.snapshots/share1/snap1
Create a snapshot of '/mnt2/raid1pool/share1' in '/mnt2/raid1pool/.snapshots/share1/snap1'
# btrfs subvol snapshot /mnt2/raid1pool/share1 /mnt2/raid1pool/share1/snap2
Create a snapshot of '/mnt2/raid1pool/share1' in '/mnt2/raid1pool/share1/snap2'

Then..

# btrfs subvol list -u -p -q /mnt2/raid1pool/
ID 261 gen 29 parent 5 top level 5 parent_uuid - uuid 0e31fe3c-d35d-d346-9560-b29b82ff384a path share1
ID 262 gen 28 parent 5 top level 5 parent_uuid 0e31fe3c-d35d-d346-9560-b29b82ff384a uuid 91354cbc-615f-be40-8afb-eb0c254157d8 path .snapshots/share1/snap1
ID 263 gen 29 parent 261 top level 261 parent_uuid 0e31fe3c-d35d-d346-9560-b29b82ff384a uuid d236544e-5044-c645-a0b7-440c15727d8e path share1/snap2

Notice how the parent_uuid of both snapshots matches the uuid of the source
subvolume (0e31fe3c-d35d-d346-9560-b29b82ff384a), but the parent ids don't:
I was expecting to see 261 as the parent id for both snapshots. Am I
missing something, or is this a bug in btrfs-progs?
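
In case it helps anyone reproducing this, the per-subvolume view can also be
checked with (assuming the subcommand is present in this progs version):

# btrfs subvol show /mnt2/raid1pool/share1/snap2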

Suman


Re: what is the best way to monitor raid1 drive failures?

2014-10-14 Thread Suman C
Hi,

Here's a simple raid1 recovery experiment that's not working as expected.

kernel: 3.17, latest mainline
progs: 3.16.1

I started with a simple raid1 mirror of 2 drives (sda and sdb). The
filesystem was functional: I created one subvolume, put some data on it,
tested reads and writes, etc.

I yanked sdb out (physically, from the hardware). btrfs fi show then
reported the drive as missing, as expected.

I powered the machine down, removed the bad (yanked-out) sdb drive,
replaced it with a new drive, and powered the machine back up.

The new drive shows up as sdb, and btrfs fi show still reports the drive as missing.

I mounted the filesystem with -o ro,degraded.

I then tried adding the new sdb drive, which results in the following error
(-f because the new drive has a leftover filesystem from an earlier test):

# btrfs device add -f /dev/sdb /mnt2/raid1pool
/dev/sdb is mounted

Unless I am missing something, this looks like a bug.
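
For reference, the recovery sequence I was expecting to work is roughly this
(a sketch; sda is the surviving disk here, and the filesystem would need to
be mounted rw,degraded rather than ro, which may be part of my problem):

# mount -o degraded /dev/sda /mnt2/raid1pool
# btrfs device add -f /dev/sdb /mnt2/raid1pool
# btrfs device delete missing /mnt2/raid1pool

An alternative, assuming replace can be used on a degraded mount, would be
to replace the missing devid directly:

# btrfs replace start <missing-devid> /dev/sdb /mnt2/raid1pool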

Let me know, I can retest.

Thanks
Suman

On Mon, Oct 13, 2014 at 7:13 PM, Anand Jain anand.j...@oracle.com wrote:



 On 10/14/14 03:50, Suman C wrote:

 I had progs 3.12 and updated to the latest from git(3.16). With this
 update, btrfs fi show reports there is a missing device immediately
 after i pull it out. Thanks!

 I am using virtualbox to test this. So, I am detaching the drive like so:

 vboxmanage storageattach vm --storagectl controller --port port
 --device device --medium none

 Next I am going to try and test a more realistic scenario where a
 harddrive is not pulled out, but is damaged.



 Can/does btrfs mark a filesystem(say, 2 drive raid1) degraded or
 unhealthy automatically when one drive is damaged badly enough that it
 cannot be written to or read from reliably?


  There are some gaps compared to an enterprise volume manager, which
  are being fixed. But please do report what you find.

 Thanks, Anand



 Suman

 On Sun, Oct 12, 2014 at 7:21 PM, Anand Jain anand.j...@oracle.com wrote:


 Suman,

 To simulate the failure, I detached one of the drives from the system.
 After that, I see no sign of a problem except for these errors:


   Are you physically pulling out the device ? I wonder if lsblk or blkid
   shows the error ? reporting device missing logic is in the progs (so
   have that latest) and it works provided user script such as blkid/lsblk
   also reports the problem. OR for soft-detach tests you could use
   devmgt at http://github.com/anajain/devmgt

  Also I am trying to get the device management framework for btrfs in
  place, with better device management and reporting.

 Thanks,  Anand



 On 10/13/14 07:50, Suman C wrote:


 Hi,

 I am testing some disk failure scenarios in a 2 drive raid1 mirror.
 They are 4GB each, virtual SATA drives inside virtualbox.

 To simulate the failure, I detached one of the drives from the system.
 After that, I see no sign of a problem except for these errors:

 Oct 12 15:37:14 rock-dev kernel: btrfs: bdev /dev/sdb errs: wr 0, rd
 0, flush 1, corrupt 0, gen 0
 Oct 12 15:37:14 rock-dev kernel: lost page write due to I/O error on
 /dev/sdb

 /dev/sdb is gone from the system, but btrfs fi show still lists it.

 Label: raid1pool  uuid: 4e5d8b43-1d34-4672-8057-99c51649b7c6
   Total devices 2 FS bytes used 1.46GiB
   devid1 size 4.00GiB used 2.45GiB path /dev/sdb
   devid2 size 4.00GiB used 2.43GiB path /dev/sdc

 I am able to read and write just fine, but do see the above errors in
 dmesg.

 What is the best way to find out that one of the drives has gone bad?

 Suman


Re: what is the best way to monitor raid1 drive failures?

2014-10-14 Thread Suman C
I cannot delete it that way, because that would take the filesystem below
the minimum number of devices and the delete fails, as explained in the wiki.

The solution from the wiki is to add a new device first and then delete the
old one, but the problem here may be that the new device shows up under the
same name (sdb)?
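
In other words, roughly this sequence (a sketch; sda is the surviving disk
in my setup):

# mount -o degraded /dev/sda /mnt2/raid1pool
# btrfs device add -f /dev/sdb /mnt2/raid1pool
# btrfs device delete missing /mnt2/raid1pool

and it is the add step that fails for me with '/dev/sdb is mounted'.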

Suman

On Tue, Oct 14, 2014 at 7:52 AM, Rich Freeman
r-bt...@thefreemanclan.net wrote:
 On Tue, Oct 14, 2014 at 10:48 AM, Suman C schakr...@gmail.com wrote:

 The new drive shows up as sdb. btrfs fi show still prints drive missing.

 mounted the filesystem with ro,degraded

 tried adding the new sdb drive which results in the following error.
 (-f because the new drive has a fs from past)

 # btrfs device add -f /dev/sdb /mnt2/raid1pool
 /dev/sdb is mounted

 Unless I am missing something, this looks like a bug.


 You need to first run btrfs device delete missing /mnt2/raid1pool I
 believe (missing is a keyword for a missing device in the array - if
 the device were still present you could specify it by /dev/sdX).

 --
 Rich


Re: what is the best way to monitor raid1 drive failures?

2014-10-14 Thread Suman C
After the reboot step, where I indicated that I mounted ro, I was unable to
mount rw or rw,degraded. If I try to mount it rw, I get the usual mount:
wrong fs type, bad option, bad superblock... error.
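
(That generic message hides the real reason, which should land in the kernel
log; I am checking it with something like:

# dmesg | grep -i btrfs | tail -n 20
)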

What might be the reason for that?

Suman

On Tue, Oct 14, 2014 at 12:15 PM, Chris Murphy li...@colorremedies.com wrote:

 On Oct 14, 2014, at 10:48 AM, Suman C schakr...@gmail.com wrote:

 mounted the filesystem with ro,degraded

 tried adding the new sdb drive which results in the following error.
 (-f because the new drive has a fs from past)

 # btrfs device add -f /dev/sdb /mnt2/raid1pool
 /dev/sdb is mounted

 Unless I am missing something, this looks like a bug.

 Strange message. I expect a device can't be added to a volume mounted ro. If 
 the device add command works on a volume mounted rw, then the bug is the 
 message '/dev/sdb is mounted' when adding device to ro mounted volume.


 Chris Murphy



Re: what is the best way to monitor raid1 drive failures?

2014-10-13 Thread Suman C
I had progs 3.12 and updated to the latest from git (3.16). With this
update, btrfs fi show reports the missing device immediately after I pull
the drive out. Thanks!

I am using VirtualBox to test this, so I am detaching the drive like so:

vboxmanage storageattach vm --storagectl controller --port port --device device --medium none
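
For example, with my test VM it ends up as something like this (the VM and
controller names are purely illustrative):

vboxmanage storageattach rock-dev --storagectl "SATA" --port 1 --device 0 --medium none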

Next I am going to test a more realistic scenario where a hard drive is not
pulled out but is damaged.
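
The rough plan (a sketch using the device-mapper error target; device names
are from my test setup) is to put a dm device between btrfs and the virtual
disk, and then flip it to a failing state:

# SZ=$(blockdev --getsz /dev/sdb)   # size in 512-byte sectors
# dmsetup create flaky_sdb --table "0 $SZ linear /dev/sdb 0"
# ... build the raid1 on /dev/mapper/flaky_sdb instead of /dev/sdb ...
# dmsetup suspend flaky_sdb
# dmsetup reload flaky_sdb --table "0 $SZ error"
# dmsetup resume flaky_sdb

After the resume, every read and write to that member should fail with an
I/O error.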

Can/does btrfs mark a filesystem (say, a 2-drive raid1) as degraded or
unhealthy automatically when one drive is damaged badly enough that it
cannot be written to or read from reliably?

Suman

On Sun, Oct 12, 2014 at 7:21 PM, Anand Jain anand.j...@oracle.com wrote:

 Suman,

 To simulate the failure, I detached one of the drives from the system.
 After that, I see no sign of a problem except for these errors:

  Are you physically pulling out the device ? I wonder if lsblk or blkid
  shows the error ? reporting device missing logic is in the progs (so
  have that latest) and it works provided user script such as blkid/lsblk
  also reports the problem. OR for soft-detach tests you could use
  devmgt at http://github.com/anajain/devmgt

  Also I am trying to get the device management framework for btrfs in
  place, with better device management and reporting.

 Thanks,  Anand



 On 10/13/14 07:50, Suman C wrote:

 Hi,

 I am testing some disk failure scenarios in a 2 drive raid1 mirror.
 They are 4GB each, virtual SATA drives inside virtualbox.

 To simulate the failure, I detached one of the drives from the system.
 After that, I see no sign of a problem except for these errors:

 Oct 12 15:37:14 rock-dev kernel: btrfs: bdev /dev/sdb errs: wr 0, rd
 0, flush 1, corrupt 0, gen 0
 Oct 12 15:37:14 rock-dev kernel: lost page write due to I/O error on
 /dev/sdb

 /dev/sdb is gone from the system, but btrfs fi show still lists it.

 Label: raid1pool  uuid: 4e5d8b43-1d34-4672-8057-99c51649b7c6
  Total devices 2 FS bytes used 1.46GiB
  devid1 size 4.00GiB used 2.45GiB path /dev/sdb
  devid2 size 4.00GiB used 2.43GiB path /dev/sdc

 I am able to read and write just fine, but do see the above errors in
 dmesg.

 What is the best way to find out that one of the drives has gone bad?

 Suman


what is the best way to monitor raid1 drive failures?

2014-10-12 Thread Suman C
Hi,

I am testing some disk failure scenarios on a 2-drive raid1 mirror. The
drives are 4GB each, virtual SATA drives inside VirtualBox.

To simulate the failure, I detached one of the drives from the system.
After that, I see no sign of a problem except for these errors:

Oct 12 15:37:14 rock-dev kernel: btrfs: bdev /dev/sdb errs: wr 0, rd 0, flush 1, corrupt 0, gen 0
Oct 12 15:37:14 rock-dev kernel: lost page write due to I/O error on /dev/sdb

/dev/sdb is gone from the system, but btrfs fi show still lists it.

Label: raid1pool  uuid: 4e5d8b43-1d34-4672-8057-99c51649b7c6
Total devices 2 FS bytes used 1.46GiB
devid 1 size 4.00GiB used 2.45GiB path /dev/sdb
devid 2 size 4.00GiB used 2.43GiB path /dev/sdc

I am able to read and write just fine, but do see the above errors in dmesg.

What is the best way to find out that one of the drives has gone bad?
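
One idea I am looking at (assuming the progs are recent enough to have the
subcommand) is to poll the per-device error counters and alert when any of
them goes non-zero:

# btrfs device stats /mnt2/raid1pool | awk '$2 != 0'

Those counters correspond to the wr/rd/flush/corrupt/gen numbers in the
kernel messages above, but I don't know if that is the recommended approach.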

Suman


Rockstor 3.0 now available

2014-09-17 Thread Suman C
Hello everyone,

Some of you already know about Rockstor from an email I sent several weeks
ago. For others: it's the BTRFS-powered NAS solution we've developed, and
we've just made the 3.0 release available.

Besides software updates, we've also improved the distribution
infrastructure and the installer, so installation is now fast and easy, as
opposed to the painfully slow experience some users complained about with
the earlier release.

We have a lot of work ahead and I'd greatly appreciate any comments
you may have. Please feel free to contact me.

Here's the direct download link:
https://sourceforge.net/projects/rockstor/files/latest/download

Here's our website: http://rockstor.com/

Thanks


lvm volume like support

2013-02-25 Thread Suman C
Hi,

I think it would be great if btrfs had support for something like an LVM
logical volume or a ZFS zvol. As far as I can tell, nobody is actively
working on this feature. I would like to know what the core developers
think of it: is it technically possible? Any strong opinions?
Implementation ideas?
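
For context, the ZFS equivalent looks like this (shown purely as an
illustration):

# zfs create -V 100G tank/vol1

which exposes a block device (on Linux, under /dev/zvol/tank/vol1) that can
be formatted, exported over iSCSI, or handed to a VM.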

I'd be happy to work towards this feature, but want your feedback
before proceeding.


Re: lvm volume like support

2013-02-25 Thread Suman C
Yes, a zvol-like feature where a btrfs-subvolume-like construct can be made
available as a LUN/block device. That device can then be used by any
application that wants a raw block device; iSCSI export is another obvious
use case. Having thin provisioning support would make it pretty awesome.

Suman

On Mon, Feb 25, 2013 at 5:46 PM, Fajar A. Nugraha l...@fajar.net wrote:
 On Tue, Feb 26, 2013 at 11:59 AM, Mike Fleetwood
 mike.fleetw...@googlemail.com wrote:
 On 25 February 2013 23:35, Suman C schakr...@gmail.com wrote:
 Hi,

 I think it would be great if there is a lvm volume or zfs zvol type
 support in btrfs.


 Btrfs already has capabilities to add and remove block devices on the
 fly.  Data can be striped or mirrored or both.  Raid 5/6 is in
 testing at the moment.
 https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
 https://btrfs.wiki.kernel.org/index.php/UseCases#RAID

 Which specific features do you think btrfs is lacking?


 I think he's talking about zvol-like feature.

 In zfs, instead of creating a
 filesystem-that-is-accessible-as-a-directory, you can create a zvol
 which behaves just like any other standard block device (e.g. you can
 use it as swap, or create ext4 filesystem on top of it). But it would
 also have most of the benefits that a normal zfs filesystem has, like:
 - thin provisioning (sparse allocation, snapshot & clone)
 - compression
 - integrity check (via checksum)

 Typical use cases would be:
 - swap in a pure-zfs system
 - virtualization (xen, kvm, etc)
 - NAS which exports the block device using iscsi/AoE

 AFAIK no such feature exist in btrfs yet.

 --
 Fajar


Re: lvm volume like support

2013-02-25 Thread Suman C
Thanks for the sparse file idea; I am actually using that solution
already. I am not sure it's the best way, however.
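
Roughly what I have today (a sketch; the paths are made up for illustration):

# btrfs subvol create /mnt2/pool/luns
# truncate -s 100G /mnt2/pool/luns/vol1.img
# losetup --find --show /mnt2/pool/luns/vol1.img

and the resulting loop device (or the file itself) gets exported to the
consumer.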

Suman

On Mon, Feb 25, 2013 at 9:57 PM, Roman Mamedov r...@romanrm.ru wrote:
 On Mon, 25 Feb 2013 21:35:08 -0800
 Suman C schakr...@gmail.com wrote:

 Yes, zvol like feature where a btrfs subvolume like construct can be
 made available as a LUN/block device. This device can then be used by
 any application that wants a raw block device. iscsi is another
 obvious usecase. Having thin provisioning support would make it pretty
 awesome.

 I think what you are missing is that btrfs is a filesystem, not a block device
 management mechanism.

 For your use case, you can simply create a subvolume and then make a sparse
 file inside of it.

   btrfs sub create foobar
   dd if=/dev/zero of=foobar/100GB.img bs=1 count=1 seek=100G

 If you need this to be a block device, use 'losetup' to make foobar/100GB.img
 appear as one (/dev/loopX). But iSCSI/AoE/NBD can export files as well as
 block devices, so this is not even necessary.

 --
 With respect,
 Roman


Btrfs-Progs integration branch question

2012-09-03 Thread Suman C
Hi,

I would like to get the latest btrfs-progs code. Chris Mason's repo at
git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git looks
like the most recent, but it is obviously missing the last several patches
I see on the mailing list. I also tried Hugo Mills' integration repo at
http://git.darksatanic.net/repo/btrfs-progs-unstable.git and, unless I am
looking at it wrong, it seems to be behind.

Can someone please point me to the current process for testing and
developing recent btrfs-progs?

I am trying to integrate the quota patch from August 10th by Jan Schmidt,
and I am getting conflicts when I git apply it.
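
For reference, this is roughly what I am doing (the patch filename is just
what I saved the mail as, so purely illustrative):

$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
$ cd btrfs-progs
$ git apply 0001-qgroups.patch

Switching to git am -3 on the raw mbox would at least attempt a 3-way merge,
but it still needs the right base tree, hence the question.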

Thanks
Suman