** Description changed:

  If you create a bcache device, you can't reuse your disks/partitions
  without a reboot.
  
  You can reproduce the case this way:
  
  start a VM with 2 disks (the caching device must be bigger than or equal
  to the backing device, cf. bug #1377130) and an ISO of Utopic desktop amd64
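For reference, the two virtual disks can be prepared as sparse image files before attaching them to the VM (an illustrative sketch; the file names are made up, and the sizes match the lsblk output in this report):

```shell
# Create sparse files to back the VM's two virtual disks:
# an 8G cache disk (sda) and a 16G backing disk (sdb).
truncate -s 8G /tmp/cache-disk.img
truncate -s 16G /tmp/backing-disk.img

# Sparse files occupy almost no real space until written to.
ls -lh /tmp/cache-disk.img /tmp/backing-disk.img
```

These files can then be handed to the hypervisor of your choice, e.g. as virtio disks.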
  
  create the bcache device:
  
+   make-bcache --writeback --discard -C /dev/sda -B /dev/sdb
+ 
+   UUID:                       b245150d-cfbe-4f90-836a-343e0e1a4c55
+   Set UUID:           c990a31a-f531-4231-9603-d40230dc6504
+   version:            0
+   nbuckets:           16384
+   block_size:         1
+   bucket_size:                1024
+   nr_in_set:          1
+   nr_this_dev:                0
+   first_bucket:               1
+   UUID:                       cc31e0bb-db29-4115-a1b2-e9ff54e5f127
+   Set UUID:           c990a31a-f531-4231-9603-d40230dc6504
+   version:            1
+   block_size:         1
+   data_offset:                16
+ 
+   ******************************
+   command:
+   lsblk
+ 
+   result:
+   NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
+   sda         8:0    0    8G  0 disk
+   └─bcache0 251:0    0   16G  0 disk
+   sdb         8:16   0   16G  0 disk
+   └─bcache0 251:0    0   16G  0 disk
+   sr0        11:0    1  1,1G  0 rom  /cdrom
+   loop0       7:0    0  1,1G  1 loop /rofs
+   ******************************
  
  All is good
  
+   lsmod | grep bcache:
+   bcache                227884  3
  
  format the bcache device:
  
+   ******************************
+   command:
+   mkfs.ext4 /dev/bcache0
+ 
+   result:
+   Discarding device blocks:    4096/4194302 done
+   Creating filesystem with 4194302 4k blocks and 1048576 inodes
+   Filesystem UUID: 587d2249-3eaf-4590-a00d-42939f257e99
+   Superblock backups stored on blocks:
+    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
+    4096000
+ 
+   Allocating group tables:   0/128 done
+   Writing inode tables:   0/128 done
+   Creating journal (32768 blocks): done
+   Writing superblocks and filesystem accounting information:   0/128   2/128 done
+ 
+   ******************************
  
  Now mount it:
+   mount /dev/bcache0 /mnt/
+   mkdir /mnt/test_dir
+   touch /mnt/test_file
  
  state of: /sys/fs/bcache/
+   ls /sys/fs/bcache/
+ 
+   c990a31a-f531-4231-9603-d40230dc6504
+   register
+   register_quiet
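As a side note, register and register_quiet are the kernel's entry points for attaching devices to bcache; a sketch of manual registration, using the device names from this report (per the kernel bcache documentation, register_quiet behaves like register but stays silent when the device has no bcache superblock):

```shell
# Manually register both devices with the running bcache module.
# Writing to register errors out if the device lacks a bcache
# superblock; register_quiet silently ignores such devices.
echo /dev/sda > /sys/fs/bcache/register
echo /dev/sdb > /sys/fs/bcache/register_quiet
```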
  
  bcache-super-show /dev/sda
+   sb.magic            ok
+   sb.first_sector             8 [match]
+   sb.csum                     E6A8D9AC496B0B04 [match]
+   sb.version          3 [cache device]
+ 
+   dev.label           (empty)
+   dev.uuid            b245150d-cfbe-4f90-836a-343e0e1a4c55
+   dev.sectors_per_block       1
+   dev.sectors_per_bucket      1024
+   dev.cache.first_sector      1024
+   dev.cache.cache_sectors     16776192
+   dev.cache.total_sectors     16777216
+   dev.cache.ordered   yes
+   dev.cache.discard   yes
+   dev.cache.pos               0
+   dev.cache.replacement       0 [lru]
+ 
+   cset.uuid           c990a31a-f531-4231-9603-d40230dc6504
+   ******************************
  
  bcache-super-show -f /dev/sdb
  
+   sb.magic            ok
+   sb.first_sector             8 [match]
+   sb.csum                     9600B159F36A67DD [match]
+   sb.version          1 [backing device]
+ 
+   dev.label           (empty)
+   dev.uuid            cc31e0bb-db29-4115-a1b2-e9ff54e5f127
+   dev.sectors_per_block       1
+   dev.sectors_per_bucket      1024
+   dev.data.first_sector       16
+   dev.data.cache_mode 1 [writeback]
+   dev.data.cache_state        2 [dirty]
+ 
+   cset.uuid           c990a31a-f531-4231-9603-d40230dc6504
+   ******************************
  
  mount:
+   /dev/bcache0 on /mnt type ext4 (rw)
  
  we will unregister the bcache cache set:
+   echo 1 > /sys/fs/bcache/c990a31a-f531-4231-9603-d40230dc6504/unregister
  
  check the content of /sys/fs/bcache
+   ls /sys/fs/bcache/
+ 
+   register
+   register_quiet
  
  So bcache is unregistered, but status of /dev/sda:
  
+   bcache-super-show -f /dev/sda
+   sb.magic            ok
+   sb.first_sector             8 [match]
+   sb.csum                     E6A8D9AC496B0B04 [match]
+   sb.version          3 [cache device]
+ 
+   dev.label           (empty)
+   dev.uuid            b245150d-cfbe-4f90-836a-343e0e1a4c55
+   dev.sectors_per_block       1
+   dev.sectors_per_bucket      1024
+   dev.cache.first_sector      1024
+   dev.cache.cache_sectors     16776192
+   dev.cache.total_sectors     16777216
+   dev.cache.ordered   yes
+   dev.cache.discard   yes
+   dev.cache.pos               0
+   dev.cache.replacement       0 [lru]
+ 
+   cset.uuid             c990a31a-f531-4231-9603-d40230dc6504
+ 
+   command:
+   bcache-super-show -f /dev/sdb
+ 
+   result:
+   sb.magic            ok
+   sb.first_sector             8 [match]
+   sb.csum                     D41799F794675FA8 [match]
+   sb.version          1 [backing device]
+ 
+   dev.label           (empty)
+   dev.uuid            cc31e0bb-db29-4115-a1b2-e9ff54e5f127
+   dev.sectors_per_block       1
+   dev.sectors_per_bucket      1024
+   dev.data.first_sector       16
+   dev.data.cache_mode 1 [writeback]
+   dev.data.cache_state        0 [detached]
+ 
+   cset.uuid           00000000-0000-0000-0000-000000000000
+   ******************************
  
  Maybe I'm wrong, but I would expect the caching device's cset.uuid to be
  zeroed out, not the backing device's, as we may still want to use the
  data on it.
  
  But data is still accessible on the mount point.
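Note that the unregister above detached the backing device from a writeback cache whose dev.data.cache_state was still dirty. A gentler sequence, going by the documented sysfs interface (not something this report tested), would detach first so dirty data is flushed back to /dev/sdb:

```shell
# Detach the backing device from the cache set; in writeback mode
# this flushes dirty data to the backing device first.
echo 1 > /sys/block/bcache0/bcache/detach

# The device's state file reports "no cache" once the detach
# (and the flush) has completed.
cat /sys/block/bcache0/bcache/state
```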
  
+   So we wipe /dev/sda now:
+   ******************************
+   command:
+   wipefs -a /dev/sda
+ 
+   result:
+   /dev/sda: 16 bytes were erased at offset 0x00001018 (bcache): c6 85 73 f6 4e 1a 45 ca 82 65 f5 7f 48 ba 6d 81
+   ******************************
  
  mount:
+   /dev/bcache0 on /mnt type ext4 (rw)
  
  data still there:
+   ******************************
+   command:
+   ls /mnt/
+ 
+   result:
+   lost+found
+   test_dir
+   test_file
+   ******************************
+ 
+   ******************************
+   command:
+   lsblk
+ 
+   result:
+   NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
+   sda         8:0    0    8G  0 disk
+   sdb         8:16   0   16G  0 disk
+   └─bcache0 251:0    0   16G  0 disk /mnt
+   sr0        11:0    1  1,1G  0 rom  /cdrom
+   loop0       7:0    0  1,1G  1 loop /rofs
+   ******************************
  
  OK, now we can back up our data if we need to; next, we umount:
+   umount /mnt/
  
  OK, no errors.
  
+   lsblk:
+   NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
+   sda         8:0    0    8G  0 disk
+   sdb         8:16   0   16G  0 disk
+   └─bcache0 251:0    0   16G  0 disk
+   sr0        11:0    1  1,1G  0 rom  /cdrom
+   loop0       7:0    0  1,1G  1 loop /rofs
+   ******************************
  
  bcache0 is still there, OK, but how do we unregister it definitively?
  
+   ls /sys/fs/bcache/
+ 
+   result:
+   register
+   register_quiet
+   **************
+ 
+   lsmod
+ 
+   result:
+   Module                  Size  Used by
+   bcache                227884  1
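For what it's worth, the kernel documents a per-device stop hook that should make the reboot unnecessary; a sketch based on that documentation (untested here, and assuming the device is still bcache0):

```shell
# Stop the bcache device itself; once /dev/bcache0 is torn down,
# the backing device is no longer "in use".
echo 1 > /sys/block/bcache0/bcache/stop

# Now the superblock on the backing device can be wiped:
wipefs -a /dev/sdb
```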
  
  we can't wipefs -a /dev/sdb, the device is in use, so we have to reboot
  the machine (NOT OK)
  
+   in the vm:
+   ls /sys/fs/bcache/
+ 
+   result:
+   register
+   register_quiet
  
  update packages and reinstall bcache-tools:
  
+   bcache-super-show -f /dev/sda
+ 
+   result:
+   Invalid superblock (bad magic)
+   sb.magic            bad magic
+ 
+   bcache-super-show -f /dev/sdb
+ 
+   result:
+   sb.magic            ok
+   sb.first_sector             8 [match]
+   sb.csum                     D41799F794675FA8 [match]
+   sb.version          1 [backing device]
+ 
+   dev.label           (empty)
+   dev.uuid            cc31e0bb-db29-4115-a1b2-e9ff54e5f127
+   dev.sectors_per_block       1
+   dev.sectors_per_bucket      1024
+   dev.data.first_sector       16
+   dev.data.cache_mode 1 [writeback]
+   dev.data.cache_state        0 [detached]
+ 
+   cset.uuid             00000000-0000-0000-0000-000000000000
  
  Now we can wipe /dev/sdb because we rebooted and bcache no longer uses
  the device:
+   wipefs -a /dev/sdb
+ 
+   result:
+   /dev/sdb: 16 bytes were erased at offset 0x00001018 (bcache): c6 85 73 f6 4e 1a 45 ca 82 65 f5 7f 48 ba 6d 81
+ 
+   **********************************************************
+ 
+ Now I can reuse my device, but I needed a reboot. Maybe we also need
+ better bcache management tools.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to bcache-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1377142

Title:
  Bcache doesn't allow full unregistering without rebooting

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bcache-tools/+bug/1377142/+subscriptions
