On Sun, Jul 10, 2016 at 12:42 PM, Morgan Read <mst...@read.org.nz> wrote:


> [root@morgansmachine ~]# ssm resize -s-3G
> /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
> [root@morgansmachine ~]# ssm resize -s-3G
> /dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6


These two commands should have, at the very least, reduced the size of
the file systems mounted at /home and /, but I have no idea why this
was permitted at all, because online shrinking is not supported by ext4.

---------------
# resize2fs /dev/VG/test4 40G
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/VG/test4 is mounted on /mnt/0; on-line resizing required
resize2fs: On-line shrinking not supported

# ssm resize -s-3G /dev/VG/test4
Do you want to unmount "/mnt/0"? [Y|n] n
fsadm: Cannot proceed with mounted filesystem "/mnt/0"
  fsadm failed: 1
  Filesystem resize failed.
SSM Error (2012): ERROR running command: "lvm lvresize -r -L
49283072.0k /dev/VG/test4"
---------------

If I allow the unmount:

---------------
[root@f24s ~]# ssm resize -s-3G /dev/VG/test4
Do you want to unmount "/mnt/0"? [Y|n] y
fsck from util-linux 2.28

/dev/mapper/VG-test4: 11/3276800 files (0.0% non-contiguous),
251699/13107200 blocks
resize2fs 1.42.13 (17-May-2015)
Resizing the filesystem on /dev/mapper/VG-test4 to 12320768 (4k) blocks.
The filesystem on /dev/mapper/VG-test4 is now 12320768 (4k) blocks long.

  Size of logical volume VG/test4 changed from 50.00 GiB (12800
extents) to 47.00 GiB (12032 extents).
  Logical volume test4 successfully resized.
---------------

So why don't you have any messages about what ssm resize actually did?
I can't tell whether it did anything at all, which suggests it probably
did not resize the file systems. The ssm list output you posted after
this resize bears that out: the file systems are still mounted at /home
and /, and they are still the same size as before, with no change.
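
If you want to double-check, comparing the LV size with what the file
system itself reports would show it. A sketch only; I'm using a
placeholder for the luks device name, since I can't tell from here which
mapping is /home and which is /:

---------------
# LV sizes as LVM sees them
lvs -o lv_name,lv_size fedora_morgansmachine
# file system sizes as the kernel sees them
df -h / /home
# block count recorded in the ext4 superblock itself
dumpe2fs -h /dev/mapper/luks-<UUID> | grep -i 'block count'
---------------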


> [root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/home
>   WARNING: Reducing active and open logical volume to 192.73 GiB.
>   THIS MAY DESTROY YOUR DATA (filesystem etc.)
> Do you really want to reduce fedora_morgansmachine/home? [y/n]: y
>   Size of logical volume fedora_morgansmachine/home changed from 195.73 GiB
> (50107 extents) to 192.73 GiB (49339 extents).
>   Logical volume home successfully resized.
> [root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/root
>   WARNING: Reducing active and open logical volume to 17.00 GiB.
>   THIS MAY DESTROY YOUR DATA (filesystem etc.)
> Do you really want to reduce fedora_morgansmachine/root? [y/n]: y
>   Size of logical volume fedora_morgansmachine/root changed from 20.00 GiB
> (5120 extents) to 17.00 GiB (4352 extents).
>   Logical volume root successfully resized.

Yeah, I think you did indeed just destroy your data on both of these,
because the file systems were not resized in the first step and you then
asked it to change the size of the LVs anyway. The extents that were cut
off revert back to the VG.

Had the file system resize happened correctly, ssm would have resized
the LV for you, so you wouldn't have needed this step at all.
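
For the record, a shrink on one of these ext4-on-LUKS-on-LVM volumes has
to be driven top-down, with everything unmounted (for / that means
booting live media). A rough sketch only, with a placeholder luks name
and illustrative sizes, and not something to run against the now-damaged
volumes:

---------------
# unmount first; ext4 cannot be shrunk while mounted
umount /home
# force a check; resize2fs requires it before shrinking
e2fsck -f /dev/mapper/luks-<UUID>
# 1. shrink the file system, to something below the final target
resize2fs /dev/mapper/luks-<UUID> 190G
# 2. shrink the dm-crypt mapping (size is in 512-byte sectors)
cryptsetup resize --size <sectors> luks-<UUID>
# 3. shrink the LV last, never below what the layers above still use
lvreduce -L 192.73G fedora_morgansmachine/home
---------------

ssm is supposed to drive this whole chain itself, which is why a single
ssm resize against the luks device is normally all you need.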






> [root@morgansmachine ~]# ssm resize -s+3G /dev/fedora_morgansmachine/var
>   Size of logical volume fedora_morgansmachine/var changed from 3.00 GiB
> (768 extents) to 6.00 GiB (1536 extents).
>   Logical volume var successfully resized.
> [root@morgansmachine ~]# ssm resize -s+3G
> /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
> SSM Error (2005): There is not enough space in the pool 'none' to grow
> volume '/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f' to size
> 6289408.0 KB!

I think the problem here is that ssm is now confused somehow. You should
have run just the second command, against the file system (the luks
device) itself, because ssm knows it must first increase the size of the
LV, then the LUKS volume, and then the fs. But you only increased the
size of the LV, not the LUKS volume, which now has a different size than
its underlying LV, and ssm seems to get stuck on that.
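
If the LV under /var really is 3 GiB larger than the LUKS mapping now,
you can probably finish the grow by hand, since growing ext4 online is
supported. A sketch, assuming that file system is in fact ext4:

---------------
# grow the dm-crypt mapping to fill the already-enlarged LV
cryptsetup resize luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
# then grow the file system to fill the mapping (no size = use it all)
resize2fs /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
---------------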

Further, by shrinking some LVs and then growing another, some of the
extents that belonged to /home and / now belong to another LV and have
probably been stepped on, so /home and / are likely a total loss. It
would take very tedious, patient work to unwind all of this in the
*exact* reverse order just to get the same extents allocated back,
linearly, to the /home and / file systems.
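
If you want to see exactly how the extent maps changed at each step,
LVM's automatic metadata archives record them before every
lvresize/lvreduce. For inspection only, since restoring old metadata
does not bring back data in extents that have since been overwritten:

---------------
# list the metadata backups LVM made before each change
vgcfgrestore --list fedora_morgansmachine
# current per-segment extent allocation, to compare against those archives
lvs --segments -o lv_name,seg_start_pe,seg_size_pe,devices fedora_morgansmachine
---------------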



> [root@morgansmachine ~]# ssm resize
> /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
> Traceback (most recent call last):

That's a bug. Any time there's a crash, it's a bug.




-- 
Chris Murphy
