Hi,

I've modified the module to support multiple LUKS devices (UUIDs). It works
with my setup, which has only one LUKS device, but it should work with more
than one as well.

You have to add the UUIDs of your LUKS devices separated by commas (e.g.
rd.ykluks.uuid=UUID1,UUID2,UUID3).
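
For example, a rough sketch of one way to set this (assuming the parameter
goes on the dom0 kernel command line via /etc/default/grub; device names and
UUIDs below are only placeholders):

  # Find the UUIDs of your LUKS devices
  blkid -t TYPE=crypto_LUKS -o value -s UUID
  # or, for a single device:
  cryptsetup luksUUID /dev/nvme0n1p2

  # Add them to the kernel command line in /etc/default/grub:
  GRUB_CMDLINE_LINUX="... rd.ykluks.uuid=<uuid1>,<uuid2>"

  # Regenerate the GRUB config (legacy BIOS path shown; EFI installs
  # write to a different location):
  grub2-mkconfig -o /boot/grub2/grub.cfg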

Hope this works; I'm happy to get any feedback.

Regards
the2nd



On Wed, Aug 15, 2018 at 2:18 PM, <joevio...@gmail.com> wrote:

>
> > Please note that the current version will probably not work with a
> > default Qubes LVM-on-LUKS installation. But if some experienced user is
> > willing to help with testing, I'll try to come up with a version that
> > supports this too.
> >
> > Besides the YubiKey/LUKS handling, the module also takes care of the
> > rd.qubes.hide_all_usb functionality via its own rd.ykluks.hide_all_usb
> > command line parameter, because the YubiKey is connected via USB and
> > needs to remain accessible until we have received the challenge
> > response from it. I am still unsure whether this is the best way to
> > implement this, so if anyone with deeper knowledge of Qubes/dracut has
> > a better or more secure solution, I'd be happy about any help.
> >
> > Regards
> > the2nd
>
>
>
> So I've screwed up... when I filled up my LVM, I added a disk to the
> Volume Group and expanded the pool.
>
> But I didn't encrypt the new drive, thinking I had LUKS on LVM. But this
> is what I actually have:
> [root@dom0]# lsblk | grep -v "\-\-"
> NAME                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> sdb                                             8:16   0   3.7T  0 disk
> └─sdb1                                          8:17   0   3.7T  0 part
>   ├─qubes_dom0-pool00_tmeta                   253:1    0   2.1G  0 lvm
>   │ └─qubes_dom0-pool00-tpool                 253:3    0     1T  0 lvm
>   │   ├─qubes_dom0-pool00                     253:6    0     1T  0 lvm
>   │   ├─qubes_dom0-root                       253:4    0 192.6G  0 lvm   /
>   ├─qubes_dom0-pool00_meta0                   253:63   0   2.1G  0 lvm
>   └─qubes_dom0-pool00_tdata                   253:2    0     1T  0 lvm
>     └─qubes_dom0-pool00-tpool                 253:3    0     1T  0 lvm
>       ├─qubes_dom0-pool00                     253:6    0     1T  0 lvm
>       ├─qubes_dom0-root                       253:4    0 192.6G  0 lvm   /
> sr0                                            11:0    1  1024M  0 rom
> loop0                                           7:0    0   500M  0 loop
> sda                                             8:0    0 232.9G  0 disk
> └─sda1                                          8:1    0 232.9G  0 part
> nvme0n1                                       259:0    0 232.9G  0 disk
> ├─nvme0n1p1                                   259:1    0     1G  0 part  /boot
> └─nvme0n1p2                                   259:2    0 231.9G  0 part
>   └─luks-bfcca13a-213d-46ec-b156-53df348dba30 253:0    0 231.9G  0 crypt
>     ├─qubes_dom0-pool00_tdata                 253:2    0     1T  0 lvm
>     │ └─qubes_dom0-pool00-tpool               253:3    0     1T  0 lvm
>     │   ├─qubes_dom0-pool00                   253:6    0     1T  0 lvm
>     │   ├─qubes_dom0-root                     253:4    0 192.6G  0 lvm   /
>     └─qubes_dom0-swap                         253:5    0  23.3G  0 lvm   [SWAP]
>
>
> With this LVM-on-LUKS setup, extending the thin pool onto a new disk that
> was added to the volume group winds up leaving plaintext data on the new
> disk.
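>
> One way to double-check which physical devices actually back the thin
> pool is with the standard LVM tools (the VG name qubes_dom0 is taken
> from the output above):
>
>   # Which PVs belong to the volume group, and how full they are
>   pvs -o pv_name,vg_name,pv_size,pv_free
>
>   # Which PVs each LV segment lives on; anything on /dev/sdb1 sits
>   # outside the LUKS container and is therefore unencrypted
>   lvs -a -o lv_name,vg_name,devices qubes_dom0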
>
>
> Here's what I think my setup will have to be:
>
> nvme0n1 (2 drives in hw RAID 0)
> ├─nvme0n1p1                       part  /boot
> └─nvme0n1p2                       part
>   └─luks (same key)               crypt
>     ├─qubes_dom0-pool00_tmeta     lvm
>     ├─qubes_dom0-pool00_tdata     lvm
>     │ └─qubes_dom0-pool00-tpool   lvm
>     │   ├─qubes_dom0-pool00       lvm
>     │   ├─qubes_dom0-root         lvm   /
>     │   └─ ... vm                 lvm
>     └─qubes_dom0-swap             lvm   [SWAP]
>
> sda  (2 drives in hw RAID 0)
> └─sda1                            part
>   └─luks (same key)               crypt
>     └─qubes_dom0-pool00_tdata     lvm
>       └─qubes_dom0-pool00-tpool   lvm
>         ├─qubes_dom0-pool00       lvm
>         ├─qubes_dom0-root         lvm   /
>         └─ ... vm                 lvm
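>
> A rough sketch of how the second encrypted PV in that layout might be
> created (untested; this wipes /dev/sda1, and the mapper name luks-sda1
> is only an example):
>
>   cryptsetup luksFormat /dev/sda1
>   cryptsetup luksOpen /dev/sda1 luks-sda1
>   pvcreate /dev/mapper/luks-sda1
>   vgextend qubes_dom0 /dev/mapper/luks-sda1
>   # The new container also needs to be unlocked at boot (e.g. via
>   # /etc/crypttab and a rebuilt initramfs) before LVM activation.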
>
> With your ykluks dracut module:
> > The default Qubes OS installation is an LVM-on-LUKS setup, which will
> > not work yet. Patches for LVM-on-LUKS are welcome, as well as
> > experienced testers, because I don't have an LVM-on-LUKS installation
> > to test with.
>
> I will be a tester for this.
>
> Thanks
>
