geli+Root on ZFS installation

2013-09-20 Thread yudi v
Hi,

I managed to install with a "geli + root on ZFS" setup but have a few
questions.  Most of the instructions just list commands but offer very
little explanation.
I adapted the instructions at
https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE to suit my needs.

Here's the process I used for the test on a VM:

2 GB RAM
two 8 GB HDDs, mirrored, with three partitions each:

128 KB for boot code
2 GB for /boot
the rest of the disk for the system, encrypted

no key file for the encrypted partitions, only a passphrase
using 9.1-RELEASE
there will be no swap and no handling of 4K drives, just to keep it as
simple as possible.
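
The layout above can be sanity-checked with a little shell arithmetic (an
illustration, not part of the installation; all sizes are the ones stated
above, counted in 512-byte sectors):

```shell
disk=$((8 * 1024 * 1024 * 1024 / 512))    # whole disk: 16777216 sectors
boot=$((128 * 1024 / 512))                # 128 KB freebsd-boot partition: 256 sectors
bootfs=$((2 * 1024 * 1024 * 1024 / 512))  # 2 GB /boot partition: 4194304 sectors
rest=$((disk - boot - bootfs))            # what is left for the encrypted p3
echo "$rest"                              # sectors remaining for da0p3 (~6 GB)
```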

*Create the basic three partitions:*


gpart destroy -F da0
gpart destroy -F da1
gpart create -s gpt da0
gpart create -s gpt da1
gpart add -s 128k -t freebsd-boot da0
gpart add -s 128k -t freebsd-boot da1
gpart add -s 2G -t freebsd-zfs da0
gpart add -s 2G -t freebsd-zfs da1
gpart add -t freebsd-zfs da0
gpart add -t freebsd-zfs da1

*Write boot code to both disks:*

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1

*Load necessary modules:*

kldload zfs
kldload geom_eli

*Encrypt the disks with only a passphrase:*

geli init -b -s 4096 /dev/da0p3
geli init -b -s 4096 /dev/da1p3

geli attach /dev/da0p3
geli attach /dev/da1p3

*Creating ZFS datasets:*

zpool create bootdir mirror /dev/da0p2 /dev/da1p2
zpool set bootfs=bootdir bootdir

zpool create -R /mnt -O canmount=off tank mirror /dev/da0p3.eli /dev/da1p3.eli
zfs create -o mountpoint=/ tank/ROOT
zfs set mountpoint=/mnt/bootdir bootdir
zfs mount bootdir

*Then exit the shell and go back to bsdinstall. Install as normal, then
return to the shell after bsdinstall finishes (do not reboot yet).*

Once in the newly installed system:

mount -t devfs devfs /dev   (to use ZFS commands in the new environment)

*Add the necessary variables/settings:*

echo 'zfs_enable="YES"' >> /etc/rc.conf
echo 'vfs.root.mountfrom="zfs:tank/ROOT"' >> /boot/loader.conf
echo 'zfs_load="YES"' >> /boot/loader.conf
echo 'geom_eli_load="YES"' >> /boot/loader.conf
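
For reference, after those appends the two files should end with the
following lines (assuming nothing else was already present; this is just
what the echo commands above produce):

```
# /etc/rc.conf
zfs_enable="YES"

# /boot/loader.conf
vfs.root.mountfrom="zfs:tank/ROOT"
zfs_load="YES"
geom_eli_load="YES"
```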

*Then create a zpool cache file:*

 zpool set cachefile=/boot/zfs/zpool.cache tank

*Then move the boot folder to the second partition under the bootdir
dataset:*

mv boot bootdir/

*Then set the final mount points:*

zfs set mountpoint=legacy tank
zfs set mountpoint=/bootdir bootdir

*Then reboot.*
It should boot fine into the new system.

-  My questions:  -

*1.*   Almost all the guides I came across do not install to the root
dataset; they only seem to use it to derive/mount other
datasets/filesystems.
One of the reasons is to use boot environments; what are the other
possible reasons for doing this?



*2*.   Is it necessary to create a symbolic link to the /boot dir? Again,
one of the howtos on the web had this step (
https://www.dan.me.uk/blog/2012/05/06/full-disk-encryption-with-zfs-root-for-freebsd-9-x/
).

ln -fs bootdir/boot

*3*.   The option below is where I had the most trouble. It definitely
needs to be present when using geli+ZFS; if it's only ZFS, then I think the
bootfs flag suffices. Can someone with more knowledge of this please shed
some light on when this entry is needed?

vfs.root.mountfrom="zfs:tank/ROOT"

*4.* In the wiki link above, what is the purpose of:

# zfs set mountpoint=/  zroot/ROOT
# zfs set mountpoint=/zroot zroot

I cannot understand the logic behind the second command.
Does that mean zroot will be mounted under / (the root of the filesystem)?
And why?

Looking at the rest of the commands:

# zfs set mountpoint=/tmp zroot/tmp
# zfs set mountpoint=/usr zroot/usr
# zfs set mountpoint=/var zroot/var

So if ROOT is set to /,
then tmp, usr, and var all appear under ROOT. Is that right?


*5.* There seems to be a lot of variation in how the system directories
are mounted under ZFS. In the wiki link above, separate filesystems are
created under the root dataset for usr, var, tmp, and usr/home.
What's the logic? Are there any general guidelines or best-practice
instructions?



Thank you.
Yudi
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"


Is it possible to suspend to disk with geli+Root on ZFS installation

2013-09-27 Thread yudi v
Hi all,

Is it possible to suspend to disk (hibernate) when using geli for full disk
encryption? My set-up is listed below; I am going to have an encrypted
container and ZFS on top. There are two options for swap with this
set-up: either use a swap file on the ZFS pool, or use a separate partition
for swap and encrypt that. What I want to know is whether either of these
will work with suspend to disk.

The geli(8) <http://man.freebsd.org/geli/8> man page does not say
anything about suspending to disk. Geli itself has suspend and resume
commands, but it looks like they cannot be used on the file system where
the geli utility is stored (so the root pool cannot be suspended?).

And the onetime option does not support geli suspend.

Thank you.
Yudi

PS. I haven't received any response to my earlier email, "geli+Root on ZFS
installation"; if someone would still like to answer some of the questions
at the end of it, that would be wonderful.



Geli and ZFS

2013-10-08 Thread yudi v
 There are a few different ways to set up geli with ZFS. I just want to get
some opinions (benefits and disadvantages) on the two options below.


*First option*: (most commonly encountered set-up)

Have geli on the block device and ZFS on top of the geli provider.

*Second option:*

Create a ZFS volume on a block device, then create a geli provider on top
of the ZFS volume, and finally ZFS datasets on top.
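
For concreteness, the two stackings could be sketched roughly as follows
(a sketch only, not tested commands; the device names, pool names, and the
volume size are assumptions):

```
# Option 1: geli providers on the raw partitions, ZFS pool on top
geli init -s 4096 /dev/ada0p1
geli init -s 4096 /dev/ada1p1
geli attach /dev/ada0p1
geli attach /dev/ada1p1
zpool create data mirror ada0p1.eli ada1p1.eli

# Option 2: ZFS pool on the raw disks, a zvol carved out of it,
# geli on the zvol, and a second pool holding the datasets on top
zpool create outer mirror ada0 ada1
zfs create -V 2700G outer/cryptvol
geli init -s 4096 /dev/zvol/outer/cryptvol
geli attach /dev/zvol/outer/cryptvol
zpool create data /dev/zvol/outer/cryptvol.eli
```

(Note that option 2 stacks ZFS on top of ZFS; the sketch is only meant to
make the two layerings explicit, not to endorse either.)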

Generally, it's recommended to let ZFS manage the whole disk if possible,
so I was wondering if the second option is better.
I will be using a couple of 3 TB HDDs, mirrored, for data, and want to
encrypt them.

I am hoping someone with an in-depth understanding of ZFS will be able to
offer some insight.

-- 
Kind regards,
Yudi


Re: Is it possible to suspend to disk with geli+Root on ZFS installation

2013-10-12 Thread yudi v
On Mon, Sep 30, 2013 at 2:47 AM, Ian Smith  wrote:
> In freebsd-questions Digest, Vol 486, Issue 7, Message: 5
> On Sat, 28 Sep 2013 16:25:33 +0200 Roland Smith  wrote:
>  > On Fri, Sep 27, 2013 at 05:37:55PM +1000, yudi v wrote:
>  > > Hi all,
>  > >
>  > > Is it possible to suspend to disk (hibernate) when using geli for
full disk
>  > > encryption.
>  >
>  > As far as I can tell, FreeBSD doesn't support suspend to disk on all
>  > architectures. On amd64 the necessary infrastructure doesn't exist,
and on
>  > i386 FPU state is lost, there is no multiprocessor support and some
MSRs are
>  > not restored [1].
>  >
>  > [1]: https://wiki.freebsd.org/SuspendResume
>
> Roland, sorry, no; you (and that page) are talking about Suspend to RAM,
> ACPI state S3.  What you've said is correct re Suspend to RAM - though
> some running amd64 have achieved some success on some machines lately;
> most of the issues are with restoring modern video, backlight and such.
>
> Those i386 comments don't apply to my Thinkpad T23s, which suspend and
> resume, in console mode and X, flawlessly on 9.1-R and properly after
> various tweaks on 8.x, 7.x and 6.x - but they're a single core P3-M ..
>
> I must reiterate, FreeBSD does not support Suspend to Disk (state S4 aka
> 'hibernate') on ANY platform, except - perhaps - on machines supporting
> S4 in BIOS (hw.acpi.s4bios=1) which are very rarely spotted in the wild.
>
>  > And even suspend to RAM doesn't work on every machine [2].
>  >
>  > [2]: https://wiki.freebsd.org/IdeasPage#Suspend_to_disk
>
> That page IS about Suspend to Disk - but only as a wishlist idea, as it
> has been for many years.  Someone did take it on as a Google SoC project
> years ago, but nothing ever came of it to my knowledge.
>
> The last laptop I have that will properly hibernate - ie save RAM and
> all state to disk and power off, then reload all RAM and state on power
> return - is a 300MHz Compaq Armada 1500C (mfg '98), but using the older
> APM BIOS rather than ACPI.  (It's still running, 24/7/365 since 2002 :)
>
> cheers, Ian

Thanks, Ian, for clarifying that FreeBSD does not support Suspend to Disk.
I had just assumed all major distros supported all the suspend states. Now
I am looking for a UPS that cleanly shuts down the machine when there is a
power outage.
I am looking at an APC Power-Saving Back-UPS ES 8 Outlet 700VA 230V AS 3112
<http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=BE700G-AZ&total_watts=200&tab=features>;
does anyone know if the apcupsd daemon works fine under FreeBSD, or should
I be looking at Network UPS Tools (NUT)?
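
In case it helps a future reader: apcupsd is available on FreeBSD as the
sysutils/apcupsd port. A minimal apcupsd.conf for a USB-attached Back-UPS
might look like this (the directive names are apcupsd's; the values are
assumptions to adjust):

```
UPSNAME backups
UPSCABLE usb
UPSTYPE usb
DEVICE
# begin a clean shutdown when charge drops below 10%
# or estimated runtime falls below 5 minutes
BATTERYLEVEL 10
MINUTES 5
```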

-- 
Kind regards,
Yudi


new backup server file system options

2012-12-21 Thread yudi v
Hi all,

I am building a new FreeBSD file server to use for backups; it will use
two-disk RAID mirroring in an HP MicroServer N40L.
I have gone through some of the documentation and would like to know which
file systems to choose.

According to the docs, UFS is suggested for the system partitions, but
someone on the FreeBSD IRC channel suggested using ZFS for the root fs as
well.

Are there any disadvantages to using ZFS for the whole system rather than
going with UFS for the system files and ZFS for the user data?

-- 
Kind regards,
Yudi