Re: Zfs encryption property for freebsd 8.3
On Tue, Sep 3, 2013 at 6:22 AM, Florent Peterschmitt <flor...@peterschmitt.fr> wrote:
> On 03/09/2013 14:14, Emre Çamalan wrote:
>> Hi, I want to encrypt some disks on my server with the ZFS encryption
>> property, but it is not available.
>
> That would require ZFS v30. As far as I am aware, Oracle has not
> released that code under the CDDL.

Oracle's ZFS encryption is crap anyway. It works at the filesystem level, not the pool level, so a lot of metadata is left in plaintext; I don't remember exactly how much. It is also highly vulnerable to watermarking attacks.

> From http://forums.freebsd.org/showthread.php?t=30036: you can use ZFS
> pools on top of GELI volumes, which can be a good start. I have not
> played with it myself.

GELI is full-disk encryption. It's far superior to ZFS encryption.

> --
> Florent Peterschmitt          | Please:
> flor...@peterschmitt.fr       | * Avoid HTML/RTF in E-mail.
> +33 (0)6 64 33 97 92          | * Send PDF for documents.
> http://florent.peterschmitt.fr | * Trim your quotations. Really.
> Proudly powered by Open Source | Thank you :)

___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org
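For reference, the ZFS-on-GELI layering mentioned above can be set up roughly as follows. This is a sketch, not a tested recipe: the device name (da1), key path, and pool name are illustrative; see geli(8) and zpool(8) for the real option semantics.

```sh
# Generate a keyfile and initialize GELI on the raw disk
# (4 KiB sectors to match common drive geometry).
dd if=/dev/random of=/root/da1.key bs=64 count=1
geli init -s 4096 -K /root/da1.key /dev/da1
geli attach -k /root/da1.key /dev/da1

# Build the pool on the encrypted provider. Everything ZFS writes,
# data and metadata alike, then goes through GELI.
zpool create tank /dev/da1.eli
```

Because the encryption sits below ZFS, pool metadata gets none of the plaintext exposure described above for Oracle's filesystem-level scheme.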
Re: Zfs encryption property for freebsd 8.3
On Tue, Sep 3, 2013 at 9:01 AM, Florent Peterschmitt <flor...@peterschmitt.fr> wrote:
> On 03/09/2013 16:53, Alan Somers wrote:
>> GELI is full-disk encryption. It's far superior to ZFS encryption.
>
> Yup, but is there a way to encrypt a single ZFS volume (not a whole
> pool) with a separate GELI partition?

You mean encrypt a zvol with GELI and put a file system on that? I suppose that would work, but I bet it would be slow.

> Also, in-ZFS encryption would be a nice thing if it could work like
> LVM/LUKS, where each logical LVM volume can be encrypted or not and
> have its own crypto key.

My understanding is that this is exactly how Oracle's ZFS encryption works: each ZFS filesystem can have its own key, or remain in plaintext. Every cryptosystem involves a tradeoff between security and convenience, and ZFS encryption goes fairly hard toward convenience. In particular, Oracle decided that encrypted files must be deduplicatable. A necessary result is that they are trivially vulnerable to watermarking attacks.
https://blogs.oracle.com/darren/entry/zfs_encryption_what_is_on

> I saw that Illumos has ZFS encryption on its TODO list.
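The zvol-plus-GELI idea discussed above would look roughly like this. Again a sketch only: the pool name, volume name, and mountpoint are illustrative, and this path stacks a second block layer under UFS, which is exactly the slowdown suspected above.

```sh
# Carve a 10 GB zvol out of an existing (unencrypted) pool.
zfs create -V 10g tank/secretvol

# Layer GELI on the zvol, then put UFS on the encrypted provider.
geli init /dev/zvol/tank/secretvol
geli attach /dev/zvol/tank/secretvol
newfs /dev/zvol/tank/secretvol.eli
mount /dev/zvol/tank/secretvol.eli /mnt/secret
```

This does give the LVM/LUKS-style property of a per-volume key, at the cost of losing ZFS features (snapshots of file data, compression effectiveness) below the GELI layer.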
Re: GPT issues with device path lengths involving make_dev_physpath_alias
It's a compatibility problem. If you change that constant, then any binaries built with the old value will break if they rely on it having a fixed value in a system or library call. For example, the MFIIO_QUERY_DISK ioctl in the mfi(4) driver passes a structure containing an array of size SPECNAMELEN + 1. If you change SPECNAMELEN, you'll have to add a compatibility mechanism for that ioctl, and I'm sure there are other places with the same problem.

Happy Hacking.

On Sun, Jul 14, 2013 at 11:50 PM, Selphie Keller <selphie.kel...@gmail.com> wrote:
> Hello hackers, I recently ran into an issue with a storage server that
> has some of its drives on GPT vs MBR, and tracked it down to a 64-char
> limit that prevents aliases in make_dev_physpath_alias(). I was curious
> whether there is any reason this couldn't be bumped from 64 to 128,
> which would make room for GPT device paths of roughly 94 to 96 chars
> (sys/param.h, http://fxr.watson.org/fxr/source/sys/param.h#L106):
>
> -#define SPECNAMELEN 63   /* max length of devicename */
> +#define SPECNAMELEN 127  /* max length of devicename */
>
> Jul 14 22:10:17 fbsd9 kernel: make_dev_physpath_alias: WARNING - Unable to alias gptid/4d177c56-ce17-26e3-843e-9c8a9faf1e0f to enc@n5003048000ba7d7d/type@0/slot@b/elmdesc@Slot_11/gptid/4d177c56-ce17-26e3-843e-9c8a9faf1e0f - path too long
> Jul 14 22:10:17 fbsd9 kernel: make_dev_physpath_alias: WARNING - Unable to alias gptid/4b1caf38-d967-24ee-c3a0-badff404e7ed to enc@n5003048000ba7d7d/type@0/slot@5/elmdesc@Slot_05/gptid/4b1caf38-d967-24ee-c3a0-badff404e7ed - path too long
>
> -Selphie (Estella Mystagic)
Re: Attempting to roll back zfs transactions on a disk to recover a destroyed ZFS filesystem
zpool export does not wipe the transaction history. It does, however, write new labels and some metadata, so there is a very slight chance that it might overwrite some of the blocks that you're trying to recover. But it's probably safe. An alternative, much more complicated, solution would be to have ZFS open the device non-exclusively. The patch below will do that. Caveat programmer: I haven't tested this patch in isolation.

Change 624068 by willa@willa_SpectraBSD on 2012/08/09 09:28:38

	Allow multiple opens of the geoms used by vdev_geom.
	Also ignore the pool guid for spares when checking to decide
	whether it's ok to attach a vdev. This enables using hotspares
	to replace other devices, as well as using a given hotspare in
	multiple pools.
	We need to investigate alternative solutions in order to allow
	opening the geoms exclusive.

Affected files ...

... //SpectraBSD/stable/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c#2 edit

Differences ...

==== //SpectraBSD/stable/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c#2 (text) ====

@@ -179,49 +179,23 @@
 		gp = g_new_geomf(zfs_vdev_class, "zfs::vdev");
 		gp->orphan = vdev_geom_orphan;
 		gp->attrchanged = vdev_geom_attrchanged;
-		cp = g_new_consumer(gp);
-		error = g_attach(cp, pp);
-		if (error != 0) {
-			printf("%s(%d): g_attach failed: %d\n", __func__,
-			    __LINE__, error);
-			g_wither_geom(gp, ENXIO);
-			return (NULL);
-		}
-		error = g_access(cp, 1, 0, 1);
-		if (error != 0) {
-			printf("%s(%d): g_access failed: %d\n", __func__,
-			    __LINE__, error);
-			g_wither_geom(gp, ENXIO);
-			return (NULL);
-		}
-		ZFS_LOG(1, "Created geom and consumer for %s.", pp->name);
-	} else {
-		/* Check if we are already connected to this provider. */
-		LIST_FOREACH(cp, &gp->consumer, consumer) {
-			if (cp->provider == pp) {
-				ZFS_LOG(1, "Provider %s already in use by ZFS. "
-				    "Failing attach.", pp->name);
-				return (NULL);
-			}
-		}
-		cp = g_new_consumer(gp);
-		error = g_attach(cp, pp);
-		if (error != 0) {
-			printf("%s(%d): g_attach failed: %d\n",
-			    __func__, __LINE__, error);
-			g_destroy_consumer(cp);
-			return (NULL);
-		}
-		error = g_access(cp, 1, 0, 1);
-		if (error != 0) {
-			printf("%s(%d): g_access failed: %d\n",
-			    __func__, __LINE__, error);
-			g_detach(cp);
-			g_destroy_consumer(cp);
-			return (NULL);
-		}
-		ZFS_LOG(1, "Created consumer for %s.", pp->name);
+	}
+	cp = g_new_consumer(gp);
+	error = g_attach(cp, pp);
+	if (error != 0) {
+		printf("%s(%d): g_attach failed: %d\n", __func__,
+		    __LINE__, error);
+		g_wither_geom(gp, ENXIO);
+		return (NULL);
+	}
+	error = g_access(cp, /*r*/1, /*w*/0, /*e*/0);
+	if (error != 0) {
+		printf("%s(%d): g_access failed: %d\n", __func__,
+		    __LINE__, error);
+		g_wither_geom(gp, ENXIO);
+		return (NULL);
 	}
+	ZFS_LOG(1, "Created consumer for %s.", pp->name);

 	cp->private = vd;
 	vd->vdev_tsd = cp;
@@ -251,7 +225,7 @@
 	cp->private = NULL;

 	gp = cp->geom;
-	g_access(cp, -1, 0, -1);
+	g_access(cp, -1, 0, 0);
 	/* Destroy consumer on last close. */
 	if (cp->acr == 0 && cp->ace == 0) {
 		ZFS_LOG(1, "Destroyed consumer to %s.", cp->provider->name);
@@ -384,6 +358,18 @@
 		    cp->provider->name);
 }

+static inline boolean_t
+vdev_attach_ok(vdev_t *vd, uint64_t pool_guid, uint64_t vdev_guid)
+{
+	boolean_t pool_ok;
+	boolean_t vdev_ok;
+
+	/* Spares can be assigned to multiple pools. */
+	pool_ok = vd->vdev_isspare || pool_guid == spa_guid(vd->vdev_spa);
+	vdev_ok = vdev_guid == vd->vdev_guid;
+	return (pool_ok && vdev_ok);
+}
+
 static struct g_consumer *
 vdev_geom_attach_by_guids(vdev_t *vd)
 {
@@ -420,8 +406,7 @@
 	g_topology_lock();
 	g_access(zcp, -1, 0, 0);
 	g_detach(zcp);
-	if (pguid != spa_guid(vd->vdev_spa) ||
-	    vguid != vd->vdev_guid)
+
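For the recovery attempt itself, once the pool is safely exported, the usual approach is a read-only import rewound to a transaction group that predates the destroy. The sketch below uses an illustrative pool name, device, and txg number; note that -T is an undocumented diagnostic option of zpool import, so treat this as a starting point rather than a supported procedure.

```sh
# Stop all writes to the pool first.
zpool export tank

# Inspect the vdev labels with zdb to find the uberblock/txg history
# and pick a txg older than the destroy.
zdb -l /dev/da1

# Re-import read-only at that txg so nothing new is written while
# you copy the data off.
zpool import -o readonly=on -f -T 1234567 tank
```

Importing read-only is the important part: it avoids the (small) metadata rewrites that a normal import, like the export above, would perform.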