Re: [HEADSUP] zfs root pool mounting

2012-12-23 Thread Andriy Gapon
on 20/12/2012 00:14 Garrett Cooper said the following:
 On Thu, Dec 13, 2012 at 2:56 PM, Freddie Cash fjwc...@gmail.com wrote:
 
 ...
 
 You could at least point to the FreeBSD Forums version of that post.  :)

 https://forums.freebsd.org/showthread.php?t=31662
 
 Andriy,
 
 I figured out my problem. It was really, really stupid PEBKAC (copied
 a KERNCONF from one system that didn't have the appropriate storage
 driver for the other system and I didn't put the right entries into
 src.conf).
 
 Sorry for the noise :(.

I am glad that you could resolve this.
One of those things that looks obvious only after it has been discovered.

-- 
Andriy Gapon
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-23 Thread Andriy Gapon
on 20/12/2012 00:34 Kimmo Paasiala said the following:
 What is the status of the MFC process to 9-STABLE? I'm on 9-STABLE
 r244407, should I be able to boot from this ZFS pool without
 zpool.cache?

I haven't MFC-ed the change as of now.

After I eventually MFC it, you should be able to boot from any pool from which
you can boot now, unless you have the condition described in the original
message.

-- 
Andriy Gapon
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-19 Thread Garrett Cooper
On Thu, Dec 13, 2012 at 2:56 PM, Freddie Cash fjwc...@gmail.com wrote:

...

 You could at least point to the FreeBSD Forums version of that post.  :)

 https://forums.freebsd.org/showthread.php?t=31662

Andriy,

I figured out my problem. It was really, really stupid PEBKAC (copied
a KERNCONF from one system that didn't have the appropriate storage
driver for the other system and I didn't put the right entries into
src.conf).

Sorry for the noise :(.

Thanks,
-Garrett
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-19 Thread Kimmo Paasiala
On Wed, Nov 28, 2012 at 8:35 PM, Andriy Gapon a...@freebsd.org wrote:

 Recently some changes were made to how a root pool is opened for root
 filesystem mounting.  Previously the root pool had to be present in
 zpool.cache.  Now it is automatically discovered by probing available GEOM
 providers.
 The new scheme is believed to be more flexible.  For example, it allows you
 to prepare a new root pool on one system, export it, and then boot from it
 on a new system without doing any extra/magical steps with zpool.cache.  It
 could also be convenient after zpool split and in some other situations.

 The change was introduced via multiple commits, the latest relevant revision
 in head is r243502.  The changes are partially MFC-ed, the remaining parts
 are scheduled to be MFC-ed soon.

 I have received a report that the change caused a problem with booting on at
 least one system.  The problem has been identified as an issue in the local
 environment and has been fixed.  Please read on to see if you might be
 affected when you upgrade, so that you can avoid any unnecessary surprises.

 You might be affected if you ever had a pool named the same as your current
 root pool.  And you still have disks connected to your system that belonged
 to that pool (in whole or via some partitions).  And that pool was never
 properly destroyed using zpool destroy, but merely abandoned (its disks
 re-purposed/re-partitioned/reused).

 If all of the above are true, then I recommend that you run 'zdb -l <disk>'
 for all suspect disks and their partitions (or just all disks and
 partitions).  If this command reports at least one valid ZFS label for a
 disk or a partition that does not belong to any current pool, then the
 problem may affect you.

 The best course is to remove the offending labels.

 If you are affected, please follow up to this email.

 --
 Andriy Gapon
 ___
 freebsd-current@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-current
 To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org

Hello,

What is the status of the MFC process to 9-STABLE? I'm on 9-STABLE
r244407, should I be able to boot from this ZFS pool without
zpool.cache?

zpool status
  pool: zwhitezone
 state: ONLINE
  scan: scrub repaired 0 in 0h53m with 0 errors on Sat Dec 15 23:41:09 2012
config:

NAME   STATE READ WRITE CKSUM
zwhitezone ONLINE   0 0 0
  mirror-0 ONLINE   0 0 0
label/wzdisk0  ONLINE   0 0 0
label/wzdisk1  ONLINE   0 0 0
  mirror-1 ONLINE   0 0 0
label/wzdisk2  ONLINE   0 0 0
label/wzdisk3  ONLINE   0 0 0

errors: No known data errors
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-13 Thread Andriy Gapon
on 07/12/2012 02:33 Garrett Cooper said the following:
 On Thu, Dec 6, 2012 at 3:08 PM, Garrett Cooper yaneg...@gmail.com wrote:
 
 ...
 
 Please document the process to make this work in UPDATING (or at least
 the fact that this behavior was changed).

 I'm debugging moving from 9.1-RC2 to CURRENT [as of Tuesday] as it
 hasn't been as smooth as some of the other upgrades I've done; my
 zpool -- root -- is setup with a non-legacy mountpoint, I noticed that
 the cachefile attribute is now None, etc. I have limited capability
 with my installed system to debug this because unfortunately there
 aren't a ton of CURRENT based livecds around to run from (I might look
 into one of gjb's livecds later on if I get super stuck, but I'm
 trying to avoid having to do that). gptzfsboot sees the pool with
 lsdev, but it gets stuck at the mountroot prompt trying to find the
 filesystem.

 I'll wipe my /boot/kernel directory and try building/installing the
 kernel again, but right now I'm kind of dead in the water on the
 system I'm upgrading :/.

One thing that I recommend to all ZFS users is to make use of boot environments.
They are very easy, very convenient and may save a lot of trouble.
Use any of the tools available in ports (e.g. sysutils/beadm), or just do
boot environments in an ad hoc fashion: snapshot and clone your current /
known-good boot+root filesystem and you have a safe environment to fall back to.
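
A minimal ad hoc sketch (the names rpool and rpool/ROOT/default below are only
examples -- substitute your own pool/dataset layout):

# zfs snapshot rpool/ROOT/default@known-good
# zfs clone rpool/ROOT/default@known-good rpool/ROOT/known-good
# zpool set bootfs=rpool/ROOT/known-good rpool
  (the last step only if/when you actually want to boot into the clone)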

 I thought r236884 requiring a zpool upgrade was the culprit, but
 it wasn't. Still stuck at a mountroot prompt (but now I have gjb's
 liveCD so I can do something about it).
 Something looks off with zdb -l on CURRENT and STABLE/9. Example
 on my 9-stable box:
 
 # uname -a
 FreeBSD forza.west.isilon.com 9.1-PRERELEASE FreeBSD 9.1-PRERELEASE #0
 r+2fd0a57: Mon Dec  3 12:02:18 PST 2012
 gcoo...@forza.west.isilon.com:/usr/obj/usr/src/sys/FORZA  amd64
 # zdb -l sac2
 cannot open 'sac2': No such file or directory
 # zpool list
 NAME   SIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
 sac 95G  69.7G  25.3G73%  1.00x  ONLINE  -
 sac2   232G   117G   115G50%  1.00x  ONLINE  -

Proper zdb -l usage was described in the HEADSUP posting.
It's also available in zdb(8).  zdb -l should be used with disks/partitions/etc,
not with pool names.
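For example (the device names here are only illustrative):

# zdb -l /dev/da0p3     (a partition)
# zdb -l /dev/ada1      (a whole disk)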

 I'm running into the same behavior before and after I upgraded sac/sac2.
 My git branch is a lightly modified version of FreeBSD, but
 doesn't contain any ZFS specific changes (I can point you to it if you
 like to look at it).
 Would appreciate some pointers on what to do next.

Try to get a working environment (using a livecd, another disk, backups, etc.)
and try to follow the original instructions.

-- 
Andriy Gapon
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-13 Thread Andriy Gapon
on 07/12/2012 02:55 Garrett Cooper said the following:
 If I try and let it import the pool at boot it claims the pool is in a
 FAULTED state when I point mountroot to /dev/cd0 (one of gjb's
 snapshot CDs -- thanks!), run service hostid onestart, etc. If I
 export and try to reimport the pool it claims it's not available (!).
 However, if I boot, run service hostid onestart, _then_ import the
 pool, then the pool is imported properly.

This sounds messy, not sure if it has any informative value.
I think I've seen something like this after some recent ZFS import from
upstream, when my kernel and userland were out of sync.
Do you do a full boot from the livecd?  Or do you boot your kernel but then
mount userland from the cd?
In any case, not sure if this is relevant to your main trouble.

 While I was mucking around with the pool trying to get the system to
 boot I set the cachefile attribute to /boot/zfs/zpool.cache before
 upgrading. In order to diagnose whether or not that was at fault, I
 set that back to none and I'm still running into the same issue.
 
 I'm going to try backing out your commit and rebuild my kernel in
 order to determine whether or not that's at fault.
 
 One other thing: both my machines have more than one ZFS-only zpool,
 and it might be probing the pools in the wrong order; one of the pools
 has bootfs set, the other doesn't, and the behavior is sort of
 resembling it not being set properly.

The bootfs property should not matter.  Multi-pool configurations have been
tested before the commit.

-- 
Andriy Gapon
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-13 Thread Garrett Cooper
On Thu, Dec 13, 2012 at 3:20 AM, Andriy Gapon a...@freebsd.org wrote:

...

 One thing that I recommend to all ZFS users is to make use of boot
 environments.  They are very easy, very convenient and may save a lot of
 trouble.  Use any of the tools available in ports (e.g. sysutils/beadm), or
 just do boot environments in an ad hoc fashion: snapshot and clone your
 current / known-good boot+root filesystem and you have a safe environment to
 fall back to.

Looks interesting (this page has a slightly more in-depth description
of what beadm tries to achieve for the layman like me that doesn't use
*Solaris: 
http://www.linuxquestions.org/questions/*bsd-17/howto-zfs-madness-beadm-on-freebsd-4175412036/
).

 Proper zdb -l usage was described in the HEADSUP posting.
 It's also available in zdb(8).  zdb -l should be used with
 disks/partitions/etc, not with pool names.

I realized that mistake after the fact :/.

 I'm running into the same behavior before and after I upgraded sac/sac2.
 My git branch is a lightly modified version of FreeBSD, but
 doesn't contain any ZFS specific changes (I can point you to it if you
 like to look at it).
 Would appreciate some pointers on what to do next.

 Try to get a working environment (using a livecd, another disk, backups,
 etc.) and try to follow the original instructions.

Ok. That's sort of what I'm trying. Given that I didn't do a *ton* of
customizations, I might just tar up the root pool's contents and basic
configuration, wipe it out, try creating a new one from scratch, and
restore the contents after the fact. However, thinking back, I could
turn on terse debugging in ZFS after rebuilding it with that enabled (I've
done it once before -- forgot about it until now). That would probably
be the best way to get the official story from ZFS as to what it
thinks is going south when importing the pool.

Thanks!
-Garrett
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-13 Thread Garrett Cooper
On Thu, Dec 13, 2012 at 3:24 AM, Andriy Gapon a...@freebsd.org wrote:

...

 This sounds messy, not sure if it has any informative value.
 I think I've seen something like this after some recent ZFS import from
 upstream, when my kernel and userland were out of sync.
 Do you do a full boot from the livecd?  Or do you boot your kernel but then
 mount userland from the cd?
 In any case, not sure if this is relevant to your main trouble.

I tried booting from a custom kernel, ran into the mountroot prompt
and then chose cd9660:/dev/cd0 instead of zfs:root (my root pool's
name).

...

 The bootfs property should not matter.  Multi-pool configurations have been
 tested before the commit.

That's what I thought, but I was unsure...
Thanks!
-Garrett
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-13 Thread Freddie Cash
On Thu, Dec 13, 2012 at 2:49 PM, Garrett Cooper yaneg...@gmail.com wrote:

 On Thu, Dec 13, 2012 at 3:20 AM, Andriy Gapon a...@freebsd.org wrote:

 ...

  One thing that I recommend to all ZFS users is to make use of boot
  environments.  They are very easy, very convenient and may save a lot of
  trouble.  Use any of the tools available in ports (e.g. sysutils/beadm), or
  just do boot environments in an ad hoc fashion: snapshot and clone your
  current / known-good boot+root filesystem and you have a safe environment
  to fall back to.

 Looks interesting (this page has a slightly more in-depth description
 of what beadm tries to achieve for the layman like me that doesn't use
 *Solaris:
 http://www.linuxquestions.org/questions/*bsd-17/howto-zfs-madness-beadm-on-freebsd-4175412036/
 ).

 You could at least point to the FreeBSD Forums version of that post.  :)

https://forums.freebsd.org/showthread.php?t=31662


-- 
Freddie Cash
fjwc...@gmail.com
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-11 Thread Garrett Cooper
On Mon, Dec 10, 2012 at 1:25 PM, Garrett Cooper yaneg...@gmail.com wrote:

...

Going back in time until the start of November didn't help. The
zpool get version output for both of the pools is quite interesting -- in
particular, why is the value unset (-)? I've provided the zdb -C
info as well, just in case (it's also interesting why things are ordered the
way they are in the zdb output -- not sure if it matters or not). I
unset cachefile on the zpool (I had set it to `none` before, per the
documentation's list of accepted values).
I rebooted the box and still ran into mounting issues... Going to
try to keep working on this for a couple more iterations before I
give up and move one of the drives to UFS or reformat with ZFS using
gjb's livecd.
Thanks,
-Garrett

# zpool get version root
NAME  PROPERTY  VALUESOURCE
root  version   -default
# zpool get version scratch
NAME PROPERTY  VALUESOURCE
scratch  version   28   local

# zpool upgrade scratch
This system supports ZFS pool feature flags.

Successfully upgraded 'scratch' from version 28 to feature flags.
Enabled the following features on 'scratch':
  async_destroy
  empty_bpobj

# zpool get version scratch
NAME PROPERTY  VALUESOURCE
scratch  version   -default

# zdb -e -C root

MOS Configuration:
        version: 5000
        name: 'root'
        state: 0
        txg: 100769
        pool_guid: 16626214580045724926
        hostid: 820800805
        hostname: ''
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 16626214580045724926
            children[0]:
                type: 'disk'
                id: 0
                guid: 9546056018082780826
                path: '/dev/da0p3'
                phys_path: '/dev/da0p3'
                whole_disk: 1
                metaslab_array: 31
                metaslab_shift: 30
                ashift: 9
                asize: 172685787136
                is_log: 0
                create_txg: 4
        features_for_read:
# zdb -C scratch

MOS Configuration:
        name: 'scratch'
        state: 0
        txg: 95459
        pool_guid: 4958045391345281734
        hostid: 820800805
        hostname: 'gran-tourismo.west.isilon.com'
        vdev_children: 2
        vdev_tree:
            type: 'root'
            id: 0
            guid: 4958045391345281734
            children[0]:
                type: 'disk'
                id: 0
                guid: 5020728737590777479
                path: '/dev/da1p1'
                phys_path: '/dev/da1p1'
                whole_disk: 1
                metaslab_array: 33
                metaslab_shift: 29
                ashift: 9
                asize: 80021553152
                is_log: 0
                DTL: 70
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 17800508717108910354
                path: '/dev/da2p1'
                phys_path: '/dev/da2p1'
                whole_disk: 1
                metaslab_array: 30
                metaslab_shift: 29
                ashift: 9
                asize: 80021553152
                is_log: 0
                DTL: 71
                create_txg: 4
        features_for_read:
        version: 5000
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-10 Thread Garrett Cooper
On Thu, Dec 6, 2012 at 4:55 PM, Garrett Cooper yaneg...@gmail.com wrote:

...

 (Removing bogus list)

 If I try and let it import the pool at boot it claims the pool is in a
 FAULTED state when I point mountroot to /dev/cd0 (one of gjb's
 snapshot CDs -- thanks!), run service hostid onestart, etc. If I
 export and try to reimport the pool it claims it's not available (!).
 However, if I boot, run service hostid onestart, _then_ import the
 pool, then the pool is imported properly.

 While I was mucking around with the pool trying to get the system to
 boot I set the cachefile attribute to /boot/zfs/zpool.cache before
 upgrading. In order to diagnose whether or not that was at fault, I
 set that back to none and I'm still running into the same issue.

 I'm going to try backing out your commit and rebuild my kernel in
 order to determine whether or not that's at fault.

 One other thing: both my machines have more than one ZFS-only zpool,
 and it might be probing the pools in the wrong order; one of the pools
 has bootfs set, the other doesn't, and the behavior is sort of
 resembling it not being set properly.

I reverted the following commits with no change.
Thanks,
-Garrett

commit 969475c599c9cd5095012f47c25faef914b63a45
Author: Garrett Cooper yaneg...@gmail.com
Date:   Wed Dec 5 23:48:06 2012 -0800

    Don't define the TESTSDIR if it's empty

    The problem is that items like lib/crypt/tests with empty TESTSDIRS will
    blow up the build if make installworld is run with the modifications
    made to bsd.subdir.mk .

    Signed-off-by: Garrett Cooper yaneg...@gmail.com

commit 7a1c6d1a406d78810db56e5a8915d9ec945f546c
Author: avg a...@freebsd.org
Date:   Sat Nov 24 13:23:15 2012 +

    zfs roopool: add support for multi-vdev configurations

    Tested by:  madpilot
    MFC after:  10 days

commit 7ad1a2bc4c09f0635d5f7216a8e96703cd70f371
Author: avg a...@freebsd.org
Date:   Sat Nov 24 13:16:49 2012 +

    spa_import_rootpool: initialize ub_version before calling spa_config_parse

    ... because the latter makes some decision based on the version.
    This is especially important for raidz vdevs.
    This is similar to what spa_load does.

    This is not an issue for upstream because they do not seem to support
    using raidz as a root pool.

    Reported by:  Andrei Lavreniyuk andy.l...@gmail.com
    Tested by:  Andrei Lavreniyuk andy.l...@gmail.com
    MFC after:  6 days

commit 574c22111b5590734e10a02b2b7d27a290b38148
Author: avg a...@freebsd.org
Date:   Sat Nov 24 13:14:53 2012 +

    spa_import_rootpool: do not call spa_history_log_version

    The call is a NOP, because pool version in spa_ubsync.ub_version is not
    initialized and thus appears to be zero.
    If the version is properly set then the call leads to a NULL pointer
    dereference because the spa object is still under-constructed.

    The same change was independently made in the upstream as a part of
    a larger change (4445fffbbb1ea25fd0e9ea68b9380dd7a6709025).

    MFC after:  6 days
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-06 Thread Garrett Cooper
On Thu, Dec 6, 2012 at 3:08 PM, Garrett Cooper yaneg...@gmail.com wrote:

...

 Please document the process to make this work in UPDATING (or at least
 the fact that this behavior was changed).

 I'm debugging moving from 9.1-RC2 to CURRENT [as of Tuesday] as it
 hasn't been as smooth as some of the other upgrades I've done; my
 zpool -- root -- is setup with a non-legacy mountpoint, I noticed that
 the cachefile attribute is now None, etc. I have limited capability
 with my installed system to debug this because unfortunately there
 aren't a ton of CURRENT based livecds around to run from (I might look
 into one of gjb's livecds later on if I get super stuck, but I'm
 trying to avoid having to do that). gptzfsboot sees the pool with
 lsdev, but it gets stuck at the mountroot prompt trying to find the
 filesystem.

 I'll wipe my /boot/kernel directory and try building/installing the
 kernel again, but right now I'm kind of dead in the water on the
 system I'm upgrading :/.

I thought r236884 requiring a zpool upgrade was the culprit, but
it wasn't. Still stuck at a mountroot prompt (but now I have gjb's
liveCD so I can do something about it).
Something looks off with zdb -l on CURRENT and STABLE/9. Example
on my 9-stable box:

# uname -a
FreeBSD forza.west.isilon.com 9.1-PRERELEASE FreeBSD 9.1-PRERELEASE #0
r+2fd0a57: Mon Dec  3 12:02:18 PST 2012
gcoo...@forza.west.isilon.com:/usr/obj/usr/src/sys/FORZA  amd64
# zdb -l sac2
cannot open 'sac2': No such file or directory
# zpool list
NAME   SIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
sac 95G  69.7G  25.3G73%  1.00x  ONLINE  -
sac2   232G   117G   115G50%  1.00x  ONLINE  -

I'm running into the same behavior before and after I upgraded sac/sac2.
My git branch is a lightly modified version of FreeBSD, but
doesn't contain any ZFS specific changes (I can point you to it if you
like to look at it).
Would appreciate some pointers on what to do next.
Thanks,
-Garrett
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-06 Thread Garrett Cooper
On Thu, Dec 6, 2012 at 4:33 PM, Garrett Cooper yaneg...@gmail.com wrote:
 On Thu, Dec 6, 2012 at 3:08 PM, Garrett Cooper yaneg...@gmail.com wrote:

 ...

 Please document the process to make this work in UPDATING (or at least
 the fact that this behavior was changed).

 I'm debugging moving from 9.1-RC2 to CURRENT [as of Tuesday] as it
 hasn't been as smooth as some of the other upgrades I've done; my
 zpool -- root -- is setup with a non-legacy mountpoint, I noticed that
 the cachefile attribute is now None, etc. I have limited capability
 with my installed system to debug this because unfortunately there
 aren't a ton of CURRENT based livecds around to run from (I might look
 into one of gjb's livecds later on if I get super stuck, but I'm
 trying to avoid having to do that). gptzfsboot sees the pool with
 lsdev, but it gets stuck at the mountroot prompt trying to find the
 filesystem.

 I'll wipe my /boot/kernel directory and try building/installing the
 kernel again, but right now I'm kind of dead in the water on the
 system I'm upgrading :/.

 I thought r236884 requiring a zpool upgrade was the culprit, but
 it wasn't. Still stuck at a mountroot prompt (but now I have gjb's
 liveCD so I can do something about it).
 Something looks off with zdb -l on CURRENT and STABLE/9. Example
 on my 9-stable box:

 # uname -a
 FreeBSD forza.west.isilon.com 9.1-PRERELEASE FreeBSD 9.1-PRERELEASE #0
 r+2fd0a57: Mon Dec  3 12:02:18 PST 2012
 gcoo...@forza.west.isilon.com:/usr/obj/usr/src/sys/FORZA  amd64
 # zdb -l sac2
 cannot open 'sac2': No such file or directory
 # zpool list
 NAME   SIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
 sac 95G  69.7G  25.3G73%  1.00x  ONLINE  -
 sac2   232G   117G   115G50%  1.00x  ONLINE  -

 I'm running into the same behavior before and after I upgraded sac/sac2.
 My git branch is a lightly modified version of FreeBSD, but
 doesn't contain any ZFS specific changes (I can point you to it if you
 like to look at it).
 Would appreciate some pointers on what to do next.

(Removing bogus list)

If I try and let it import the pool at boot it claims the pool is in a
FAULTED state when I point mountroot to /dev/cd0 (one of gjb's
snapshot CDs -- thanks!), run service hostid onestart, etc. If I
export and try to reimport the pool it claims it's not available (!).
However, if I boot, run service hostid onestart, _then_ import the
pool, then the pool is imported properly.
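
For reference, the sequence that does work for me from the livecd shell looks
roughly like this ("root" is my pool's name; the -f/-R flags are just what I
happened to use, -R so the imported filesystems land under an altroot instead
of on top of the livecd's /):

# kldload zfs                      (if it isn't loaded already)
# service hostid onestart
# zpool import -f -R /mnt root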

While I was mucking around with the pool trying to get the system to
boot I set the cachefile attribute to /boot/zfs/zpool.cache before
upgrading. In order to diagnose whether or not that was at fault, I
set that back to none and I'm still running into the same issue.

I'm going to try backing out your commit and rebuild my kernel in
order to determine whether or not that's at fault.

One other thing: both my machines have more than one ZFS-only zpool,
and it might be probing the pools in the wrong order; one of the pools
has bootfs set, the other doesn't, and the behavior is sort of
resembling it not being set properly.

Thanks,
-Garrett
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-12-01 Thread Pawel Jakub Dawidek
On Fri, Nov 30, 2012 at 08:51:48AM +0200, Daniel Braniss wrote:
  
  Recently some changes were made to how a root pool is opened for root
  filesystem mounting.  Previously the root pool had to be present in
  zpool.cache.  Now it is automatically discovered by probing available GEOM
  providers.
  The new scheme is believed to be more flexible.  For example, it allows you
  to prepare a new root pool on one system, export it, and then boot from it
  on a new system without doing any extra/magical steps with zpool.cache.  It
  could also be convenient after zpool split and in some other situations.

  The change was introduced via multiple commits, the latest relevant revision
  in head is r243502.  The changes are partially MFC-ed, the remaining parts
  are scheduled to be MFC-ed soon.

  I have received a report that the change caused a problem with booting on at
  least one system.  The problem has been identified as an issue in the local
  environment and has been fixed.  Please read on to see if you might be
  affected when you upgrade, so that you can avoid any unnecessary surprises.

  You might be affected if you ever had a pool named the same as your current
  root pool.  And you still have disks connected to your system that belonged
  to that pool (in whole or via some partitions).  And that pool was never
  properly destroyed using zpool destroy, but merely abandoned (its disks
  re-purposed/re-partitioned/reused).

  If all of the above are true, then I recommend that you run 'zdb -l <disk>'
  for all suspect disks and their partitions (or just all disks and
  partitions).  If this command reports at least one valid ZFS label for a
  disk or a partition that does not belong to any current pool, then the
  problem may affect you.
  
  The best course is to remove the offending labels.
  
  If you are affected, please follow up to this email.
 
 GREAT!
 In a diskless environment, /boot is read only, and the zpool.cache issue
 has been bothering me ever since; there was no way (and I tried) to
 reroute it.

I believe zpool.cache is no longer required for the root pool, but you
still need it if you want non-root pools to be automatically configured
after reboot. Am I right, Andriy?

Zpool.cache basically tells ZFS which pools should be automatically
imported and file systems mounted. You can have disks in your system
with ZFS pools that should not be auto-imported and zpool.cache is the
way to tell the difference.
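
The cachefile pool property is the knob for that; a quick sketch (the pool
name tank is only an example):

# zpool set cachefile=/boot/zfs/zpool.cache tank    (record it, auto-import at boot)
# zpool set cachefile=none tank                     (keep it out of the cache)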

-- 
Pawel Jakub Dawidek   http://www.wheelsystems.com
FreeBSD committer http://www.FreeBSD.org
Am I Evil? Yes, I Am! http://tupytaj.pl




Re: [HEADSUP] zfs root pool mounting

2012-12-01 Thread Andriy Gapon
on 01/12/2012 15:36 Pawel Jakub Dawidek said the following:
 On Fri, Nov 30, 2012 at 08:51:48AM +0200, Daniel Braniss wrote:
 GREAT!  In a diskless environment, /boot is read only, and the
 zpool.cache issue has been bothering me ever since; there was no way (and
 I tried) to reroute it.
 
 I believe zpool.cache is no longer required for the root pool, but you
 still need it if you want non-root pools to be automatically configured
 after reboot. Am I right, Andriy?

Yes, definitely.

 Zpool.cache basically tells ZFS which pools should be automatically 
 imported and file systems mounted. You can have disks in your system with
 ZFS pools that should not be auto-imported and zpool.cache is the way to
 tell the difference.


-- 
Andriy Gapon
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: [HEADSUP] zfs root pool mounting

2012-11-29 Thread Daniel Braniss
 
 Recently some changes were made to how a root pool is opened for root
 filesystem mounting.  Previously the root pool had to be present in
 zpool.cache.  Now it is automatically discovered by probing available GEOM
 providers.
 The new scheme is believed to be more flexible.  For example, it allows you
 to prepare a new root pool on one system, export it, and then boot from it
 on a new system without doing any extra/magical steps with zpool.cache.  It
 could also be convenient after zpool split and in some other situations.

 The change was introduced via multiple commits, the latest relevant revision
 in head is r243502.  The changes are partially MFC-ed, the remaining parts
 are scheduled to be MFC-ed soon.

 I have received a report that the change caused a problem with booting on at
 least one system.  The problem has been identified as an issue in the local
 environment and has been fixed.  Please read on to see if you might be
 affected when you upgrade, so that you can avoid any unnecessary surprises.

 You might be affected if you ever had a pool named the same as your current
 root pool.  And you still have disks connected to your system that belonged
 to that pool (in whole or via some partitions).  And that pool was never
 properly destroyed using zpool destroy, but merely abandoned (its disks
 re-purposed/re-partitioned/reused).

 If all of the above are true, then I recommend that you run 'zdb -l <disk>'
 for all suspect disks and their partitions (or just all disks and
 partitions).  If this command reports at least one valid ZFS label for a
 disk or a partition that does not belong to any current pool, then the
 problem may affect you.
 
 The best course is to remove the offending labels.
 
 If you are affected, please follow up to this email.

GREAT!
In a diskless environment, /boot is read only, and the zpool.cache issue
has been bothering me ever since; there was no way (and I tried) to reroute it.

thanks,
danny



___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


[HEADSUP] zfs root pool mounting

2012-11-28 Thread Andriy Gapon

Recently some changes were made to how a root pool is opened for root
filesystem mounting.  Previously the root pool had to be present in
zpool.cache.  Now it is automatically discovered by probing available GEOM
providers.
The new scheme is believed to be more flexible.  For example, it allows you
to prepare a new root pool on one system, export it, and then boot from it
on a new system without doing any extra/magical steps with zpool.cache.  It
could also be convenient after zpool split and in some other situations.
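
For illustration, a rough sketch of that workflow (the pool name and device
below are only examples):

# zpool create -o altroot=/mnt newroot /dev/da1p3
  (install the system into /mnt, set up loader.conf and fstab as usual)
# zpool set bootfs=newroot newroot
# zpool export newroot
  (move the disk to the target machine and boot from it; no zpool.cache
  manipulation is needed)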

The change was introduced via multiple commits, the latest relevant revision in
head is r243502.  The changes are partially MFC-ed, the remaining parts are
scheduled to be MFC-ed soon.

I have received a report that the change caused a problem with booting on at
least one system.  The problem has been identified as an issue in the local
environment and has been fixed.  Please read on to see if you might be
affected when you upgrade, so that you can avoid any unnecessary surprises.

You might be affected if you ever had a pool named the same as your current
root pool.  And you still have disks connected to your system that belonged
to that pool (in whole or via some partitions).  And that pool was never
properly destroyed using zpool destroy, but merely abandoned (its disks
re-purposed/re-partitioned/reused).

If all of the above are true, then I recommend that you run 'zdb -l <disk>'
for all suspect disks and their partitions (or just all disks and
partitions).  If this command reports at least one valid ZFS label for a
disk or a partition that does not belong to any current pool, then the
problem may affect you.

The best course is to remove the offending labels.
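
For example, checking and clearing might look like this (the device name is
only illustrative -- be absolutely certain you have the right provider before
destroying anything on it):

# zdb -l /dev/ada2p2
  (a stale label names your root pool but its guid matches no currently
  imported pool)
# zpool labelclear -f /dev/ada2p2
  (if your zpool(8) has the labelclear subcommand; otherwise the label areas
  have to be erased manually)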

If you are affected, please follow up to this email.

-- 
Andriy Gapon
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org