[zfs-discuss] Aaron Toponce: Install ZFS on Debian GNU/Linux

2012-04-17 Thread David E.

fyi

Sent to you by David E. via Google Reader: Aaron Toponce: Install ZFS
on Debian GNU/Linux via Planet Ubuntu on 4/17/12

A quick post on installing ZFS as a kernel module, not FUSE, on Debian
GNU/Linux. The documentation for this already exists; I’m just hoping
to spread it to a larger audience, in case you are unaware that it
exists.

First, the Lawrence Livermore National Laboratory has been working on
porting the native Solaris ZFS source to the Linux kernel as a kernel
module. So long as the project remains under contract with the
Department of Defense in the United States, I’m confident there will
be continuous
updates. You can track the progress of that porting at
http://zfsonlinux.org.

Now, download the SPL and ZFS sources. I’m running the latest RC, which
seems to be quite stable:

$ mkdir ~/src/{spl,zfs}
$ cd ~/src/spl
$ wget http://github.com/downloads/zfsonlinux/spl/spl-0.6.0-rc8.tar.gz
$ cd ~/src/zfs
$ wget http://github.com/downloads/zfsonlinux/zfs/zfs-0.6.0-rc8.tar.gz

At this point, you will need to install the dependencies for SPL, then
compile it and build the necessary .deb files:

$ sudo aptitude install build-essential gawk alien fakeroot linux-headers-$(uname -r)
$ cd ~/src/spl
$ tar -xf spl-0.6.0-rc8.tar.gz
$ cd spl-0.6.0-rc8
$ ./configure
$ make deb

Now do the same for ZFS:

$ sudo aptitude install zlib1g-dev uuid-dev libblkid-dev libselinux-dev parted lsscsi
$ cd ~/src/zfs
$ tar -xf zfs-0.6.0-rc8.tar.gz
$ cd zfs-0.6.0-rc8
$ ./configure
$ make deb

You should now have both the SPL and ZFS Debian packages built, at
which point you can install them:

$ sudo dpkg -i ~/src/{spl,zfs}/*.deb

If you’re running Ubuntu, which I know most of you are, you can install
the packages from the Launchpad PPA https://launchpad.net/~zfs-native.

A word of note: the manpages get installed to /share/man/. I found this
troubling. You can modify your $MANPATH variable to
include /share/man/man8/, or create symlinks, which is the approach I
took:

# cd /usr/share/man/man8/
# ln -s /share/man/man8/zdb.8 zdb.8
# ln -s /share/man/man8/zfs.8 zfs.8
# ln -s /share/man/man8/zpool.8 zpool.8
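
Alternatively, if you prefer the $MANPATH route, here is a minimal
sketch for a bash shell with man-db (note that man expects the
directory that contains man8/, so the entry is /share/man rather
than /share/man/man8/):

$ export MANPATH="$(manpath):/share/man"
$ man zpool

Put the export in your ~/.bashrc to make it permanent.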

Now, make your zpool, and start playing:

$ sudo zpool create test raidz sdd sde sdf sdg sdh sdi
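
From here, a couple of harmless follow-ups to confirm the pool is
healthy and to try a property or two ('test/data' below is just an
example dataset name):

$ sudo zpool status test
$ sudo zfs create -o compression=on test/data
$ sudo zfs list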

It is stable enough to run a ZFS root filesystem on a GNU/Linux
installation for your workstation as something to play around with. It
is copy-on-write, supports compression, deduplication, file atomicity,
off-disk caching, encryption, and much more. At this point,
unfortunately, I’m convinced that ZFS as a Linux kernel module will
become “stable” long before Btrfs will be stable in the mainline
kernel. Either way, it doesn’t matter to me. Both are Free Software,
and both provide the features we have long needed for today’s storage
demands. Competition is healthy, and I love having choice. Right now,
that choice might just be ZFS.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Drive upgrades

2012-04-17 Thread Peter Jeremy
On 2012-Apr-17 17:25:36 +1000, Jim Klimov  wrote:
>For the sake of archives, can you please post a common troubleshooting
>technique which users can try at home to see if their disks honour the
>request or not? ;) I guess it would involve random-write bandwidths in
>two cases?

1) Issue "disable write cache" command to drive
2) Write several MB of data to drive
3) As soon as drive acknowledges completion, remove power to drive (this
   will require an electronic switch in the drive's power lead)
4) Wait until drive spins down.
5) Power up drive and wait until ready
6) Verify data written in (2) can be read.
7) Argue with drive vendor that drive doesn't meet specifications :-)

A similar approach can also be used to verify that NCQ & cache flush
commands actually work.
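
A rough sketch of steps 1, 2 and 6 on a Linux host, assuming /dev/sdX
is a scratch disk whose contents you can afford to lose and that hdparm
is installed (the power cut in steps 3-5 still has to happen in
hardware):

# dd if=/dev/urandom of=/tmp/pattern bs=1M count=64    # known test data
# md5sum /tmp/pattern                                  # note this checksum
# hdparm -W 0 /dev/sdX                                 # step 1: disable the write cache
# dd if=/tmp/pattern of=/dev/sdX bs=1M oflag=direct conv=fsync   # step 2: write and flush
  ... cut power as soon as dd returns, then power back up (steps 3-5) ...
# dd if=/dev/sdX bs=1M count=64 | md5sum               # step 6: must match the checksum above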

-- 
Peter Jeremy


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] kernel panic during zfs import [UPDATE]

2012-04-17 Thread Stephan Budach

Hi Carsten,


On 17.04.12 17:40, Carsten John wrote:

Hello everybody,

just to let you know what happened in the meantime:

I was able to open a Service Request at Oracle.

The issue is a known bug (Bug 6742788 : assertion panic at: zfs:zap_deref_leaf)

The bug has been fixed (according to Oracle support) since build 164, but there
is no fix for Solaris 11 available so far (will it be fixed in S11U7?).

There is a workaround available that works (partly), but my system crashed
again when trying to rebuild the offending ZFS filesystem within the affected zpool.

At the moment I'm waiting for a so-called "interim diagnostic relief" patch.


cu

Carsten




AFAIK, bug 6742788 is fixed in S11 FCS (the release build), but you might
be hitting this bug: 7098658. That one, according to MOS, is still
unresolved. My workaround is to mount the affected ZFS filesystem
read-only when importing the zpool and to set it back to read-write
afterwards.
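
For the record, one way that sequence could look, assuming the pool is
called 'tank' and the affected filesystem is 'tank/data' (both names
are placeholders):

# zpool import -N tank            # import without mounting any file systems
# zfs set readonly=on tank/data   # the affected fs will then mount read-only
# zfs mount -a                    # mount everything
# zfs set readonly=off tank/data  # back to read-write once the import has succeeded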


Cheers,
budy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] kernel panic during zfs import [UPDATE]

2012-04-17 Thread Enda O'Connor

On 17/04/2012 16:40, Carsten John wrote:

Hello everybody,

just to let you know what happened in the meantime:

I was able to open a Service Request at Oracle.

The issue is a known bug (Bug 6742788 : assertion panic at: zfs:zap_deref_leaf)

The bug has been fixed (according to Oracle support) since build 164, but there
is no fix for Solaris 11 available so far (will it be fixed in S11U7?).

There is a workaround available that works (partly), but my system crashed
again when trying to rebuild the offending ZFS filesystem within the affected zpool.

At the moment I'm waiting for a so-called "interim diagnostic relief" patch.


So are you on S11? Can I see the output of 'pkg info entire'?

This bug is fixed in the S11 FCS release, as that is build 175b and the
fix went in at build 164. So if you have Solaris 11, that CR is fixed.


In Solaris 10 it is fixed in patches 147440-14/147441-14 (SPARC/x86).
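
For reference, a quick way to check what you are running (a sketch;
exact output varies between releases):

# pkg info entire             # Solaris 11: the 0.175.x part of the FMRI identifies the build
# showrev -p | grep 147440    # Solaris 10 SPARC (use 147441 on x86): is the patch installed?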


Enda



cu

Carsten



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool split failing

2012-04-17 Thread Cindy Swearingen

Hi Matt,

Regarding this issue:

>As an aside, I have noticed that on the old laptop, it would not boot
>if the USB part of the mirror was not attached to the laptop,
>successful boot could only be achieved when both mirror devices were
>online. Is this a known issue with ZFS? A bug?

Which Solaris release is this? I see related bugs are fixed so I'm
not sure what is going on here.

I detach mirrored root pool disks and booting is not impacted. The
best method is to let ZFS know that the device is detached before
the reboot, like this:

# zpool detach rpool usb-disk

Thanks,

Cindy


On 04/17/12 04:47, Matt Keenan wrote:

Hi Cindy,

Tried out your example below in a vbox env, and detaching a device from
a pool makes that device simply unavailable; it cannot be
re-imported.

I then tried setting up a mirrored rpool within a vbox env (agreed, one
device is not USB); when booted into the rpool, split worked. I then
tried booting directly into the rpool on the faulty laptop, and split
still failed.

My only conclusions for the failure are:
- The rpool I'm attempting to split has a LOT of history; it has been
around for some 2 years now and has gone through a lot of upgrades,
etc., so there may be some ZFS history there that's not letting this
happen. BTW the pool version is 33, which is current.
- Or is it possible that one of the devices being a USB device is
causing the failure? I don't know.

My reason for splitting the pool was so I could attach the clean USB
rpool to another laptop and simply attach the disk from the new laptop,
let it resilver, installgrub to new laptop disk device and boot it up
and I would be back in action.

As a workaround I'm trying to simply attach my USB rpool to the new
laptop and use zpool replace to effectively replace the offline device
with the new laptop disk device. So far so good, 12% resilvering, so
fingers crossed this will work.

As an aside, I have noticed that on the old laptop, it would not boot if
the USB part of the mirror was not attached to the laptop; a successful
boot could only be achieved when both mirror devices were online. Is
this a known issue with ZFS? A bug?

cheers

Matt


On 04/16/12 10:05 PM, Cindy Swearingen wrote:

Hi Matt,

I don't have a way to reproduce this issue and I don't know why
this is failing. Maybe someone else does. I know someone who
recently split a root pool running the S11 FCS release without
problems.

I'm not a fan of root pools on external USB devices.

I haven't tested these steps in a while but you might try
these steps instead. Make sure you have a recent snapshot
of your rpool on the unhealthy laptop.

1. Ensure that the existing root pool and disks are healthy.

# zpool status -x

2. Detach the USB disk.

# zpool detach rpool disk-name

3. Connect the USB disk to the new laptop.

4. Force import the pool on the USB disk.

# zpool import -f rpool rpool2

5. Device cleanup steps, something like:

Boot from media and import rpool2 as rpool.
Make sure the device info is visible.
Reset BIOS to boot from this disk.

On 04/16/12 04:12, Matt Keenan wrote:

Hi

Attempting to split a mirrored rpool and fails with error :

Unable to split rpool: pool already exists


I have a laptop with main disk mirrored to an external USB. However as
the laptop is not too healthy I'd like to split the pool into two pools
and attach the external drive to another laptop and mirror it to the new
laptop.

What I did :

- Booted laptop into a live DVD

- Import the rpool:
$ zpool import rpool

- Attempt to split :
$ zpool split rpool rpool-ext

- Error message shown and split fails :
Unable to split rpool: pool already exists

- So I tried exporting the pool
and re-importing it with a different name, and I still get the same
error. There are no other zpools on the system; both zpool list and
zpool export return nothing other than the rpool I've just imported.

I'm somewhat stumped... any ideas ?

cheers

Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] kernel panic during zfs import [UPDATE]

2012-04-17 Thread Carsten John
Hello everybody,

just to let you know what happened in the meantime:

I was able to open a Service Request at Oracle.

The issue is a known bug (Bug 6742788 : assertion panic at: zfs:zap_deref_leaf)

The bug has been fixed (according to Oracle support) since build 164, but there
is no fix for Solaris 11 available so far (will it be fixed in S11U7?).

There is a workaround available that works (partly), but my system crashed
again when trying to rebuild the offending ZFS filesystem within the affected zpool.

At the moment I'm waiting for a so-called "interim diagnostic relief" patch.


cu

Carsten

-- 
Max Planck Institut fuer marine Mikrobiologie
- Network Administration -
Celsiustr. 1
D-28359 Bremen
Tel.: +49 421 2028568
Fax.: +49 421 2028565
PGP public key: http://www.mpi-bremen.de/Carsten_John.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Drive upgrades

2012-04-17 Thread Richard Elling
On Apr 17, 2012, at 12:25 AM, Jim Klimov wrote:

> 2012-04-17 5:15, Richard Elling wrote:
>> For the archives...
>> 
>> Write-back cache enablement is toxic for file systems that do not issue
>> cache flush commands, such as Solaris' UFS. In the early days of ZFS,
>> on Solaris 10 or before ZFS was bootable on OpenSolaris, it was not
>> uncommon to have ZFS and UFS on the same system.
>> 
>> NB, there are a number of consumer-grade IDE/*ATA disks that ignore
>> disabling the write buffer. Hence, it is not always a win to enable
>> the write buffer that cannot be disabled.
>> -- richard
> 
> For the sake of archives, can you please post a common troubleshooting
> technique which users can try at home to see if their disks honour the
> request or not? ;) I guess it would involve random-write bandwidths in
> two cases?

I am aware of only one method that is guaranteed to work: contact the 
manufacturer, sign NDA, read the docs.
 -- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422







___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool split failing

2012-04-17 Thread Matt Keenan

On 04/17/12 01:00 PM, Jim Klimov wrote:

2012-04-17 14:47, Matt Keenan wrote:

- or is it possible that one of the devices being a USB device is
causing the failure ? I don't know.


Might be, I've got little experience with those besides LiveUSB
images ;)


My reason for splitting the pool was so I could attach the clean USB
rpool to another laptop and simply attach the disk from the new laptop,
let it resilver, installgrub to new laptop disk device and boot it up
and I would be back in action.


If the USB disk split-off were to work, I'd rather try booting
the laptop off the USB disk, if BIOS permits, or I'd boot off
a LiveCD/LiveUSB (if Solaris 11 has one - or from installation
media and break out into a shell) and try to import the rpool
from USB disk and then attach the laptop's disk to it to resilver.


This is exactly what I am doing: booted the new laptop into a LiveCD,
imported the USB pool, and am zpool-replacing the old laptop disk
device, which is in a degraded state, with the new laptop disk device
(after I partitioned it to keep the Windows install).





As a workaround I'm trying to simply attach my USB rpool to the new
laptop and use zpool replace to effectively replace the offline device
with the new laptop disk device. So far so good, 12% resilvering, so
fingers crossed this will work.


Won't this overwrite the USB disk with the new laptop's (empty)
disk? The way you describe it...


No, the offline disk in this instance is the old laptop's internal
disk; the online device is the USB drive.





As an aside, I have noticed that on the old laptop, it would not boot if
the USB part of the mirror was not attached to the laptop; a successful
boot could only be achieved when both mirror devices were online. Is
this a known issue with ZFS? A bug?


It shouldn't be, as mirrors are there to protect against disk failures.
What was your rpool's "failmode" zpool-level property?
It might have some relevance, but it should define the kernel's
reaction to "catastrophic failures" of the pool, and the loss of one
side of a mirror IMHO should not be one... Try failmode=continue and
see if that helps the rpool, to be certain. I think that's what the
installer should have set.


Exactly what I would have thought; ZFS should actually help here, not
hinder. From what I can see, the default failmode as set by the
installer is "wait", which is exactly what is happening when I attempt
to boot.


Just tried setting failmode=continue on the zpool and it unfortunately
still fails to boot; failmode=wait is definitely the default.
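
For reference, checking and changing the property looks like this (pool
name assumed to be rpool):

# zpool get failmode rpool
# zpool set failmode=continue rpool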


cheers

Matt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool split failing

2012-04-17 Thread Jim Klimov

2012-04-17 14:47, Matt Keenan wrote:

- or is it possible that one of the devices being a USB device is
causing the failure ? I don't know.


Might be, I've got little experience with those besides LiveUSB
images ;)


My reason for splitting the pool was so I could attach the clean USB
rpool to another laptop and simply attach the disk from the new laptop,
let it resilver, installgrub to new laptop disk device and boot it up
and I would be back in action.


If the USB disk split-off were to work, I'd rather try booting
the laptop off the USB disk, if BIOS permits, or I'd boot off
a LiveCD/LiveUSB (if Solaris 11 has one - or from installation
media and break out into a shell) and try to import the rpool
from USB disk and then attach the laptop's disk to it to resilver.


As a workaround I'm trying to simply attach my USB rpool to the new
laptop and use zpool replace to effectively replace the offline device
with the new laptop disk device. So far so good, 12% resilvering, so
fingers crossed this will work.


Won't this overwrite the USB disk with the new laptop's (empty)
disk? The way you describe it...


As an aside, I have noticed that on the old laptop, it would not boot if
the USB part of the mirror was not attached to the laptop; a successful
boot could only be achieved when both mirror devices were online. Is
this a known issue with ZFS? A bug?


It shouldn't be, as mirrors are there to protect against disk failures.
What was your rpool's "failmode" zpool-level property?
It might have some relevance, but it should define the kernel's
reaction to "catastrophic failures" of the pool, and the loss of one
side of a mirror IMHO should not be one... Try failmode=continue and
see if that helps the rpool, to be certain. I think that's what the
installer should have set.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool split failing

2012-04-17 Thread Matt Keenan

Hi Cindy,

Tried out your example below in a vbox env, and detaching a device from
a pool makes that device simply unavailable; it cannot be
re-imported.


I then tried setting up a mirrored rpool within a vbox env (agreed, one
device is not USB); when booted into the rpool, split worked. I then
tried booting directly into the rpool on the faulty laptop, and split
still failed.


My only conclusions for the failure are:
- The rpool I'm attempting to split has a LOT of history; it has been
around for some 2 years now and has gone through a lot of upgrades,
etc., so there may be some ZFS history there that's not letting this
happen. BTW the pool version is 33, which is current.
- Or is it possible that one of the devices being a USB device is
causing the failure? I don't know.


My reason for splitting the pool was so I could attach the clean USB 
rpool to another laptop and simply attach the disk from the new laptop, 
let it resilver, installgrub to new laptop disk device and boot it up 
and I would be back in action.


As a workaround I'm trying to simply attach my USB rpool to the new
laptop and use zpool replace to effectively replace the offline device
with the new laptop disk device. So far so good, 12% resilvering, so
fingers crossed this will work.
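
A sketch of that sequence, with placeholder device names (old_disk is
the degraded disk from the old laptop, new_disk is the new laptop's
internal disk; both names are hypothetical):

# zpool import -f rpool                   # import the USB-backed pool on the new laptop
# zpool replace rpool old_disk new_disk   # resilver onto the new internal disk
# zpool status rpool                      # watch the resilver progress
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/new_disk   # slice 0 of the new disk, e.g. cXtYd0s0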


As an aside, I have noticed that on the old laptop, it would not boot if
the USB part of the mirror was not attached to the laptop; a successful
boot could only be achieved when both mirror devices were online. Is
this a known issue with ZFS? A bug?


cheers

Matt


On 04/16/12 10:05 PM, Cindy Swearingen wrote:

Hi Matt,

I don't have a way to reproduce this issue and I don't know why
this is failing. Maybe someone else does. I know someone who
recently split a root pool running the S11 FCS release without
problems.

I'm not a fan of root pools on external USB devices.

I haven't tested these steps in a while but you might try
these steps instead. Make sure you have a recent snapshot
of your rpool on the unhealthy laptop.

1. Ensure that the existing root pool and disks are healthy.

# zpool status -x

2. Detach the USB disk.

# zpool detach rpool disk-name

3. Connect the USB disk to the new laptop.

4. Force import the pool on the USB disk.

# zpool import -f rpool rpool2

5. Device cleanup steps, something like:

Boot from media and import rpool2 as rpool.
Make sure the device info is visible.
Reset BIOS to boot from this disk.

On 04/16/12 04:12, Matt Keenan wrote:

Hi

Attempting to split a mirrored rpool and fails with error :

Unable to split rpool: pool already exists


I have a laptop with main disk mirrored to an external USB. However as
the laptop is not too healthy I'd like to split the pool into two pools
and attach the external drive to another laptop and mirror it to the new
laptop.

What I did :

- Booted laptop into a live DVD

- Import the rpool:
$ zpool import rpool

- Attempt to split :
$ zpool split rpool rpool-ext

- Error message shown and split fails :
Unable to split rpool: pool already exists

- So I tried exporting the pool
and re-importing it with a different name, and I still get the same
error. There are no other zpools on the system; both zpool list and
zpool export return nothing other than the rpool I've just imported.

I'm somewhat stumped... any ideas ?

cheers

Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 11/ZFS historical reporting

2012-04-17 Thread Darren J Moffat

On 04/16/12 20:18, Anh Quach wrote:

Are there any tools that ship w/ Solaris 11 for historical reporting on things 
like network activity, zpool iops/bandwidth, etc., or is it pretty much 
roll-your-own scripts and whatnot?


For network activity, look at flowstat; it can read exacct format files.

For I/O it depends what level you want to look at: if it is the device
level, use iostat; if it is how ZFS is using the devices, look at
'zpool iostat'; if it is the filesystem level, look at fsstat.


Also look at acctadm(1M).
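
For example, a few quick invocations (a sketch; 'tank' and the 5-second
interval are just placeholders):

# zpool iostat -v tank 5   # per-vdev IOPS and bandwidth every 5 seconds
# fsstat zfs 5             # filesystem-level operation counts for ZFS
# iostat -xn 5             # device-level throughput and service times
# flowstat                 # per-flow network statistics (see flowstat(1M))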



--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Drive upgrades

2012-04-17 Thread Jim Klimov

2012-04-17 5:15, Richard Elling wrote:

For the archives...

Write-back cache enablement is toxic for file systems that do not issue
cache flush commands, such as Solaris' UFS. In the early days of ZFS,
on Solaris 10 or before ZFS was bootable on OpenSolaris, it was not
uncommon to have ZFS and UFS on the same system.

NB, there are a number of consumer-grade IDE/*ATA disks that ignore
disabling the write buffer. Hence, it is not always a win to enable the
write buffer that cannot be disabled.
-- richard


For the sake of archives, can you please post a common troubleshooting
technique which users can try at home to see if their disks honour the
request or not? ;) I guess it would involve random-write bandwidths in
two cases?

And for the sake of archives, here's what I do on my home system for
its pools to toggle the cache on disks involved (could be scripted
better to detect disk names from zpool listing, but works-for-me
as-is):

# cat /etc/rc2.d/S95disable-pool-wcache
#!/bin/sh
# Toggle the on-disk write cache for the pool's member disks
# (c7t0d0 .. c7t5d0) by feeding menu commands to format -e.

case "$1" in
start)
        # Disable the write cache on each disk, in parallel.
        for C in 7; do for T in 0 1 2 3 4 5; do
                ( echo cache; echo write; echo display; echo disable; echo display ) | \
                        format -e -d c${C}t${T}d0 &
        done; done
        wait
        sync
        ;;
stop)
        # Re-enable the write cache on shutdown.
        for C in 7; do for T in 0 1 2 3 4 5; do
                ( echo cache; echo write; echo display; echo enable; echo display ) | \
                        format -e -d c${C}t${T}d0 &
        done; done
        wait
        sync
        ;;
*)
        # Any other argument: just display the current cache setting.
        for C in 7; do for T in 0 1 2 3 4 5; do
                ( echo cache; echo write; echo display ) | \
                        format -e -d c${C}t${T}d0 &
        done; done
        wait
        sync
        ;;
esac
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss