Re: [zfs-discuss] ZFS send/recv extreme performance penalty in snv_128

2009-12-12 Thread Brent Jones
On Sat, Dec 12, 2009 at 11:39 AM, Brent Jones  wrote:
> On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
>  wrote:
>> On Sat, 12 Dec 2009, Brent Jones wrote:
>>
>>> I've noticed some extreme performance penalties simply by using snv_128
>>
>> Does the 'zpool scrub' rate seem similar to before?  Do you notice any read
>> performance problems?  What happens if you send to /dev/null rather than via
>> ssh?
>>
>> Bob
>> --
>> Bob Friesenhahn
>> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
>> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>>
>
> Scrubs on both systems seem to take about the same amount of time (16
> hours, on a 48TB pool, with about 20TB used)
>
> I'll test sending to /dev/null tonight
>
> --
> Brent Jones
> br...@servuhome.net
>

I tested send performance to /dev/null, and I sent a 500GB filesystem
in just a few minutes.
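The test was roughly of this form (dataset and snapshot names are
placeholders, not my real ones):

  time zfs send tank/somefs@somesnap > /dev/null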

The two servers are linked over GigE fiber (between two cities).

Iperf output:

[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-60.0 sec  2.06 GBytes  295 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-60.0 sec  2.38 GBytes  341 Mbits/sec

Usually a bit faster, but some other stuff goes over that pipe.


Looking at the network traffic between these two hosts during the send, I
do see a lot of traffic (about 100-150Mbit usually). So there's traffic,
but a 100MB send has taken over 10 minutes and still hasn't completed; at
100Mbit/sec it should take roughly 10 seconds, not 10 minutes.
There is only a little disk activity, maybe 1 MB/sec on average and about
30 IOPS.
So it seems the hosts are exchanging a lot of data about the snapshot, but
not actually replicating any data for a very long time.
SSH CPU usage is minimal, just a few percent (arcfour; I tried other
ciphers with no difference).
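For completeness, the replication pipeline is essentially the usual
send-over-ssh form, e.g. (host and dataset names are placeholders, flags
approximate):

  zfs send -i pool/fs@snap1 pool/fs@snap2 | \
      ssh -c arcfour otherhost zfs receive -F pool/fs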

Odd behavior to be sure, and it looks very similar to what snapshot
replication did back in build 101, before significant speed improvements
were made to snapshot replication. I wonder if this is a major regression
caused by changes in the newer ZFS versions, maybe to accommodate de-dupe?

Sadly, I can't roll back, since I already upgraded my pool. I may try
upgrading to snv_129, but my IPS repository doesn't seem to have the newer
version yet.


-- 
Brent Jones
br...@servuhome.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-12 Thread Jack Kielsmeier
My system was pingable again, but unfortunately I had disabled all services
such as ssh. My console was still hung, but I was wondering if I had hung
USB crap (since I use a USB keyboard and everything had been hung for days).

I force rebooted and the pool was not imported :(. I started the process off
again, this time with remote services enabled, and I'm telling myself not to
touch the sucker for 7 days. We'll see if that lasts :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-12 Thread Michael Herf
Most manufacturers have a utility available that sets this behavior.

For WD drives, it's called WDTLER.EXE. You have to make a bootable USB stick to 
run the app, but it is simple to change the setting to the enterprise behavior.
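As an aside: if your smartmontools build is recent enough, the same
error-recovery timeout can often be toggled in place via the SCT ERC
command, without the bootable-USB dance (whether a given desktop drive
honors it is another question; the device path and values below are just
examples):

  smartctl -l scterc /dev/rdsk/cXtYdZs0        # show current read/write recovery timeouts
  smartctl -l scterc,70,70 /dev/rdsk/cXtYdZs0  # set both to 7.0 seconds (units of 100 ms)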
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-12 Thread Jack Kielsmeier
It's been over 72 hours since my last import attempt.

The system is still non-responsive. No idea if it's actually doing anything.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] compressratio vs. dedupratio

2009-12-12 Thread Robert Milkowski

Hi,

The compressratio property seems to be a ratio of compression for a
given dataset, calculated so that all data in the dataset (compressed or
not) is taken into account.
The dedupratio property, on the other hand, seems to take into account
only the dedupped data in a pool.
So, for example, if there is already 1TB of data before dedup=on, and then
dedup is set to on and 3 small identical files are copied in, the
dedupratio will be 3. IMHO this is misleading, as it suggests that an
average ratio of 3 was achieved across the pool, which is not true.
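To put rough numbers on it (hypothetical figures): with 1TB of
non-dedupable data already in the pool and three identical 1MB files copied
in afterwards, a pool-wide ratio would be about (1TB + 3MB) / (1TB + 1MB),
i.e. effectively 1.00x, yet dedupratio reports 3.00x because only the
blocks that went through dedup are counted.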


Is this by design, or is it a bug?
If it is by design, then an additional property giving the ratio of dedup
in relation to all data in a pool (dedupped or not) would be useful.



Example (snv 129):


mi...@r600:/rpool/tmp# mkfile 200m file1
mi...@r600:/rpool/tmp# zpool create -O atime=off test /rpool/tmp/file1

mi...@r600:/rpool/tmp# ls -l /var/adm/messages
-rw-r--r-- 1 root root 70993 2009-12-12 21:50 /var/adm/messages
mi...@r600:/rpool/tmp# cp /var/adm/messages /test/
mi...@r600:/rpool/tmp# sync
mi...@r600:/rpool/tmp# zfs get compressratio test
NAME  PROPERTY   VALUE  SOURCE
test  compressratio  1.00x  -


mi...@r600:/rpool/tmp# zfs set compression=gzip test
mi...@r600:/rpool/tmp# cp /var/adm/messages /test/messages.1
mi...@r600:/rpool/tmp# sync
mi...@r600:/rpool/tmp# zfs get compressratio test
NAME  PROPERTY   VALUE  SOURCE
test  compressratio  1.27x  -


mi...@r600:/rpool/tmp# zfs get compressratio test
NAME  PROPERTY   VALUE  SOURCE
test  compressratio  1.24x  -





mi...@r600:/rpool/tmp# zpool destroy test
mi...@r600:/rpool/tmp# zpool create -O atime=off test /rpool/tmp/file1
mi...@r600:/rpool/tmp# zpool get dedupratio test
NAME  PROPERTYVALUE  SOURCE
test  dedupratio  1.00x  -


mi...@r600:/rpool/tmp# cp /var/adm/messages /test/
mi...@r600:/rpool/tmp# sync
mi...@r600:/rpool/tmp# zpool get dedupratio test
NAME  PROPERTYVALUE  SOURCE
test  dedupratio  1.00x  -

mi...@r600:/rpool/tmp# cp /var/adm/messages /test/messages.1
mi...@r600:/rpool/tmp# sync
mi...@r600:/rpool/tmp# zpool get dedupratio test
NAME  PROPERTYVALUE  SOURCE
test  dedupratio  1.00x  -
mi...@r600:/rpool/tmp# cp /var/adm/messages /test/messages.2
mi...@r600:/rpool/tmp# sync
mi...@r600:/rpool/tmp# zpool get dedupratio test
NAME  PROPERTYVALUE  SOURCE
test  dedupratio  2.00x  -

mi...@r600:/rpool/tmp# rm /test/messages
mi...@r600:/rpool/tmp# sync
mi...@r600:/rpool/tmp# zpool get dedupratio test
NAME  PROPERTYVALUE  SOURCE
test  dedupratio  2.00x  -







--
Robert Milkowski
http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] X4540 + SFA F20 PCIe?

2009-12-12 Thread Robert Milkowski

Andrey Kuzmin wrote:
> As to whether it makes sense (as opposed to two distinct physical
> devices), you would have read cache hits competing with log writes for
> bandwidth. I doubt both will be pleased :-)

As usual it depends on your workload. In many real-life scenarios the
bandwidth probably won't be an issue.
Then also keep in mind that you can put up to 4 SSD modules on it, and
each module IIRC is presented as a separate device anyway. So in order
to get all the performance you need to make sure to issue I/O to all
modules.
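For example (pool and device names made up), spreading the slog and L2ARC
across the modules would look something like:

  zpool add tank log c5t0d0 c5t1d0       # slog striped across two F20 modules
  zpool add tank cache c5t2d0 c5t3d0     # the other two modules as L2ARC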


--
Robert Milkowski
http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread Richard Elling

On Dec 12, 2009, at 10:32 AM, Mattias Pantzare wrote:

> On Sat, Dec 12, 2009 at 18:08, Richard Elling  wrote:
>> On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
>>>
>>> On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
>>>
>>>> The host identity had - of course - changed with the new motherboard
>>>> and it no longer recognised the zpool as its own.  'zpool import -f
>>>> rpool' to take ownership, reboot and it all worked no problem (which
>>>> was amazing in itself as I had switched from AMD to Intel ...).
>>>
>>> Do I understand correctly if I read this as: OpenSolaris is able to
>>> switch between systems without reinstalling? Just a zfs import -f and
>>> everything runs? Wow, that would be an improvement and would make
>>> things more like *BSD/linux.
>>
>> Solaris has been able to do that for 20+ years.  Why do you think
>> it should be broken now?
>
> Solaris has _not_ been able to do that for 20+ years. In fact Sun has
> always recommended a reinstall. You could do it if you really knew
> how, but it was not easy.

A flash archive is merely a cpio image of an existing system wrapped
by clever scripts that edit /etc/vfstab and reset the sysidcfg.  You can
do this by hand or script quite easily and many of us have done so since
the late 1980s. With ZFS it is a little bit easier, because you no longer
have to edit /etc/vfstab.
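The by-hand equivalent is roughly this (a sketch from memory, not a recipe;
adjust for the target box):

  touch /reconfigure    # force a reconfiguration boot so the device tree is rebuilt
  sys-unconfig          # clear the system identity so it is asked for again at boot
  # on a UFS root you would also fix up /etc/vfstab for the new controller numbers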

As to what is "supported," I know of nobody at Sun that has a list of
what is or what is not "supported." Clearly, it is easier to CYA by
falling back to "reinstallation required."


> If you switch between identical systems it will of course work fine
> (before zfs that is, now you may have to import the pool on the new
> system).


When I was designing appliances at Sun, I kept having to fight with
marketing because we developed the software stack to work across
all platforms of the same architecture. Marketing was convinced that
you could not create a software stack that worked the same on a lowly
desktop as a F15K. Go figure.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Mike Gerdts
On Sat, Dec 12, 2009 at 9:58 AM, Edward Ned Harvey
 wrote:
> I would suggest something like this:  While the system is still on, if the
> failed drive is at least writable *a little bit* … then you can “dd
> if=/dev/zero of=/dev/rdsk/FailedDiskDevice bs=1024 count=1024” … and then
> after the system is off, you could plug the drives into another system
> one-by-one, and read the first 1M, and see if it’s all zeros.   (Or instead
> of dd zero, you could echo some text onto the drive, or whatever you think
> is easiest.)
>

How about reading instead?

dd if=/dev/rdsk/$whatever of=/dev/null

If the failed disk generates I/O errors that prevent it from reading
at a rate that causes an LED to blink, you could read from all of the
good disks.  The one that doesn't blink is the broken one.
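Something along these lines (device names are examples; substitute your own
disks and slice) keeps sustained reads going on every disk at once:

  # start a background read on each pool disk; the one whose LED stays dark
  # (or that throws errors) is the suspect
  for d in c1t0d0 c1t1d0 c1t2d0 c1t3d0; do
      dd if=/dev/rdsk/${d}s0 of=/dev/null bs=1024k &
  done
  wait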

You can also get the drive serial number with iostat -En:

$ iostat -En
c3d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Model: Hitachi HTS5425 Revision:  Serial No: 080804BB6300HCG Size:
160.04GB <160039305216 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0
...

That /should/ be printed on the disk somewhere.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS send/recv extreme performance penalty in snv_128

2009-12-12 Thread Brent Jones
On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
 wrote:
> On Sat, 12 Dec 2009, Brent Jones wrote:
>
>> I've noticed some extreme performance penalties simply by using snv_128
>
> Does the 'zpool scrub' rate seem similar to before?  Do you notice any read
> performance problems?  What happens if you send to /dev/null rather than via
> ssh?
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>

Scrubs on both systems seem to take about the same amount of time (16
hours, on a 48TB pool, with about 20TB used)

I'll test sending to /dev/null tonight

-- 
Brent Jones
br...@servuhome.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread Toby Thain


On 12-Dec-09, at 1:32 PM, Mattias Pantzare wrote:

> On Sat, Dec 12, 2009 at 18:08, Richard Elling  wrote:
>> On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
>>>
>>> On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
>>>
>>>> The host identity had - of course - changed with the new motherboard
>>>> and it no longer recognised the zpool as its own.  'zpool import -f
>>>> rpool' to take ownership, reboot and it all worked no problem (which
>>>> was amazing in itself as I had switched from AMD to Intel ...).
>>>
>>> Do I understand correctly if I read this as: OpenSolaris is able to
>>> switch between systems without reinstalling? Just a zfs import -f and
>>> everything runs? Wow, that would be an improvement and would make
>>> things more like *BSD/linux.
>>
>> Solaris has been able to do that for 20+ years.  Why do you think
>> it should be broken now?
>
> Solaris has _not_ been able to do that for 20+ years. In fact Sun has
> always recommended a reinstall. You could do it if you really knew
> how, but it was not easy.
>
> If you switch between identical systems it will of course work fine

Linux can't do it either, of course, unless one is deliberately using
a sufficiently generic kernel.

--Toby
(who doesn't really wish to start an O/S pissing contest)

> (before zfs that is, now you may have to import the pool on the new
> system).


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread Mattias Pantzare
On Sat, Dec 12, 2009 at 18:08, Richard Elling  wrote:
> On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
>
>> On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
>>
>>> The host identity had - of course - changed with the new motherboard
>>> and it no longer recognised the zpool as its own.  'zpool import -f
>>> rpool' to take ownership, reboot and it all worked no problem (which
>>> was amazing in itself as I had switched from AMD to Intel ...).
>>
>> Do I understand correctly if I read this as: OpenSolaris is able to
>> switch between systems without reinstalling? Just a zfs import -f and
>> everything runs? Wow, that would be an improvement and would make
>> things more like *BSD/linux.
>
> Solaris has been able to do that for 20+ years.  Why do you think
> it should be broken now?

Solaris has _not_ been able to do that for 20+ years. In fact Sun has
always recommended a reinstall. You could do it if you really knew
how, but it was not easy.

If you switch between identical systems it will of course work fine
(before zfs that is, now you may have to import the pool on the new
system).
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread Bob Friesenhahn

On Sat, 12 Dec 2009, dick hoogendijk wrote:


> Because, like I said, I always understood it was very difficult to
> change disks to another system and run the installed solaris version on
> that new hardware.


A place where I used to work had several thousand Sun workstations and 
I noticed that if a system drive failed, the system administrator 
would just walk up with a replacement drive that had Solaris 
pre-installed, do the swap, and the system was running in a few 
minutes.  Of course that was quite a while ago (when the world was a 
cooler place) and things could have become broken since then.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] all zfs snapshot made by TimeSlider destroyed after upgrading to b129

2009-12-12 Thread Roman Ivanov
Am I missing something?

I have had monthly, weekly, daily, hourly, and frequent snapshots since
March 2009. Now, with the new b129, I have lost all of them.
From zpool history:

2009-12-12.20:30:02 zfs destroy -r rpool/ROOT/b...@zfs-auto-snap:weekly-2009-11-26-09:28
2009-12-12.20:30:03 zfs destroy -r rpool/ROOT/b...@zfs-auto-snap:weekly-2009-11-18-23:37
2009-12-12.20:30:04 zfs destroy -r rpool/ROOT/b...@zfs-auto-snap:monthly-2009-10-17-20:32
2009-12-12.20:30:04 zfs destroy -r rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-12-19:47
2009-12-12.20:30:05 zfs destroy -r rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-11-15:59
2009-12-12.20:30:05 zfs destroy -r rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-11-14:54
2009-12-12.20:30:06 zfs destroy -r rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-11-13:54
2009-12-12.20:30:07 zfs destroy -r rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-11-12:54
2009-12-12.20:30:07 zfs destroy -r rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-11-11:54
.
2009-12-12.20:30:43 zfs destroy -r rpool/rixxx...@zfs-auto-snap:monthly-2009-06-16-08:15
2009-12-12.20:30:44 zfs destroy -r rpool/rixxx...@zfs-auto-snap:monthly-2009-05-16-11:52
2009-12-12.20:30:44 zfs destroy -r rpool/rixxx...@zfs-auto-snap:monthly-2009-04-16-08:06
2009-12-12.20:30:46 zfs destroy -r rpool/rixxx...@zfs-auto-snap:monthly-2009-03-16-18:55

Current zfs list -t all:
NAME                                                       USED  AVAIL  REFER  MOUNTPOINT
rpool                                                     54,3G  83,5G  63,5K  /rpool
rpool/ROOT                                                17,1G  83,5G    18K  legacy
rpool/ROOT/b128a                                          28,5M  83,5G  9,99G  legacy
rpool/ROOT/b1...@zfs-auto-snap:frequent-2009-12-12-20:17  9,70M      -  9,99G  -
rpool/ROOT/b129                                           17,1G  83,5G  10,2G  legacy
rpool/ROOT/b...@2009-09-04-11:28:13                       3,74G      -  10,0G  -
rpool/ROOT/b...@zfs-auto-snap:weekly-2009-12-03-14:59     1,25G      -  10,2G  -
rpool/ROOT/b...@zfs-auto-snap:weekly-2009-12-10-14:59      550M      -  10,4G  -
rpool/ROOT/b...@2009-12-12-17:11:35                       29,9M      -  10,4G  -
rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-12-21:00         0      -  10,2G  -
rpool/ROOT/b...@zfs-auto-snap:-2009-12-12-21:00               0      -  10,2G  -
rpool/dump                                                1023M  83,5G  1023M  -
rpool/rixx                                                35,2G  83,5G  34,9G  /export/home/rixx
rpool/rixxx...@zfs-auto-snap:weekly-2009-12-03-14:59       190M      -  31,8G  -
rpool/rixxx...@zfs-auto-snap:weekly-2009-12-10-14:59       116M      -  34,9G  -
rpool/rixxx...@zfs-auto-snap:-2009-12-12-21:00            2,29M      -  34,9G  -
rpool/swap                                                1023M  84,3G   275M  -

The latest snapshot does not have the word "frequent" in it. Moreover, the
hourly snapshot was destroyed right after it was created:
2009-12-12.21:00:02 zfs snapshot -r rpool/rixxx...@zfs-auto-snap:hourly-2009-12-12-21:00
2009-12-12.21:00:02 zfs destroy -r rpool/rixxx...@zfs-auto-snap:hourly-2009-12-12-21:00
2009-12-12.21:00:03 zfs destroy -r rpool/rixxx...@zfs-auto-snap:-2009-12-12-20:45
2009-12-12.21:00:03 zfs snapshot -r rpool/rixxx...@zfs-auto-snap:-2009-12-12-21:00
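For what it's worth, the destroys look like they are issued by the
zfs-auto-snapshot/Time Slider services themselves, so I'm now checking
their state and retention settings after the upgrade (service names as I
remember them):

  svcs -a | grep auto-snapshot
  svcprop svc:/system/filesystem/zfs/auto-snapshot:hourly | grep keep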
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread dick hoogendijk
On Sat, 2009-12-12 at 09:08 -0800, Richard Elling wrote:
> On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
> 
> > On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
> >
> >> The host identity had - of course - changed with the new motherboard
> >> and it no longer recognised the zpool as its own.  'zpool import -f
> >> rpool' to take ownership, reboot and it all worked no problem (which
> >> was amazing in itself as I had switched from AMD to Intel ...).
> >
> > Do I understand correctly if I read this as: OpenSolaris is able to
> > switch between systems without reinstalling? Just a zfs import -f and
> > everything runs? Wow, that would be an improvement and would make
> > things more like *BSD/linux.
> 
> Solaris has been able to do that for 20+ years.  Why do you think
> it should be broken now?

Because, like I said, I always understood it was very difficult to
change disks to another system and run the installed solaris version on
that new hardware.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b129
+ All that's really worth doing is what we do for others (Lewis Carroll)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Patrick O'Sullivan
I've found that when I build a system, it's worth the initial effort
to install drives one by one to see how they get mapped to names. Then
I put labels on the drives and SATA cables. If there were room to
label the actual SATA ports on the motherboard and cards, I would.

While this isn't foolproof, it gives me a bit more reassurance in the
[inevitable] event of a drive failure.

On Sat, Dec 12, 2009 at 9:17 AM, Paul Bruce  wrote:
> Hi,
> I'm just about to build a ZFS system as a home file server in raidz, but I
> have one question - pre-empting the need to replace one of the drives if it
> ever fails.
> How on earth do you determine the actual physical drive that has failed ?
> I've got the whole zpool status thing worked out, but how do I translate
> the c1t0d0, c1t0d1 etc.. to a real physical drive.
> I can just see myself looking at the 6 drives, and thinking ".
>  c1t0d1 i think that's *this* one".. einee menee minee moe
> P
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread Richard Elling

On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:

> On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
>
>> The host identity had - of course - changed with the new motherboard
>> and it no longer recognised the zpool as its own.  'zpool import -f
>> rpool' to take ownership, reboot and it all worked no problem (which
>> was amazing in itself as I had switched from AMD to Intel ...).
>
> Do I understand correctly if I read this as: OpenSolaris is able to
> switch between systems without reinstalling? Just a zfs import -f and
> everything runs? Wow, that would be an improvement and would make
> things more like *BSD/linux.

Solaris has been able to do that for 20+ years.  Why do you think
it should be broken now?
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Ed Plese
On Sat, Dec 12, 2009 at 8:17 AM, Paul Bruce  wrote:
> Hi,
> I'm just about to build a ZFS system as a home file server in raidz, but I
> have one question - pre-empting the need to replace one of the drives if it
> ever fails.
> How on earth do you determine the actual physical drive that has failed ?
> I've got the whole zpool status thing worked out, but how do I translate
> the c1t0d0, c1t0d1 etc.. to a real physical drive.
> I can just see myself looking at the 6 drives, and thinking ".
>  c1t0d1 i think that's *this* one".. einee menee minee moe
> P

As suggested at
http://opensolaris.org/jive/thread.jspa?messageID=416264, you can try
viewing the disk serial numbers with cfgadm:

cfgadm -al -s "select=type(disk),cols=ap_id:info"

You may need to power down the system to view the serial numbers
printed on the disks to match them up, but it beats guessing.


Ed Plese
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] X4540 + SFA F20 PCIe?

2009-12-12 Thread Andrey Kuzmin
As to whether it makes sense (as opposed to two distinct physical
devices), you would have read cache hits competing with log writes for
bandwidth. I doubt both will be pleased :-)

On 12/12/09, Robert Milkowski  wrote:
> Jens Elkner wrote:
>> Hi,
>>
>> just got a quote from our campus reseller, that readzilla and logzilla
>> are not available for the X4540 - hmm strange Anyway, wondering
>> whether it is possible/supported/would make sense to use a Sun Flash
>> Accelerator F20 PCIe Card in a X4540 instead of 2.5" SSDs?
>>
>> If so, is it possible to "partition" the F20, e.g. into 36 GB "logzilla",
>> 60GB "readzilla" (also interesting for other X servers)?
>>
>>
> IIRC the card presents 4x LUNs so you could use each of them for
> different purpose.
> You could also use different slices.
>> me or not. Is this correct?
>>
>>
>
> It still does. The capacitor is not for flushing data to disks drives!
> The card has a small amount of DRAM memory on it which is being flushed
> to FLASH. Capacitor is to make sure it actually happens if the power is
> lost.
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


-- 
Regards,
Andrey
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Edward Ned Harvey
This is especially important, because if you have 1 failed drive, and you
pull the wrong drive, now you have 2 failed drives.  And that could destroy
the dataset (depending on whether you have raidz-1 or raidz-2).

Whenever possible, always get the hotswappable hardware that will blink a
red light for you, so there can be no mistake.  Even if the hardware doesn't
blink a light for you, you could manually cycle between activity and
non-activity on the disks to identify the disk yourself … but if that's not
a possibility … if you have no lights on non-hotswappable disks … then …

Given you're going to have to power off the system.

Given it's difficult to map the device name to a physical wire.

I would suggest something like this:  While the system is still on, if the
failed drive is at least writable *a little bit* … then you can "dd
if=/dev/zero of=/dev/rdsk/FailedDiskDevice bs=1024 count=1024" … and then
after the system is off, you could plug the drives into another system
one-by-one, and read the first 1M, and see if it's all zeros.  (Or instead
of dd zero, you could echo some text onto the drive, or whatever you think
is easiest.)

Obviously that's not necessarily an option.  If the drive is completely
dead, totally unwritable, then when you plug the drives one-by-one into
another system, it should be easy to identify the failed drive.

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Paul Bruce
Sent: Saturday, December 12, 2009 9:18 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS - how to determine which physical drive to
replace

 

Hi,

I'm just about to build a ZFS system as a home file server in raidz, but I
have one question - pre-empting the need to replace one of the drives if it
ever fails.

How on earth do you determine the actual physical drive that has failed ?

I've got the whole zpool status thing worked out, but how do I translate the
c1t0d0, c1t0d1 etc.. to a real physical drive.

I can just see myself looking at the 6 drives, and thinking "...
c1t0d1, I think that's *this* one" .. einee menee minee moe

P

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS send/recv extreme performance penalty in snv_128

2009-12-12 Thread Bob Friesenhahn

On Sat, 12 Dec 2009, Brent Jones wrote:


> I've noticed some extreme performance penalties simply by using snv_128


Does the 'zpool scrub' rate seem similar to before?  Do you notice any 
read performance problems?  What happens if you send to /dev/null 
rather than via ssh?


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] X4540 + SFA F20 PCIe?

2009-12-12 Thread Robert Milkowski

Jens Elkner wrote:
> Hi,
>
> just got a quote from our campus reseller, that readzilla and logzilla
> are not available for the X4540 - hmm strange Anyway, wondering
> whether it is possible/supported/would make sense to use a Sun Flash
> Accelerator F20 PCIe Card in a X4540 instead of 2.5" SSDs?
>
> If so, is it possible to "partition" the F20, e.g. into 36 GB "logzilla",
> 60GB "readzilla" (also interesting for other X servers)?
>
IIRC the card presents 4x LUNs, so you could use each of them for a
different purpose.
You could also use different slices.

> me or not. Is this correct?
>
It still does. The capacitor is not for flushing data to disk drives!
The card has a small amount of DRAM memory on it which is being flushed
to FLASH. The capacitor is to make sure it actually happens if the power is
lost.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Messed up zpool (double device label)

2009-12-12 Thread Dr. Martin Mundschenk
Hi!

I tried to add another FireWire drive to my existing four devices, but it
turned out that the OpenSolaris IEEE1394 support doesn't seem to be
well-engineered.

After it failed to recognize the new device, and after exporting and
importing the existing zpool, I get this zpool status:

  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
  raidz1 DEGRADED 0 0 0
c12t0d0  ONLINE   0 0 0
c12t0d0  FAULTED  0 0 0  corrupted data
c14t0d0  ONLINE   0 0 0
c15t0d0  ONLINE   0 0 0

The device c12t0d0 appears twice!?

'format' returns these devices:

AVAILABLE DISK SELECTIONS:
   0. c7d0 
  /p...@0,0/pci-...@b/i...@0/c...@0,0
   1. c12t0d0 
  
/p...@0,0/pci10de,a...@16/pci11c1,5...@0/u...@00303c02e014fc66/d...@0,0
   2. c13t0d0 
  
/p...@0,0/pci10de,a...@16/pci11c1,5...@0/u...@00303c02e014fc32/d...@0,0
   3. c14t0d0 
  
/p...@0,0/pci10de,a...@16/pci11c1,5...@0/u...@00303c02e014fc61/d...@0,0
   4. c15t0d0 
  
/p...@0,0/pci10de,a...@16/pci11c1,5...@0/u...@00303c02e014fc9d/d...@0,0


When I scrub the pool, the devices c12t0d0, c13t0d0 and c14t0d0 are accessed
and c15t0d0 sleeps. I don't get it! How can such a mess happen, and how do I
get it back straight?
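In case it helps, my next step is to compare the on-disk vdev labels, along
the lines of (slice 0 is a guess, since ZFS labelled the whole disks):

  zdb -l /dev/rdsk/c12t0d0s0
  zdb -l /dev/rdsk/c13t0d0s0

Each label carries the vdev guid and path, which should show which physical
device the pool really thinks "c12t0d0" is.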

Regards,
Martin

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Paul Bruce
Hi,

I'm just about to build a ZFS system as a home file server in raidz, but I
have one question - pre-empting the need to replace one of the drives if it
ever fails.

How on earth do you determine the actual physical drive that has failed ?

I've got the whole zpool status thing worked out, but how do I translate
the c1t0d0, c1t0d1 etc.. to a real physical drive.

I can just see myself looking at the 6 drives, and thinking "...
c1t0d1, I think that's *this* one" .. einee menee minee moe

P
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Kernel Panic

2009-12-12 Thread Dr. Martin Mundschenk
Hi!

My OpenSolaris 2009.06 box runs into kernel panics almost every day. There
are 4 FireWire drives attached to a Mac Mini as a RAID-Z pool. The panic
seems to be related to this known bug:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6835533

Since there are no known workarounds, is my hardware configuration worthless? 

Regards,
Martin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS send/recv extreme performance penalty in snv_128

2009-12-12 Thread Brent Jones
I've noticed some extreme performance penalties simply by using snv_128

I take snapshots and send them over SSH to another server over Gigabit
Ethernet.
Prior to snv_128 (on snv_127 and nearly all previous builds) I would get
20-30MB/s.

However, simply image-updating to snv_128 has caused a majority of my
snapshots to do this:

receiving incremental stream of pdxfilu01/vault/0...@20091212-01:15:00
into pdxfilu02/vault/0...@20091212-01:15:00
received 13.8KB stream in 491 seconds (28B/sec)

De-dupe is NOT enabled on any pool, but I have upgraded to the newest
ZFS pool version, which prevents me from rolling back to snv_127; snv_127
would send at many tens of megabytes per second.

This is on an X4540, dual quad cores, and 64GB RAM.

Anyone else seeing similar issues?

-- 
Brent Jones
br...@servuhome.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread dick hoogendijk
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:

> The host identity had - of course - changed with the new motherboard
>  and it no longer recognised the zpool as its own.  'zpool import -f
>  rpool' to take ownership, reboot and it all worked no problem (which
>  was amazing in itself as I had switched from AMD to Intel ...).

Do I understand correctly if I read this as: OpenSolaris is able to
switch between systems without reinstalling? Just a zfs import -f and
everything runs? Wow, that would be an improvement and would make
things more like *BSD/linux.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss