Re: [zfs-discuss] moving files from one fs to another, splitting/merging

2009-11-14 Thread george white
Is there a way to use only 2 or 3 digits for the second level of the 
var/pkg/download cache?  This directory hierarchy is particularly problematic 
for moving, copying, sending, etc.  It would probably speed up 
lookups as well.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Backing up ZVOLs

2009-11-14 Thread Brian McKerr
Hello all,

Are there any best practices / recommendations for ways of doing this?

In this case the ZVOLs would be iSCSI LUNs containing ESX VMs. I am aware 
of the need for the VMs to be quiesced for the backups to be useful.

Cheers.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Orvar Korvar
I use an Intel Q9450 + P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI slot, 
not PCI-X. About the HBA, I have no idea.

So I had half of the drives on the AOC card and the other half on the mobo 
SATA ports. Now I have all drives on the AOC card, and suddenly a scrub takes 
15h instead of 8h. Same data. This is weird. I don't get it. I don't care too 
much about it, but just wanted to tell you this. Thanks for your attention.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan

 P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI slot

I'm not sure how many disks "half of the drives" comes to, or how your vdevs 
are configured, but the ICH10 has 6 SATA ports at 300MB/s and 
one PCI port at 266MB/s (that's also shared with the IT8213 IDE chip), 

so in an ideal world your scrub bandwidth would be: 

300*6 MB/s with 6 disks on ICH10, in a stripe
300*1 MB/s with 6 disks on ICH10, in a raidz
300*3+(266/3) MB/s with 3 disks on ICH10 and 3 on shared PCI, in a stripe
266/3 MB/s with 3 disks on ICH10 and 3 on shared PCI, in a raidz
266/6 MB/s with 6 disks on shared PCI, in a stripe
266/6 MB/s with 6 disks on shared PCI, in a raidz

We know disks don't go that fast anyway, but going from an 8h to a 15h 
scrub is quite plausible depending on the vdev config.
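
(A worked instance of the raidz rows above, using Rob's bus figure and
assuming, since Orvar hasn't said, that the six disks form one raidz; the
ratio comes out the same whatever the exact shared-bus number is:)

\[
\frac{266\ \mathrm{MB/s}}{3} \approx 89\ \mathrm{MB/s\ per\ PCI\ disk}
\;\longrightarrow\;
\frac{266\ \mathrm{MB/s}}{6} \approx 44\ \mathrm{MB/s\ per\ PCI\ disk},
\qquad
8\,\mathrm{h} \times \frac{89}{44} \approx 16\,\mathrm{h}
\]

which is in the same ballpark as the observed 15h.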

Rob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [indiana-discuss] Boot failure with snv_122 and snv_123

2009-11-14 Thread peter brouwer
Has this issue been solved yet? I see the same issue when trying to upgrade 
from 112 to 126.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Eric D. Mudama

On Sat, Nov 14 at 11:23, Rob Logan wrote:



P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI slot


I'm not sure how many disks "half of the drives" comes to, or how your vdevs
are configured, but the ICH10 has 6 SATA ports at 300MB/s and
one PCI port at 266MB/s (that's also shared with the IT8213 IDE chip),

so in an ideal world your scrub bandwidth would be:

300*6 MB/s with 6 disks on ICH10, in a stripe
300*1 MB/s with 6 disks on ICH10, in a raidz
300*3+(266/3) MB/s with 3 disks on ICH10 and 3 on shared PCI, in a stripe
266/3 MB/s with 3 disks on ICH10 and 3 on shared PCI, in a raidz
266/6 MB/s with 6 disks on shared PCI, in a stripe
266/6 MB/s with 6 disks on shared PCI, in a raidz

We know disks don't go that fast anyway, but going from an 8h to a 15h
scrub is quite plausible depending on the vdev config.

Rob


Agreed, sounds like you're saturating the PCI bus.

I'm pretty sure that when Thumper uses that card, it has 6 of them
in PCI-X slots, which of course wouldn't have the same bandwidth
limitation.

--eric


--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshot question

2009-11-14 Thread Richard Elling

On Nov 13, 2009, at 10:36 PM, Tristan Ball wrote:

I think the exception may be when doing a recursive snapshot - ZFS  
appears to halt IO so that it can take all the snapshots at the same  
instant.


Snapshots cause a txg commit, similar to what you get when you run sync.
The time required to commit depends on many factors, perhaps the largest
of which is the latency of the disk.



At least, that's what it looked like to me. I've got an OpenSolaris  
ZFS box providing NFS to VMware, and I was getting SCSI timeouts  
within the virtual machines that appeared to happen exactly as the  
snapshots were taken.


SCSI timeouts?!? How short are their timeouts? By default in Solaris,  
SCSI timeouts are 60 seconds.  Have you seen a recursive snapshot take more
than 60 seconds?



When I turned off the recursive snapshots, and instead had each FS  
snapshotted individually, the problem went away.
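
(A minimal sketch of that per-filesystem variant, for anyone wanting to
try the same workaround; the pool name and snapshot label here are
placeholders:)

  # one recursive snapshot takes everything at the same instant:
  #   zfs snapshot -r tank@hourly
  # snapshotting each dataset individually spreads the work out:
  for fs in $(zfs list -H -o name -r tank); do
      zfs snapshot "$fs@hourly"
  done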


There have been performance tweaks over the past few years which can
impact snapshot performance, though it is still largely gated by the  
disk.

What release were you running?
 -- richard



Regards,
Tristan.

-Original Message-
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Richard Elling

Sent: Saturday, 14 November 2009 5:02 AM
To: Rodrigo E. De León Plicet
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Snapshot question


On Nov 13, 2009, at 6:43 AM, Rodrigo E. De León Plicet wrote:


While reading about NILFS here:

http://www.linux-mag.com/cache/7345/1.html


I saw this:

One of the most noticeable features of NILFS is that it can
continuously and automatically save instantaneous states of the
file system without interrupting service. NILFS refers to these as
checkpoints. In contrast, other file systems such as ZFS, can
provide snapshots but they have to suspend operation to perform the
snapshot operation. NILFS doesn't have to do this. The snapshots
(checkpoints) are part of the file system design itself.

I don't think that's correct. Can someone clarify?


It sounds to me like they confused Solaris UFS with ZFS.  What they
say applies to UFS, but not ZFS.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] permanent files error, unable to access pool

2009-11-14 Thread daniel.rodriguez.delg...@gmail.com
I have been using OpenSolaris for a couple of weeks; today was the first time I 
rebooted the system, and I ran into a problem loading my external HD (meant for 
backup).

I was expecting more descriptive file names, but given that I have no clue 
which files these are, can I just tell the OS to delete them or to ignore such 
files?

Any help would be greatly appreciated...

jdrod...@_solaris:~# zpool status -v external
  pool: external
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        external    ONLINE       0     0     0
          c10t0d0   ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        metadata:0x0
        metadata:0x14
jdrod...@_solaris:~#
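
(Not from the original thread, but the usual next step for metadata:0x...
entries, which can't be mapped back to individual files, would be a scrub
and then, if the errors clear, a zpool clear; 'external' is the pool shown
above:)

  zpool scrub external        # re-read and verify every block in the pool
  zpool status -v external    # see whether the errors persist afterwards
  zpool clear external        # reset the error counters once they are gone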
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send from solaris 10/08 to zfs receive on solaris 10/09

2009-11-14 Thread Miles Nordin
 ph == Phil Harman phil.har...@gmail.com writes:

 The format of the stream is committed. You will be able to
 receive your streams on future versions of ZFS.

What Erik said is stronger than the man page in an important way,
though.  He said you can dump an old stream into a filesystem on a new
zpool, and when you 'zfs send' the stream back out, it'll be in the
old format.  Keeping this commitment means you can store s10 streams
on snv_xxx backup servers as expanded filesystems, not
stream-inside-a-file, and still restore them onto s10 by 'zfs send'ing
from nevada.  One reason this makes sense as something one might
actually do is that zpools are a lot more resilient to corruption than
'zfs send' streams.
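
(A sketch of the round trip Miles describes; the host and dataset names
are made up:)

  # store the s10 stream on the backup server as a live filesystem,
  # not as a stream-in-a-file
  zfs send tank/data@mon | ssh backup zfs receive backuppool/s10/data
  # later, restore onto s10 by sending it back out from the backup box
  ssh backup zfs send backuppool/s10/data@mon | zfs receive tank/restored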


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-11-14 Thread Cindy Swearingen
 Seems like upgrading from b126 to b127 will have the
 same problem.

Yes, good point. I provided a blurb about this issue, here:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_Problem_.28Starting_in_Nevada.2C_build_125.29

It's a good idea to review this page whenever you are considering upgrading to 
the next build.

Cindy
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Brandon High
On Sat, Nov 14, 2009 at 7:00 AM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
 I use Intel Q9450 + P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI 
 slot, not PCI-x. About the HBA, I have no idea.

It sounds like you're saturating the PCI bus. The ICH10 has a
32-bit/33MHz PCI bus which provides 133MB/s at half duplex. This is
much less than the aggregate bandwidth of the drives you have
on the AOC card.

Getting a mobo with a PCI-X slot, getting a PCIe controller, or
leaving as many drives as you can on the ICH will help performance.
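
(One way to confirm that diagnosis, as a sketch rather than anything from
this thread: watch per-device throughput during a scrub and see whether
the AOC-attached disks plateau at a combined ~130MB/s while the ICH-attached
disks run faster; 'tank' is a placeholder pool name:)

  zpool scrub tank     # kick off a scrub
  iostat -xn 5         # per-device kr/s and kw/s, sampled every 5 seconds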

-B

-- 
Brandon High : bh...@freaks.com
War is Peace. Slavery is Freedom. AOL is the Internet.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZVOLs

2009-11-14 Thread Brian McKerr
Thanks for the help.

I was curious whether zfs send|receive is considered suitable, given a few 
things I've read which said something along the lines of "don't count on being 
able to restore this stuff." Ideally that is what I would use, with the 
'incremental' option, so as to back up only changed blocks on subsequent backups.
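
(For what it's worth, a minimal sketch of that incremental scheme for a
zvol; the pool, volume and host names are placeholders, and the VMs still
need quiescing before each snapshot:)

  # initial full copy
  zfs snapshot tank/esx-lun@base
  zfs send tank/esx-lun@base | ssh backuphost zfs receive backup/esx-lun
  # later runs send only the blocks changed since the previous snapshot
  zfs snapshot tank/esx-lun@daily1
  zfs send -i @base tank/esx-lun@daily1 | ssh backuphost zfs receive backup/esx-lun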
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send from solaris 10/08 to zfs receive on solaris 10/09

2009-11-14 Thread Ian Collins

Miles Nordin wrote:

ph == Phil Harman phil.har...@gmail.com writes:



 The format of the stream is committed. You will be able to
 receive your streams on future versions of ZFS.

What Erik said is stronger than the man page in an important way,
though.  He said you can dump an old stream into a filesystem on a new
zpool, and when you 'zfs send' the stream back out, it'll be in the
old format.  Keeping this commitment means you can store s10 streams
on snv_xxx backup servers as expanded filesystems, not
stream-inside-a-file, and still restore them onto s10 by 'zfs send'ing
from nevada.  One reason this makes sense as something one might
actually do is that zpools are a lot more resilient to corruption than
'zfs send' streams.
  
  
It also came in very handy for me when I archived a stream that caused a 
panic on Solaris 10; I was able to resend the stream to test patches.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan

 The ICH10 has a 32-bit/33MHz PCI bus which provides 133MB/s at half duplex.

You are correct; I thought the ICH10 used a 66MHz bus, when in fact it's 33MHz. The
AOC card works fine in a PCI-X 64-bit/133MHz slot, good for 1,067 MB/s, 
even if the motherboard uses a PXH chip via 8-lane PCIe.

Rob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up ZVOLs

2009-11-14 Thread Richard Elling

On Nov 14, 2009, at 1:59 PM, Brian McKerr wrote:


Thanks for the help.

I was curious whether zfs send|receive is considered suitable,  
given a few things I've read which said something along the lines  
of "don't count on being able to restore this stuff." Ideally that  
is what I would use, with the 'incremental' option, so as to back up  
only changed blocks on subsequent backups.


Please consult the ZFS Best Practices guide.
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_Backup_.2F_Restore_Recommendations

 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs/io performance on Netra X1

2009-11-14 Thread Dale Ghent


There is also a long-standing bug in the ALi chipset used on these servers 
which ZFS tickles. I don't think a work-around for this bug was ever 
implemented, and it's still present in Solaris 10.

On Nov 13, 2009, at 11:29 AM, Richard Elling wrote:

 The Netra X1 has one ATA bus for both internal drives.
 No way to get high perf out of a snail.
 
  -- richard
 
 
 
 On Nov 13, 2009, at 8:08 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us 
 wrote:
 
 On Fri, 13 Nov 2009, Tim Cook wrote:
 If it is using parallel SCSI, perhaps there is a problem with the SCSI bus 
 termination or a bad cable?
 SCSI?  Try PATA ;)
 
 Is that good?  I don't recall ever selecting that option when purchasing a 
 computer.  It seemed safer to stick with SCSI than to try exotic 
 technologies.
 
 Does PATA daisy-chain disks onto the same cable and controller?
 
 If the PATA bus and drives are becoming overwhelmed, maybe it will help to tune 
 zfs:zfs_vdev_max_pending down to a very small value in the kernel.
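
 (A sketch of that tuning as it was typically done at the time; the value
 here is arbitrary and this is an unsupported knob, so treat it as a
 diagnostic experiment rather than a fix:)

   # in /etc/system, effective after a reboot
   set zfs:zfs_vdev_max_pending = 2

   # or on the live system via mdb (0t = decimal)
   echo zfs_vdev_max_pending/W0t2 | mdb -kw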
 
 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send from solaris 10/08 to zfs receive on solaris 10/09

2009-11-14 Thread Phil Harman
Yes indeed, but my point was that it was not always so. This change  
was something I and others campaigned for quite some time ago.


Indeed, in the early days of ZFS evangelism (when I was still at Sun)  
the issue came up rather often. In those days there were even more  
reasons not to use a zfs send stream as a backup.


My point was made partly to celebrate this welcome change (as  
reflected in the manpage) because I was pleasantly surprised to find  
that the change we had requested has already been delivered.


The other reason for my point was to remind people that there may be  
systems out there for which this may not be the case, in which the old  
restrictions still hold until the relevant patches and/or upgrades  
have been applied.


Indeed, the heads-up link I posted seems to imply that there are still  
some older versions of the ZFS stream format which cannot be imported -  
that backwards compatibility has its reasonable limits.


When it comes to Erik's exciting extension beyond what is already good  
news, perhaps the manpage needs to be stronger? It would be a shame  
for such useful information to be accessible only to random googlers  
and the august readers of this list.


On 14 Nov 2009, at 17:58, Miles Nordin car...@ivy.net wrote:


ph == Phil Harman phil.har...@gmail.com writes:



The format of the stream is committed. You will be able to
receive your streams on future versions of ZFS.


What Erik said is stronger than the man page in an important way,
though.  He said you can dump an old stream into a filesystem on a new
zpool, and when you 'zfs send' the stream back out, it'll be in the
old format.  Keeping this commitment means you can store s10 streams
on snv_xxx backup servers as expanded filesystems, not
stream-inside-a-file, and still restore them onto s10 by 'zfs send'ing
from nevada.  One reason this makes sense as something one might
actually do is that zpools are a lot more resilient to corruption than
'zfs send' streams.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss