Re: [zfs-discuss] Checksums

2009-10-26 Thread Ross
Thanks for the update Adam, that's good to hear.  Do you have a bug ID number 
for this, or happen to know which build it's fixed in?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send... too slow?

2009-10-26 Thread Albert Chin
On Sun, Oct 25, 2009 at 01:45:05AM -0700, Orvar Korvar wrote:
 I am trying to backup a large zfs file system to two different
 identical hard drives. I have therefore started two commands to backup
 myfs and when they have finished, I will backup nextfs
 
 zfs send mypool/m...@now | zfs receive backupzpool1/now & zfs send
 mypool/m...@now | zfs receive backupzpool2/now ; zfs send
 mypool/nex...@now | zfs receive backupzpool3/now
 
 in parallel. The logic is that the same file data is cached and
 therefore easy to send to each backup drive.
 
 Should I instead have done one zfs send... and waited for it to
 complete, and then started the next?
 
 It seems that zfs send... takes quite some time? 300GB takes 10
 hours, this far. And I have in total 3TB to backup. This means it will
 take 100 hours. Is this normal? If I had 30TB to back up, it would
 take 1000 hours, which is more than a month. Can I speed this up?

It's not immediately obvious what the cause is. Maybe the server running
zfs send has slow MB/s performance reading from disk. Maybe the network.
Or maybe the remote system. This might help:
  http://tinyurl.com/yl653am
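A rough way to narrow it down (a sketch; the dataset name is reconstructed
from the post, where the archive truncated it) is to time the send stream
on its own, with the receive taken out of the picture:

  # How fast can the stream be generated at all?
  time zfs send mypool/myfs@now > /dev/null

300GB in 10 hours is roughly 8 MB/s; if the send alone runs at about that
rate, the sending pool's read performance is the bottleneck, and if it runs
much faster, look at the receive side instead.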

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Retrieve per-block checksum algorithm

2009-10-26 Thread Stathis Kamperis
Greetings to everyone.

I'm trying to retrieve the checksumming algorithm on a per-block basis
with zdb(1M). I know it's supposed to be run by Sun's support
engineers only, and I take full responsibility for whatever damage I
cause to my machine by using it.

Now.
I created a tank/test filesystem, dd'ed some files, then changed the
checksum to sha256 and dd'ed some more files. I retrieved the DVAs of
all the files and wanted to verify that some of them are using the
default and the rest the sha256 checksums. The problem is that zdb -R
either returns (null), meaning that printf() has been given a NULL
pointer or it returns corrupt data and there are cases where it works
ok. This is a case where it fails:

$ zdb -R tank:0:f076d8600:7a00:b
Found vdev type: mirror
DVA[0]: vdev_id 1199448952 / 4315c6bdf768bc00
DVA[0]:   GANG: TRUE   GRID:  00bd  ASIZE: eb45ac00
DVA[0]: :1199448952:4315c6bdf768bc00:a75a00:egd
DVA[1]: vdev_id 1938508981 / e19c60208f39cc00
DVA[1]:   GANG: TRUE   GRID:  005d  ASIZE: 11fe4ac00
DVA[1]: :1938508981:e19c60208f39cc00:a75a00:egd
DVA[2]: vdev_id 1231513852 / 646586e9b6609400
DVA[2]:   GANG: FALSE  GRID:  00e6  ASIZE: 15e953200
DVA[2]: :1231513852:646586e9b6609400:a75a00:edd
LSIZE:  6efc00  PSIZE: a75a00
ENDIAN:BIG  TYPE:  (null)
BIRTH:  2a9513965f18afdLEVEL: 24FILL:  85adfa322e48a796
CKFUNC: (null)  COMP:  (null)
CKSUM:  
7408c0516468b934:a0f29a7c28b6c319:28280aab19d1ad3c:64607350c7ea256c
$

Is it a zdb deficiency, or is my input to blame?
Thank you for considering.

Best regards,
Stathis Kamperis
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Retrieve per-block checksum algorithm

2009-10-26 Thread Victor Latushkin

On 26.10.09 14:25, Stathis Kamperis wrote:

Greetings to everyone.

I'm trying to retrieve the checksumming algorithm on a per-block basis
with zdb(1M). I know it's supposed to be run by Sun's support
engineers only, and I take full responsibility for whatever damage I
cause to my machine by using it.

Now.
I created a tank/test filesystem, dd'ed some files, then changed the
checksum to sha256 and dd'ed some more files. I retrieved the DVAs of
all the files and wanted to verify that some of them are using the
default and the rest the sha256 checksums. The problem is that zdb -R
either returns (null), meaning that printf() has been given a NULL
pointer or it returns corrupt data and there are cases where it works
ok. This is a case where it fails:

$ zdb -R tank:0:f076d8600:7a00:b
Found vdev type: mirror
DVA[0]: vdev_id 1199448952 / 4315c6bdf768bc00
DVA[0]:   GANG: TRUE   GRID:  00bd  ASIZE: eb45ac00
DVA[0]: :1199448952:4315c6bdf768bc00:a75a00:egd
DVA[1]: vdev_id 1938508981 / e19c60208f39cc00
DVA[1]:   GANG: TRUE   GRID:  005d  ASIZE: 11fe4ac00
DVA[1]: :1938508981:e19c60208f39cc00:a75a00:egd
DVA[2]: vdev_id 1231513852 / 646586e9b6609400
DVA[2]:   GANG: FALSE  GRID:  00e6  ASIZE: 15e953200
DVA[2]: :1231513852:646586e9b6609400:a75a00:edd
LSIZE:  6efc00  PSIZE: a75a00
ENDIAN:BIG  TYPE:  (null)
BIRTH:  2a9513965f18afdLEVEL: 24FILL:  85adfa322e48a796
CKFUNC: (null)  COMP:  (null)
CKSUM:  
7408c0516468b934:a0f29a7c28b6c319:28280aab19d1ad3c:64607350c7ea256c
$

Is it a zdb deficiency, or is my input to blame?
Thank you for considering.


I guess the -S option can help you get what you are looking for.

Victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Retrieve per-block checksum algorithm

2009-10-26 Thread Stathis Kamperis
2009/10/26 Victor Latushkin victor.latush...@sun.com:
 On 26.10.09 14:25, Stathis Kamperis wrote:

 Greetings to everyone.

 I'm trying to retrieve the checksumming algorithm on a per-block basis
 with zdb(1M). I know it's supposed to be run by Sun's support
 engineers only, and I take full responsibility for whatever damage I
 cause to my machine by using it.
 I guess the -S option can help you get what you are looking for.

 Victor

Hi Victor; thank you for your answer.
I tried -S with no luck. I also tried different levels of
verbosity (-vv, -SS, etc.). E.g.,

$ zdb -S all:all tank/test
Dataset tank/test [ZPL], ID 197, cr_txg 94447, 63.5K, 7 objects
$

From what I've read, there's also zdb - pool, which outputs the checksum
algorithm as a side effect while doing some validation checks. E.g.,

objset 0 object 26 offset 0x76000 [L0 SPA space map] 1000L/c00P
DVA[0]=0:232c680400:c00 DVA[1]=0:108f12a00:c00
DVA[2]=0:430237ce00:c00 fletcher4 lzjb LE contiguous birth=34121
fill=1 cksum=911bd91bf9:fdcdafd76e06:1056870d1b78c0a:c623a15a8f99054a

Just wondering if it is doable for a user-specified block.
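For what it's worth, a sketch of one way to get close to that (assuming the
zdb behaviour of that era, and that the object number of a file is the inode
number reported by ls -i, which I believe holds for ZFS):

  # Dump one object with maximum -d verbosity; the block pointer lines
  # include the checksum algorithm (fletcher2/fletcher4/sha256) per block.
  # <object-number> is a placeholder.
  zdb -dddddd tank/test <object-number>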


Best regards,
Stathis Kamperis
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dumb idea?

2009-10-26 Thread erik.ableson
Or in OS X with smart folders, where you define a set of search terms
and, as write operations occur on the known filesystems, the folder
contents are updated to reflect the current state of the attached
filesystems.


The structures you defined seem to be designed around the idea of
reductionism (i.e., subfolders representing a subset of the parent),
which cannot currently be implemented in Libraries or Smart Folders
since the contents are read-only listings.  I don't know for sure
about the Win7 Libraries behaviour though - it might be more
permissive in this respect...


Erik

On 25 oct. 2009, at 20:48, j...@lentecs.com wrote:

This actually sounds a little like what MS is trying to accomplish
in Win7 with Libraries.  They will act as standard folders if you
treat them as such, but they are really designed to group different
pools of files into one easy place.  You just have to configure them
to pull from local and remote sources.  I have heard it works well
with Windows Home Server and Win7 networks.


It's also similar to what Google and the like are doing with their
web crawlers.


But I think this is something better left to run on top of the file
system rather than integrated into it.  A true database and crawling
bot would seem to be the better method of implementing this.


--Original Message--
From: Orvar Korvar
Sender: zfs-discuss-boun...@opensolaris.org
To: zfs Discuss
Subject: [zfs-discuss] Dumb idea?
Sent: Oct 24, 2009 8:12 AM

Would this be possible to implement on top of ZFS? Maybe it is a dumb
idea, I don't know. What do you think, and how could it be improved?


Assume all files are put in the zpool, helter-skelter. Then you
can create arbitrary filters that show you the files you
want to see.


As of now, you have files in one directory structure. This makes the
organization of the files hardcoded. You have /Movies/Action and
that is it. But if you had all movies in one large zpool, and if you
could programmatically define different structures that act as
filters, you could have different directory structures.


Programmatically defined directory structure1, that acts on the zpool:
/Movies/Action

Programmatically defined directory structure2:
/Movies/Actors/AlPacino

etc.

Maybe this is what MS WinFS was about? Maybe tag the files? Maybe a
relational database on top of ZFS? Maybe no directories at all? I don't
know, just brainstorming. Is this a dumb idea? Or an old idea?

--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Sent from my BlackBerry® smartphone with SprintSpeed
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Checksums

2009-10-26 Thread Cindy Swearingen

Hi Ross,

The CR ID is 6740597:

zfs fletcher-2 is losing its carries

Integrated in Nevada build 114 and the Solaris 10 10/09 release.

This CR didn't get a companion man page bug to update the docs
so I'm working on that now.

The opensolaris.org site seems to be in the middle of its migration
so I can't refer to the public bug database.

Cindy

On 10/26/09 01:00, Ross wrote:

Thanks for the update Adam, that's good to hear.  Do you have a bug ID number 
for this, or happen to know which build it's fixed in?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send... too slow?

2009-10-26 Thread Richard Elling

On Oct 25, 2009, at 1:45 AM, Orvar Korvar wrote:

I am trying to backup a large zfs file system to two different  
identical hard drives. I have therefore started two commands to  
backup myfs and when they have finished, I will backup nextfs


zfs send mypool/m...@now | zfs receive backupzpool1/now & zfs send
mypool/m...@now | zfs receive backupzpool2/now ; zfs send
mypool/nex...@now | zfs receive backupzpool3/now


in parallel. The logic is that the same file data is cached and
therefore easy to send to each backup drive.


Should I instead have done one zfs send... and waited for it to  
complete, and then started the next?


Parallel works, well, in parallel. Unless the changes are in the ARC,
you will be spending a lot of time waiting on disk. So having multiple
sends in parallel, in general, gains parallelism. If you only have a
single HDD, you might not notice much improvement, though.
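A sketch of that pattern, with the dataset names reconstructed from the
original post: background the two sends from the same source, wait for both,
then move on to the next file system.

  zfs send mypool/myfs@now | zfs receive backupzpool1/now &
  zfs send mypool/myfs@now | zfs receive backupzpool2/now &
  wait
  zfs send mypool/nextfs@now | zfs receive backupzpool3/now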

It seems that zfs send... takes quite some time? 300GB takes 10  
hours, this far. And I have in total 3TB to backup. This means it  
will take 100 hours. Is this normal? If I had 30TB to back up, it  
would take 1000 hours, which is more than a month. Can I speed this  
up?


CR 6418042 integrated in b102 and Solaris 10 10/09 improves send  
performance.


Is rsync faster? As I have understood it, zfs send.. gives me an
exact replica, whereas rsync doesn't necessarily do that; maybe the ACLs
are not replicated, etc. Is this correct about rsync vs zfs send?


In general, rsync will be slower, especially if there are millions of
files, because it must stat() every file to determine those that have
changed.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs recv complains about destroyed filesystem

2009-10-26 Thread Robert Milkowski


I created http://defect.opensolaris.org/bz/show_bug.cgi?id=12249


--
Robert Milkowski
http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread Brian
Why does resilvering an entire disk yield a different amount of resilvered
data each time?
I have read that ZFS only resilvers what it needs to, but in the case of 
replacing an entire disk with another formatted clean disk, you would think the 
amount of data would be the same each time a disk is replaced with an empty 
formatted disk. 
I'm getting different results when viewing the 'zpool status' info (below)



For example (I have a two-way mirror with a small file on it).
Raidz pools behave the same.


bash-3.2# zpool replace zp c2t27d0 c2t28d0
bash-3.2# zpool status
  pool: zp
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Mon Oct 26 09:46:21 2009
config:

NAME STATE READ WRITE CKSUM
zp   ONLINE   0 0 0
  mirror ONLINE   0 0 0
c2t26d0  ONLINE   0 0 0
c2t28d0  ONLINE   0 0 0  73K resilvered

errors: No known data errors
bash-3.2# 
bash-3.2# zpool replace zp c2t28d0 c2t29d0
bash-3.2# zpool status
  pool: zp
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Mon Oct 26 09:46:52 2009
config:

NAME STATE READ WRITE CKSUM
zp   ONLINE   0 0 0
  mirror ONLINE   0 0 0
c2t26d0  ONLINE   0 0 0
c2t29d0  ONLINE   0 0 0  83.5K resilvered

errors: No known data errors
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread Bill Sommerfeld
On Mon, 2009-10-26 at 10:24 -0700, Brian wrote:
 Why does resilvering an entire disk yield a different amount of resilvered
 data each time?
 I have read that ZFS only resilvers what it needs to, but in the case of 
 replacing an entire disk with another formatted clean disk, you would think 
 the amount of data would be the same each time a disk is replaced with an 
 empty formatted disk. 
 I'm getting different results when viewing the 'zpool status' info (below)

replacing a disk adds an entry to the zpool history log, which
requires allocating blocks, which will change what's stored in the pool.
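That log is easy to inspect directly (a quick sketch, using the pool name
from the post):

  zpool history zp | tail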


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread A Darren Dunham
On Mon, Oct 26, 2009 at 10:24:16AM -0700, Brian wrote:
 Why does resilvering an entire disk yield a different amount of resilvered
 data each time?
 I have read that ZFS only resilvers what it needs to, but in the case
of replacing an entire disk with another formatted clean disk, you would
think the amount of data would be the same each time a disk is replaced
with an empty formatted disk. 

As long as the amount of data on the other side of the mirror is
identical, you should be correct.  In other words, it copies the in-use
blocks over.  It doesn't copy every block on the disk.

 I'm getting different results when viewing the 'zpool status' info (below)
 
 
 
 For example ( I have a two-way mirror with a small file on it )
 Raidz pools behave the same.
 
 
 bash-3.2# zpool replace zp c2t27d0 c2t28d0
 bash-3.2# zpool status
   pool: zp
  state: ONLINE
  scrub: resilver completed after 0h0m with 0 errors on Mon Oct 26 09:46:21 
 2009
 config:
 
 NAME STATE READ WRITE CKSUM
 zp   ONLINE   0 0 0
   mirror ONLINE   0 0 0
 c2t26d0  ONLINE   0 0 0
 c2t28d0  ONLINE   0 0 0  73K resilvered
 
 errors: No known data errors
 bash-3.2# 
 bash-3.2# zpool replace zp c2t28d0 c2t29d0
 bash-3.2# zpool status
   pool: zp
  state: ONLINE
  scrub: resilver completed after 0h0m with 0 errors on Mon Oct 26 09:46:52 
 2009
 config:
 
 NAME STATE READ WRITE CKSUM
 zp   ONLINE   0 0 0
   mirror ONLINE   0 0 0
 c2t26d0  ONLINE   0 0 0
 c2t29d0  ONLINE   0 0 0  83.5K resilvered

The difference is only about 10K.  That's not much.  The live filesystem
is in flux on the disks as metadata trees are updated, assuming you have
any activity at all (even reads that might be causing inode timestamps
to be rewritten).  I wouldn't consider this difference significant.

-- 
Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-26 Thread Jeremy Kitchen
Jeremy Kitchen wrote:
 Hey folks!
 
 We're using zfs-based file servers for our backups and we've been having
 some issues as of late with certain situations causing zfs/zpool
 commands to hang.

anyone?  this is happening right now and because we're doing a restore I
can't reboot the machine, so it's a prime opportunity to get debugging
information if it'll help.

Thanks!

-Jeremy




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] default child filesystem quota

2009-10-26 Thread Tommy McNeely
I may be searching for the wrong thing, but I am trying to figure out a way to 
set the default quota for child file systems. I tried setting the quota on the 
top level, but that is not the desired effect. I'd like to limit, by default, 
newly created filesystems under a certain dataset to 10G (for example). I see 
this as useful for zfs home directories (are we still doing that?), and 
especially for zone roots. I searched around a little, but couldn't find what I 
was looking for. Can anyone lead me in the right direction?
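As far as I know there is no inherited per-child default quota property, so
a common workaround (a sketch; the names are illustrative) is to set the
quota explicitly whenever a child is created, e.g. from whatever script
provisions the home directory or zone root:

  zfs create -o quota=10g tank/home/alice
  # or, for datasets that already exist:
  zfs set quota=10g tank/zones/myzone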

Thanks in advance,
Tommy
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Change physical path to a zpool.

2009-10-26 Thread Jon Aimone

Hi,

Simple solution. I did, and it did, and things worked swell! Thanx for 
the assist.


I only wish the failure mode were a little easier to interpret... 
perhaps I'll try to file an RFE about that...


Jürgen Keil spake thusly, on or about 10/24/09 06:53:

I have a functional OpenSolaris x64 system on which I need to physically
move the boot disk, meaning its physical device path will change and
probably its cXdX name.

When I do this the system fails to boot


...
  

How do I inform ZFS of the new path?


...
  

Do I need to boot from the LiveCD and then import the
pool from its new path?



Exactly.

Boot from the livecd with the disk connected on the
new physical path, and run pfexec zpool import -f rpool,
followed by a reboot.

That'll update the zpool's label with the new physical
device path information.
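As a sketch of the same steps (assuming the root pool is named rpool):

  # booted from the LiveCD, with the disk on its new physical path
  pfexec zpool import -f rpool
  reboot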
  


--
~~~\0/
Cheers,
Jon.
{-%]

If you always do what you've always done, you'll always get what you've always 
gotten.
- Anon.

When someone asks you, "Penny for your thoughts," and you put your two cents
in, what happens to the other penny?
- G. Carlin (May 12, 1937 - June 22, 2008)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send... too slow?

2009-10-26 Thread Marion Hakanson
knatte_fnatte_tja...@yahoo.com said:
 Is rsync faster? As I have understood it, zfs send.. gives me an exact
 replica, whereas rsync doesn't necessarily do that; maybe the ACLs are not
 replicated, etc. Is this correct about rsync vs zfs send?

It is true that rsync (as of 3.0.5, anyway) does not preserve NFSv4/ZFS
ACLs.  It also cannot handle ZFS snapshots.

On the other hand, you can run multiple rsyncs in parallel; you can
only do that with zfs send/recv if you have multiple, independent ZFS
datasets that can be done in parallel.  So which one goes faster will
depend on your situation.
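A sketch of the parallel-rsync approach (host and paths are illustrative),
with one rsync per top-level directory:

  for d in /tank/fs/*/ ; do
      rsync -a "$d" backuphost:/backup/"$(basename "$d")"/ &
  done
  wait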

Regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance problems with Thumper and 7TB ZFS pool using RAIDZ2

2009-10-26 Thread Marion Hakanson
opensolaris-zfs-disc...@mlists.thewrittenword.com said:
 Is it really pointless? Maybe they want the insurance RAIDZ2 provides. Given
 the choice between insurance and performance, I'll take insurance, though it
 depends on your use case. We're using 5-disk RAIDZ2 vdevs. 
 . . .
 Would love to hear other opinions on this. 

Hi again Albert,

On our Thumper, we use 7x 6-disk raidz2's (750GB drives).  It seems a good
compromise between capacity, IOPS, and data protection.  Like you, we are
afraid of the possibility of a 2nd disk failure during resilvering of these
large drives.  Our usage is a mix of disk-to-disk-to-tape backups, archival,
and multi-user (tens of users) NFS/SFTP service, in roughly that order
of load.  We have had no performance problems with this layout.
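For reference, a sketch of how such a pool is created (device names are
illustrative; five more 6-disk groups would follow the two shown):

  zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  # ...plus five more raidz2 groups for the full 7x6 layout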

Regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-26 Thread Mertol Ozyoney
7.x FW on the 2500 and 6000 series does not operate the same way as 6.x FW does.
So on some/most loads the ignore-cache-sync-commands option may not improve
performance as expected.

Best regards
Mertol 



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun.com



-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
Sent: Tuesday, October 13, 2009 6:05 PM
To: Nils Goroll
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

On Tue, 13 Oct 2009, Nils Goroll wrote:

 I am trying to find out some definite answers on what needs to be done on an
 STK 2540 to set the Ignore Cache Sync option. The best I could find is Bob's
 Sun StorageTek 2540 / ZFS Performance Summary (dated Feb 28, 2008, thank
 you, Bob), in which he quotes a posting of Joel Miller:

I should update this paper since the performance is now radically 
different and the StorageTek 2540 CAM configurables have changed.

 Is this information still current for F/W 07.35.44.10 ?

I suspect that the settings don't work the same as before, but don't 
know how to prove it.

 Bonus question: Is there a way to determine the setting which is currently
 active, if I don't know if the controller has been booted since the nvsram
 potentially got modified?

From what I can tell, the controller does not forget these settings 
due to a reboot or firmware update.  However, new firmware may not 
provide the same interpretation of the values.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-26 Thread Mertol Ozyoney
Hi Bob;

In all 2500 and 6000 series arrays you can assign RAID sets to a controller,
and that controller becomes the owner of the set.
Generally, do not force drives to switch between controllers; one controller
always owns a disk while the other waits in standby. Some arrays use ALUA and
re-route traffic arriving at the non-preferred controller to the preferred
controller. While some companies market this as a true active-active setup,
it reduces performance significantly if the host is not 100% ALUA aware,
although this architecture does solve the problem of setting up MPxIO on
hosts.

It's likely that at some point Sun may release FW to enable ALUA on these
controllers, but this definitely won't improve performance.

The advantage of the 2540 over its bigger brother (the 6140, which is
EOL'ed) and its competitors is that the 2540 uses dedicated data paths for
cache mirroring, just like the higher-end units (6180, 6580, 6780),
improving write performance significantly.

Splitting load between controllers can increase performance most of the
time, but you do not need to split into two equal partitions.

Also, do not forget that the first tray has dedicated data lines to the
controller, so generally it's wise not to mix those drives with drives
on other trays.

Best regards
Mertol  




Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun.com



-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
Sent: Tuesday, October 13, 2009 10:59 PM
To: Nils Goroll
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

On Tue, 13 Oct 2009, Nils Goroll wrote:

 Regarding my bonus question: I haven't found yet a definite answer if there
 is a way to read the currently active controller setting. I still assume that
 the nvsram settings which can be read with

   service -d arrayname -c read -q nvsram region=0xf2 host=0x00

 do not necessarily reflect the current configuration and that the only way to
 make sure the controller is running with that configuration is to reset it.

I believe that in the STK 2540, the controllers operate Active/Active 
except that each controller is Active for half the drives and Standby 
for the others.  Each controller has a copy of the configuration 
information.  Whichever one you communicate with is likely required to 
mirror the changes to the other.

In my setup I load-share the fiber channel traffic by assigning six 
drives as active on one controller and six drives as active on the 
other controller, and the drives are individually exported with a LUN 
per drive.  I used CAM to do that.  MPXIO sees the changes and does 
map 1/2 the paths down each FC link for more performance than one FC 
link offers.
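A quick way to confirm that MPxIO is indeed seeing two paths per LUN in a
setup like this (a sketch; the exact output format varies by release):

  mpathadm list lu
  # each logical unit is listed with its total and operational path counts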

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send... too slow?

2009-10-26 Thread Richard Elling

On Oct 26, 2009, at 11:51 AM, Marion Hakanson wrote:


knatte_fnatte_tja...@yahoo.com said:
Is rsync faster? As I have understood it, zfs send.. gives me an  
exact
replica, whereas rsync doesnt necessary do that, maybe the ACL are  
not

replicated, etc. Is this correct about rsync vs zfs send?


It is true that rsync (as of 3.0.5, anyway) does not preserve NFSv4/ 
ZFS

ACL's.  It also cannot handle ZFS snapshots.

On the other hand, you can run multiple rsync's in parallel;  You can
only do that with zfs send/recv if you have multiple, independent ZFS
datasets that can be done in parallel.  So which one goes faster will
depend on your situation.


Yes. Your configuration and intended use impacts the decision.

Also, b119 improves stat() performance, which should help rsync
and other file-based backup software.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6775100
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] fishworks on x4275?

2009-10-26 Thread Mertol Ozyoney
Hi Trevor;

 

As can be seen from my email address and signature below, my answer will be
quite biased :-)

 

To be honest, while converting every X series server, with millions of
alternative configurations, to a Fishworks appliance may not be extremely
difficult, it would be impossible to support them all.

So Sun has to limit the number of configurations that need to be supported
to a reasonable number. (Even this limited number of systems and options
still adds up to flexibility and a number of choices not seen elsewhere.)

 

However, I agree that the ability to convert a 4540 to a 7210 would have
been nice.

 

Best regards

Mertol 

 

 

 



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun.com

 

 

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Trevor Pretty
Sent: Sunday, October 18, 2009 11:53 PM
To: Frank Cusack
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] fishworks on x4275?

 

Frank

I've been looking into:-
http://www.nexenta.com/corp/index.php?option=com_content&task=blogsection&id=4&Itemid=128

Only played with a VM so far on my laptop, but it does seem to be an
alternative to the Sun product if you don't want to buy a S7000.

IMHO: Sun are missing a great opportunity by not offering a reasonable upgrade
path from an X to an S7000.



Trevor Pretty | Technical Account Manager | T: +64 9 639 0652 | M: +64 21
666 161 
Eagle Technology Group Ltd. 
Gate D, Alexandra Park, Greenlane West, Epsom 
Private Bag 93211, Parnell, Auckland 



Frank Cusack wrote: 

Apologies if this has been covered before, I couldn't find anything
in my searching.
 
Can the software which runs on the 7000 series servers be installed
on an x4275?
 
-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  

www.eagle.co.nz

This email is confidential and may be legally privileged. If received in
error please destroy and immediately notify us.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-26 Thread Albert Chin
On Mon, Oct 26, 2009 at 09:58:05PM +0200, Mertol Ozyoney wrote:
 In all 2500 and 6000 series you can assign raid set's to a controller and
 that controller becomes the owner of the set. 

When I configured all 32 drives on a 6140 array and the expansion
chassis, CAM automatically split the drives amongst controllers evenly.

 The advantage of 2540 against it's bigger brothers (6140 which is EOL'ed)
 and competitors 2540 do use dedicated data paths for cache mirroring just
 like higher end unit disks (6180,6580, 6780) improving write performance
 significantly. 
 
 Spliting load between controllers can most of the time increase performance,
 but you do not need to split in two equal partitions. 
 
 Also do not forget that first tray have dedicated data lines to the
 controller so generaly it's wise not to mix those drives with other drives
 on other trays. 

But, if you have an expansion chassis, and create a zpool with drives on
the first tray and subsequent trays, what's the difference? You cannot
tell zfs which vdev to assign writes to so it seems pointless to balance
your pool based on the chassis when reads/writes are potentially spread
across all vdevs.
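How evenly ZFS actually spreads the I/O can be watched with something like
this (a sketch; <pool> is a placeholder):

  # per-vdev I/O statistics, refreshed every 5 seconds
  zpool iostat -v <pool> 5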

 Best regards
 Mertol  
 
 
 
 
 Mertol Ozyoney 
 Storage Practice - Sales Manager
 
 Sun Microsystems, TR
 Istanbul TR
 Phone +902123352200
 Mobile +905339310752
 Fax +90212335
 Email mertol.ozyo...@sun.com
 
 
 
 -Original Message-
 From: zfs-discuss-boun...@opensolaris.org
 [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
 Sent: Tuesday, October 13, 2009 10:59 PM
 To: Nils Goroll
 Cc: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)
 
 On Tue, 13 Oct 2009, Nils Goroll wrote:
 
  Regarding my bonus question: I haven't found yet a definite answer if there
  is a way to read the currently active controller setting. I still assume that
  the nvsram settings which can be read with
 
    service -d arrayname -c read -q nvsram region=0xf2 host=0x00
 
  do not necessarily reflect the current configuration and that the only way to
  make sure the controller is running with that configuration is to reset it.
 
 I believe that in the STK 2540, the controllers operate Active/Active 
 except that each controller is Active for half the drives and Standby 
 for the others.  Each controller has a copy of the configuration 
 information.  Whichever one you communicate with is likely required to 
 mirror the changes to the other.
 
 In my setup I load-share the fiber channel traffic by assigning six 
 drives as active on one controller and six drives as active on the 
 other controller, and the drives are individually exported with a LUN 
 per drive.  I used CAM to do that.  MPXIO sees the changes and does 
 map 1/2 the paths down each FC link for more performance than one FC 
 link offers.
 
 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Trevor Pretty




Paul

Being a script hacker like you, the only kludge I can think of is a
script that does something like:

ls > /tmp/foo
sleep 60    # polling interval; 60 is a placeholder
ls > /tmp/foo.new
diff /tmp/foo /tmp/foo.new > /tmp/files_that_have_changed
mv /tmp/foo.new /tmp/foo

Or you might be able to knock something up with bart and zfs snapshots. I did
write this, which may help:

#!/bin/sh

#set -x

# Note: No implied warranty etc. applies. 
# Don't cry if it does not work. I'm an SE not a programmer!
#
###
#
# Version 29th Jan. 2009
#
# GOAL: Show what files have changed between snapshots
#
# But of course it could be any two directories!!
#
###
#

## Set some variables
#
SCRIPT_NAME=$0
FILESYSTEM=$1
SNAPSHOT=$2
FILESYSTEM_BART_FILE=/tmp/filesystem.$$
SNAPSHOT_BART_FILE=/tmp/snapshot.$$
CHANGED_FILES=/tmp/changes.$$


## Declare some commands (just in case PATH is wrong, like cron)
#
BART=/bin/bart


## Usage
# 
Usage()
{
 echo ""
 echo ""
 echo "Usage: $SCRIPT_NAME -q filesystem snapshot "
 echo ""
 echo " -q will stop all echos and just list the changes"
  echo ""
 echo "Examples"
 echo " $SCRIPT_NAME /home/fred /home/.zfs/snapshot/fred "
 echo " $SCRIPT_NAME . /home/.zfs/snapshot/fred
" 
  echo ""
 echo ""
 exit 1
}

### Main Part ###


## Check Usage
#
if [ $# -ne 2 ]; then
 Usage
fi

## Check we have different directories
#
if [ "$1" = "$2" ]; then
 Usage
fi


## Handle dot
#
if [ "$FILESYSTEM" = "." ]; then
 cd $FILESYSTEM ; FILESYSTEM=`pwd`
fi
if [ "$SNAPSHOT" = "." ]; then
 cd $SNAPSHOT ; SNAPSHOT=`pwd`
fi

## Check the filesystems exists It should be a directory
# and it should have some files
#
for FS in "$FILESYSTEM" "$SNAPSHOT"
do
 if [ ! -d "$FS" ]; then
  echo ""
  echo "ERROR file system $FS does not exist"
  echo ""
  exit 1
 fi 
 if [ X"`/bin/ls "$FS"`" = "X" ]; then
  echo ""
  echo "ERROR file system $FS seems to be empty"
  exit 1
  echo ""
 fi
done



## Create the bart files
#

echo ""
echo "Creating bart file for $FILESYSTEM can take a while.."
cd "$FILESYSTEM" ; $BART create -R .  $FILESYSTEM_BART_FILE
echo ""
echo "Creating bart file for $SNAPSHOT can take a while.."
cd "$SNAPSHOT" ; $BART create -R .  $SNAPSHOT_BART_FILE


## Compare them and report the diff
#
echo ""
echo "Changes"
echo ""
$BART compare -p $FILESYSTEM_BART_FILE $SNAPSHOT_BART_FILE | awk '{print $1}' > $CHANGED_FILES
/bin/more $CHANGED_FILES
echo ""
echo ""
echo ""

## Tidy kiwi
#
/bin/rm $FILESYSTEM_BART_FILE
/bin/rm $SNAPSHOT_BART_FILE
/bin/rm $CHANGED_FILES

exit 0





Paul Archer wrote:

  5:12pm, Cyril Plisko wrote:

  
  

  Question: Is there a facility similar to inotify that I can use to monitor a
directory structure in OpenSolaris/ZFS, such that it will block until a file
is modified (added, deleted, etc), and then pass the state along (STDOUT is
fine)? One other requirement: inotify can handle subdirectories being added
on the fly. So if you use it to monitor, for example, /data/images/incoming,
and a /data/images/incoming/100canon directory gets created, then the files
under that directory will automatically be monitored as well.
  


while there is no inotify for Solaris, there are similar technologies available.

Check port_create(3C) and gam_server(1)


  
  I can't find much on gam_server on Solaris (couldn't find too much on it at 
all, really), and port_create is apparently a system call. (I'm not a 
developer--if I can't write it in BASH, Perl, or Ruby, I can't write it.)
I appreciate the suggestions, but I need something a little more pret-a-porte.

Does anyone have any dtrace experience? I figure this could probably be done 
with dtrace, but I don't know enough about it to write a dtrace script 
(although I may learn if that turns out to be the best way to go). I was 
hoping that there'd be a script out there already, but I haven't turned up 
anything yet.

Paul
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Carson Gaspar

On 10/25/09 5:38 PM, Paul Archer wrote:

5:12pm, Cyril Plisko wrote:



while there is no inotify for Solaris, there are similar technologies
available.

Check port_create(3C) and gam_server(1)


I can't find much on gam_server on Solaris (couldn't find too much on it
at all, really), and port_create is apparently a system call. (I'm not a
developer--if I can't write it in BASH, Perl, or Ruby, I can't write it.)
I appreciate the suggestions, but I need something a little more
pret-a-porte.


Your Google-fu needs work ;-)

Main Gamin page: http://www.gnome.org/~veillard/gamin/index.html
Perl module: http://search.cpan.org/~garnacho/Sys-Gamin-0.1/lib/Sys/Gamin.pm

libev (and the EV perl module) will hide port_create() from you, but from a 
quick skim it may not have the functionality you want.


--
Carson

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Richard Elling
How about a consumer for gvfs-monitor-dir(1) or gvfs-monitor-file(1)? :-)

 -- richard

On Oct 26, 2009, at 3:17 PM, Carson Gaspar wrote:


On 10/25/09 5:38 PM, Paul Archer wrote:

5:12pm, Cyril Plisko wrote:


while there is no inotify for Solaris, there are similar  
technologies

available.

Check port_create(3C) and gam_server(1)

I can't find much on gam_server on Solaris (couldn't find too much  
on it
at all, really), and port_create is apparently a system call. (I'm  
not a
developer--if I can't write it in BASH, Perl, or Ruby, I can't  
write it.)

I appreciate the suggestions, but I need something a little more
pret-a-porte.


Your Google-fu needs work ;-)

Main Gamin page: http://www.gnome.org/~veillard/gamin/index.html
Perl module: http://search.cpan.org/~garnacho/Sys-Gamin-0.1/lib/Sys/Gamin.pm

libev (and the EV perl module) will hide port_create() from you, but  
from a quick skim it may not have the functionality you want.


--
Carson

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Carson Gaspar

On 10/26/09 3:31 PM, Richard Elling wrote:

How about a consumer for gvfs-monitor-dir(1) or gvfs-monitor-file(1)? :-)


The docs are... ummm... skimpy is being rather polite. The docs I can find via 
Google say that they will launch some random unspecified daemons via d-bus (I 
assume gvfsd and gvfsd-${accessmethod}). This implies that you need to start a 
d-bus session to use them. gvfsd (no man page or docs of any kind that I can 
find) is linked against libgio, which has unresolved symbols against 
port_create() and friends, which is a good sign that they don't just poll.


--
Carson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-26 Thread Cindy Swearingen

Hi Jeremy,

Can you use the command below and send me the output, please?

Thanks,

Cindy

# mdb -k
 ::stacks -m zfs

On 10/26/09 11:58, Jeremy Kitchen wrote:

Jeremy Kitchen wrote:

Hey folks!

We're using zfs-based file servers for our backups and we've been having
some issues as of late with certain situations causing zfs/zpool
commands to hang.


anyone?  this is happening right now and because we're doing a restore I
can't reboot the machine, so it's a prime opportunity to get debugging
information if it'll help.

Thanks!

-Jeremy






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Richard Elling

On Oct 26, 2009, at 3:56 PM, Carson Gaspar wrote:


On 10/26/09 3:31 PM, Richard Elling wrote:
How about a consumer for gvfs-monitor-dir(1) or gvfs-monitor-file(1)? :-)


The docs are... ummm... skimpy is being rather polite. The docs I
can find via Google say that they will launch some random
unspecified daemons via d-bus (I assume gvfsd and
gvfsd-${accessmethod}). This implies that you need to start a d-bus
session to use them. gvfsd (no man page or docs of any kind that I can
find) is linked against libgio, which has unresolved symbols against
port_create() and friends, which is a good sign that they don't just
poll.


I haven't dug into the details, and this has nothing to do with ZFS, but
observe the following example:

$ gvfs-monitor-file /zwimming/whee
File Monitor Event:
File = /zwimming/whee
Event = ATTRIB CHANGED
File Monitor Event:
File = /zwimming/whee
Event = CHANGED
File Monitor Event:
File = /zwimming/whee
Event = CHANGES_DONE_HINT

...while in another tab I simply did "touch /zwimming/whee".
gvfs-* commands seem more suitable for scripts or programs
than humans. But it doesn't look like a difficult script to write
in any of the scripting languages. I presume the gvfs-*
commands will be more portable than inotify and others...?
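A minimal consumer along those lines (a sketch that just parses the output
shown above; the path is taken from the example):

  gvfs-monitor-file /zwimming/whee | while read line; do
      case "$line" in
          *CHANGES_DONE_HINT*) echo "`date`: /zwimming/whee settled" ;;
      esac
  done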
-- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-26 Thread Jeremy Kitchen
Cindy Swearingen wrote:
  Hi Jeremy,
 
  Can you use the command below and send me the output, please?
 
  Thanks,
 
  Cindy
 
  # mdb -k
  ::stacks -m zfs

Ack!  It *just* fully died.  I've had our NOC folks reset the machine
and I will get this info to you as soon as it happens again (I'm fairly
certain it will, if not on this specific machine, one of our other
machines!)

-Jeremy




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread paul
 I can't find much on gam_server on Solaris (couldn't find too much on it
 at all, really), and port_create is apparently a system call. (I'm not a
 developer--if I can't write it in BASH, Perl, or Ruby, I can't write
 it.)
 I appreciate the suggestions, but I need something a little more
 pret-a-porte.

 Your Google-fu needs work ;-)

 Main Gamin page: http://www.gnome.org/~veillard/gamin/index.html

Actually, I found this page, which has this gem: "At this point Gamin is
fairly tied to Linux, portability is not a primary goal at this stage but
if you have portability patches they are welcome."

Unfortunately, I'm trying for a Solaris solution. I already had a Linux
solution (the 'inotify' I started out with).

Paul

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Carson Gaspar

On 10/26/09 5:33 PM, p...@paularcher.org wrote:

I can't find much on gam_server on Solaris (couldn't find too much on it
at all, really), and port_create is apparently a system call. (I'm not a
developer--if I can't write it in BASH, Perl, or Ruby, I can't write
it.)
I appreciate the suggestions, but I need something a little more
pret-a-porte.


Your Google-fu needs work ;-)

Main Gamin page: http://www.gnome.org/~veillard/gamin/index.html


Actually, I found this page, which has this gem: At this point Gamin is
fairly tied to Linux, portability is not a primary goal at this stage but
if you have portability patches they are welcome.


Much has changed since that text was written, including support for the event 
completion framework (port_create() and friends, introduced with Sol 10) on 
Solaris, thus the recommendation for gam_server / gamin.


$ nm /usr/lib/gam_server | grep port_create
[458]   | 134589544| 0|FUNC |GLOB |0|UNDEF  |port_create


Unfortunately, I'm trying for a Solaris solution. I already had a Linux
solution (the 'inotify' I started out with).


And we're on a Solaris mailing list, trying to give you solutions that work on 
Solaris. Don't believe everything you read on the Internet ;-)


--
Carson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-26 Thread Adam Leventhal
With that said I'm concerned that there appears to be a fork between  
the opensource version of ZFS and ZFS that is part of the Sun/Oracle  
FishWorks 7nnn series appliances.  I understand (implicitly) that  
Sun (/Oracle) as a commercial concern, is free to choose their own  
priorities in terms of how they use their own IP (Intellectual  
Property) - in this case, the source for the ZFS filesystem.


Hey Al,

I'm unaware of specific plans for management either at Sun or at  
Oracle, but from an engineering perspective suffice it to say that it  
is simpler and therefore more cost effective to develop for a single,  
unified code base, to amortize the cost of testing those  
modifications, and to leverage the enthusiastic ZFS community to  
assist with the development and testing of ZFS.


Again, this isn't official policy, just the simple facts on the ground  
from engineering.


I'm not sure what would lead you to believe that there is a fork between  
the open source / OpenSolaris ZFS and what we have in Fishworks.  
Indeed, we've made efforts to make sure there is a single ZFS for the  
reason stated above. Any differences that exist are quickly migrated  
to ON as you can see from the consistent work of Eric Schrock.


Adam

--
Adam Leventhal, Fishworks            http://blogs.sun.com/ahl

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS near-synchronous replication...

2009-10-26 Thread Mike Watkins
Anyone have any creative solutions for near-synchronous replication between
2 ZFS hosts?
Near-synchronous, meaning RPO X---0

I realize performance will take a hit.

Thanks,
Mike
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread David Magda

On Oct 26, 2009, at 20:42, Carson Gaspar wrote:

Unfortunately, I'm trying for a Solaris solution. I already had a  
Linux

solution (the 'inotify' I started out with).


And we're on a Solaris mailing list, trying to give you solutions  
that work on Solaris. Don't believe everything you read on the  
Internet ;-)


Gamin is also more portable than 'inotify', so you could have one set  
of code for multiple platforms:


http://www.freshports.org/search.php?query=gamin

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-26 Thread David Turnbull
I'm having similar issues, with two AOC-USAS-L8i Supermicro 1068e  
cards mpt2 and mpt3, running 1.26.00.00IT

It seems to only affect a specific revision of disk. (???)

sd67  Soft Errors: 0 Hard Errors: 127 Transport Errors: 3416
Vendor: ATA  Product: WDC WD10EACS-00D Revision: 1A01 Serial No:
Size: 1000.20GB 1000204886016 bytes

sd58  Soft Errors: 0 Hard Errors: 83 Transport Errors: 2087
Vendor: ATA  Product: WDC WD10EACS-00D Revision: 1A01 Serial No:
Size: 1000.20GB 1000204886016 bytes

There are 8 other disks on the two controllers:
6xWDC WD10EACS-00Z Revision: 1B01 (no errors)
2xSAMSUNG HD103UJ  Revision: 1113 (no errors)

The two EACS-00D disks are in seperate enclosures with new SAS-SATA  
fanout cables.


Example error messages:

Oct 27 14:26:05 fleet scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/ 
pci1002,5...@2/pci15d9,a...@0 (mpt2):

Oct 27 14:26:05 fleet   wwn for target has changed

Oct 27 14:25:56 fleet scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/ 
pci1002,5...@3/pci15d9,a...@0 (mpt3):

Oct 27 14:25:56 fleet   wwn for target has changed

Oct 27 14:25:57 fleet scsi: [ID 243001 kern.warning] WARNING: /p...@0,0/ 
pci1002,5...@2/pci15d9,a...@0 (mpt2):
Oct 27 14:25:57 fleet   mpt_handle_event_sync: IOCStatus=0x8000,  
IOCLogInfo=0x31110d00


Oct 27 14:25:48 fleet scsi: [ID 243001 kern.warning] WARNING: /p...@0,0/ 
pci1002,5...@3/pci15d9,a...@0 (mpt3):
Oct 27 14:25:48 fleet   mpt_handle_event_sync: IOCStatus=0x8000,  
IOCLogInfo=0x31110d00


Oct 27 14:26:01 fleet scsi: [ID 365881 kern.info] /p...@0,0/ 
pci1002,5...@2/pci15d9,a...@0 (mpt2):

Oct 27 14:26:01 fleet   Log info 0x31110d00 received for target 1.
Oct 27 14:26:01 fleet   scsi_status=0x0, ioc_status=0x804b,  
scsi_state=0xc


Oct 27 14:25:51 fleet scsi: [ID 365881 kern.info] /p...@0,0/ 
pci1002,5...@3/pci15d9,a...@0 (mpt3):

Oct 27 14:25:51 fleet   Log info 0x31120403 received for target 2.
Oct 27 14:25:51 fleet   scsi_status=0x0, ioc_status=0x804b,  
scsi_state=0xc


On 22/10/2009, at 10:40 PM, Bruno Sousa wrote:


Hi all,

Recently i upgrade from snv_118 to snv_125, and suddently i started  
to see this messages at /var/adm/messages :


Oct 22 12:54:37 SAN02 scsi: [ID 243001 kern.warning] WARNING: / 
p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:54:37 SAN02  mpt_handle_event: IOCStatus=0x8000,  
IOCLogInfo=0x3112011a
Oct 22 12:56:47 SAN02 scsi: [ID 243001 kern.warning] WARNING: / 
p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:56:47 SAN02  mpt_handle_event_sync: IOCStatus=0x8000,  
IOCLogInfo=0x3112011a
Oct 22 12:56:47 SAN02 scsi: [ID 243001 kern.warning] WARNING: / 
p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:56:47 SAN02  mpt_handle_event: IOCStatus=0x8000,  
IOCLogInfo=0x3112011a
Oct 22 12:56:50 SAN02 scsi: [ID 243001 kern.warning] WARNING: / 
p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:56:50 SAN02  mpt_handle_event_sync: IOCStatus=0x8000,  
IOCLogInfo=0x3112011a
Oct 22 12:56:50 SAN02 scsi: [ID 243001 kern.warning] WARNING: / 
p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:56:50 SAN02  mpt_handle_event: IOCStatus=0x8000,  
IOCLogInfo=0x3112011a



Is this a symptom of a disk error or some change was made in the  
driver?,that now i have more information, where in the past such  
information didn't appear?


Thanks,
Bruno

I'm using a LSI Logic SAS1068E B3 and i within lsiutil i have this  
behaviour :



1 MPT Port found

     Port Name         Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC
 1.  mpt0              LSI Logic SAS1068E B3     105      011a          0


Select a device:  [1-1 or 0 to quit] 1

1.  Identify firmware, BIOS, and/or FCode
2.  Download firmware (update the FLASH)
4.  Download/erase BIOS and/or FCode (update the FLASH)
8.  Scan for devices
10.  Change IOC settings (interrupt coalescing)
13.  Change SAS IO Unit settings
16.  Display attached devices
20.  Diagnostics
21.  RAID actions
22.  Reset bus
23.  Reset target
42.  Display operating system names for devices
45.  Concatenate SAS firmware and NVDATA files
59.  Dump PCI config space
60.  Show non-default settings
61.  Restore default settings
66.  Show SAS discovery errors
69.  Show board manufacturing information
97.  Reset SAS link, HARD RESET
98.  Reset SAS link
99.  Reset port
e   Enable expert mode in menus
p   Enable paged mode
w   Enable logging

Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 20

1.  Inquiry Test
2.  WriteBuffer/ReadBuffer/Compare Test
3.  Read Test
4.  Write/Read/Compare Test
8.  Read Capacity / Read Block Limits Test
12.  Display phy counters
13.  Clear phy counters
14.  SATA SMART Read Test
15.  SEP (SCSI Enclosure Processor) Test
18.  Report LUNs Test
19.  Drive firmware download
20.  Expander firmware download
21.  Read Logical Blocks
99.  Reset port
e   Enable expert mode in menus
p   Enable paged mode
w   Enable logging

Diagnostics menu, select an option:  [1-99 or e/p/w or 0 to quit] 12

Adapter Phy 0:  Link 

Re: [zfs-discuss] ZFS near-synchronous replication...

2009-10-26 Thread Richard Elling

On Oct 26, 2009, at 7:36 PM, Mike Watkins wrote:

Anyone have any creative solutions for near-synchronous replication  
between 2 ZFS hosts?

Near-synchronous, meaning RPO X---0


Many Solaris solutions are using AVS for this. But you could use
block-level replication from a number of vendors.
http://hub.opensolaris.org/bin/view/Project+avs/
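At the file-system level, a cruder alternative is a tight incremental
send/receive loop (a sketch; names, host, and interval are illustrative, and
the achievable RPO is bounded by how long each incremental takes):

  prev=repl_0
  zfs snapshot tank/data@$prev
  zfs send tank/data@$prev | ssh otherhost zfs receive -F tank/data
  while true; do
      cur=repl_`date +%s`
      zfs snapshot tank/data@$cur
      zfs send -i @$prev tank/data@$cur | ssh otherhost zfs receive -F tank/data
      zfs destroy tank/data@$prev    # old snapshots also pile up on the receiver
      prev=$cur
      sleep 10
  done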


I realize performance will take a hit.


In general, yes. But it will depend quite a bit on the workload.
For normal file system workloads with writes deferred up to
30 seconds, you may not notice the replication hit.  For sync
workloads, it is more likely to be noticeable.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Anil
I haven't tried this, but this must be very easy with dtrace. How come no one 
mentioned it yet? :) You would have to monitor some specific syscalls...
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Nicolas Williams
On Mon, Oct 26, 2009 at 08:53:50PM -0700, Anil wrote:
 I haven't tried this, but this must be very easy with dtrace. How come
 no one mentioned it yet? :) You would have to monitor some specific
 syscalls...

DTrace is not reliable in this sense: it will drop events rather than
overburden the system.  Also, system calls are not the only thing you
want to watch for -- you should really trace the VFS/fop rather than
syscalls for this.  In any case, port_create(3C) and gamin are the way
forward.

port_create(3C) is rather easy to use.  Searching the web for
PORT_SOURCE_FILE you'll find useful docs like:

http://blogs.sun.com/praks/entry/file_events_notification

which has example code too.

I do think it'd be useful to have a command-line utility in core Solaris
that uses this facility, something like the example in Prakash's blog
(which, incidentally, _works_), but perhaps a bit more complete.

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss