[zfs-discuss] ZFS and dual-pathed A5200

2006-10-18 Thread Hong Wei Liam

Hi,

I understand that ZFS leaves multipathing to MPxIO or the like. For a
combination of a dual-pathed A5200 with QLGC2100 HBAs (non-Leadville
stack), how would ZFS react to seeing this?


WL


Re: [zfs-discuss] legato support

2006-10-18 Thread Mark Shellenbaum

Gregory Shaw wrote:
Hey, does anybody know the timeframe for when Legato Networker will 
support ZFS?




Looks like EMC has released NetWorker version 7.3.2, which has ZFS support.

  -Mark


Re: [zfs-discuss] ZFS and IBM sdd (vpath)

2006-10-18 Thread Geoffroy Doucet
You are right, I am not the first one; with a little bit of research I
found this thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2006-July/003937.html

> On Wed, Oct 18, 2006 at 06:57:21AM -0700, Geoffroy Doucet wrote:
>> Hello, I am trying to configure ZFS with IBM sdd. IBM sdd is like
>> powerpath, MPXIO or VxDMP.
>>
>> Here is the error message when I try to create my pool:
>> bash-3.00# zpool  create tank /dev/dsk/vpath1a
>> warning: device in use checking failed: No such device
>> internal error: unexpected error 22 at line 446 of
>> ../common/libzfs_pool.c
>> bash-3.00# zpool  create tank /dev/dsk/vpath1c
>> cannot open '/dev/dsk/vpath1c': I/O error
>> bash-3.00# zpool  create tank vpath1
>> cannot open 'vpath1': no such device in /dev/dsk
>> must be a full path or shorthand device name
>> bash-3.00# zpool  create tank vpath1c
>> cannot open '/dev/dsk/vpath1c': I/O error
>> bash-3.00# zpool  create tank vpath1a
>> warning: device in use checking failed: No such device
>> internal error: unexpected error 22 at line 446 of
>> ../common/libzfs_pool.c
>
> My guess for the first case (vpath1a) is that the IBM driver is not
> correctly implementing the necessary DDI properties for ZFS to determine
> the configuration of the device.  What bits are you using?  Off the top
> of my head, this should catch the most likely candidate:
>
>   # dtrace -n ldi_get_size:return'{trace(arg1)}'
>
> I'd also check to see if there are any updates from IBM.  From the looks
> of it, 'vpath1c' cannot even be opened from userland, so something else
> seems misconfigured in that case.
>
> - Eric
>
> --
> Eric Schrock, Solaris Kernel Development
> http://blogs.sun.com/eschrock
>




Re: [zfs-discuss] ZFS and IBM sdd (vpath)

2006-10-18 Thread Torrey McMahon

Eric Schrock wrote:

Chances are they have already heard of this bug, as I seem to
remember it coming up before.


Comes up all the time on the storage lists, especially when people try
to get MPxIO to work with the underlying arrays instead of sdd (vpath).
Last I recall they aren't exporting the LUNs with the proper IEEE
Registered Extended bits... but that was a while ago.
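
For anyone who wants to try the MPxIO route instead of sdd, a minimal
sketch of the usual scsi_vhci.conf entry for a third-party symmetric
array is below. The "IBM     2105" vendor/product string is only an
illustrative guess (use the inquiry data your LUNs actually report,
with the vendor ID padded to eight characters), and whether the array
behaves symmetrically enough for this to be safe is exactly the open
question:

    # /kernel/drv/scsi_vhci.conf (fragment; vendor/product ID is a guess)
    device-type-scsi-options-list =
            "IBM     2105", "symmetric-option";
    symmetric-option = 0x1000000;

A reconfiguration reboot is needed before scsi_vhci will claim the
paths, and the LUNs still have to present usable identifiers for it to
work, which is the problem described above.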



Re: [zfs-discuss] ZFS and IBM sdd (vpath)

2006-10-18 Thread Eric Schrock
On Wed, Oct 18, 2006 at 02:15:42PM -0400, Geoffroy Doucet wrote:
> Here is the output of your DTrace script:
> 
> # dtrace -n ldi_get_size:return'{trace(arg1)}'
> dtrace: description 'ldi_get_size:return' matched 1 probe
> CPU     ID                    FUNCTION:NAME
>   1  24930           ldi_get_size:return                -1
> 
> With the command:
> # zpool  create tank /dev/dsk/vpath1c
> warning: device in use checking failed: No such device
> internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c

Yep.  This is a bug in the IBM driver code.  There are a number of DDI
properties ("Nblocks", "nblocks", "Size", and "size") that a driver can
export so that we can calculate the size.  Looks like the IBM driver is
exporting none of these, so we have no (generic) way of knowing how big
the underlying device is.  I would escalate this with IBM, and point to
the ldi_get_size() source on opensolaris.org for information on what we
expect.  Chances are they have already heard of this bug, as I seem to
remember it coming up before.
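
A quick way to see what the driver is (or isn't) publishing -- a sketch
only, reusing the vpath device path from the transcript above -- is to
dump the node's properties and grep for the names listed above:

    # prtconf -v /dev/dsk/vpath1c | egrep -i 'nblocks|size'

If that prints nothing for the vpath node, the driver really isn't
exporting a size, which is consistent with ldi_get_size() returning -1.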

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] ZFS and IBM sdd (vpath)

2006-10-18 Thread Geoffroy Doucet
Here is the output of your DTrace script:

# dtrace -n ldi_get_size:return'{trace(arg1)}'
dtrace: description 'ldi_get_size:return' matched 1 probe
CPU     ID                    FUNCTION:NAME
  1  24930           ldi_get_size:return                -1

With the command:
# zpool  create tank /dev/dsk/vpath1c
warning: device in use checking failed: No such device
internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c



> On Wed, Oct 18, 2006 at 06:57:21AM -0700, Geoffroy Doucet wrote:
>> Hello, I am trying to configure ZFS with IBM sdd. IBM sdd is like
>> powerpath, MPXIO or VxDMP.
>>
>> Here is the error message when I try to create my pool:
>> bash-3.00# zpool  create tank /dev/dsk/vpath1a
>> warning: device in use checking failed: No such device
>> internal error: unexpected error 22 at line 446 of
>> ../common/libzfs_pool.c
>> bash-3.00# zpool  create tank /dev/dsk/vpath1c
>> cannot open '/dev/dsk/vpath1c': I/O error
>> bash-3.00# zpool  create tank vpath1
>> cannot open 'vpath1': no such device in /dev/dsk
>> must be a full path or shorthand device name
>> bash-3.00# zpool  create tank vpath1c
>> cannot open '/dev/dsk/vpath1c': I/O error
>> bash-3.00# zpool  create tank vpath1a
>> warning: device in use checking failed: No such device
>> internal error: unexpected error 22 at line 446 of
>> ../common/libzfs_pool.c
>
> My guess for the first case (vpath1a) is that the IBM driver is not
> correctly implementing the necessary DDI properties for ZFS to determine
> the configuration of the device.  What bits are you using?  Off the top
> of my head, this should catch the most likely candidate:
>
>   # dtrace -n ldi_get_size:return'{trace(arg1)}'
>
> I'd also check to see if there are any updates from IBM.  From the looks
> of it, 'vpath1c' cannot even be opened from userland, so something else
> seems misconfigured in that case.
>
> - Eric
>
> --
> Eric Schrock, Solaris Kernel Development
> http://blogs.sun.com/eschrock
>




Re: [zfs-discuss] Re: Configuring a 3510 for ZFS

2006-10-18 Thread Torrey McMahon

Robert Milkowski wrote:


Well, I don't know - that way you end up doing RAID in ZFS anyway, so
probably doing just RAID-10 in ZFS without ditto blocks would be
better.


The win with ditto blocks is that they allow you to recover from a data
inconsistency at the fs level, as opposed to dealing with a block error
within the RAID group. The problem, as discussed previously, is in the
accounting and management of the ditto blocks.
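
For reference, data ditto blocks are what the 'copies' dataset property
controls in builds that already have that feature (it may not be in the
Solaris 10 release you are running); a minimal sketch with made-up
device and dataset names:

    # zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
    # zfs create tank/data
    # zfs set copies=2 tank/data    # keep two copies of each data block

The accounting issue mentioned above is presumably that those extra
copies are charged against the dataset's own space usage, which is easy
to overlook when sizing.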




Re: [zfs-discuss] Zfs Performance with millions of small files in Sendmail messaging environment]

2006-10-18 Thread Noël Dellofano
The ZAP is really speedy when it comes to file lookups in a
directory.  If you're really concerned about performance, and
depending on the system you're running on, I'd probably recommend
staying below 10 million or so files per directory, since you'll
get the fastest results when we can fit most of the ZAP blocks in the
ARC as opposed to having to go and fetch them from disk.
(Disclaimer: this was a benchmark I ran on bits from February, on an
Opteron box with 3 GB of memory and a single SCSI disk in the pool.
So you may want to rerun with current bits for better numbers if this
is very important to you and you expect to be storing millions of
files per directory on a regular basis.)


You can also go nuts on creating filesystems.  I believe the only
thing to be forewarned of is that upon reboot you'll be mounting all
these filesystems, so this might slow down the reboot process somewhat
when you get to many thousands (due to the way the automounter
behaves).  I believe that a fix is being worked on (may already be
back?) to make this process much quicker, however I'm not sure.
Anybody know what the status of that is?
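
If you want to gauge that cost on your own hardware, a rough sketch
(scratch pool "tank" assumed; run it somewhere disposable) is to create
a few thousand filesystems and time a mount-all pass:

    zfs create tank/many
    i=0
    while [ $i -lt 2000 ]; do
        zfs create tank/many/fs$i
        i=`expr $i + 1`
    done
    zfs umount -a        # unmount what ZFS has mounted (busy ones are skipped)
    time zfs mount -a    # rough proxy for the mount-all work done at boot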


Noel

On Oct 15, 2006, at 11:08 PM, Ramneek Sethi wrote:


Thanks for your feedback. Any comments on VxFS?
Also, is there any limit on files within a filesystem or within a
directory for optimal performance?

Thanks

Robert Milkowski wrote On 10/14/06 04:19,:

Hello Ramneek,

Friday, October 13, 2006, 6:07:22 PM, you wrote:

RS> Hello Experts
RS> Would appreciate if somebody can comment on sendmail environment on
RS> solaris 10.
RS> How will Zfs perform if one has millions of files in sendmail message
RS> store directory under zfs filesystem compared to UFS or VxFS..

Actually not sendmail but another MTA, and
ZFS is about 5% better in real production than UFS+SVM.








Re: [zfs-discuss] ZFS and IBM sdd (vpath)

2006-10-18 Thread Eric Schrock
On Wed, Oct 18, 2006 at 06:57:21AM -0700, Geoffroy Doucet wrote:
> Hello, I am trying to configure ZFS with IBM sdd. IBM sdd is like powerpath, 
> MPXIO or VxDMP. 
> 
> Here is the error message when I try to create my pool:
> bash-3.00# zpool  create tank /dev/dsk/vpath1a
> warning: device in use checking failed: No such device
> internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c
> bash-3.00# zpool  create tank /dev/dsk/vpath1c
> cannot open '/dev/dsk/vpath1c': I/O error
> bash-3.00# zpool  create tank vpath1
> cannot open 'vpath1': no such device in /dev/dsk
> must be a full path or shorthand device name
> bash-3.00# zpool  create tank vpath1c
> cannot open '/dev/dsk/vpath1c': I/O error
> bash-3.00# zpool  create tank vpath1a
> warning: device in use checking failed: No such device
> internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c

My guess for the first case (vpath1a) is that the IBM driver is not
correctly implementing the necessary DDI properties for ZFS to determine
the configuration of the device.  What bits are you using?  Off the top
of my head, this should catch the most likely candidate:

# dtrace -n ldi_get_size:return'{trace(arg1)}'

I'd also check to see if there are any updates from IBM.  From the looks
of it, 'vpath1c' cannot even be opened from userland, so something else
seems misconfigured in that case.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] ZFS, home and Linux

2006-10-18 Thread Darren J Moffat

msl wrote:

Hello,
I'm trying to implement a NAS server with Solaris/NFS and, of course,
ZFS. But there is a little problem... what about the /home filesystem?
I have a lot of Linux clients, and the "/home" directory is on an NFS
server (today, Linux). I want to use ZFS and change home "directories"
like /home/leal into "filesystems" like /home/leal (just as the
documentation recommends). Today a PAM module (pam_mkhomedir) solves
that problem by creating the user's home directory on demand. Do you
have a solution for that? Like a Linux PAM module that creates a ZFS
filesystem under /home instead of an ordinary directory, so I can have
a per-user filesystem under "/home/"? Of course, with a service on the
Solaris NFS server...
I'm asking because, if you agree that it is a "problem", we can create
a project on opensolaris to work on it. Or maybe you have a trivial
solution that I'm not seeing.
Thanks very much for your time!


Have an executable automounter map that runs on the NFS server do that 
for you.


See automount(1M) for information on how to write the maps.

You could also port the pam_mkhomedir module and have it do a zfs create 
instead of mkdir.  PAM after all started on Solaris :-)
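
To make the first suggestion concrete, here is a rough sketch of an
executable indirect map, assuming it runs on the Solaris NFS server
itself (so zfs create has somewhere to run) and assuming a pool called
"tank" -- all names are made up. automountd passes the lookup key as
the first argument and expects a map entry on stdout:

    #!/bin/sh
    # Hypothetical executable automounter map for /home, installed as an
    # executable map file on the Solaris NFS server. Called with the
    # username as $1; must print a map entry (empty output = no entry).

    user=$1
    dataset=tank/home/$user        # "tank" and the layout are made up

    # Create the per-user filesystem on first reference.
    zfs list "$dataset" >/dev/null 2>&1
    if [ $? -ne 0 ]; then
        zfs create "$dataset" || exit 1
        zfs set sharenfs=on "$dataset"
    fi

    # Tell autofs where to mount it from (default mountpoint /tank/home/<user>).
    echo "localhost:/$dataset"

Whether you do it this way or by porting pam_mkhomedir mostly comes
down to whether the dataset should appear at first automount or at
first login.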


--
Darren J Moffat


[zfs-discuss] ZFS and IBM sdd (vpath)

2006-10-18 Thread Geoffroy Doucet
Hello, I am trying to configure ZFS with IBM sdd. IBM sdd is like powerpath, 
MPXIO or VxDMP. 

Here is the error message when I try to create my pool:
bash-3.00# zpool  create tank /dev/dsk/vpath1a
warning: device in use checking failed: No such device
internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c
bash-3.00# zpool  create tank /dev/dsk/vpath1c
cannot open '/dev/dsk/vpath1c': I/O error
bash-3.00# zpool  create tank vpath1
cannot open 'vpath1': no such device in /dev/dsk
must be a full path or shorthand device name
bash-3.00# zpool  create tank vpath1c
cannot open '/dev/dsk/vpath1c': I/O error
bash-3.00# zpool  create tank vpath1a
warning: device in use checking failed: No such device
internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c


Is there anyone else that uses ZFS with IBM sdd?
 
 


RE: [zfs-discuss] Re: Configuring a 3510 for ZFS

2006-10-18 Thread Ciaran Johnston \(AT/LMI\)
These are pretty much the conclusions we reached from this discussion,
and thanks for all the input. On the 3510 we are configuring 12 nraid
LUNs - basically presenting the 12 disks to the OS as they are. In a
real scenario we will be mirroring across two 3510s anyway. We have also
decided against raidz in this scenario.
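
For the archive, a minimal sketch of what that mirrored layout could
look like once both arrays are attached -- the controller and target
numbers here are invented and will differ on a real system:

    # one ZFS mirror vdev per LUN pair; c2 = first 3510, c3 = second 3510
    zpool create tank \
        mirror c2t40d0 c3t40d0 \
        mirror c2t40d1 c3t40d1 \
        mirror c2t40d2 c3t40d2
    # ...and so on for the remaining pairs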

Regards,
Ciaran. 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Anantha N.
Srirama
Sent: 18 October 2006 13:11
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Re: Configuring a 3510 for ZFS

Thanks for the stimulating exchange of ideas/thoughts. I've always been
a believer in letting s/w do my RAID functions; for example in the old
days of VxVM I always preferred to do mirroring at the s/w level. It is
my belief that there is more 'meta' information available at the OS
level than at the storage level for s/w to make intelligent decisions;
dynamic recordsize in ZFS is one example.

Any thoughts on the following approach:

1. I'll configure 3511 to present multiple LUNs (mirrored internally) to
OS.
2. Lay down a ZFS pool/filesystem without RAID protection (RAIDZ...) in
the OS

With this approach I will enjoy the caching facility of 3511 and the
checksum protection afforded by ZFS.
 
 


[zfs-discuss] Re: Configuring a 3510 for ZFS

2006-10-18 Thread Anantha N. Srirama
Thanks for the stimulating exchange of ideas/thoughts. I've always been a
believer in letting s/w do my RAID functions; for example in the old days of
VxVM I always preferred to do mirroring at the s/w level. It is my belief that 
there is more 'meta' information available at the OS level than at the storage 
level for s/w to make intelligent decisions; dynamic recordsize in ZFS is one 
example.

Any thoughts on the following approach:

1. I'll configure 3511 to present multiple LUNs (mirrored internally) to OS.
2. Lay down a ZFS pool/filesystem without RAID protection (RAIDZ...) in the OS

With this approach I will enjoy the caching facility of 3511 and the checksum 
protection afforded by ZFS.
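
A minimal sketch of step 2, with placeholder device names standing in
for whatever LUNs the 3511 presents:

    # plain dynamic stripe, no RAID-Z or mirroring at the ZFS level
    zpool create tank c4t0d0 c4t1d0 c4t2d0 c4t3d0
    zpool status tank

Keep in mind that with no ZFS-level redundancy the checksums can detect
corruption but ZFS has nothing to repair it from, which is the
trade-off of this layout.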
 
 


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-18 Thread Daniel Rock

Richard Elling - PAE wrote:

Where most people get confused is the expectation that a hot-plug
device works like a hot-swap device.


Well, seems like you should also inform your documentation team about this 
definition:


http://www.sun.com/products-n-solutions/hardware/docs/html/819-3722-15/index.html#21924

SATA hot plug is supported only for the Windows XP
Operating System (OS).


Daniel