[zfs-discuss] Re: Eliminating double path with ZFS's volume manager

2007-01-15 Thread Philip Mötteli
> Looks like its got a half-way decent multipath
> design:
> http://docs.info.apple.com/article.html?path=Xsan/1.1/
> en/c3xs12.html

Great, but that is with Xsan. If I don't replace our Hitachi with an Xsan, I 
don't have this 'cvadmin'.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Eliminating double path with ZFS's volume manager

2007-01-15 Thread Philip Mötteli
> Robert Milkowski wrote:
> >
> > 2. I belive it's definitely possible to just
> correct your config under
> > Mac OS without any need to use other fs or volume
> manager, however
> > going to zfs could be a good idea anyway
> 
> 
> That implies that MacOS has some sort of native SCSI
> multipathing like 
> Solaris Mpxio. Does such a beast exist?

That's exactly the question. I'm not aware of any. The only such thing could 
be in Xsan. But we don't have Xsan here; we have a Hitachi. And the tool called 
'Xsan Admin' is not freely available, and the person at Apple Support said it 
wouldn't help in my case.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Eliminating double path with ZFS's volume manager

2007-01-15 Thread Philip Mötteli
> Go poke around in the multipath Xsan storage pool
> properties. Specifies
> how Xsan uses multiple Fibre Channel paths between
> clients and storage.
> This is the equiv of Veritas DMP or [whatever we now
> call] Solaris MPxIO

You mean I should find some configuration file? Well, I can't find one; nothing 
like 'scsi_vhci.conf'.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Multiple Read one Writer Filesystem

2007-01-15 Thread Anton B. Rang
(It's perhaps worth noting that cachefs won't work with NFSv4, so if you want 
to try this, manually force your server and/or clients into v3.)

This will, of course, limit your scalability to whatever your NFS server can 
push through the network (modulo caching).  QFS is a better choice if you need 
scalability beyond that.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Question: ZFS + Block level SHA256 ~= almost free CAS Squishing?

2007-01-15 Thread Mike Gerdts

On 1/10/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

"Dick Davies" <[EMAIL PROTECTED]> wrote on 01/10/2007 05:26:45 AM:
> On 08/01/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > I think that in addition to lzjb compression, squishing blocks that
contain
> > the same data would buy a lot of space for administrators working in
many
> > common workflows.
>
> This idea has occurred to me too - I think there are definite
> advantages to 'block re-use'.
> When you start talking about multiple similar zones, I suspect
> substantial space savings could
> be made - and if you can re-use that saved storage to provide
> additional redundancy, everyone
> would be happy.


My favorite uses come to mind (I have spent a fair amount of time
wishing for this feature):

1) Zones that start out as ZFS clones will tend to diverge as the
system is patched.  This would allow them to re-converge as the patches
roll through multiple zones.

2) Environments where each person starts with the same code base (hg
pull http://hg.intevation.org/mirrors/opensolaris.org/onnv-gate/) and then
builds it, producing substantially similar object files.

3) Disk-based backup systems (de-duplication is a buzz word here)


That issue has already come up in the thread. SHA256 is 2^128 for random
collisions, 2^80 for targeted collisions. That is pretty darn good, but it
would also make sense to perform an rsync-like secondary check on a match
using a dissimilar crypto hash. If we hit the very unlikely chance that 2
blocks match both SHA256 and whatever other secondary hash, I think that
block should be lost (act of God). =)


Reading the full block and doing a full comparison is very cheap
(given the anticipated frequency) and spares you from having to explain
that the file system has a one-in-2^512 chance of silent data corruption.
As slim as that chance is, ZFS promises not to corrupt my data and
to tell on others that do.  ZFS cannot break that promise.
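
As a sketch of the verify-before-sharing idea (purely illustrative; the block
size and in-memory table are not how ZFS would implement it), blocks are
indexed by their SHA256 digest but only shared after a full byte-for-byte
comparison:

import hashlib

BLOCK_SIZE = 128 * 1024          # illustrative block size, not a ZFS constant

class DedupTable:
    """Toy content-addressed store: keeps one copy per unique block."""
    def __init__(self):
        self.blocks = {}         # SHA256 digest -> block bytes

    def store(self, block):
        digest = hashlib.sha256(block).digest()
        existing = self.blocks.get(digest)
        if existing is not None:
            # The hash matched; do the cheap full comparison anyway so a
            # collision can never turn into silent data corruption.
            if existing == block:
                return digest    # share the already-stored copy
            raise RuntimeError("SHA256 collision detected (act of God)")
        self.blocks[digest] = block
        return digest

# Two identical blocks consume the space of one; the verify step means the
# hash is only ever used as an index, never as proof of equality.
table = DedupTable()
ref1 = table.store(b"x" * BLOCK_SIZE)
ref2 = table.store(b"x" * BLOCK_SIZE)
assert ref1 == ref2 and len(table.blocks) == 1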

Mike

--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question about self healing

2007-01-15 Thread Richard Elling

Kyle McDonald wrote:

Richard Elling wrote:

roland wrote:

i have come across an interesting article at :
http://www.anandtech.com/IT/showdoc.aspx?i=2859&p=5


Can anyone comment on the claims or conclusions of the article itself?

It seems to me that they are not always clear about what they are 
talking about.


Many times they say only 'SATA' and other times 'enterprise SATA' or 
'desktop SATA'.  Likewise, sometimes they use the term SAS/SCSI, other times 
just 'enterprise' without specifying SAS/SCSI or SATA.


I'm not clear on why the interconnect technology would have any effect 
on the reliability of the mechanics or electronics of the drive.


The interconnect doesn't have any effect on the mechanics.  I think it
is just a market segmentation description.  A rather poor one, too.

I do believe that the manufacturers could be targeting different 
customers with the different types of drives, but it's not clear from 
that article how enterprise SATA drives compare to enterprise SAS/SCSI 
drives. All I can get from the article for sure is: don't use SATA 
desktop drives in a server.


Is 1 bit out of 10^14 really equal to 1 bit in 12.5TB read?


10^14 bits / 8 bits/byte = 12.5 TBytes.

Does that really translate to an 8% chance of a read error while trying 
to reconstruct a 1TB disk in a 5 disk RAID5 array?


Yes.

Something tells me that someone's statistics calculations are off... I 
thought these problems were much rarer?


I believe these are rarer, for newer drives at least.  Over an expected
5 year lifetime, this error rate may be closer to reality.
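
For what it's worth, here is a quick back-of-the-envelope check of both
numbers (a sketch; it assumes the 8% refers to roughly 1 TB of data actually
read during the rebuild, and reading all four surviving 1 TB disks of a
5-disk RAID5 would push it closer to 27%):

# Unrecoverable read error (URE) odds for a quoted rate of 1 bit in 10^14.
ber = 1.0 / 1e14                       # expected errors per bit read

bits_per_tb = 8 * 1e12                 # 1 TB = 10^12 bytes = 8 * 10^12 bits
print(1 / (ber * bits_per_tb))         # TB read per expected error: 12.5

def p_error(tb_read):
    """Probability of at least one URE while reading tb_read terabytes."""
    return 1 - (1 - ber) ** (tb_read * bits_per_tb)

print(p_error(1.0))                    # ~0.077, i.e. roughly the quoted 8%
print(p_error(4.0))                    # all 4 surviving 1 TB disks: ~0.27
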
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: RE: On the SATA framework

2007-01-15 Thread Frank Cusack
On January 15, 2007 11:58:10 AM -0800 Andrew Pattison 
<[EMAIL PROTECTED]> wrote:

The SATA framework has already been integrated and is available on
Solaris 10 Update 3 and Nevada.


Update 2 as well, yes?  I thought U2 was when the first SATA support
was announced, for the Marvell controller.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Eliminating double path with ZFS's volume manager

2007-01-15 Thread Jason J. W. Williams

Hi Torrey,

Looks like it's got a half-way decent multipath design:
http://docs.info.apple.com/article.html?path=Xsan/1.1/en/c3xs12.html

Whether or not it works is another story I suppose. ;-)

Best Regards,
Jason

On 1/15/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:

Got me. However, transport multipathing - like Mpxio, DLM, VxDMP, etc. -
is usually separated from the filesystem layers.

Jason J. W. Williams wrote:
> Hi Torrey,
>
> I think it does if you buy Xsan. Its still a separate product isn't
> it? Thought its more like QFS + MPXIO.
>
> Best Regards,
> Jason
>
> On 1/15/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
>> Robert Milkowski wrote:
>> >
>> > 2. I belive it's definitely possible to just correct your config under
>> > Mac OS without any need to use other fs or volume manager, however
>> > going to zfs could be a good idea anyway
>>
>>
>> That implies that MacOS has some sort of native SCSI multipathing like
>> Solaris Mpxio. Does such a beast exist?
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mounting a ZFS clone

2007-01-15 Thread Robert Milkowski
Hello Albert,

Monday, January 15, 2007, 5:55:23 PM, you wrote:

AC> I have no hands-on experience with ZFS but have a question. If the
AC> file server running ZFS exports the ZFS file system via NFS to
AC> clients, based on previous messages on this list, it is not possible
AC> for an NFS client to mount this NFS-exported ZFS file system on
AC> multiple directories on the NFS client.

AC> So, let's say I create a ZFS clone of some ZFS file system. Is it
AC> possible for an NFS client to mount the ZFS file system _and_ the
AC> clone without problems?

AC> If the clone is underneath the ZFS file system hierarchy, will
AC> mounting the ZFS file system I created the clone from allow the NFS
AC> client access to the remote ZFS file system and the clone?


You will have to explicitly mount cloned file systems on clients.
Other than that it should just work.

-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Eliminating double path with ZFS's volume manager

2007-01-15 Thread Torrey McMahon
Got me. However, transport multipathing - like Mpxio, DLM, VxDMP, etc. - 
is usually separated from the filesystem layers.


Jason J. W. Williams wrote:

Hi Torrey,

I think it does if you buy Xsan. It's still a separate product, isn't
it? I thought it's more like QFS + MPXIO.

Best Regards,
Jason

On 1/15/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:

Robert Milkowski wrote:
>
> 2. I belive it's definitely possible to just correct your config under
> Mac OS without any need to use other fs or volume manager, however
> going to zfs could be a good idea anyway


That implies that MacOS has some sort of native SCSI multipathing like
Solaris Mpxio. Does such a beast exist?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS direct IO

2007-01-15 Thread Jason J. W. Williams

Hi Roch,

You mentioned improved ZFS performance in the latest Nevada build (60
right now?)... I was curious whether one would notice much of a performance
improvement between 54 and 60. Also, does anyone think the zfs_arc_max
tunable support will be made available as a patch to S10U3, or would
that wait until U4? Thank you in advance!

Best Regards,
Jason

On 1/15/07, Roch - PAE <[EMAIL PROTECTED]> wrote:


Jonathan Edwards writes:
 >
 > On Jan 5, 2007, at 11:10, Anton B. Rang wrote:
 >
 > >> DIRECT IO is a set of performance optimisations to circumvent
 > >> shortcomings of a given filesystem.
 > >
 > > Direct I/O as generally understood (i.e. not UFS-specific) is an
 > > optimization which allows data to be transferred directly between
 > > user data buffers and disk, without a memory-to-memory copy.
 > >
 > > This isn't related to a particular file system.
 > >
 >
 > true .. directio(3) is generally used in the context of *any* given
 > filesystem to advise it that an application buffer to system buffer
 > copy may get in the way or add additional overhead (particularly if
 > the filesystem buffer is doing additional copies.)  You can also look
 > at it as a way of reducing more layers of indirection particularly if
 > I want the application overhead to be higher than the subsystem
 > overhead.  Programmatically .. less is more.

Direct IO makes good sense when the target disk sectors are
set a priori. But in the context of ZFS, would you rather
have 10 direct disk I/Os, or 10 bcopies and 2 I/Os (say that
was possible)?

As for reads, I can see that when the load is cached in the
disk array and we're running at 100% CPU, the extra copy might
be noticeable. Is this the situation that longs for DIO?
What % of a system is spent in the copy? What is the added
latency that comes from the copy? Is DIO the best way to
reduce the CPU cost of ZFS?

The current Nevada code base has quite nice performance
characteristics (and certainly quirks); there are many
further efficiency gains to be reaped from ZFS. I just don't
see DIO at the top of that list for now. Or at least someone
needs to spell out what ZFS/DIO is and how much better it
is expected to be (back-of-the-envelope calculation accepted).

Reading RAID-Z subblocks on filesystems that have checksum
disabled might be interesting. That would avoid some disk
seeks. Whether to serve the subblocks directly or not is a
separate matter; it's a small deal compared to the feature
itself. How about disabling the DB checksum (it can't fix
the block anyway) and doing mirroring?

-r


 > ___
 > zfs-discuss mailing list
 > zfs-discuss@opensolaris.org
 > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Eliminating double path with ZFS's volume manager

2007-01-15 Thread Jason J. W. Williams

Hi Torrey,

I think it does if you buy Xsan. It's still a separate product, isn't
it? I thought it's more like QFS + MPXIO.

Best Regards,
Jason

On 1/15/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:

Robert Milkowski wrote:
>
> 2. I belive it's definitely possible to just correct your config under
> Mac OS without any need to use other fs or volume manager, however
> going to zfs could be a good idea anyway


That implies that MacOS has some sort of native SCSI multipathing like
Solaris Mpxio. Does such a beast exist?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iSCSI on a single interface?

2007-01-15 Thread Rick McNeal


On Jan 15, 2007, at 8:34 AM, Dick Davies wrote:


Hi, are there currently any plans to make an iSCSI target created by
setting shareiscsi=on on a zvol
bindable to a single interface (setting tpgt or acls)?

I can cobble something together with ipfilter,
but that doesn't give me enough granularity to say something like:

'host a can see target 1, host c can see targets 2-9', etc.

Also, am I right in thinking without this, all targets should be
visible on all interfaces?



We're working on some more interface stuff for setting up various
properties, like TPGTs and ACLs, for the zvols that are shared through
ZFS.


Now that I've knocked off a couple of things that have been on my
plate, I've got room to add some more. These definitely rank right up
towards the top.




--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Rick McNeal

"If ignorance is bliss, this lesson would appear to be a deliberate  
attempt on your part to deprive me of happiness, the pursuit of which  
is my unalienable right according to the Declaration of  
Independence.  I therefore assert my patriotic prerogative not to  
know this material.  I'll be out on the playground." -- Calvin



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: RE: On the SATA framework

2007-01-15 Thread Andrew Pattison
The SATA framework has already been integrated and is available on Solaris 10 
Update 3 and Nevada.

Cheers

Andrew.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Eliminating double path with ZFS's volume manager

2007-01-15 Thread Torrey McMahon

Robert Milkowski wrote:


2. I believe it's definitely possible to just correct your config under
Mac OS without any need to use another fs or volume manager; however,
going to zfs could be a good idea anyway



That implies that MacOS has some sort of native SCSI multipathing like 
Solaris Mpxio. Does such a beast exist?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question about self healing

2007-01-15 Thread Kyle McDonald

Richard Elling wrote:

roland wrote:

i have come across an interesting article at :
http://www.anandtech.com/IT/showdoc.aspx?i=2859&p=5


Can anyone comment on the claims or conclusions of the article itself?

It seems to me that they are not always clear about what they are 
talking about.


Many times they say only 'SATA' and other times 'enterprise SATA' or 
'desktop SATA'.  Likewise, sometimes they use the term SAS/SCSI, other times 
just 'enterprise' without specifying SAS/SCSI or SATA.


I'm not clear on why the interconnect technology would have any effect 
on the reliability of the mechanics or electronics of the drive.


I do believe that the manufacturers could be targeting different 
customers with the different types of drives, but it's not clear from 
that article how enterprise SATA drives compare to enterprise SAS/SCSI 
drives. All I can get from the article for sure is: don't use SATA 
desktop drives in a server.


Is 1 bit out of 10^14 really equal to 1 bit in 12.5TB read?

Does that really translate to an 8% chance of a read error while trying 
to reconstruct a 1TB disk in a 5 disk RAID5 array?


Something tells me that someone's statistics calculations are off... I 
thought these problems were much rarer?


 -Kyle


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Eliminating double path with ZFS's volume manager

2007-01-15 Thread Dominic Kay

Go poke around in the multipath Xsan storage pool properties; that setting
specifies how Xsan uses multiple Fibre Channel paths between clients and
storage. This is the equivalent of Veritas DMP or [whatever we now call]
Solaris MPxIO.
/d


2007/1/15, Philip Mötteli <[EMAIL PROTECTED]>:


Hi,


> Monday, January 15, 2007, 10:44:49 AM, you wrote:
> PM> Since they have installed a second path to our Hitachi SAN, my
> PM> Mac OS X Server 4.8 mounts every SAN disk twice.
> PM> I asked everywhere, if there's a way, to correct that. And the
> PM> only answer so far was, that I need a volume manager, that can be
> PM> configured to consider two volumes as being identical.
> PM> Now that Mac OS X Leopard supports ZFS, could using ZFS be the
> PM> solution for this problem? If yes, how could I achieve this?
>
> 1. I don't know what kind of file system you already
> have on those
> disks but if you really mounted them twice you could
> have already
> corrupt those file systems

I have now disconnected one cable.  :-8


> 2. I believe it's definitely possible to just correct
> your config under
> Mac OS without any need to use other fs or volume
> manager,

Apart from the suggestion of using a special type of switch, I would
be very interested in any information you have. So far neither our SAN expert,
nor our Mac expert, nor the very expensive Apple Server support we have, has
been able to help us.


> however
> going to zfs could be a good idea anyway

I think so too. I just have to wait until Leopard.


> 3. if you put those disks under ZFS it should just
> work despite of
> having second path

So you would propose to just add one of

/dev/disk4s10
/dev/disk5s10

to the ZFS pool?
Is there no way to tell ZFS that both of these are the same disk seen over two
paths (not clones), so that one could benefit from the path redundancy?


Thanks
Phil


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





--
Dominic Kay
+44 780 124 6099
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS direct IO

2007-01-15 Thread Roch - PAE

Jonathan Edwards writes:
 > 
 > On Jan 5, 2007, at 11:10, Anton B. Rang wrote:
 > 
 > >> DIRECT IO is a set of performance optimisations to circumvent  
 > >> shortcomings of a given filesystem.
 > >
 > > Direct I/O as generally understood (i.e. not UFS-specific) is an  
 > > optimization which allows data to be transferred directly between  
 > > user data buffers and disk, without a memory-to-memory copy.
 > >
 > > This isn't related to a particular file system.
 > >
 > 
 > true .. directio(3) is generally used in the context of *any* given  
 > filesystem to advise it that an application buffer to system buffer  
 > copy may get in the way or add additional overhead (particularly if  
 > the filesystem buffer is doing additional copies.)  You can also look  
 > at it as a way of reducing more layers of indirection particularly if  
 > I want the application overhead to be higher than the subsystem  
 > overhead.  Programmatically .. less is more.

Direct IO makes good sense when the target disk sectors are
set a priori. But in the context of ZFS, would you rather
have 10 direct disk I/Os, or 10 bcopies and 2 I/Os (say that
was possible)?

As for reads, I can see that when the load is cached in the
disk array and we're running at 100% CPU, the extra copy might
be noticeable. Is this the situation that longs for DIO?
What % of a system is spent in the copy? What is the added
latency that comes from the copy? Is DIO the best way to
reduce the CPU cost of ZFS?

The current Nevada code base has quite nice performance
characteristics (and certainly quirks); there are many
further efficiency gains to be reaped from ZFS. I just don't
see DIO at the top of that list for now. Or at least someone
needs to spell out what ZFS/DIO is and how much better it
is expected to be (back-of-the-envelope calculation accepted).

Reading RAID-Z subblocks on filesystems that have checksum
disabled might be interesting. That would avoid some disk
seeks. Whether to serve the subblocks directly or not is a
separate matter; it's a small deal compared to the feature
itself. How about disabling the DB checksum (it can't fix
the block anyway) and doing mirroring?
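
To put rough numbers on the '10 direct disk I/Os or 10 bcopies and 2 I/Os'
question, a back-of-the-envelope sketch with assumed figures (5 ms per random
disk I/O, ~4 GB/s of copy bandwidth); the exact values will vary by system,
but the orders of magnitude are the point:

# Back-of-the-envelope: is avoiding the bcopy worth extra disk I/Os?
# All constants below are assumptions for illustration, not measurements.
RANDOM_IO_MS = 5.0          # one random disk I/O (seek + rotation)
COPY_MB_PER_MS = 4.0        # ~4 GB/s of memory bandwidth, i.e. 4 MB per ms
RECORD_KB = 128             # one record-sized buffer

copy_ms = (RECORD_KB / 1024.0) / COPY_MB_PER_MS   # ~0.03 ms per copy

direct   = 10 * RANDOM_IO_MS                      # 10 direct disk I/Os
buffered = 10 * copy_ms + 2 * RANDOM_IO_MS        # 10 copies, aggregated into 2 I/Os

print("10 direct I/Os:     %6.2f ms" % direct)    # 50.00 ms
print("10 copies + 2 I/Os: %6.2f ms" % buffered)  # ~10.31 ms
# The copies cost microseconds of CPU; the avoided seeks cost milliseconds.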

-r


 > ___
 > zfs-discuss mailing list
 > zfs-discuss@opensolaris.org
 > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Mounting a ZFS clone

2007-01-15 Thread Albert Chin
I have no hands-on experience with ZFS but have a question. If the
file server running ZFS exports the ZFS file system via NFS to
clients, based on previous messages on this list, it is not possible
for an NFS client to mount this NFS-exported ZFS file system on
multiple directories on the NFS client.

So, let's say I create a ZFS clone of some ZFS file system. Is it
possible for an NFS client to mount the ZFS file system _and_ the
clone without problems?

If the clone is underneath the ZFS file system hierarchy, will
mounting the ZFS file system I created the clone from allow the NFS
client access to the remote ZFS file system and the clone?

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Eliminating double path with ZFS's volume manager

2007-01-15 Thread Philip Mötteli
Hi,


> Monday, January 15, 2007, 10:44:49 AM, you wrote:
> PM> Since they have installed a second path to our Hitachi SAN, my
> PM> Mac OS X Server 4.8 mounts every SAN disk twice.
> PM> I asked everywhere, if there's a way, to correct that. And the
> PM> only answer so far was, that I need a volume manager, that can be
> PM> configured to consider two volumes as being identical.
> PM> Now that Mac OS X Leopard supports ZFS, could using ZFS be the
> PM> solution for this problem? If yes, how could I achieve this?
> 
> 1. I don't know what kind of file system you already
> have on those
> disks but if you really mounted them twice you could
> have already
> corrupt those file systems

I have now disconnected one cable.  :-8


> 2. I believe it's definitely possible to just correct
> your config under
> Mac OS without any need to use other fs or volume
> manager,

Apart from the suggestion of using a special type of switch, I would be 
very interested in any information you have. So far neither our SAN expert, nor our 
Mac expert, nor the very expensive Apple Server support we have, has been able to help us.


> however
> going to zfs could be a good idea anyway

I think so too. I just have to wait until Leopard.


> 3. if you put those disks under ZFS it should just
> work despite of
> having second path

So you would propose to just add one of

/dev/disk4s10
/dev/disk5s10

to the ZFS pool?
Is there no way to tell ZFS that both of these are the same disk seen over two 
paths (not clones), so that one could benefit from the path redundancy?


Thanks
Phil
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] iSCSI on a single interface?

2007-01-15 Thread Dick Davies

Hi, are there currently any plans to make an iSCSI target created by
setting shareiscsi=on on a zvol
bindable to a single interface (setting tpgt or acls)?

I can cobble something together with ipfilter,
but that doesn't give me enough granularity to say something like:

'host a can see target 1, host c can see targets 2-9', etc.

Also, am I right in thinking without this, all targets should be
visible on all interfaces?


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Eliminating double path with ZFS's volume manager

2007-01-15 Thread Robert Milkowski
Hello Philip,

Monday, January 15, 2007, 10:44:49 AM, you wrote:

PM> Hi,


PM> Since they have installed a second path to our Hitachi SAN, my
PM> Mac OS X Server 4.8 mounts every SAN disk twice.
PM> I asked everywhere, if there's a way, to correct that. And the
PM> only answer so far was, that I need a volume manager, that can be
PM> configured to consider two volumes as being identical.
PM> Now that Mac OS X Leopard supports ZFS, could using ZFS be the
PM> solution for this problem? If yes, how could I achieve this?


1. I don't know what kind of file system you already have on those
disks, but if you really mounted them twice you could have already
corrupted those file systems

2. I believe it's definitely possible to just correct your config under
Mac OS without any need to use another fs or volume manager; however,
going to zfs could be a good idea anyway

3. if you put those disks under ZFS it should just work despite
having a second path

-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Question: ZFS + Block level SHA256 ~= almost free CAS Squishing?

2007-01-15 Thread Darren J Moffat

Pawel Jakub Dawidek wrote:

On Mon, Jan 08, 2007 at 11:00:36AM -0600, [EMAIL PROTECTED] wrote:

I have been looking at zfs source trying to get up to speed on the
internals.  One thing that interests me about the fs is what appears to be
a low hanging fruit for block squishing CAS (Content Addressable Storage).
I think that in addition to lzjb compression, squishing blocks that contain
the same data would buy a lot of space for administrators working in many
common workflows.

[...]

I like the idea, but I'd prefer to see such an option be per-pool, not
a per-filesystem option.

I found somewhere in the ZFS documentation that clones are nice to use for a
large number of diskless stations. That's fine, but after every upgrade,
more and more files are updated and fewer and fewer blocks are shared
between clones. Having such functionality for the entire pool would be a
nice optimization in this case. This doesn't have to be a per-pool option
actually, but per-filesystem-hierarchy, i.e. all file systems under
tank/diskless/.


Which actually says it is per-filesystem and it is inherited, exactly 
as compression and the checksum algorithm are done today.  You can 
change it on the clone if you wish to.
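
As a toy model of that inheritance behaviour (illustrative only; the 'dedup'
property name is hypothetical, compression and checksum are the properties
that actually work this way today):

# Toy model of an inheritable per-filesystem property: a value set on a
# parent dataset applies to every descendant unless overridden locally.
class Dataset:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.local = {}                      # properties set directly here

    def set(self, prop, value):
        self.local[prop] = value

    def get(self, prop, default="off"):
        if prop in self.local:
            return self.local[prop]
        if self.parent is not None:
            return self.parent.get(prop, default)
        return default

tank     = Dataset("tank")
diskless = Dataset("tank/diskless", tank)
clone_a  = Dataset("tank/diskless/a", diskless)

diskless.set("dedup", "on")                  # hypothetical property name
assert clone_a.get("dedup") == "on"          # inherited by everything below
clone_a.set("dedup", "off")                  # ...and overridable on a clone
assert clone_a.get("dedup") == "off"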


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Eliminating double path with ZFS's volume manager

2007-01-15 Thread Philip Mötteli
Hi,


Since they installed a second path to our Hitachi SAN, my Mac OS X Server 
4.8 mounts every SAN disk twice.
I have asked everywhere if there's a way to correct that, and the only answer so 
far has been that I need a volume manager that can be configured to consider two 
volumes as being identical.
Now that Mac OS X Leopard supports ZFS, could using ZFS be the solution for 
this problem? If so, how could I achieve this?


Thanks for your help!
Phil
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[4]: [zfs-discuss] Replacing a drive in a raidz2 group

2007-01-15 Thread Robert Milkowski
Hello Jason,

Sunday, January 14, 2007, 1:26:37 AM, you wrote:

JJWW> Hi Robert,

JJWW> Will build 54 offline the drive?

IIRC there hasn't been ZFS+FMA integration yet.

-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss