[zfs-discuss] Re: how to move a zfs file system between disks

2007-05-30 Thread H E
Does this sound possible at all,
or can it not be done with the current ZFS commands yet?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs and 2530 jbod

2007-05-30 Thread Louwtjie Burger

Hi there

I know the above-mentioned kit (2530) is new, but has anybody tried a
direct-attached SAS setup using ZFS (and the Sun SG-XPCIESAS-E-Z
card, the 3Gb PCI-E SAS 8-Port Host Adapter, RoHS:Y, which I suppose is the
preferred HBA)?

Did it work correctly?

Thank you
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: how to move a zfs file system between disks

2007-05-30 Thread Chris Gerhard
You can do this using zfs send and receive. See 
http://blogs.sun.com/chrisg/entry/recovering_my_laptop_using_zfs for an 
example.  If the file system were remote, you would need to squeeze some ssh 
commands into the script, but the concept is the same.
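
A minimal sketch of the local case described above (pool and dataset names here are placeholders, not the ones from the blog entry):

  # snapshot the source, then stream it into a dataset on the new disk's pool
  zfs snapshot oldpool/data@move
  zfs send oldpool/data@move | zfs receive newpool/data

  # remote variant: pipe the stream through ssh
  zfs send oldpool/data@move | ssh otherhost zfs receive newpool/data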
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and 2530 jbod

2007-05-30 Thread James C. McPherson

Louwtjie Burger wrote:

I know the above-mentioned kit (2530) is new, but has anybody tried a
direct-attached SAS setup using ZFS (and the Sun SG-XPCIESAS-E-Z
card, the 3Gb PCI-E SAS 8-Port Host Adapter, RoHS:Y, which I suppose is the
preferred HBA)?
Did it work correctly?


Yes, it was tested as part of our project to add support to
mpt for MPxIO. The zfs test suite was one of the required
tests in our suite.

Yes, it worked correctly.


What other questions do you have?


cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mirrored RAID-z2

2007-05-30 Thread Will Murnane

Sorry for singling you out, Ian; I meant "Reply to All".  This list
doesn't set "reply-to"...
On 5/30/07, Ian Collins <[EMAIL PROTECTED]> wrote:

How about 8 two way mirrors between shelves and a couple of hot spares?

That's fine and good, but then losing just one disk from each shelf
fast enough means the whole array is gone.  Then one strong enough
power glitch could potentially kill the whole array, if your power
configuration lets that happen.  And if you unplug one shelf by
accident (or to change FC switches, cables, whatever), you're left
with no redundancy whatsoever.

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: how to move a zfs file system between disks

2007-05-30 Thread Richard Elling

H E wrote:

Does this sound possible at all,
or can it not be done with the current ZFS commands yet?


zpool replace
 -- richard
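
For the whole-disk case being asked about, that would look roughly like the following (pool and device names are made up for illustration):

  # swap the new disk in for the old one; ZFS resilvers the data onto it
  zpool replace mypool c0t0d0 c0t1d0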
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-30 Thread Robert B. Wood


On May 29, 2007, at 2:59 PM, [EMAIL PROTECTED] wrote:

When sequential I/O is done to the disk directly there is no
performance degradation at all.


All filesystems impose some overhead compared to the rate of raw disk
I/O.  It's going to be hard to store data on a disk unless some kind of
filesystem is used.  All the tests that Eric and I have performed show
regressions for multiple sequential I/O streams.  If you have data that
shows otherwise, please feel free to share.


[I]t does not take any additional time in ldi_strategy(),
bdev_strategy(), mv_rw_dma_start().  In some instances it actually
takes less time.  The only thing that sometimes takes additional time
is waiting for the disk I/O.


Let's be precise about what was actually observed.  Eric and I saw
increased service times for the I/O on devices with NCQ enabled when
running multiple sequential I/O streams.  Everything that we observed
indicated that it actually took the disk longer to service requests
when many sequential I/Os were queued.

-j

It could very well be that on-disc cache is being partitioned
differently when NCQ is enabled in certain implementations.  For
example, with NCQ disabled, on-disc look-ahead may be enabled,
netting sequential I/O improvements.  Just guessing, as this level of
disc implementation detail is vendor specific and generally
proprietary.  I would not expect the elevator sort algorithm to
impose any performance penalty unless it were fundamentally flawed.


There's a bit of related discussion here

I'm actually struck by the minimal gains being seen in random I/O.  A
few years ago, when NCQ was in prototype, I saw better than 50%
improvement in average random I/O response time with large queue
depths.  My gut feeling is that the issue is farther up the stack.

-- Bob






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS snapshots and NFS

2007-05-30 Thread msl
Hello all,
 Sorry if you think this question is stupid, but I need to ask.
 Imagine a normal situation on an NFS server with "N" client nodes. The objects 
of the shares are software (/usr, for instance), and the admin wants to make 
new versions of a few packages available.
 So, wouldn't it be nice if the admin could associate an NFS share with a ZFS snapshot?
 I mean, the admin would have the option to take a snapshot of that ZFS filesystem, 
update the binaries, and only a few machines would see the changes.
 I know there are a lot of ways to do that, but I think this would be nicer 
(better). It would economize on space, and the administration task would be very 
easy (ZFS is intended to be easy). I think ZFS has solved the "Stale NFS file 
handle" problem at the mount point, and all that would be necessary is a respawn of 
processes already in memory (on migrated clients). So... 
 What do you think about a feature like that? Useful? Crazy?
 Thanks very much for your time!

byLeal
[www.posix.brte.com.br/blog]
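
A rough sketch of one way to approximate this today with snapshots, clones and per-dataset NFS sharing (dataset names and options are purely illustrative):

  # freeze the current software tree, clone it, and share the clone read-only
  zfs snapshot tank/sw@before-update
  zfs clone tank/sw@before-update tank/sw-old
  zfs set sharenfs=ro tank/sw-old
  # clients that should see the updated packages keep mounting tank/sw;
  # clients that should not are pointed at the tank/sw-old clone instead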
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: how to move a zfs file system between disks

2007-05-30 Thread H E
Thanks;
actually, I had already seen the script mentioned there.

Is it possible to use zfs send/receive when the disk is not mounted,
i.e., can you give it a device name as a parameter rather than ZFS dataset names?

-me2unix
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-30 Thread Roch - PAE

Torrey McMahon writes:
 > Toby Thain wrote:
 > >
 > > On 25-May-07, at 1:22 AM, Torrey McMahon wrote:
 > >
 > >> Toby Thain wrote:
 > >>>
 > >>> On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
 > >>>
 >  On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote:
 > > What if your HW-RAID-controller dies? in say 2 years or more..
 > > What will read your disks as a configured RAID? Do you know how to 
 > > (re)configure the controller or restore the config without 
 > > destroying your data? Do you know for sure that a spare-part and 
 > > firmware will be identical, or at least compatible? How good is 
 > > your service subscription? Maybe only scrapyards and museums will 
 > > have what you had. =o
 > 
 >  Be careful when talking about RAID controllers in general. They are
 >  not created equal! ...
 >  Hardware raid controllers have done the job for many years ...
 > >>>
 > >>> Not quite the same job as ZFS, which offers integrity guarantees 
 > >>> that RAID subsystems cannot.
 > >>
 > >> Depend on the guarantees. Some RAID systems have built in block 
 > >> checksumming.
 > >>
 > >
 > > Which still isn't the same. Sigh. 
 > 
 > Yep. You get what you pay for. Funny how ZFS is free to purchase,
 > isn't it?
 > 

With RAID-level block checksumming, if the data gets
corrupted on its way _to_ the array, that data is lost.

With ZFS and RAID-Z or Mirroring, you will recover the
data.

-r



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mirrored RAID-z2

2007-05-30 Thread Marion Hakanson

[EMAIL PROTECTED] said:
> On 5/30/07, Ian Collins <[EMAIL PROTECTED]> wrote:
> > How about 8 two way mirrors between shelves and a couple of hot spares?
> 
> That's fine and good, but then losing just one disk from each shelf fast
> enough means the whole array is gone.  Then one strong enough power glitch
> could potentially kill the whole array, if your power configuration lets that
> happen.  And if you unplug one shelf by accident (or to change FC switches,
> cables, whatever), you're left with no redundancy whatsoever. 

You'd get the kind of protection you want with:

  zpool create mypool \
    mirror shelf1disk1 shelf2disk1 \
    mirror shelf1disk2 shelf2disk2 \
    . . .
    mirror shelf1diskn shelf2diskn

It's not the same as raidz2+1, but you would have twice the disk space as
your four-way mirror example.

Regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-30 Thread Toby Thain


On 30-May-07, at 12:33 PM, Roch - PAE wrote:



Torrey McMahon writes:

Toby Thain wrote:

On 25-May-07, at 1:22 AM, Torrey McMahon wrote:

Toby Thain wrote:

On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:

On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote:

What if your HW-RAID-controller dies? in say 2 years or more..
What will read your disks as a configured RAID? Do you know how to
(re)configure the controller or restore the config without
destroying your data? Do you know for sure that a spare-part and
firmware will be identical, or at least compatible? How good is
your service subscription? Maybe only scrapyards and museums will
have what you had. =o

Be careful when talking about RAID controllers in general. They are
not created equal! ...
Hardware raid controllers have done the job for many years ...

Not quite the same job as ZFS, which offers integrity guarantees
that RAID subsystems cannot.

Depend on the guarantees. Some RAID systems have built in block
checksumming.

Which still isn't the same. Sigh.

Yep. You get what you pay for. Funny how ZFS is free to purchase,
isn't it?



With RAID-level block checksumming, if the data gets
corrupted on its way _to_ the array, that data is lost.


Or _from_. "There's many a slip 'twixt cup and lip."

--T



With ZFS and RAID-Z or Mirroring, you will recover the
data.

-r







___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] iscsi target secured by CHAP

2007-05-30 Thread kristof
Hey all,

I'm having the following issue:

We have been setting up ZVOL's and we share them via ISCSI

All goes well until we want to secure this via CHAP authentication.

When we try to do that, we never succeed in discovering the target from an 
external initiator. We tested both Solaris (b57) and Linux as initiators.

Below is a step by step of what we are doing:

1. create the zvol:

zfs create -V 5G stor/iscsivol1

2. share via iscsi:

zfs set shareiscsi=on stor/iscsivol1

3. create an initiator:

iscsitadm create initiator -n iqn.2006-03.com.qlayer.qpm_1  nas-03

4. Activate the acl on the target:

iscsitadm modify target -l nas-03 iqn.1986-03.com.sun:02:7e975c57-ed79-60af-daaa-9bbbfb735404

So this part is still OK: the ACL with only an IQN works, but whenever we try to 
add the CHAP name & secret via the following, we never succeed in discovering the 
target.

iscsitadm modify initiator -H nas-03 nas-03
iscsitadm modify initiator -C nas-03

PS: We are running ON B63 and we are using the SendTargets discovery method.
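
For reference, CHAP also has to be configured on the initiator side; on a Solaris initiator that would look roughly like the following (a sketch based on iscsiadm(1M); the values are placeholders and not from the original post):

  iscsiadm modify initiator-node -H the-chap-name-the-target-expects
  iscsiadm modify initiator-node -C        # prompts for the CHAP secret
  iscsiadm modify initiator-node -a CHAP   # enable CHAP authentication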

Below is the output of the working config:

Target: home/iscsi/volume1
iSCSI Name: iqn.1986-03.com.sun:02:7e975c57-ed79-60af-daaa-9bbbfb735404
Alias: home/iscsi/volume1
Connections: 0
ACL list:
Initiator: nas-03
TPGT list:
LUN information:
LUN: 0
GUID: 0117315a137d2a00465d8ad5
VID: SUN
PID: SOLARIS
Type: disk
Size: 5.0G
Backing store: /dev/zvol/rdsk/home/iscsi/volume1
Status: online

# iscsitadm list initiator
Initiator: nas-03
iSCSI Name: iqn.2006-03.com.qlayer.qpm_1
CHAP Name: Not set

As an attachment I included the output of snoop; in this file you can see that 
the initiator is trying to connect via CHAP. We never see the server sending 
back the challenge in response.

What could be going on?

Thanks for all your help!

Kristof
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Help/Advice needed

2007-05-30 Thread Paul Cooper - Sun HPC High Performance Computing

I have a Solaris 11 build server with build 58 and a zfs scratch
filesystem. When trying to upgrade to build 63 using liveupgrade
I get the following upon reboot. The machine never comes up. Just
keeps giving the error/warning below.

Is there something I am doing wrong?


WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],60/[EMAIL PROTECTED] (mpt0):
  Received invalid reply frame address 0x480

WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],60/[EMAIL PROTECTED] (mpt0):
  Received invalid reply frame address 0x480

WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],60/[EMAIL PROTECTED] (mpt0):
  Received invalid reply frame address 0x480

WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],60/[EMAIL PROTECTED] (mpt0):
  Received invalid reply frame address 0x480

WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],60/scs returned as context 
reply in slot 72
WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],60/[EMAIL PROTECTED] (mpt0):
  NULL command returned as context reply in slot 72
WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],60/[EMAIL PROTECTED] (mpt0):
  NULL command returned as context reply in slot 72
WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],60/[EMAIL PROTECTED] (mpt0):
  NULL command returned as context reply in slot 72
WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],60/[EMAIL PROTECTED] (mpt0):
  NULL command returned as context reply in slot 72
WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],60/[EMAIL PROTECTED] (mpt0):
  NULL command returned as context reply in slot 72

--
__

  Paul Cooper   email:  [EMAIL PROTECTED]
  System Administrator  Direct: 781-442-2238
  CR - Lab Technologies FAX:781-442-1542
  Office Bur03-4667 Pager:  781-226-1106
  Sun Microsystems  [EMAIL PROTECTED]
  1 Network Drive UBUR03-411   Home:   781-643-1855
  Burlington, MA 01803  

__

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Slashdot Article: Does ZFS Obsolete Expensive NAS/SANs?

2007-05-30 Thread Mark A. Carlson

http://ask.slashdot.org/article.pl?sid=07/05/30/0135218&from=rss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mirrored RAID-z2

2007-05-30 Thread Ian Collins
Will Murnane wrote:
> Sorry for singling you out, Ian; I meant "Reply to All".  This list
> doesn't set "reply-to"...
> On 5/30/07, Ian Collins <[EMAIL PROTECTED]> wrote:
>> How about 8 two way mirrors between shelves and a couple of hot spares?
> That's fine and good, but then losing just one disk from each shelf
> fast enough means the whole array is gone.  
Only if you lost the same two drives in each shelf, same as any other
striped mirror.  I guess the ideal solution in this case would be the
ability to use mirrors as raidz components.

Ian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slashdot Article: Does ZFS Obsolete Expensive NAS/SANs?

2007-05-30 Thread Toby Thain


On 30-May-07, at 4:28 PM, Mark A. Carlson wrote:


http://ask.slashdot.org/article.pl?sid=07/05/30/0135218&from=rss



One highly rated comment features some of the first real ZFS FUD I've  
seen in the wild. Does this signify that ZFS is being taken seriously  
now? :)


--Toby




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slashdot Article: Does ZFS Obsolete Expensive NAS/SANs?

2007-05-30 Thread Jerry Kemp

What comment in particular was that?

Jerry K


Toby Thain wrote:


On 30-May-07, at 4:28 PM, Mark A. Carlson wrote:


http://ask.slashdot.org/article.pl?sid=07/05/30/0135218&from=rss



One highly rated comment features some of the first real ZFS FUD I've 
seen in the wild. Does this signify that ZFS is being taken seriously 
now? :)


--Toby


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mirrored RAID-z2

2007-05-30 Thread Richard Elling

The reliability calculations for these scenarios are described in several
articles on my blog.
http://blogs.sun.com/relling

You do get additional, mirror-like reliability from using the copies
property, as also described in my blog.
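
For reference, copies is a per-dataset property; a minimal example (the dataset name is illustrative):

  zfs set copies=2 tank/important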

Personally, I'd go with mirroring across the shelves.  KISS.
 -- richard

Ian Collins wrote:

Will Murnane wrote:

Sorry for singling you out, Ian; I meant "Reply to All".  This list
doesn't set "reply-to"...
On 5/30/07, Ian Collins <[EMAIL PROTECTED]> wrote:

How about 8 two way mirrors between shelves and a couple of hot spares?

That's fine and good, but then losing just one disk from each shelf
fast enough means the whole array is gone.  

Only if you lost the same two drives in each shelf, same as any other
striped mirror.  I guess the ideal solution in this case would be the
ability to use mirrors as raidz components.

Ian


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Multiple OpenSolaris boxes with access to zfs pools

2007-05-30 Thread Jeff Bachtel
I have a simple fibre channel SAN setup, with 2 disc arrays and 2
SunFire boxes attached to a FC switch. Each disc array holds a ZFS
pool which should be mounted by one OpenSolaris system, and not the
other.

One of the two pairs was a recent addition to the FC switch (it was
previously direct-attached), and on boot the default filesystem/local
SMF service failed. We tracked it down to "zfs mount -a" being
executed in /lib/svc/method/fs-local and failing, while trying to read
the zfs pool already open and locked by the other OpenSolaris system.

A suggestion given as a permanent resolution of this problem was to do
a "zpool export" on the pools that should not be mounted on each
system. A simple diagram to illustrate our setup:

d1(zpool1) <- switch1 -> comp1
d2(zpool2) <- switch1 -> comp2

Because comp1 already has zpool1 mounted, comp2 seems unable to export
zpool1 to prevent messiness on boot (presuming that the export being
stored in the zpool.cache would prevent the failure).

I can enable WWN/lun masking on d1 and d2 such that comp1 and comp2
see only those luns they should be mounting, but I thought I'd ask if
the "best" way to handle this would be to do a temporary export of
zpool1 on comp1, then import/export zpool1 on comp2 (and likewise
import/export zpool2 on comp1), or if there was some other, more "zfs"
way to handle this. If the somewhat convoluted technique described is
the only way to handle things, would submitting a RFE for an "export
this pool even if you don't technically know about it" option be
amiss? While it's not a huge issue for me to temporarily export zpool1
in this case, I could see it becoming a problem as more pools get
added to the SAN.
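
For concreteness, the temporary shuffle described above would look roughly like this (pool and host names as in the diagram; a sketch, not a tested procedure):

  # on comp1: release zpool1 briefly
  zpool export zpool1
  # on comp2: import it, then export it so comp2's zpool.cache forgets it
  zpool import zpool1
  zpool export zpool1
  # back on comp1: take zpool1 back
  zpool import zpool1
  # repeat the equivalent steps for zpool2 on comp1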

Thanks,

Jeff Bachtel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slashdot Article: Does ZFS Obsolete Expensive NAS/SANs?

2007-05-30 Thread Toby Thain


On 30-May-07, at 6:31 PM, Jerry Kemp wrote:


What comment in particular was that?


Sorry, I should have cited it. Blew my chance to moderate by posting  
to the thread :)


http://ask.slashdot.org/comments.pl?sid=236627&cid=19319903

I computed the FUD factor by sorting the items into known bugs, fixed
bugs, and incorrect claims.


--Toby



Jerry K


Toby Thain wrote:

On 30-May-07, at 4:28 PM, Mark A. Carlson wrote:

http://ask.slashdot.org/article.pl?sid=07/05/30/0135218&from=rss
One highly rated comment features some of the first real ZFS FUD  
I've seen in the wild. Does this signify that ZFS is being taken  
seriously now? :)

--Toby



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] current state of play with ZFS boot and install?

2007-05-30 Thread Carl Brewer
Out of curiosity, I'm wondering if Lori, or anyone else who actually writes the 
stuff, has any sort of a 'current state of play' page that describes the latest 
ON (OS/Net) release and how it does ZFS boot and installs. There are blogs all over the 
place, of course, which have a lot of stale information, but is there a 'the 
current release supports this, and this is how you install it' page anywhere, 
or somewhere in particular to watch?  

I've been playing with ZFS boot since around b34, or whenever it was that it 
first became usable as a boot partition with the temporary UFS partition hack, 
but I understand it's moved beyond that.

I've been downloading and playing with the ON builds every now and then, but 
haven't found (haven't looked in the right places?) anywhere where each build 
has "this is what this build does differently, this is what works and how" 
documented.

can someone belt me with a cluestick please?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS + ISCSI + LINUX QUESTIONS

2007-05-30 Thread Nathan Huisman

= PROBLEM

To create a disk storage system that will act as an archive point for
user data (Non-recoverable data), and also act as a back end storage
unit for virtual machines at a block level.

= BUDGET

Currently I have about 25-30k to start the project, more could be
allocated in the next fiscal year for perhaps a backup solution.

= TIMEFRAME

I have 8 days to cut a P.O. before our fiscal year ends.

= STORAGE REQUIREMENTS

5-10tb of redundant fairly high speed storage


= QUESTION #1

What is the best way to mirror two zfs pools in order to achieve a sort
of HA storage system? I don't want to have to physically swap my disks
into another system if any of the hardware on the ZFS server dies. If I
have the following configuration what is the best way to mirror these in
near real time?

BOX 1 (JBOD->ZFS)    BOX 2 (JBOD->ZFS)

I've seen the zfs send and receive commands, but I'm not sure how well
that would work with a close-to-real-time mirror.
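
For context, send/receive replication is snapshot based and therefore periodic rather than real time; a rough sketch of one cycle (all names are placeholders):

  # initial full copy to the second box
  zfs snapshot tank/vmdata@rep1
  zfs send tank/vmdata@rep1 | ssh box2 zfs receive backup/vmdata
  # later cycles send only the changes since the previous snapshot
  zfs snapshot tank/vmdata@rep2
  zfs send -i tank/vmdata@rep1 tank/vmdata@rep2 | ssh box2 zfs receive backup/vmdata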


= QUESTION #2

Can ZFS be exported via iSCSI, imported as a disk on a Linux
system, and then be formatted with another file system? I wish to use ZFS
as block-level storage for my virtual machines, specifically
using Xen. If this is possible, how stable is it? How is error
checking handled if the ZFS volume is exported via iSCSI and then the block
device is formatted as ext3? Will ZFS still be able to check for errors?
If this is possible and this all works, are there ways to expand a
ZFS iSCSI-exported volume and then expand the ext3 file system on the
remote host?

= QUESTION #3

How does zfs handle a bad drive? What process must I go through in
order to take out a bad drive and replace it with a good one?

= QUESTION #4

What is a good way to back up this HA storage unit? Snapshots will
provide an easy way to do it live, but should it be dumped to a tape
library, or to a third, offsite ZFS pool using zfs send/receive, or something else?

= QUESTION #5

Does the following setup work?

BOX 1 (JBOD) -> iscsi export -> BOX 2 ZFS.

In other words, can I set up a bunch of thin storage boxes with low CPU
and RAM instead of using SAS or FC to supply the JBOD to the ZFS server?



I appreciate any advice or answers you might have.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + ISCSI + LINUX QUESTIONS

2007-05-30 Thread Dale Ghent

On May 31, 2007, at 12:15 AM, Nathan Huisman wrote:


= PROBLEM

To create a disk storage system that will act as an archive point for
user data (Non-recoverable data), and also act as a back end storage
unit for virtual machines at a block level.




Here are some tips from me. I notice you mention iSCSI a lot so I'll  
stick to that...


Q1: The best way to mirror in real time is to do it from the  
consumers of the storage, ie, your iSCSI clients. Implement two  
storage servers (say, two x4100s with attached disk) and put their  
disk into zpools. The two servers do not have to know about each  
other. Configure ZFS file systems identically on both and export them  
to the client that'll use it. Use the software mirroring feature on  
the client to mirror these iSCSI shares (eg: dynamic disks on  
Windows, LVM on Linux, SVM on Solaris).


What this gives you are two storage servers (ZFS-backed, serving out  
iSCSI shares) and the client(s) take a share from each and mirror  
them... if one of the ZFS servers were to go kaput, the other is  
still there actively taking in and serving data. From the client's  
perspective, it'll just look like one side of the mirror went down  
and after you get the downed ZFS server back up, you would initiate  
normal mirror reattachment procedure on the client(s).
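
A rough sketch of what that client-side mirroring step could look like on a Linux initiator (the device names are just whatever the two iSCSI LUNs show up as; purely illustrative):

  # mirror the two iSCSI block devices, then put a filesystem on the mirror
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  mkfs.ext3 /dev/md0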


This will also allow you to patch your ZFS servers without downtime  
incurred on your clients.


The disk storage on your two ZFS+iSCSI servers could be anything.  
Given your budget and space needs, I would suggest looking at the  
Apple Xserve RAID with 750GB drives. You're a .edu, so the price of  
these things will likely please you (I just snapped up two of them at  
my .edu for a really insane price).


Q2: The client will just see the iSCSI share as a raw block device.  
Put your ext3/xfs/jfs on it as you please... to ZFS it is just  
data. That's the only way you can use iSCSI, really; it's block  
level, remember. On ZFS, the iSCSI backing store is one large sparse  
file.


Q3: See the zpool man page, specifically the 'zpool replace ...'  
command.


Q4: Since (or if) you're doing iSCSI, ZFS snapshots will be of no  
value to you, since ZFS can't see into those iSCSI backing store  
files. I'll assume that you have a backup system in place for your  
existing infrastructure (Networker, NetBackup or what have you), so  
back up the stuff from the *clients* and not the ZFS servers. Just  
space the backup schedule out if you have multiple clients, so that  
the ZFS+iSCSI servers aren't overloaded with all of their clients reading  
data suddenly when backup time rolls around.


Q5: Sure, nothing would stop you from doing that sort of config, but  
it's something that would make Rube Goldberg smile. Keep out any  
unneeded complexity and condense the solution.


Excuse my ASCII art skills, but consider this:

[JBOD/ARRAY]---(fc)--->[ZFS/iSCSI server 1]---(iscsi share)--\
                                                              [Client mirroring
                                                               the two shares]
[JBOD/ARRAY]---(fc)--->[ZFS/iSCSI server 2]---(iscsi share)--/


Kill one of the JBODs or arrays, OR the ZFS+iSCSI servers, and your  
clients are still in good shape as long as their software mirroring  
facility behaves.


/dale
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + ISCSI + LINUX QUESTIONS

2007-05-30 Thread Will Murnane

Questions I don't know answers to are omitted.  "I am but a nestling."

On 5/31/07, Nathan Huisman <[EMAIL PROTECTED]> wrote:

= STORAGE REQUIREMENTS

5-10tb of redundant fairly high speed storage

What does "high speed" mean?  How many users are there for this
system?  Are they accessing it via Ethernet? FC? Something else?  Why
the emphasis on iscsi?


= QUESTION #2

Can ZFS be exported via iscsi and then imported as a disk to a linux
system and then be formated with another file system[?]

Yes. It's in OpenSolaris but not (as I understand it) in Solaris
direct from Sun.  If running OpenSolaris isn't an issue (but it
probably is) it works out of the box.


= QUESTION #3

How does zfs handle a bad drive? What process must I go through in
order to take out a bad drive and replace it with a good one?

ZFS only notices drives are dead when they're really dead - they can't
be opened.  If a drive is causing intermittent problems (returning bad
data and so forth) it won't get noticed, but ZFS will recover the
blocks from mirrors or parity.  "zpool replace" should take care of
the replacement procedure, or you could keep hot spares online.  I
can't comment on hotswapping drives while the machine is on; does this
work in general, or require special hardware?
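
A minimal sketch of both options mentioned here (pool and device names are placeholders):

  # swap in a replacement for a failed device; ZFS resilvers onto it
  zpool replace tank c1t2d0 c1t3d0
  # or keep a hot spare attached so a failed device is taken over automatically
  zpool add tank spare c1t4d0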


= QUESTION #4

What is a good way to back up this HA storage unit? Snapshots will
provide an easy way to do it live, but should it be dumped into a tape
library, or an third offsite zfs pool using zfs send/recieve or ?

ZFS will be no help if all you've got is iscsi targets.  You need
something that knows what those targets hold; whatever client-OS-based
stuff you use other places will do.  Otherwise you end up
storing/backing up a lot more than you need to - filesystem metadata,
et cetera.


= QUESTION #5

Does the following setup work?

BOX 1 (JBOD) -> iscsi export -> BOX 2 ZFS.

In other words, can I setup a bunch of thin storage boxes with low cpu
and ram instead of using sas or fc to supply the jbod to the zfs server?

As Dale mentions, this seems overly complicated.   Consuming iscsi and
producing "different" iscsi doesn't sound like a good idea to me.

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + ISCSI + LINUX QUESTIONS

2007-05-30 Thread Sanjeev Bagewadi

Nathan,

Some answers inline...

Nathan Huisman wrote:


= PROBLEM

To create a disk storage system that will act as an archive point for
user data (Non-recoverable data), and also act as a back end storage
unit for virtual machines at a block level.

= BUDGET

Currently I have about 25-30k to start the project, more could be
allocated in the next fiscal year for perhaps a backup solution.

= TIMEFRAME

I have 8 days to cut a P.O. before our fiscal year ends.

= STORAGE REQUIREMENTS

5-10tb of redundant fairly high speed storage


= QUESTION #1

What is the best way to mirror two zfs pools in order to achieve a sort
of HA storage system? I don't want to have to physically swap my disks
into another system if any of the hardware on the ZFS server dies. If I
have the following configuration what is the best way to mirror these in
near real time?

BOX 1 (JBOD->ZFS) BOX 2 (JBOD-ZFS)

I've seen the zfs send and recieve commands but I'm not sure how well
that would work with a close to real time mirror.


If you want close to realtime mirroring (across pools in this case), AVS would
be a better option in my opinion.
Refer to: http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V1/




= QUESTION #2

Can ZFS be exported via iscsi and then imported as a disk to a linux
system and then be formated with another file system. I wish to use ZFS
as a block level file systems for my virtual machines. Specifically
using xen. If this is possible, how stable is this? How is error
checking handled if the zfs is exported via iscsi and then the block
device formated to ext3? Will zfs still be able to check for errors?
If this is possible and this all works, then are there ways to expand a
zfs iscsi exported volume and then expand the ext3 file system on the
remote host?


Yes, you can create volumes (ZVOLs) in a zpool and export them over iSCSI.
The ZVOL would guarantee the data consistency at the block level.

Expanding the ZVOL should be possible. However, I am not sure if/how 
iSCSI behaves here.

You might need to try it out.
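
A minimal sketch of the ZVOL path being described (names and sizes are placeholders; note that after growing the volume, the partition/ext3 filesystem on the initiator still has to be grown separately):

  # create a 100 GB volume and share it over iSCSI
  zfs create -V 100G tank/vmvol
  zfs set shareiscsi=on tank/vmvol
  # later, grow the volume on the ZFS side
  zfs set volsize=150G tank/vmvol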



= QUESTION #3

How does zfs handle a bad drive? What process must I go through in
order to take out a bad drive and replace it with a good one?


# zpool replace <pool> <old-device> <new-device>

The other option would be to configure hot spares, and they will kick in
automatically when a bad drive is detected.



= QUESTION #4

What is a good way to back up this HA storage unit? Snapshots will
provide an easy way to do it live, but should it be dumped into a tape
library, or an third offsite zfs pool using zfs send/recieve or ?

= QUESTION #5

Does the following setup work?

BOX 1 (JBOD) -> iscsi export -> BOX 2 ZFS.

In other words, can I setup a bunch of thin storage boxes with low cpu
and ram instead of using sas or fc to supply the jbod to the zfs server?


Should be feasible. Just that you would then need a robust LAN and that 
would be flooded.


Thanks and regards,
Sanjeev.

--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80 669 27521 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Overview (rollup) of recent activity on zfs-discuss (04/16 - 04/30)

2007-05-30 Thread Eric Boutilier

For background on what this is, see:

http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200

=
zfs-discuss 04/16 - 04/30
=

Size of all threads during period:

Thread size Topic
--- -
 46   Preferred backup mechanism for ZFS?
 40   ZFS for Linux (NO LISCENCE talk, please)
 34   zfs boot image conversion kit is posted
 31   Status Update before Reinstall?
 20   ZFS+NFS on storedge 6120 (sun t4)
 19   ZFS Boot: Dividing up the name space
 17   HowTo: UPS + ZFS & NFS + no fsync
 16   ZFS on the desktop
 16   Multi-tera, small-file filesystems
 15   concatination & stripe - zfs?
 15   Experience with Promise Tech. arrays/jbod's?
 14   Testing of UFS, VxFS and ZFS
 13   Bottlenecks in building a system
 12   Very Large Filesystems
 11   storage type for ZFS
 11   LZO compression?
 11   Help me understand ZFS caching
 10   zfs send/receive question
 10   ZFS disables nfs/server on a host
 10   Update/append of compressed files
 10   Permanently removing vdevs from a pool
  9   XServe Raid & Complex Storage Considerations
  8   zfs performance on fuse (Linux) compared to other fs
  8   slow sync on zfs
  8   device name changing
  8   ZFS performance model for sustained, contiguous writes?
  8   How much do we really want zpool remove?
  7   opensol-20060605 # zpool iostat -v 1
  7   ZFS and Linux
  6   cow performance penatly
  6   Preferred backup mechanism for ZFS?)
  5   zfs receive and setting properties like compression
  5   need some explanation
  5   learn to quote
  5   ZFS improvements
  5   ZFS boot: 3 smaller glitches with console, /etc/dfs/sharetab and /dev/random
  5   What tags are supported on a zvol?
  5   Restrictions on ZFS boot?
  5   NFSd and dtrace
  4   zpool status -v
  4   zfs question as to sizes
  4   ZFS on FreeBSD vs Solaris...
  4   Volume copy of master system with zfs
  4   How to bind the oracle 9i data file to zfs volumes
  4   Generic "filesystem code" list/community for opensolaris ?
  4   FYI: X4500 (aka thumper) sale
  4   B62 AHCI and ZFS
  3   zfs submounts and permissions with autofs
  3   zfs block allocation strategy
  3   software RAID vs. HW RAID - part III
  3   snapshot features
  3   am I completely insane, or will this work?
  3   adding a disk
  3   ZFS, Multiple Machines and NFS
  3   ZFS on slices
  3   ZFS on NetBSD (was: zfs performance on fuse...)
  3   ZFS boot from compressed zfs
  3   Scrubbing a zpool built on LUNs
  3   SPARC: no cache synchronize
  3   Probability Failure & Calculator
  3   Outdated FAQ entry
  3   Now that FreeBSD has ZFS (basically)
  2   tape-backup software (was: Very Large Filesystems)
  2   raidz pool with a slice on the boot disk
  2   crashed remote system trying to do zfs send / receive
  2   ZFS panic caused by an exported zpool??
  2   ZFS copies and fault tolerance
  2   Metaslab allocation control?
  2   Cheap Array Enclosure for ZFS pool?
  2   Bitrot and panics
  2   6410 expansion shelf
  1   zpool list and df -k difference
  1   zfs performance on fuse (Linux) compared to
  1   zfs
  1   unsubscribe
  1   striping with ZFS and SAN(AMS500 HDS)
  1   solaris - ata over ethernet - zfs - HPC
  1   rootpool notes
  1   problem mounting one of the zfs file system during boot
  1   problem interpreting build_live_dvd.conf.sample file
  1   patched DVD ISO image
  1   hostid/hostname now stored on the label
  1   disruption to IO of zpool causes reboot/boot issue
  1   ZFS status -v and status -x are not in sync
  1   ZFS and Oracle db production deployment
  1   ZFS agent for Symantec/VERITAS VCS
  1   Snapshots properties.
  1   Slow attribute change operations
  1   Samba and ZFS ACL Question
  1   Puzzling ZFS behavior with COMPRESS option
  1   Experiences with zfs/iscsi on T2000s and X4500s?
  1   Drobo
  1   ARC, mmap, pagecache...
  1   A big Thank You to the ZFS team!
  1   120473-05
  1   *** High Praise for ZFS and NFS services ***


Posting activity by person for period:

# of posts  By
--   -