Re: [zfs-discuss] ZFS Administration

2008-04-09 Thread Aaron Epps
Thanks for the info; it looks like this is exactly what I need. However, I'm 
curious why the folks at Sun who are working on this aren't building ZFS GUI 
administration into the existing, web-based ZFS administration tool. It seems 
they're developing a GUI for the GNOME desktop instead. Anyone know why this 
is?
 
 


Re: [zfs-discuss] Zfs send takes 3 days for 1TB?

2008-04-09 Thread Nathan Kroenert
Indeed -

If it were 100Mb/s Ethernet, 1TB would take near enough 24 hours just to 
push that much data...
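
(Back of the envelope, assuming the full 100 Mb/s were usable: 100 Mb/s is 
about 12.5 MB/s, and 1 TB / 12.5 MB/s = 80,000 s, or roughly 22 hours, 
before any protocol overhead.)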

It would be great to see some details of the setup and where the bottleneck 
was. I'd be surprised if ZFS has anything to do with the transfer rate...

But an interesting read anyway. :)

Nathan.



Nicolas Williams wrote:
> On Wed, Apr 09, 2008 at 11:38:03PM -0400, Jignesh K. Shah wrote:
>> Can zfs send utilize multiple streams of data transmission (or some sort 
>> of parallelism)?
>>
>> Interesting read for background
>> http://people.planetpostgresql.org/xzilla/index.php?/archives/338-guid.html
>>
>> Note: zfs send takes 3 days for 1TB to another system
> 
> Huh?  That article doesn't describe how they were moving the zfs send
> stream, whether the limit was the network, ZFS or disk I/O.  In fact,
> it's bereft of numbers.  It even says that the transfer time is not
> actually three days but "upwards of 24 hours."
> 
> Nico

-- 
//
// Nathan Kroenert  [EMAIL PROTECTED] //
// Technical Support Engineer   Phone:  +61 3 9869-6255 //
// Sun Services Fax:+61 3 9869-6288 //
// Level 3, 476 St. Kilda Road  //
// Melbourne 3004   VictoriaAustralia   //
//


Re: [zfs-discuss] Zfs send takes 3 days for 1TB?

2008-04-09 Thread Nicolas Williams
On Wed, Apr 09, 2008 at 11:38:03PM -0400, Jignesh K. Shah wrote:
> Can zfs send utilize multiple streams of data transmission (or some sort 
> of parallelism)?
> 
> Interesting read for background
> http://people.planetpostgresql.org/xzilla/index.php?/archives/338-guid.html
> 
> Note: zfs send takes 3 days for 1TB to another system

Huh?  That article doesn't describe how they were moving the zfs send
stream, whether the limit was the network, ZFS or disk I/O.  In fact,
it's bereft of numbers.  It even says that the transfer time is not
actually three days but "upwards of 24 hours."

Nico


[zfs-discuss] Zfs send takes 3 days for 1TB?

2008-04-09 Thread Jignesh K. Shah
Can zfs send utilize multiple streams of data transmission (or some sort 
of parallelism)?

Interesting read for background
http://people.planetpostgresql.org/xzilla/index.php?/archives/338-guid.html

Note: zfs send takes 3 days for 1TB to another system
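
For concreteness, the usual single-stream approach looks like this (a sketch; 
the pool, snapshot and host names below are made up):

    zfs send tank/data@monday | ssh backuphost zfs receive backup/data

So the question is really whether zfs send can parallelise a single dataset's 
stream, or whether the only option is running one such pipeline per dataset 
in the background.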


Regards,
Jignesh



Re: [zfs-discuss] ZFS boot's - limitation

2008-04-09 Thread Lori Alt


Sharon Daraby wrote:

>Hi,
>
>1. Is ZFS boot still limited to a single disk or a mirrored config,
>and an SMI label only?
>
Yes.

>2. Is there a plan to enable ZFS boot from RAID-Z / RAID-Z2?
>
Yes, but it won't be in the first release.

lori

>
>Thanks in advance
>Sharon,



Re: [zfs-discuss] ZFS Administration

2008-04-09 Thread Keith Bierman

On Apr 9, 2008, at 6:54 PM, Wee Yeh Tan wrote:
> I'm just thinking out loud.  What would be the advantage of having
> periodic snapshots taken within ZFS vs. invoking them from an external
> facility?

I suspect that the people requesting this really want a unified  
management tool (GUI and possibly CLI). Whether the actual
implementation lives inside the filesystem code or is driven by cron
(or an equivalent) is probably irrelevant.
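
For what it's worth, the cron version really is tiny -- a minimal sketch, 
assuming a dataset called tank/home and a 24-slot hourly rotation:

    #!/bin/sh
    # /usr/local/bin/zfs-hourly-snap -- rotate 24 hourly snapshots of tank/home
    hour=`date +%H`
    zfs destroy tank/home@hourly-$hour 2>/dev/null
    zfs snapshot tank/home@hourly-$hour

    # root crontab entry to drive it:
    # 0 * * * * /usr/local/bin/zfs-hourly-snap

which is exactly the sort of thing a unified tool could generate for you.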

Their point, I think, is that we've got this nice "management free"
technology ... except for these bits that still have to be done
separately and that are (to those without Unix experience) somewhat
arcane. If we aspire to the sort of user-friendliness the Mac offers,
then there's work to be done in this area ;>




-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
 Copyright 2008






Re: [zfs-discuss] ZFS Administration

2008-04-09 Thread Wee Yeh Tan
I'm just thinking out loud.  What would be the advantage of having
periodic snapshots taken within ZFS vs. invoking them from an external
facility?

On Thu, Apr 10, 2008 at 1:21 AM, sean walmsley
<[EMAIL PROTECTED]> wrote:
> I haven't used it myself, but the following blog describes an automatic 
> snapshot facility:
>
>  http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10
>
>  I agree that it would be nice to have this type of functionality built into 
> the base product, however.
>
>
>
>



-- 
Just me,
Wire ...
Blog: 


[zfs-discuss] ZFS boot's - limitation

2008-04-09 Thread Sharon Daraby
Hi,

1. Is ZFS boot still limited to a single disk or a mirrored config, 
and an SMI label only?

2. Is there a plan to enable ZFS boot from RAID-Z / RAID-Z2?

Thanks in advance
Sharon,


Re: [zfs-discuss] incorrect/conflicting suggestion in error message on a faulted pool

2008-04-09 Thread Neil Perrin
Haudy,

Thanks for reporting this bug and helping to improve ZFS.
I'm not sure either how you could have added a note to an
existing report. Anyway, I've gone ahead and done that for you
in the "Related Bugs" field, though opensolaris.org doesn't reflect it yet.

Neil.


Haudy Kazemi wrote:
> I have reported this bug here: 
> http://bugs.opensolaris.org/view_bug.do?bug_id=6685676
> 
> I think this bug may be related, but I do not see where to add a note to 
> an existing bug report: 
> http://bugs.opensolaris.org/view_bug.do?bug_id=6633592
> (both bugs refer to ZFS-8000-2Q; however, my report shows a FAULTED pool 
> instead of a DEGRADED pool.)
> 
> Thanks,
> 
> -hk
> 
> Haudy Kazemi wrote:
>> Hello,
>>
>> I'm writing to report what I think is an incorrect or conflicting 
>> suggestion in the error message displayed on a faulted pool that does 
>> not have redundancy (equiv to RAID0?).  I ran across this while testing 
>> and learning about ZFS on a clean installation of NexentaCore 1.0.
>>
>> Here is how to recreate the scenario:
>>
>> [EMAIL PROTECTED]:~$ mkfile 200m testdisk1 testdisk2
>> [EMAIL PROTECTED]:~$ sudo zpool create mybigpool $PWD/testdisk1 
>> $PWD/testdisk2
>> Password:
>> [EMAIL PROTECTED]:~$ zpool status mybigpool
>>   pool: mybigpool
>>  state: ONLINE
>>  scrub: none requested
>> config:
>>
>> NAME  STATE READ WRITE CKSUM
>> mybigpool ONLINE   0 0 0
>>   /export/home/kaz/testdisk1  ONLINE   0 0 0
>>   /export/home/kaz/testdisk2  ONLINE   0 0 0
>>
>> errors: No known data errors
>> [EMAIL PROTECTED]:~$ sudo zpool scrub mybigpool
>> [EMAIL PROTECTED]:~$ zpool status mybigpool
>>   pool: mybigpool
>>  state: ONLINE
>>  scrub: scrub completed after 0h0m with 0 errors on Mon Apr  7 22:09:29 2008
>> config:
>>
>> NAME  STATE READ WRITE CKSUM
>> mybigpool ONLINE   0 0 0
>>   /export/home/kaz/testdisk1  ONLINE   0 0 0
>>   /export/home/kaz/testdisk2  ONLINE   0 0 0
>>
>> errors: No known data errors
>>
>> Up to here everything looks fine.  Now let's destroy one of the virtual 
>> drives:
>>
>> [EMAIL PROTECTED]:~$ rm testdisk2
>> [EMAIL PROTECTED]:~$ zpool status mybigpool
>>   pool: mybigpool
>>  state: ONLINE
>>  scrub: scrub completed after 0h0m with 0 errors on Mon Apr  7 22:09:29 2008
>> config:
>>
>> NAME  STATE READ WRITE CKSUM
>> mybigpool ONLINE   0 0 0
>>   /export/home/kaz/testdisk1  ONLINE   0 0 0
>>   /export/home/kaz/testdisk2  ONLINE   0 0 0
>>
>> errors: No known data errors
>>
>> Okay, still looks fine, but I haven't tried to read/write to it yet.  
>> Try a scrub.
>>
>> [EMAIL PROTECTED]:~$ sudo zpool scrub mybigpool
>> [EMAIL PROTECTED]:~$ zpool status mybigpool
>>   pool: mybigpool
>>  state: FAULTED
>> status: One or more devices could not be opened.  Sufficient replicas 
>> exist for
>> the pool to continue functioning in a degraded state.
>> action: Attach the missing device and online it using 'zpool online'.
>>see: http://www.sun.com/msg/ZFS-8000-2Q
>>  scrub: scrub completed after 0h0m with 0 errors on Mon Apr  7 22:10:36 2008
>> config:
>>
>> NAME  STATE READ WRITE CKSUM
>> mybigpool FAULTED  0 0 0  
>> insufficient replicas
>>   /export/home/kaz/testdisk1  ONLINE   0 0 0
>>   /export/home/kaz/testdisk2  UNAVAIL  0 0 0  cannot 
>> open
>>
>> errors: No known data errors
>> [EMAIL PROTECTED]:~$
>>
>> There we go.  The pool has faulted as I expected to happen because I 
>> created it as a non-redundant pool.  I think it was the equivalent of a 
>> RAID0 pool with checksumming, at least it behaves like one.  The key to 
>> my reporting this is that the "status" message says "One or more devices 
>> could not be opened.  Sufficient replicas exist for the pool to continue 
>> functioning in a degraded state." while the message further down to the 
>> right of the pool name says "insufficient replicas".
>>
>> The verbose status message is wrong in this case.  From other forum/list 
>> posts it looks like that status message is also used for degraded pools, 
>> which isn't a problem, but here we have a faulted pool.  Here's an 
>> example of the same status message used appropriately: 
>> http://mail.opensolaris.org/pipermail/zfs-discuss/2006-April/031298.html
>>
>> Is anyone else able to reproduce this?  And if so, is there a ZFS bug 
>> tracker to report this to? (I didn't see a public bug tracker when I 
>> looked.)
>>
>> Thanks,
>>
>> Haudy Kazemi

Re: [zfs-discuss] ZFS volume export to USB-2 or Firewire?

2008-04-09 Thread Brandon High
On Wed, Apr 9, 2008 at 11:19 AM, Richard Elling <[EMAIL PROTECTED]> wrote:
>  I just get my laptop within WiFi range and mount :-).  I don't see any
>  benefit to a wire which is slower than Ethernet, when an Ethernet
>  port is readily available on almost all modern laptops.

I think what Bob meant was something like Apple's Firewire target mode.

If you turn on the machine while holding down the "T" key, the machine
presents itself as a firewire drive. You can plug it in and access the
disk without booting. Since the host is not booted into an OS, there
is still only one machine accessing the filesystem.

In theory, you could do this today with a ZFS filesystem on a Mac,
since the target mode ability is in the machine's firmware. To do it
with another brand of machine, you'd need a boot image that presented
the drive(s) as a firewire target. There may be micro linux images
that fit on a USB key and allow this.

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche


Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-09 Thread Jonathan Loran

Just to report back to the list...  Sorry for the lengthy post

So I've tested the iSCSI-based ZFS mirror on Sol 10u4, and it does more 
or less work as expected.  If I unplug one side of the mirror - unplug 
or power down one of the iSCSI targets - I/O to the zpool stops for a 
while, perhaps a minute, and then things free up again.  zpool commands 
seem to get unworkably slow, and error messages fly by on the console 
like fire ants running from a flood.  Worst of all, after plugging the 
faulted mirror half back in (without first removing it from the pool), 
it's very hard to bring the faulted device back online:

prudhoe # zpool status
  pool: test
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: resilver completed with 0 errors on Tue Apr  8 16:34:08 2008
config:

        NAME        STATE     READ WRITE CKSUM
        test        DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c2t1d0  FAULTED      0 2.88K     0  corrupted data
            c2t1d0  ONLINE       0     0     0

errors: No known data errors

>>  Comment: why are there now two instances of c2t1d0??  <<


prudhoe # zpool replace test c2t2d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t1d0s0 is part of active ZFS pool test. Please see zpool(1M).

prudhoe # zpool replace -f test c2t2d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c2t1d0s0 is part of active ZFS pool test. Please see zpool(1M).

prudhoe # zpool remove test c2t2d0
cannot remove c2t2d0: no such device in pool

prudhoe # zpool offline test c2t2d0
cannot offline c2t2d0: no such device in pool

prudhoe # zpool online test c2t2d0
cannot online c2t2d0: no such device in pool

>>  OK, get more drastic <<

prudhoe # zpool clear test

prudhoe # zpool status
  pool: test
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: resilver completed with 0 errors on Tue Apr  8 16:34:08 2008
config:

        NAME        STATE     READ WRITE CKSUM
        test        DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c2t1d0  FAULTED      0     0     0  corrupted data
            c2t1d0  ONLINE       0     0     0

errors: No known data errors

>  Frustration setting in.  The error counts are zero, but still two 
> instances of c2t1d0 listed... 

prudhoe # zpool export test

prudhoe # zpool import test

prudhoe # zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
test   12.9G  9.54G  3.34G    74%  ONLINE  -

prudhoe # zpool status
  pool: test
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 1.11% done, 0h20m to go
config:

        NAME        STATE     READ WRITE CKSUM
        test        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0

errors: No known data errors


>  Finally resilvering with the right devices.  The thing I really don't 
> like here is the pool had to be exported and then imported to make this 
> work.  For an NFS server, this is not really acceptable.  Now I know this 
> is ol' Solaris 10u4, but still, I'm surprised I needed to export/import 
> the pool to get it working correctly again.  Anyone know what I did 
> wrong?  Is there a canonical way to online the previously faulted device?
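
One sequence that looks like it should work (a rough sketch using the device 
names above; the naming may need care given the duplicate entries, and I 
haven't verified it beyond this test box) is to detach the stale half and 
re-attach it once the target is reachable again:

prudhoe # zpool detach test c2t1d0
prudhoe # zpool attach test c2t2d0 c2t1d0

zpool attach should kick off the resilver on its own, with no export/import 
needed.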

Anyway, it looks like for now I can get some sort of HA out of this iSCSI 
mirror.  The other pluses are that the pool can self-heal, and reads will be 
spread across both units.

Cheers,

Jon

--- P.S.  Playing with this more before sending this message: if you detach 
the faulted mirror before putting it back online, it all works well.  Hope 
that nothing bounces on your network when you have a failure:

 unplug one iscsi mirror, then: 

prudhoe # zpool status -v
  pool: test
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: scrub completed with 0 errors on Wed Apr  9 1

Re: [zfs-discuss] ZFS ACE limit?

2008-04-09 Thread Mark Shellenbaum
Paul B. Henson wrote:
> One of my colleagues was testing our ZFS prototype (S10U4), and was
> wondering what the limit was for ACEs on a ZFS ACL.
> 
> Empirically, he determined that he could not add more than 1024 ACEs
> either locally or via NFSv4 from a Solaris client (from a Linux NFSv4
> client, it failed adding the 209th ACL, and for that matter, after the
> failure to add that last entry, it was no longer able to read the ACL,
> complaining that "getxattr" failed).
> 
> I haven't found any information regarding a documented limit for either ZFS
> itself or NFSv4.
> 
> Is 1024 the limit? Is there any tuning that can be done to either increase
> or decrease that?
> 

The limit is 1024, which is the same for ufs POSIX draft ACLs.  It can't 
currently be changed.
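
If anyone wants to confirm the ceiling empirically, a loop along these lines 
should stop right around it (a rough sketch -- the file path and user name 
are placeholders, and it assumes chmod A+ doesn't collapse duplicate entries):

    touch /tank/aclfile
    i=0
    while chmod A+user:testuser:read_data:allow /tank/aclfile 2>/dev/null
    do
            i=`expr $i + 1`
    done
    echo "stopped after adding $i ACEs"
    ls -v /tank/aclfile | head        # inspect the resulting ACL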

   -Mark


[zfs-discuss] ZFS ACE limit?

2008-04-09 Thread Paul B. Henson

One of my colleagues was testing our ZFS prototype (S10U4), and was
wondering what the limit was for ACEs on a ZFS ACL.

Empirically, he determined that he could not add more than 1024 ACEs
either locally or via NFSv4 from a Solaris client (from a Linux NFSv4
client, it failed adding the 209th ACL, and for that matter, after the
failure to add that last entry, it was no longer able to read the ACL,
complaining that "getxattr" failed).

I haven't found any information regarding a documented limit for either ZFS
itself or NFSv4.

Is 1024 the limit? Is there any tuning that can be done to either increase
or decrease that?

Thanks...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] ZFS volume export to USB-2 or Firewire?

2008-04-09 Thread Bob Friesenhahn
On Wed, 9 Apr 2008, Richard Elling wrote:

> I just get my laptop within WiFi range and mount :-).  I don't see any
> benefit to a wire which is slower than Ethernet, when an Ethernet
> port is readily available on almost all modern laptops.

Under Windows or Mac, is this as convenient as plugging in a USB or 
Firewire disk, or does it require system-administrator-type knowledge?

If you go to Starbucks, does your laptop attempt to mount your iSCSI 
volume on a (presumably) unreachable network?

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] ZFS volume export to USB-2 or Firewire?

2008-04-09 Thread Richard Elling
Bob Friesenhahn wrote:
> On Wed, 9 Apr 2008, Ross wrote:
>   
>> Well the first problem is that USB cables are directional, and you 
>> don't have the port you need on any standard motherboard.  That
>> 
>
> Thanks for that info.  I did not know that.
>
>   
>> Adding iSCSI support to ZFS is relatively easy since Solaris already 
>> supported TCP/IP and iSCSI.  Adding USB support is much more 
>> difficult and isn't likely to happen since afaik the hardware to do 
>> it just doesn't exist.
>> 
>
> I don't believe that Firewire is directional but presumably the 
> Firewire support in Solaris only expects to support certain types of 
> devices.  My workstation has Firewire but most systems won't have it.
>
> It seemed really cool to be able to put your laptop next to your 
> Solaris workstation and just plug it in via USB or Firewire so it can 
> be used as a "removable" storage device.  Or Solaris could be used on 
> appropriate hardware to create a more reliable portable storage 
> device.  Apparently this is not to be and it will be necessary to deal 
> with iSCSI instead.
>   

I just get my laptop within WiFi range and mount :-).  I don't see any
benefit to a wire which is slower than Ethernet, when an Ethernet
port is readily available on almost all modern laptops.
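
Concretely, that's just something like this (a sketch; the dataset name and 
mount point are made up):

    # on the workstation
    zfs set sharenfs=on tank/scratch

    # on the laptop (Linux syntax shown)
    mount myworkstation:/tank/scratch /mnt
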
 -- richard



Re: [zfs-discuss] ZFS Administration

2008-04-09 Thread sean walmsley
I haven't used it myself, but the following blog describes an automatic 
snapshot facility:

http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10

I agree that it would be nice to have this type of functionality built into the 
base product, however.
 
 


Re: [zfs-discuss] ZFS volume export to USB-2 or Firewire?

2008-04-09 Thread Jonathan Edwards

On Apr 9, 2008, at 11:46 AM, Bob Friesenhahn wrote:
> On Wed, 9 Apr 2008, Ross wrote:
>>
>> Well the first problem is that USB cables are directional, and you
>> don't have the port you need on any standard motherboard.  That
>
> Thanks for that info.  I did not know that.
>
>> Adding iSCSI support to ZFS is relatively easy since Solaris already
>> supported TCP/IP and iSCSI.  Adding USB support is much more
>> difficult and isn't likely to happen since afaik the hardware to do
>> it just doesn't exist.
>
> I don't believe that Firewire is directional but presumably the
> Firewire support in Solaris only expects to support certain types of
> devices.  My workstation has Firewire but most systems won't have it.
>
> It seemed really cool to be able to put your laptop next to your
> Solaris workstation and just plug it in via USB or Firewire so it can
> be used as a "removable" storage device.  Or Solaris could be used on
> appropriate hardware to create a more reliable portable storage
> device.  Apparently this is not to be and it will be necessary to deal
> with iSCSI instead.
>
> I have never used iSCSI so I don't know how difficult it is to use as
> temporary "removable" storage under Windows or OS-X.

i'm not so sure what you're really after, but i'm guessing one of two  
things:

1) a global filesystem?  if so - ZFS will never be globally accessible  
from 2 hosts at the same time without an interposer layer such as NFS  
or Lustre .. zvols could be exported to multiple hosts via iSCSI or FC- 
target but that's only 1/2 the story ..
2) an easy way to export volumes?  agree - there should be some sort  
of semantics that would signal a filesystem is removable and trap on  
USB events when the media is unplugged .. of course you'll have  
problems with uncommitted transactions that would have to roll back on  
the next plug, or somehow be query-able

iSCSI will get you a block/character device level sharing from a zvol  
(pseudo device) or the equivalent of a blob filestore .. you'd have to  
format it with a filesystem, but that filesystem could be a global one  
(eg: QFS) and you could multi-host natively that way.

---
.je


Re: [zfs-discuss] incorrect/conflicting suggestion in error message on a faulted pool

2008-04-09 Thread Haudy Kazemi
I have reported this bug here: 
http://bugs.opensolaris.org/view_bug.do?bug_id=6685676

I think this bug may be related, but I do not see where to add a note to 
an existing bug report: 
http://bugs.opensolaris.org/view_bug.do?bug_id=6633592
(both bugs refer to ZFS-8000-2Q; however, my report shows a FAULTED pool 
instead of a DEGRADED pool.)

Thanks,

-hk

Haudy Kazemi wrote:
> Hello,
>
> I'm writing to report what I think is an incorrect or conflicting 
> suggestion in the error message displayed on a faulted pool that does 
> not have redundancy (equiv to RAID0?).  I ran across this while testing 
> and learning about ZFS on a clean installation of NexentaCore 1.0.
>
> Here is how to recreate the scenario:
>
> [EMAIL PROTECTED]:~$ mkfile 200m testdisk1 testdisk2
> [EMAIL PROTECTED]:~$ sudo zpool create mybigpool $PWD/testdisk1 $PWD/testdisk2
> Password:
> [EMAIL PROTECTED]:~$ zpool status mybigpool
>   pool: mybigpool
>  state: ONLINE
>  scrub: none requested
> config:
>
> NAME  STATE READ WRITE CKSUM
> mybigpool ONLINE   0 0 0
>   /export/home/kaz/testdisk1  ONLINE   0 0 0
>   /export/home/kaz/testdisk2  ONLINE   0 0 0
>
> errors: No known data errors
> [EMAIL PROTECTED]:~$ sudo zpool scrub mybigpool
> [EMAIL PROTECTED]:~$ zpool status mybigpool
>   pool: mybigpool
>  state: ONLINE
>  scrub: scrub completed after 0h0m with 0 errors on Mon Apr  7 22:09:29 2008
> config:
>
> NAME  STATE READ WRITE CKSUM
> mybigpool ONLINE   0 0 0
>   /export/home/kaz/testdisk1  ONLINE   0 0 0
>   /export/home/kaz/testdisk2  ONLINE   0 0 0
>
> errors: No known data errors
>
> Up to here everything looks fine.  Now let's destroy one of the virtual 
> drives:
>
> [EMAIL PROTECTED]:~$ rm testdisk2
> [EMAIL PROTECTED]:~$ zpool status mybigpool
>   pool: mybigpool
>  state: ONLINE
>  scrub: scrub completed after 0h0m with 0 errors on Mon Apr  7 22:09:29 2008
> config:
>
> NAME  STATE READ WRITE CKSUM
> mybigpool ONLINE   0 0 0
>   /export/home/kaz/testdisk1  ONLINE   0 0 0
>   /export/home/kaz/testdisk2  ONLINE   0 0 0
>
> errors: No known data errors
>
> Okay, still looks fine, but I haven't tried to read/write to it yet.  
> Try a scrub.
>
> [EMAIL PROTECTED]:~$ sudo zpool scrub mybigpool
> [EMAIL PROTECTED]:~$ zpool status mybigpool
>   pool: mybigpool
>  state: FAULTED
> status: One or more devices could not be opened.  Sufficient replicas 
> exist for
> the pool to continue functioning in a degraded state.
> action: Attach the missing device and online it using 'zpool online'.
>see: http://www.sun.com/msg/ZFS-8000-2Q
>  scrub: scrub completed after 0h0m with 0 errors on Mon Apr  7 22:10:36 2008
> config:
>
> NAME  STATE READ WRITE CKSUM
> mybigpool FAULTED  0 0 0  
> insufficient replicas
>   /export/home/kaz/testdisk1  ONLINE   0 0 0
>   /export/home/kaz/testdisk2  UNAVAIL  0 0 0  cannot 
> open
>
> errors: No known data errors
> [EMAIL PROTECTED]:~$
>
> There we go.  The pool has faulted as I expected to happen because I 
> created it as a non-redundant pool.  I think it was the equivalent of a 
> RAID0 pool with checksumming, at least it behaves like one.  The key to 
> my reporting this is that the "status" message says "One or more devices 
> could not be opened.  Sufficient replicas exist for the pool to continue 
> functioning in a degraded state." while the message further down to the 
> right of the pool name says "insufficient replicas".
>
> The verbose status message is wrong in this case.  From other forum/list 
> posts it looks like that status message is also used for degraded pools, 
> which isn't a problem, but here we have a faulted pool.  Here's an 
> example of the same status message used appropriately: 
> http://mail.opensolaris.org/pipermail/zfs-discuss/2006-April/031298.html
>
> Is anyone else able to reproduce this?  And if so, is there a ZFS bug 
> tracker to report this to? (I didn't see a public bug tracker when I 
> looked.)
>
> Thanks,
>
> Haudy Kazemi



Re: [zfs-discuss] ZFS volume export to USB-2 or Firewire?

2008-04-09 Thread Bob Friesenhahn
On Wed, 9 Apr 2008, Ross wrote:
>
> Well the first problem is that USB cables are directional, and you 
> don't have the port you need on any standard motherboard.  That

Thanks for that info.  I did not know that.

> Adding iSCSI support to ZFS is relatively easy since Solaris already 
> supported TCP/IP and iSCSI.  Adding USB support is much more 
> difficult and isn't likely to happen since afaik the hardware to do 
> it just doesn't exist.

I don't believe that Firewire is directional but presumably the 
Firewire support in Solaris only expects to support certain types of 
devices.  My workstation has Firewire but most systems won't have it.

It seemed really cool to be able to put your laptop next to your 
Solaris workstation and just plug it in via USB or Firewire so it can 
be used as a "removable" storage device.  Or Solaris could be used on 
appropriate hardware to create a more reliable portable storage 
device.  Apparently this is not to be and it will be necessary to deal 
with iSCSI instead.

I have never used iSCSI so I don't know how difficult it is to use as 
temporary "removable" storage under Windows or OS-X.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] ZFS volume export to USB-2 or Firewire?

2008-04-09 Thread Darren J Moffat
Ross wrote:
> I'm not sure how this is a ZFS function?  You're talking about using ZFS to 
> create a USB drive?  So you'd want a small box running ZFS with a USB 
> interface that you can just plug into other computers to access the storage?
> 
> Well the first problem is that USB cables are directional, and you don't have 
> the port you need on any standard motherboard.  That means you'll need a 
> custom interface board of some kind to give you access to the right kind of 
> USB plug, and then you'd need custom drivers to run that.
> 
> So you need custom hardware followed by a custom USB driver.  However once 
> you've done that you just read ZFS the same way you would read any filesystem.
> 
> Adding iSCSI support to ZFS is relatively easy since Solaris already 
> supported TCP/IP and iSCSI.  Adding USB support is much more difficult and 
> isn't likely to happen since afaik the hardware to do it just doesn't exist.

What do you mean by adding iSCSI support to ZFS ?

Solaris already has iSCSI, ZFS already knows how to share ZVOLs as iSCSI 
targets, ZFS pools can be created on an iSCSI target.  What more is 
needed for iSCSI and ZFS integration ?
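
For reference, the existing bits look roughly like this (a sketch; the pool, 
volume name and size are made up, and it assumes the shareiscsi property in 
current builds):

    # carve out a zvol and expose it as an iSCSI target
    zfs create -V 10g tank/exportvol
    zfs set shareiscsi=on tank/exportvol
    iscsitadm list target        # confirm the target exists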

-- 
Darren J Moffat


Re: [zfs-discuss] ZFS volume export to USB-2 or Firewire?

2008-04-09 Thread Ross
I'm not sure how this is a ZFS function?  You're talking about using ZFS to 
create a USB drive?  So you'd want a small box running ZFS with a USB interface 
that you can just plug into other computers to access the storage?

Well the first problem is that USB cables are directional, and you don't have 
the port you need on any standard motherboard.  That means you'll need a custom 
interface board of some kind to give you access to the right kind of USB plug, 
and then you'd need custom drivers to run that.

So you need custom hardware followed by a custom USB driver.  However once 
you've done that you just read ZFS the same way you would read any filesystem.

Adding iSCSI support to ZFS is relatively easy since Solaris already supported 
TCP/IP and iSCSI.  Adding USB support is much more difficult and isn't likely 
to happen since afaik the hardware to do it just doesn't exist.
 
 