[zfs-discuss] ZFS vs VXFS

2007-07-12 Thread Vishal Dhuru

Hi,
I am looking for a customer-shareable presentation on ZFS vs. VxFS.
Any pointers to a URL, or a presentation attached directly, would be highly appreciated!


thanks,
Vishal Dhuru

Vishal Dhuru
Service Account Manager
Sun Microsystems India Pvt. Ltd.
C5 "A" Wing, F2000
Bandra-Kurla Complex,Bandra(E)
Mumbai 400051 INDIA
WORK +91 22 66978111 x88119
CELL +91 9820322393
Fax +91 22 66978211
Email [EMAIL PROTECTED]


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-12 Thread Al Hopper
On Thu, 12 Jul 2007, Bill Sommerfeld wrote:

> On Thu, 2007-07-12 at 10:45 -0700, Bart Smaalders wrote:
>> 
>> For those of us who've been swapping to zvols for some time, can
>> you describe the failure modes?
>> 
>
> I asked about this during the zfs boot inception review -- the high
> level answer is occasional deadlock in low-memory situations (zfs needs
> to allocate memory to free memory via pageout/swapout, but the system
> doesn't have any to give zfs)
>
> the relevant bug appears to be:
>
> 6528296 system hang while zvol swap space shorted

Yep - I've seen this on Sol 10 Update 3 with swap on the dedicated UFS 
boot disk and a 2-way ZFS mirror that is doing everything else - 
including a zvol-based swap.  The box will eventually get too slow to 
be usable and the only fix is to reboot.  Usually this bug will be 
"tickled" after running a (weekly) zpool scrub (on ~270 GB of data).

The box is an AMD x4400 with 4 GB of RAM.  It drives 22" and 30" 
monitors (Nvidia) and runs about 18 Gnome workspaces.  It's my "window" 
into the world.  To date it has averaged about 6 weeks between 
reboots.  I know that ZFS could be tuned - but Update 4 is not too far 
away.  Aside from this minor irritation ... this box is a pure 
pleasure to work on, and ZFS (and snapshots) totally rocks.
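
(For reference, a zvol-backed swap device of the kind described above is 
typically set up along the following lines; the pool and volume names are 
illustrative, not the actual configuration on this box:)

  # create a 4 GB zvol and add it as a swap device
  zfs create -V 4g tank/swapvol
  swap -a /dev/zvol/dsk/tank/swapvol
  # list configured swap devices to confirm
  swap -l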

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Another zfs dataset [was: Plans for swapping to part of a pool]

2007-07-12 Thread Bill Sommerfeld
On Thu, 2007-07-12 at 16:27 -0700, Richard Elling wrote:
> I think we should up-level this and extend to the community for comments.
> The proposal, as I see it, is to create a simple,
yes

>  contiguous (?) 
as I understand the proposal, not necessarily contiguous. 

> space which sits in a zpool.  As such, it does inherit the behaviour of the
> zpool.  But it does not inherit the behaviour of a file system or zvol
> (no snapshots, copies, etc.)
> 
> While the original reason for this was swap, I have a sneaky suspicion
> that others may wish for this as well, or perhaps something else.
> Thoughts?  (database folks, jump in :-)

the record will hopefully show that I predicted that the database folks
would want to use this when Lori described the concept during ARC
review...

- Bill


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-12 Thread Torrey McMahon
I really don't want to bring this up but ...

Why do we still tell people to use swap volumes? Would we have the same 
sort of issue with the dump device, so that we need to fix it anyway?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Another zfs dataset [was: Plans for swapping to part of a pool]

2007-07-12 Thread Richard Elling
Lori Alt wrote:
  
>>> Treat a pseudo-zvol like you would a slice. 
>>
>> So these new zvol-like things don't support snapshots, etc, right?
>> I take it they work by allowing overwriting of the data, correct?
> yes, and yes
>> Are these a zslice?
> I suppose we could call them that.  That's better than pseudo-zvol.
> "rzvol" has also been suggested.

I think we should up-level this and extend to the community for comments.
The proposal, as I see it, is to create a simple, contiguous (?) space
which sits in a zpool.  As such, it does inherit the behaviour of the
zpool.  But it does not inherit the behaviour of a file system or zvol
(no snapshots, copies, etc.)

While the original reason for this was swap, I have a sneaky suspicion
that others may wish for this as well, or perhaps something else.
Thoughts?  (database folks, jump in :-)
  -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [AVS] Question concerning reverse synchronization of a zpool

2007-07-12 Thread Jim Dunham
Ralf,

> Ralf Ramge wrote:
>> Questions:
>>
>> a) I don't understand why the kernel panics at the moment. the zpool
>> isn't mounted on both systems, the zpool itself seems to be fine  
>> after a
>> reboot ... and switching the primary and secondary hosts just for
>> resyncing seems to force a full sync, which isn't an option.
>>
>> b) I'll try a "sndradm -m -r" the next time ... but I'm not sure if I
>> like that thought. I would accept this if I replaced the primary host
>> with another server, but having to do a 24 TB full sync just  
>> because the
>> replication itself had been disabled for a few minutes would be  
>> hard to
>> swallow. Or did I do something wrong?
>>
>>
> I've answered these questions myself in the meantime (with a nice
> employee of Sun Hamburg giving me the hint). For Google: during a
> reverse sync, neither side of the replication is allowed to have the
> zpool imported, because after the reverse sync finishes, SNDR enters
> replication mode. This renders reverse syncs useless for HA scenarios;
> switch primary & secondary instead.

This is close, but not the actual scenario, and the actual answer is 
much better than one would expect.

Just prior to issuing a reverse sync, neither side of the replication 
is allowed to have the zpool imported. This step is VERY IMPORTANT: 
ZFS will detect the SNDR-replicated writes, and since these writes 
were not issued by the local ZFS, the checksums won't match, ZFS will 
assume some form of data corruption, and it will panic the system.

Instantly after issuing a reverse sync, the zpool(s) on the SNDR 
primary node can be imported, without waiting. Although there may be 
minutes, hours or days of change that need to be replicated from the 
SNDR secondary volumes to the SNDR primary volumes, SNDR supports 
on-demand pull of unreplicated changes.

This improves one's MTTR (Mean Time To Recover), at the cost of 
performance for the duration of the reverse sync, where the duration 
is a function of the amount of change that happened while running on 
the secondary node's volumes.

Also, any changes made to the primary volumes will be replicated to 
the secondary volumes during this period, so that the very instant 
the reverse synchronization operation is complete, both sides of the 
replica will be identical.
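
(For reference, a minimal command sketch of the sequence described above, 
assuming a single SNDR set and a pool named "tank"; the pool name is 
illustrative and the exact sndradm arguments depend on the configured set:)

  # on both hosts: make sure the pool is not imported before the reverse sync
  zpool export tank
  # on the primary host: start a reverse update sync, pulling the changes
  # made on the secondary back to the primary volumes
  sndradm -n -u -r
  # the pool can be imported on the primary immediately; blocks not yet
  # replicated are pulled from the secondary on demand
  zpool import tank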


> -- 
>
> Ralf Ramge
> Senior Solaris Administrator, SCNA, SCSA
>
> Tel. +49-721-91374-3963
> [EMAIL PROTECTED] - http://web.de/
>
> 1&1 Internet AG
> Brauerstraße 48
> 76135 Karlsruhe
>
> Amtsgericht Montabaur HRB 6484
>
> Vorstand: Henning Ahlert, Ralph Dommermuth, Matthias Ehrlich,  
> Andreas Gauger, Matthias Greve, Robert Hoffmann, Norbert Lang,  
> Achim Weiss
> Aufsichtsratsvorsitzender: Michael Scheeren
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Jim Dunham
Solaris, Storage Software Group

Sun Microsystems, Inc.
1617 Southwood Drive
Nashua, NH 03063
Phone x24042 / 781-442-4042
Email: [EMAIL PROTECTED]
http://blogs.sun.com/avs



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-12 Thread Bill Sommerfeld
On Thu, 2007-07-12 at 10:45 -0700, Bart Smaalders wrote:
> 
> For those of us who've been swapping to zvols for some time, can
> you describe the failure modes?
> 

I asked about this during the zfs boot inception review -- the high
level answer is occasional deadlock in low-memory situations (zfs needs
to allocate memory to free memory via pageout/swapout, but the system
doesn't have any to give zfs)

the relevant bug appears to be:

6528296 system hang while zvol swap space shorted







___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-12 Thread Lori Alt

>>>  
>> Treat a pseudo-zvol like you would a slice. 
>
>
> So these new zvol-like things don't support snapshots, etc, right?
> I take it they work by allowing overwriting of the data, correct?
yes, and yes
>
> Are these a zslice?
I suppose we could call them that.  That's better than pseudo-zvol.
"rzvol" has also been suggested.
>
> 
> For those of us who've been swapping to zvols for some time, can
> you describe the failure modes?
> 
>
See bug 6528296 (system hang while zvol swap space shorted).

Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs "no dataset available"

2007-07-12 Thread Kwang-Hyun Baek
Hi, I just built a kernel from source yesterday and tried to run the new 
kernel, and now it's giving me a "no dataset available" error.  My non-root 
user home directory is mounted on a ZFS filesystem and now I can't get to it.  
zpool status showed that my pool needed to be upgraded, so I upgraded it, but 
after the upgrade I still get "no dataset available".  I was running Solaris 
Express B62 before and I got the source from Mercurial yesterday.  The B62 
install was a live-upgraded partition and it's the 2nd (and only active) slice.

Please HELP!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-12 Thread Bart Smaalders
Lori Alt wrote:
> Darren J Moffat wrote:
>> As part of the ARC inception review for ZFS crypto we were asked to 
>> follow up on PSARC/2006/370 which indicates that swap & dump will be 
>> done using a means other than a ZVOL.
>>
>> Currently I have the ZFS crypto project allowing for ephemeral keys to 
>> support using a ZVOL as a swap device.
>>
> Since it seems that we won't be swapping on ZVOLS I need to find out 
> more about how we will be providing swap and dump space in a root pool.
>>   
> The current plan is to provide what we're calling (for lack of a
> better term; I'm open to suggestions) a "pseudo-zvol".  It's
> preallocated space within the pool, logically concatenated by
> a driver to appear like a disk or a slice.  It's meant to be a low
> overhead way to emulate a slice within a pool.  So no COW or
> related zfs features are provided, except for the ability to change
> its size without having to re-partition a disk.  A pseudo-zvol
> will support both swap and dump.
> 
> It will also be possible to use a slice for swapping, just as is
> done now with ufs roots.  But we're hoping that the overhead of
> a pseudo-zvol will be low enough that administrators will
> take advantage of it to simplify installation (it allows a user
> to dedicate an entire disk to a root pool, without having to
> carve out part of it for swapping.)
> 
> Eventually, swapping on true zvols might be supported (the
> problems with swapping to zvols are considered bugs), but
> fixing those bugs is a bigger task than we want to take on
> for the zfs-boot project.  We decided on pseudo-zvols as
> a lower-risk approach for the time being.
> 
>> I suspect that the best answer to encrypted swap is that we do it 
>> independently of which filesystem/device is being used as the swap 
>> device - ie do it inside the VM system.
>> '
>>   
> Treat a pseudo-zvol like you would a slice. 


So these new zvol-like things don't support snapshots, etc, right?
I take it they work by allowing overwriting of the data, correct?

Are these a zslice?


For those of us who've been swapping to zvols for some time, can
you describe the failure modes?


- Bart

-- 
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-12 Thread Darren J Moffat
Thanks for the info.  As for name suggestions, here are a few:

RAW
RVOL
RZVOL

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-12 Thread Lori Alt
Darren J Moffat wrote:
> As part of the ARC inception review for ZFS crypto we were asked to 
> follow up on PSARC/2006/370 which indicates that swap & dump will be 
> done using a means other than a ZVOL.
>
> Currently I have the ZFS crypto project allowing for ephemeral keys to 
> support using a ZVOL as a swap device.
>
> Since it seems that we won't be swapping on ZVOLS I need to find out 
> more about how we will be providing swap and dump space in a root pool.
>   
The current plan is to provide what we're calling (for lack of a
better term; I'm open to suggestions) a "pseudo-zvol".  It's
preallocated space within the pool, logically concatenated by
a driver to appear like a disk or a slice.  It's meant to be a low
overhead way to emulate a slice within a pool.  So no COW or
related zfs features are provided, except for the ability to change
its size without having to re-partition a disk.  A pseudo-zvol
will support both swap and dump.

It will also be possible to use a slice for swapping, just as is
done now with ufs roots.  But we're hoping that the overhead of
a pseudo-zvol will be low enough that administrators will
take advantage of it to simplify installation (it allows a user
to dedicate an entire disk to a root pool, without having to
carve out part of it for swapping.)

Eventually, swapping on true zvols might be supported (the
problems with swapping to zvols are considered bugs), but
fixing those bugs is a bigger task than we want to take on
for the zfs-boot project.  We decided on pseudo-zvols as
a lower-risk approach for the time being.

> I suspect that the best answer to encrypted swap is that we do it 
> independently of which filesystem/device is being used as the swap 
> device - ie do it inside the VM system.
>   
Treat a pseudo-zvol like you would a slice. 
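
(To illustrate "treat it like a slice": a hedged sketch of how swap and dump 
are configured on a plain slice today; a pseudo-zvol would presumably plug 
into the same commands. The device name is illustrative:)

  # add the slice as a swap device and make it the dump device
  swap -a /dev/dsk/c0t0d0s1
  dumpadm -d /dev/dsk/c0t0d0s1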

Lori

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cluster File System Use Cases

2007-07-12 Thread Brian Hechinger
On Wed, Feb 28, 2007 at 09:54:37AM -0600, Dean Roehrich wrote:
> On Wed, Feb 28, 2007 at 07:23:44AM -0800, Thomas Roach wrote:
> 
> And yes, we're actively pushing the SAM-QFS code through the open-source
> process.  Here's the first blog entry:
> 
> http://blogs.sun.com/samqfs/entry/welcome_to_sam_qfs_weblog

I see that libSAM has been released.  How long until we see QFS out in the
wild?

-brian
-- 
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my burger is cooked thoroughly."  -- Jonathan 
Patschke
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool on USB flash disk

2007-07-12 Thread Menno Lageman
Martin Man wrote:
> [EMAIL PROTECTED] wrote:
>> [EMAIL PROTECTED] wrote:
>>> it might be a faq or known problem, but it's rather dangerous, is this
>>> being worked ON? usb stick removal should not panic the kernel, should
>> it?
>>
>> I think the default behavior is that if the pool is unprotected (or at an
>> unprotected state via redundancy failure on mirror or raidz(2)) and you
>> lose a device the system panics. This is a known issue/bug/feature (pick
>> one depending on your view) that has been discussed multiple times on the
>> list.
> 
> discussed yes, I think I remember that, reported? being worked on?

Martin,

see:
6322646 ZFS should gracefully handle all devices failing (when writing)

Menno
-- 
Menno Lageman - Sun Microsystems - http://blogs.sun.com/menno
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Again ZFS with expanding LUNs!

2007-07-12 Thread Gernot Stocker
Hello,
I know that you had this discussion a few days ago, but I'm in the 
installation phase of our new production servers and I intend to migrate the 
data from UFS volumes to ZFS volumes in the near future. For doing this I must 
be ABSOLUTELY sure that I can resize the SAN LUNs, because during the last 4 
years I had to double the LUN size every year. I tried to resize a test volume 
following some hints from this forum but didn't succeed.

My Environment and procedure of doing it:
* Sunfire X4600 running Solaris 10 (11/06), accessing a Compaq EVA3000 SAN 
through MPxIO

* Procedure of creation:
  > zpool create evatestpool c5t600508B4000104ED00016143d0
  > zfs create evatestpool/testvol1
  > zfs set mountpoint=/testmnt/testvol1 evatestpool/testvol1

* Up to this point everything is fine, but then the resize had to follow:
  > zpool export evatestpool
  > then the resize of the LUN was performed within the SAN
  > format -e
  > Searching for disks...done
  > 
  > 
  > AVAILABLE DISK SELECTIONS:
  >0. c1t0d0 
  >   /[EMAIL PROTECTED],0/pci108e,[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
  >1. c3t0d0 
  >   /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1000,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
  >2. c5t600508B4000104ED00016143d0 
  >   /scsi_vhci/[EMAIL PROTECTED]
  >  Specify disk (enter its number): 2
  >  selecting c5t600508B4000104ED00016143d0
  > [disk formatted]
  > 
  > [...]
  >  format> l
  > [0] SMI Label
  > [1] EFI Label
  > Specify Label type[1]:
  > Ready to label disk, continue? y
  > 
  > format> q
  > zpool import evatestpool

And the size is still the same.
Did I miss something? 

Thanks for your answer,
Gernot
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool on USB flash disk

2007-07-12 Thread Martin Man
[EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] wrote:
>> it might be a faq or known problem, but it's rather dangerous, is this
>> being worked ON? usb stick removal should not panic the kernel, should
> it?
> 
> I think the default behavior is that if the pool is unprotected (or at an
> unprotected state via redundancy failure on mirror or raidz(2)) and you
> lose a device the system panics. This is a known issue/bug/feature (pick
> one depending on your view) that has been discussed multiple times on the
> list.

discussed yes, I think I remember that, reported? being worked on?

> -Wade

thanx,
Martin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool on USB flash disk

2007-07-12 Thread Wade . Stuart




[EMAIL PROTECTED] wrote on 07/12/2007 07:28:29 AM:

> Hi all,
>
> Nevada build 67, USB flash Voyager, ...
>
> created zpool on one of the FDISK partitions on the flash drive, zpool
> import export works fine,
>
> tried to take the USB stick out of the system while the pool is mounted,
> ..., 3 seconds, bang, kernel down, core dumped, friendly reboot on the
> way...
>
> it might be a faq or known problem, but it's rather dangerous, is this
> being worked ON? usb stick removal should not panic the kernel, should
it?

I think the default behavior is that if the pool is unprotected (or at an
unprotected state via redundancy failure on mirror or raidz(2)) and you
lose a device the system panics. This is a known issue/bug/feature (pick
one depending on your view) that has been discussed multiple times on the
list.

-Wade

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to list pools that are not imported

2007-07-12 Thread Martin Man
Menno Lageman wrote:
> Martin Man wrote:
>>
>> I insert the stick, and how can I figure out what pools are available 
>> for 'zpool import' without knowing their name?
>>
>> zpool list does not seem to be listing those,
>>
> 
> A plain 'zpool import' should do the trick.

yep, works like a charm, that one was easy, thanx... :-)

> 
> Menno

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to list pools that are not imported

2007-07-12 Thread Tim Foster
Hi Martin,

On Thu, 2007-07-12 at 14:50 +0200, Martin Man wrote:
> again might be a FAQ, but imagine that I have a pool on a USB stick,
> 
> I insert the stick, and how can I figure out what pools are available 
> for 'zpool import' without knowing their name?
> 
> zpool list does not seem to be listing those,

"zpool import" should show the pools that are available for import -
does this help?
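
(For reference, a hedged example of what that looks like; the pool name, id 
and device shown here are purely illustrative:)

  # zpool import
    pool: usbpool
      id: 1234567890123456789
   state: ONLINE
  action: The pool can be imported using its name or numeric identifier.
  config:

          usbpool     ONLINE
            c2t0d0p1  ONLINE

  # zpool import usbpool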

cheers,
tim
-- 
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [AVS] Question concerning reverse synchronization of a zpool

2007-07-12 Thread Ralf Ramge
Ralf Ramge wrote:
> Questions:
>
> a) I don't understand why the kernel panics at the moment. The zpool 
> isn't mounted on both systems, the zpool itself seems to be fine after a 
> reboot ... and switching the primary and secondary hosts just for 
> resyncing seems to force a full sync, which isn't an option.
>
> b) I'll try a "sndradm -m -r" the next time ... but I'm not sure if I 
> like that thought. I would accept this if I replaced the primary host 
> with another server, but having to do a 24 TB full sync just because the 
> replication itself had been disabled for a few minutes would be hard to 
> swallow. Or did I do something wrong?
>
>   
I've answered these questions myself in the meantime (with a nice 
employee of Sun Hamburg giving me the hint). For Google: during a 
reverse sync, neither side of the replication is allowed to have the 
zpool imported, because after the reverse sync finishes, SNDR enters 
replication mode. This renders reverse syncs useless for HA scenarios; 
switch primary & secondary instead.

> c) What performance can I expect from an X4500 with a 40-disk zpool, when 
> using slices, compared to LUNs? Any experiences?
>
>   
Any input on this question will still be appreciated :-)

-- 

Ralf Ramge
Senior Solaris Administrator, SCNA, SCSA

Tel. +49-721-91374-3963 
[EMAIL PROTECTED] - http://web.de/

1&1 Internet AG
Brauerstraße 48
76135 Karlsruhe

Amtsgericht Montabaur HRB 6484

Vorstand: Henning Ahlert, Ralph Dommermuth, Matthias Ehrlich, Andreas Gauger, 
Matthias Greve, Robert Hoffmann, Norbert Lang, Achim Weiss 
Aufsichtsratsvorsitzender: Michael Scheeren

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to list pools that are not imported

2007-07-12 Thread Menno Lageman
Martin Man wrote:
> Hi all,
> 
> again might be a FAQ, but imagine that I have a pool on a USB stick,
> 
> I insert the stick, and how can I figure out what pools are available 
> for 'zpool import' without knowing their name?
> 
> zpool list does not seem to be listing those,
> 

A plain 'zpool import' should do the trick.

Menno
-- 
Menno Lageman - Sun Microsystems - http://blogs.sun.com/menno
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to list pools that are not imported

2007-07-12 Thread Martin Man
Hi all,

again might be a FAQ, but imagine that I have a pool on a USB stick,

I insert the stick, and how can I figure out what pools are available 
for 'zpool import' without knowing their name?

zpool list does not seem to be listing those,

thanx,
Martin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and IBM's TSM

2007-07-12 Thread Hans-Juergen Schnitzer

John wrote:

>> Our main problem with TSM and ZFS is currently that there seems to be
>> no efficient way to do a disaster restore when the backup resides on
>> tape - due to the large number of filesystems/TSM filespaces.
>> The graphical client (dsmj) does not work at all and with dsmc one
>> has to start a separate restore session for each filespace.
>> This results in an impractically large number of tape mounts.
>>
>> Hans

> Not sure I quite follow...   Here's my dsmc output:
> 
> tsm> q file
>   #     Last Incr Date      Type     File Space Name
> ---     --------------      ----     ---------------
>   1   07/11/07   15:08:21   UFS      /
>   2   00/00/00   00:00:00   UNKNOWN  /sapdb
>   3   07/11/07   15:08:35   UFS      /users
>   4   07/11/07   15:08:35   UFS      /vol1
>   5   07/11/07   15:12:21   UNKNOWN  /zone_appsvt/sap
>   6   07/11/07   15:19:21   UNKNOWN  /zone_docft0/docft0_index_01
>   7   07/11/07   15:19:21   UNKNOWN  /zone_docft0/docu
>   8   07/11/07   15:17:35   UNKNOWN  /zone_sapapb/backups
>   9   07/11/07   15:17:34   UNKNOWN  /zone_sapapb/dc_data
>  10   07/11/07   15:17:34   UNKNOWN  /zone_sapapb/sap
>  11   07/11/07   15:13:59   UNKNOWN  /zone_sapapb/sapdb
>  12   07/11/07   15:16:56   UNKNOWN  /zone_sapapb/sapmnt
>  13   07/11/07   15:12:06   UNKNOWN  /zone_sapjdt/sap
>  14   07/11/07   15:12:06   UNKNOWN  /zone_sapjdt/sapmnt
>  15   07/11/07   15:12:16   UFS      /zones/appsvt
>  16   07/11/07   15:17:51   UFS      /zones/docft0
>  17   07/11/07   15:13:18   UFS      /zones/sapapb
>  18   07/11/07   15:08:38   UFS      /zones/sapjdt
> 
> So yes, my ZFS filesystems are of type "UNKNOWN" but I go into dsmj and I can see them 
> all.  I can select multiples... I'm assuming as long as the ZFS filesystems are there everything 
> would restore properly...  I've got an e-mail into our TSM admin to see if the "restore" 
> has been tested yet...  (We're still in the testing phase... LOL)
 


We tried to restore 3 TB from about 1500 TSM filespaces. dsmj simply
hangs after a while without restoring anything. According to IBM
support, dsmj is not suitable for large data restores, and they pointed
us to dsmc.
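
(For reference, a rough sketch of what that per-filespace loop looks like 
with dsmc; the filespace names and restore target are illustrative, and 
-subdir=yes is standard dsmc restore syntax:)

  # one restore session per ZFS filesystem / TSM filespace
  dsmc restore "/zone_sapapb/sap/*" /restore/zone_sapapb/sap/ -subdir=yes
  dsmc restore "/zone_sapapb/sapdb/*" /restore/zone_sapapb/sapdb/ -subdir=yes
  # ...repeated once per filespace -- with ~1500 filespaces that means
  # ~1500 sessions and correspondingly many tape mounts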

Hans



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS pool on USB flash disk

2007-07-12 Thread Martin Man
Hi all,

Nevada build 67, USB flash Voyager, ...

created zpool on one of the FDISK partitions on the flash drive, zpool 
import export works fine,

tried to take the USB stick out of the system while the pool is mounted, 
..., 3 seconds, bang, kernel down, core dumped, friendly reboot on the 
way...

it might be a faq or known problem, but it's rather dangerous, is this 
being worked ON? usb stick removal should not panic the kernel, should it?

thanx,
Martin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and IBM's TSM

2007-07-12 Thread John
> 
> Our main problem with TSM and ZFS is currently that
> there seems to be
> no efficient way to do a disaster restore when the
> backup
> resides on tape - due to the large number of
> filesystems/TSM filespaces.
> The graphical client (dsmj) does not work at all and
> with dsmc one
> has to start a separate restore session for each
> filespace.
> This results in an impractically large number of tape
> mounts.
> 
> Hans
> 
> 

Not sure I quite follow...   Here's my dsmc output:

tsm> q file
  #     Last Incr Date      Type     File Space Name
---     --------------      ----     ---------------
  1   07/11/07   15:08:21   UFS      /
  2   00/00/00   00:00:00   UNKNOWN  /sapdb
  3   07/11/07   15:08:35   UFS      /users
  4   07/11/07   15:08:35   UFS      /vol1
  5   07/11/07   15:12:21   UNKNOWN  /zone_appsvt/sap
  6   07/11/07   15:19:21   UNKNOWN  /zone_docft0/docft0_index_01
  7   07/11/07   15:19:21   UNKNOWN  /zone_docft0/docu
  8   07/11/07   15:17:35   UNKNOWN  /zone_sapapb/backups
  9   07/11/07   15:17:34   UNKNOWN  /zone_sapapb/dc_data
 10   07/11/07   15:17:34   UNKNOWN  /zone_sapapb/sap
 11   07/11/07   15:13:59   UNKNOWN  /zone_sapapb/sapdb
 12   07/11/07   15:16:56   UNKNOWN  /zone_sapapb/sapmnt
 13   07/11/07   15:12:06   UNKNOWN  /zone_sapjdt/sap
 14   07/11/07   15:12:06   UNKNOWN  /zone_sapjdt/sapmnt
 15   07/11/07   15:12:16   UFS      /zones/appsvt
 16   07/11/07   15:17:51   UFS      /zones/docft0
 17   07/11/07   15:13:18   UFS      /zones/sapapb
 18   07/11/07   15:08:38   UFS      /zones/sapjdt

So yes, my ZFS filesystems are of type "UNKNOWN" but I go into dsmj and I can 
see them all.  I can select multiples... I'm assuming as long as the ZFS 
filesystems are there everything would restore properly...  I've got an e-mail 
into our TSM admin to see if the "restore" has been tested yet...  (We're still 
in the testing phase... LOL)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Plans for swapping to part of a pool

2007-07-12 Thread Darren J Moffat
As part of the ARC inception review for ZFS crypto we were asked to 
follow up on PSARC/2006/370 which indicates that swap & dump will be 
done using a means other than a ZVOL.

Currently I have the ZFS crypto project allowing for ephemeral keys to 
support using a ZVOL as a swap device.

Since it seems that we won't be swapping on ZVOLS I need to find out 
more about how we will be providing swap and dump space in a root pool.

I suspect that the best answer to encrypted swap is that we do it 
independently of which filesystem/device is being used as the swap 
device - ie do it inside the VM system.

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss