Re: [zfs-discuss] Re: Re: Deterioration with zfs performance and recent zfs bits?

2007-06-05 Thread Robert Milkowski
Hello Jürgen,

Monday, June 4, 2007, 7:09:59 PM, you wrote:

>> > Patching zfs_prefetch_disable = 1 has helped
>> It's my belief this mainly aids scanning metadata. My
>> testing with rsync and yours with find (and seen with
>> du & ; zpool iostat -v 1 ) bears this out.
>> This is mainly tracked in bug 6437054 "vdev_cache: wise up or die":
>> http://www.opensolaris.org/jive/thread.jspa?messageID=42212
>> 
>> So for linking your code it might help, but if one ran
>> a clean build down the tree, it would hurt compile times.



JK> I think the slowdown that I'm observing is due to the changes
JK> that have been made for 6542676 "ARC needs to track meta-data
JK> memory overhead".

JK> There is now a limit of 1/4 of arc size ("arc_meta_limit")
JK> for zfs meta-data.

Not good - I have some systems with TBs of meta-data mostly.
I guess there's some tunable...

-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



[zfs-discuss] Re: Re: Re: Deterioration with zfs performance and recent zfs bits?

2007-06-05 Thread Jürgen Keil
> Hello Jürgen,
> 
> Monday, June 4, 2007, 7:09:59 PM, you wrote:
> 
> >> > Patching zfs_prefetch_disable = 1 has helped
> >> It's my belief this mainly aids scanning metadata. My
> >> testing with rsync and yours with find (and seen with
> >> du & ; zpool iostat -v 1 ) bears this out.
> >> This is mainly tracked in bug 6437054 "vdev_cache: wise up or die":
> >>   http://www.opensolaris.org/jive/thread.jspa?messageID=42212
> >> 
> >> So for linking your code it might help, but if one ran
> >> a clean build down the tree, it would hurt compile times.
> 
> 
> JK> I think the slowdown that I'm observing is due to the changes
> JK> that have been made for 6542676 "ARC needs to track meta-data
> JK> memory overhead".
> JK >
> JK> There is now a limit of 1/4 of arc size ("arc_meta_limit")
> JK> for zfs meta-data.
> 
> Not good - I have some systems with TBs of meta-data mostly.
> I guess there's some tunable...

AFAICT, you can patch the kernel global variable "arc_meta_limit"
at run time, using mdb -wk (the variable should be visible in build 66
or newer).

But you can't tune it via an /etc/system "set" command.
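
Something along these lines (the 0x20000000 = 512MB value below is only
an example -- pick whatever makes sense for your ARC size):

  # read the current value (64-bit, shown in hex)
  echo "arc_meta_limit/J" | mdb -k

  # set it to 512MB
  echo "arc_meta_limit/Z 0x20000000" | mdb -kw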
 
 


[zfs-discuss] Re: ZFS and dynamic LUN reconfiguration

2007-06-05 Thread Yan
So does anyone know how to grow the LUN(s) that are part of a pool,
and have ZFS detect the new size of the LUN?
 
 


Re: [zfs-discuss] ZFS Boot manual setup in b65

2007-06-05 Thread Tim Foster
hi Doug,

On Mon, 2007-06-04 at 12:25 -0700, Douglas Atique wrote:
> I have been trying to setup a boot ZFS filesystem since b63 and found
> out about bug 6553537 that was preventing boot from ZFS filesystems
> starting from b63. First question is whether b65 has solved the
> problem as was planned on the bug page.

I'll verify this today (note that this was only for netinstall/pfinstall
ZFS boot - a manually set up ZFS boot should work fine).

>  Second question is: as I cannot boot successfully from a ZFS
> filesystem after following the ZFS Boot Manual Setup instructions
> (http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/) due
> to a panic down the call chain of vfs_mountroot, what else (other than
> the bug, that is) could be wrong?

There are a number of things you could check (example commands for the
first three follow the list):

 1. Is your root pool one of the supported types (mirror or single-disk)?

 2. There's a bug with compression at the moment - the root pool
and the top level pool need to have compression set to off.
( 6538017 )

 3. Check that you've got an SMI label on the pool you're trying to
boot from ( more at
http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling )

 4. Can you make sure your BIOS is booting from the correct device?

 5. (a bit more drastic) Can you run the script pointed to at the top of
that page and set up ZFS boot that way? That would rule out pilot
error in following the manual steps.
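
Roughly, the commands for checks 1-3 would be something like this
(the pool and device names are only examples):

  zpool status -v mypool          # 1: pool layout - mirror or single disk
  zfs get -r compression mypool   # 2: should report "off" everywhere
  prtvtoc /dev/rdsk/c0d0s0        # 3: an SMI (VTOC) label prints the
                                  #    traditional slice table here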

After that, could you verify that changes you make to the grub menu entry
in //boot/grub/menu.lst ( eg. change the "title" line in the ZFS
boot entry, adding some random text) actually show up in the menu that
grub displays?

Let me know if any of these suggestions help.

cheers,
tim


-- 
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf



Re: [zfs-discuss] Re: ZFS and dynamic LUN reconfiguration

2007-06-05 Thread Selim Daoud

From what I know, this operation goes via a zpool export, re-label
(with format), then zpool import.
It's not done online.
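
Roughly (pool and device names are only examples, and be careful -
re-labelling the wrong disk is destructive):

  zpool export mypool
  format -e c2t0d0        # adjust the label/slices for the grown LUN
  zpool import mypool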

On 6/5/07, Yan <[EMAIL PROTECTED]> wrote:

So does anyone know how to grow the LUN(s) that are part of a pool,
and have ZFS detect the new size of the LUN?




Re: [zfs-discuss] ZFS Boot manual setup in b65

2007-06-05 Thread Tim Foster
Hi Doug,

On Tue, 2007-06-05 at 06:45 -0700, Douglas Atique wrote:
> Hi, Tim. Thanks for your hints. 

No problem

> Comments on each one follow (marked with "Doug:" and in blue).

html mail :-/

> Tim Foster <[EMAIL PROTECTED]> wrote:
> There's a number of things you could check:
> 
> 1. Is your root pool one of the supported types (mirror or
> single-disk)
> 
> Doug: I don't know. I have created a pool in a slice of my main
> disk. This is the layout:

I assume your pool has just one slice in it. "zpool status -v <poolname>"
tells you the pool layout. I've a similar (but less complicated) layout
on my machine here (with nv_64, admittedly).

Did you installgrub the new zfs-capable version of grub onto c0d0s0?
(I'm assuming you did, otherwise the "bootfs" keyword in the ZFS entry
would fail.)  I haven't tried booting a ZFS dataset from grub installed
on a UFS disk.
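
If not, the usual invocation looks something like this (standard
stage1/stage2 paths, and assuming c0d0s0 is your boot slice):

  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0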

> 2. There's a bug with compression at the moment - the root
> pool,
> and the top level pool need to have compression set to off.
> ( 6538017 )
> Doug: I don't set compression on deliberately. Could it be on
> by default?

Nope, I don't think so - check with "zfs get compression <dataset>"

> 3. Check that you've got an SMI label on the pool you're
> trying to
> boot from ( more at 
> 
> http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling )
> Doug: I guess it is, because of the many slices. But how could
> I check (read-only, non-destructively)

Sounds like you've already got an SMI label if you can boot a UFS-rooted
system from that disk.

> 4. Can you make sure your bios is booting from the correct
> device
> Doug: I'm sure. That's the only disk I have. S10 and Solaris
> Express from UFS all boot correctly.

Okay.

> 5. (a bit more drastic) Can you run the script pointed to at
> the top of
> that page and setup ZFS boot that way, which could account for
> pilot
> error in following the manual steps.
> Doug: Haven't tried that, but I would really like to do it by
> hand to make sure I understand what is going on.

I agree.

> After that, could you verify that by changing the grub menu
> entry
> in //boot/grub/menu.lst ( eg. change the "title" line in the
> ZFS
> boot entry, adding some random text) that you see those
> changes
> reflected in the menu that grub actually displays ?
> Doug: This is my ZFS entry in my menu.lst:
> root (hd0,0,f)
> bootfs snv/b65
> kernel$ /boot/platform/i86pc/kernel/$ISADIR/unkx -B
> $ZFS-BOOTFS
> module$ /platform/i86pc/$ISADIR/boot_archive

And this entry shows up when you boot in grub ?

There's a typo in the above btw, "unkx", but I'm sure that was just an
error pasting into this mail (otherwise you wouldn't have even got the
banner printed)

Does your /etc/vfstab file on the snv/b65 ZFS filesystem contain a
single entry for /, which should look like:

snv/b65 - / zfs - no -

and your ufs root entry should have been changed to :

/dev/dsk/c1d0s0 /dev/rdsk/c1d0s0 /ufsroot ufs - yes -

(or removed)
Can't think of anything else that might be wrong unfortunately.

cheers,
tim
-- 
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf



[zfs-discuss] Stop a resilver

2007-06-05 Thread John
I'm in a bit of a bind...

I did a "replace" and the resilver has started properly. Unfortunately I need 
to now abort the "replace".  Is there a way to do this?  Can I do something like 
take the new device "offline"?

thanks
 
 


[zfs-discuss] Re: ZFS Boot manual setup in b65

2007-06-05 Thread mario heimel
hi,


According to the following link, there is no problem with b65:

http://www.opensolaris.org/os/community/zfs/boot/netinstall
 
 


Re: [zfs-discuss] Stop a resilver

2007-06-05 Thread Bill Moore
Did you try issuing:

zpool detach your_pool_name new_device

That should detach the new device and stop the resilver.  If you just
want to stop the resilver (and leave the device), you should be able to
do:

zpool scrub -s your_pool_name

Which will stop the scrub/resilver.
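
Either way, you can confirm afterwards with:

  zpool status your_pool_name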



--Bill


On Tue, Jun 05, 2007 at 07:57:27AM -0700, John wrote:
> I'm in a bit of a bind...
> 
> I did a "replace" and the resilver has started properly. Unfortunately
> I need to now abort the "replace".  Is there a way to do this?  Can I do
> something like take the new device "offline"?
> 
> thanks


Re: [zfs-discuss] Slashdot Article: Does ZFS Obsolete Expensive NAS/SANs?

2007-06-05 Thread Jesus Cea

eric kustarz wrote:
> There's going to be some very good stuff for ZFS in s10u4, can you
> please update the issues *and* features when it comes out?

Of course. That was my commitment when I decided to create the "beware"
section in the wikipedia article.

It would be very nice if the improvements were documented somewhere :-)

-- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/
[EMAIL PROTECTED] http://www.argo.es/~jcea/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:[EMAIL PROTECTED] _/_/_/_/  _/_/_/_/_/
   _/_/  _/_/_/_/  _/_/  _/_/
"Things are not so easy"  _/_/  _/_/_/_/  _/_/_/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/_/_/_/  _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz


Re: [zfs-discuss] Slashdot Article: Does ZFS Obsolete Expensive NAS/SANs?

2007-06-05 Thread eric kustarz


It would be very nice if the improvements were documented somewhere :-)




Cindy has been doing a good job of putting the new features into the  
admin guide:

http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf

Check out the "What's New in ZFS?" section.

eric


Re: [zfs-discuss] Slashdot Article: Does ZFS Obsolete Expensive NAS/SANs?

2007-06-05 Thread Jesus Cea

eric kustarz wrote:
> Cindy has been doing a good job of putting the new features into the
> admin guide:
> http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
> 
> Check out the "What's New in ZFS?" section.

I will update the Wikipedia entry when Solaris 10 U4 is published :)

-- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/
[EMAIL PROTECTED] http://www.argo.es/~jcea/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:[EMAIL PROTECTED] _/_/_/_/  _/_/_/_/_/
   _/_/  _/_/_/_/  _/_/  _/_/
"Things are not so easy"  _/_/  _/_/_/_/  _/_/_/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/_/_/_/  _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz


Re: [zfs-discuss] zfs send/receive incremental

2007-06-05 Thread Matthew Ahrens

Vic Engle wrote:

Hi All,

Just curious about how the incremental send works. Is it changed blocks or
files and how are the changed blocks or files identified?


It's done at the DMU layer, based on blocks of objects.  We use the 
block-pointer relationships (ie, the on-disk structure of files) to quickly 
find only the changed blocks.
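
As a simplified example (dataset and snapshot names are made up), an
incremental stream carries only the blocks that changed between the two
snapshots:

  zfs snapshot pool/fs@monday
  # ... data changes during the day ...
  zfs snapshot pool/fs@tuesday
  zfs send -i pool/fs@monday pool/fs@tuesday | ssh host zfs recv backup/fs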


--matt


Re: [zfs-discuss] Re: zfs send/receive incremental

2007-06-05 Thread Matthew Ahrens

Starfox wrote:

First time around, create a snapshot and send it to remote:
  zfs snapshot master/[EMAIL PROTECTED]
  zfs send master/[EMAIL PROTECTED] | ssh mirror zfs recv backup/mirrorfs

Once that's done, [EMAIL PROTECTED], correct?


More accurately, master/[EMAIL PROTECTED] == backup/[EMAIL PROTECTED]

> So now I'm running the script manually

and it's complaining that the incremental source doesn't match, and
apparently no way to tell which [EMAIL PROTECTED] is the source short of
trial-and-error even if I kept all my incremental mirrors?  Is there a
clean way around this?


I'd say either (a) have your script check to see if the send|recv was 
successful, or (b) have it check what snapshots are available at the other 
side, and start sending incrementals from there.  Eg:


# rough, runnable sketch - dataset and snapshot names are placeholders
fs=myfs
lastsnap=$(ssh remote zfs list -H -t snapshot -o name |
    grep "^pool/recvd/$fs@" | tail -1 | cut -d@ -f2)
while true; do
    newsnap=mirror-$(date +%Y%m%d%H%M%S)
    zfs snapshot pool/$fs@$newsnap
    zfs send -i $lastsnap pool/$fs@$newsnap |
        ssh remote zfs recv -d pool/recvd || break
    lastsnap=$newsnap
done



I guess a better scenario would be that I accidentally destroyed the last
[EMAIL PROTECTED] (so no source snapshot to -i).  Would my only recourse be to
bite the bullet and do a zfs send [EMAIL PROTECTED] (aka full)?


No, if you have the old snaps on both sides, you can simply send an 
incremental from the last common snap (eg, zfs send -i mirrorX-1 [EMAIL PROTECTED])



Another question is if I wanted to send a snapshot as a snapshot, I can do:
  zfs snapshot master/[EMAIL PROTECTED]
  zfs send master/[EMAIL PROTECTED] | ssh mirror zfs recv backup/[EMAIL PROTECTED]

And now [EMAIL PROTECTED]@savesnap, right?  But that would involve
sending the whole stream, which is however much data the filesystem was
consuming at that point in time?


Yes.  But this will create a new filesystem backup/mirrorfs on the receiving 
side.  I'm not sure I understand your goal -- zfs send always sends a 
snapshot, whether incremental or full.


--matt


Re: [zfs-discuss] ZFS Wikipedia article (was: Slashdot Article)

2007-06-05 Thread Matthew Ahrens

Jesus Cea wrote:


eric kustarz wrote:

Cindy has been doing a good job of putting the new features into the
admin guide:
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf

Check out the "What's New in ZFS?" section.


I will update the Wikipedia entry when Solaris 10 U4 is published :)


I really believe that ZFS *is* the OpenSolaris version, and citing 
deficiencies which do not exist in ZFS (the OpenSolaris version) is not 
appropriate.  Perhaps a simple disclaimer: "New features and bug fixes are 
integrated into Solaris 10 several months after they are in OpenSolaris". 
But, I don't feel it would be appropriate for me to edit the Wikipedia page, 
so I'll just make my comments here.


By comparison, would you consider the development branch of the linux kernel 
(the odd-numbered 2.x releases) to not exist for the sake of Wikipedia 
documentation?


--matt


Re: [zfs-discuss] ZFS Boot manual setup in b65

2007-06-05 Thread Marko Milisavljevic

I have also been trying to figure out the best strategy regarding ZFS
boot... I currently have a single disk UFS boot and RAID-Z for data. I plan
on getting a mirror for boot, but I still don't understand what my options
are regarding:

- Should I set up one zfs slice for the entire drive and mimic Live Upgrade
functionality with writable clones? Or use multiple slices, each with a ZFS
boot environment?

- Is it reasonable to expect that this scheme will eventually be the way ZFS
boot and Live Upgrade will work in "official" release so I don't have to
reinstall entire system?

- Are there any other drawbacks to going with ZFS boot at this time?

As a side note, and the reason I am so thankful to people who created ZFS, I
will tell a brief story... I used to have a Windows XP machine with a
motherboard with onboard Sil3112A SATA chipset, and a Seagate 200GB
7200.7 drive that contained much data. I had spent months over time
ripping a few hundred CDs that my wife and I had in our collection, and
they were stored
in .APE format (compressed, lossless, and checksummed). I had at the same
time made a rip in mp3 format for iPod/iTunes, so I rarely had reason to
access lossless files - they were there for long term backup and
convenience. Occasionally I would realize that one of them refused to
decompress (failed checksum), but I figured it was a bug somewhere and
re-ripped it and hoped it wouldn't happen again. Then I realized that too
many had this problem, and started to systematically decompress them, only
to find out that around 25-30% of the files were damaged - at least a hundred
hours of ripping and cataloguing down the drain. While researching this
issue, I found out that there were incompatibilities between controller and
the drive, and that people on Linux had to hack the drivers to get around
this problem (google Mod15Write). Windows drivers were also fixed at some
point - don't know when - and if it weren't for large, checksummed files
that disk was full of, I could have gone on for years without realizing that
data is getting corrupted. (it was only a few bits at a time - a tiny % of
total number of bits, but when you have 500MB files...). This motherboard is
still alive and is currently running OpenSolaris (not using on-board SATA
controller), and the drive is happily chugging along on a ICH7-based
motherboard in OSX. Moral of the story being that even very mainstream and
well-regarded hardware that seems a perfectly sensible purchase at the time
(The very popular ASUS A7N8X-E Deluxe motherboard with Seagate SATA drives
of the same period) can turn out to be a disaster, and you won't know until
it is too late. Not to sound too sappy, but right now with a 1yr old, I have
too many precious digital photos and videos and losing them is not an
option. I use a combination of DVD and online backups, but none of it is any
good if data is silently rotting at the source. Thank you, ZFS team.

On 6/4/07, Douglas Atique <[EMAIL PROTECTED]> wrote:


Hi,
I have been trying to setup a boot ZFS filesystem since b63 and found out
about bug 6553537 that was preventing boot from ZFS filesystems starting
from b63. First question is whether b65 has solved the problem as was
planned on the bug page. Second question is: as I cannot boot successfully
from a ZFS filesystem after following the ZFS Boot Manual Setup instructions
(http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/) due to
a panic down the call chain of vfs_mountroot, what else (other than the bug,
that is) could be wrong?

-- Douglas




Re: [zfs-discuss] ZFS + ISCSI + LINUX QUESTIONS

2007-06-05 Thread Bill Sommerfeld
On Thu, 2007-05-31 at 13:27 +0100, Darren J Moffat wrote:

> > What errors and error rates have you seen?
> 
> I have seen switches flip bits in NFS traffic such that the TCP checksum 
> still matched yet the data was corrupted.  One of the ways we saw this was 
> when files were being checked out of SCCS, the SCCS checksum failed. 
> Other ways we saw it was the compiler failing to compile untouched code.

To be specific, we found that an ethernet switch in one of our
development labs had a tendency to toggle a particular bit in packets
going through it.   The problem was originally suspected to be a data
corruption problem within solaris itself and got a lot of attention as a
result.

In the cases I examined (corrupted source file after SCCS checkout)
there were complementary changes (0->1 and 1->0) in the same bit in
bytes which were 256, 512, or 1024 bytes apart in the source file.

Because of the mathematics of the 16-bit ones-complement checksum used
by TCP, the packet checksummed to the same value after the switch made
these two offsetting changes.  (I believe that the switch was either
inserting or removing a vlan tag so the ethernet CRC had to be
recomputed by the switch). 

Once we realized that this was going on we went back, looked at the
output of netstat -s, and noticed that the systems in this lab had been
dropping an abnormally high number of packets due to bad TCP checksums;
only a few of the broken packets were making it through, but there were
enough of them to disrupt things in the lab.
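
For reference, those counters are in the TCP section of "netstat -s"
output; the exact counter names vary a bit between releases, but
something as blunt as

  netstat -s | grep -i err

will at least surface them.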

The problem went away when the suspect switch was taken out of service.

- Bill








[zfs-discuss] Where does the zpool info for its label get saved? The man page mentions an EFI label

2007-06-05 Thread John Brewer
I reloaded my system on c0d0 and messed up the cylinder rounding; after the
reload, the format command would only work on c0d1 and not c0d0. So I booted
from the CD and redefined c0d0 slices 3 - 7 back to the original partition
values.

Where does the zpool info get saved? The EFI label is mentioned in the man
page. Luckily my zfs file system was on c0d1 slice 5.
# zpool import
  pool: zones
id: 4567711835620380868
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
The pool may be active on another system, but can be imported using
the '-f' flag.
config:

zones   ONLINE
  c0d1s5ONLINE

# df -k /zones
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0d0s0      8068883 3603709 4384486    46%    /

# zpool import -f zones
# df -k /zones
Filesystem            kbytes     used    avail capacity  Mounted on
zones               61415424 38142886 23272221    63%    /zones
#
 
 


[zfs-discuss] ARC and patents

2007-06-05 Thread Kasper Nielsen

Hi there,

I was looking at using something very similar to arc.c
for an open source project.
However, I'm a bit worried about the patent IBM is holding on the ARC 
data structure.

http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220040098541%22.PGNR.&OS=DN/20040098541&RS=DN/20040098541
I remember PostgreSQL dropping their ARC implementation for 2Q some time 
ago.
But I was hoping that someone on this list might have some constructive
input on this issue?


cheers
 Kasper
