Re: [zfs-discuss] FYI - proposing storage pm project

2008-11-04 Thread Nathan Kroenert
Not wanting to hijack this thread, but...

I'm a simple man with simple needs. I'd like to be able to manually spin 
down my disks whenever I want to...

Anyone come up with a way to do this? ;)

Nathan.

Jens Elkner wrote:
> On Mon, Nov 03, 2008 at 02:54:10PM -0800, Yuan Chu wrote:
> Hi,
>   
>>   a disk may take seconds or
>>   even tens of seconds to come on line if it needs to be powered up
>>   and spin up.
> 
> Yes - I really hate this on my U40 and tried to disable PM for the HDD[s]
> completely. However, I haven't found a way to do this (I thought
> /etc/power.conf was the right place, but either it doesn't work as
> explained or it is not the right place).
> 
> HDD[s] are HITACHI HDS7225S Revision: A9CA
> 
> Any hints on how to switch off PM for these HDDs?
> 
> Regards,
> jel.
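
In case it helps, this is the sort of /etc/power.conf entry I would expect to
disable PM for a disk, per power.conf(4). The physical device path is just a
placeholder, and I haven't verified this on a U40, so treat it as a sketch:

# /etc/power.conf: keep the disk spun up regardless of idle time
# (replace the path with the disk's real physical path from /devices)
device-thresholds       /pci@0,0/pci-ide@1f,2/ide@0/sd@0,0      always-on
# then re-read the configuration
/usr/sbin/pmconfig
# for Nathan's manual spin-down wish, "luxadm stop /dev/rdsk/c1t0d0s2" reportedly
# spins down some SCSI/FC drives; whether it works for SATA here is an open question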


Re: [zfs-discuss] compression on for zpool boot disk?

2008-11-04 Thread Fajar A. Nugraha
Krzys wrote:
> compression is not supported for rootpool?
>
> # zpool create rootpool c1t1d0s0
> # zfs set compression=gzip-9 rootpool
>   

I think gzip compression is not supported on zfs root. Try compression=on.
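
I.e., something along these lines (the same commands as the original post, only
with the compression value swapped):

# same sequence as in the original post, but with lzjb ("on") instead of gzip-9,
# which is what lucreate objects to
zpool create rootpool c1t1d0s0
zfs set compression=on rootpool
lucreate -c ufsBE -n zfsBE -p rootpool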

Regards,

Fajar




[zfs-discuss] compression on for zpool boot disk?

2008-11-04 Thread Krzys
compression is not supported for rootpool?

# zpool create rootpool c1t1d0s0
# zfs set compression=gzip-9 rootpool
# lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
ERROR: ZFS pool  does not support boot environments
#

Why? Are there any plans to make compression available on that disk? And how about 
encryption: will that be available on a ZFS boot disk at some point too?

Thank you.



Re: [zfs-discuss] HP Smart Array and b99?

2008-11-04 Thread Fajar A. Nugraha
Fajar A. Nugraha wrote:
> After that, I decided to upgrade my zpool (b98 has zpool v13). Surprise,
> surpise, the system now doesn't boot at all. Apparently I got hit by
> this "bug" :
> http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB
>
>   

b98 CD works as expected. I made a little change to
http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB :
update_grub needs "-R /mnt" when booting from CD.
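
For anyone else who hits this, the CD-based recovery boils down to roughly the
following; pool, BE and device names are examples, the wiki page above has the
authoritative steps, and I am assuming its update_grub step corresponds to the
installgrub/bootadm commands below:

# boot from the CD, then:
zpool import -f -R /mnt rpool
zfs mount rpool/ROOT/opensolaris            # mount the BE's root dataset (name is an example)
installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/c0t0d0s0
bootadm update-archive -R /mnt              # the "-R /mnt" note mentioned above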

> The recovery procedure involve booting from CD/DVD. Thinking that
> perhaps HP's update would solve cpqary3 problem, I tried adding cpqary3
> v 1.91 to osol-0811-99.iso. 
v1.91 apparently fixed warnings about driver byte alignment. I'm using
it now on b98. It still doesn't work on b99 and 100.

Regards,

Fajar




Re: [zfs-discuss] gzip compression throttles system?

2008-11-04 Thread David Gwynne

On 05/11/2008, at 2:22 PM, Ian Collins wrote:

> Bob Friesenhahn wrote:
>> On Wed, 5 Nov 2008, David Gwynne wrote:
>>
>>
>>> be done in a very short time. perhaps you can amortize that cost by
>>> doing it when the data from userland makes it into the kernel.  
>>> another
>>> idea could be doing the compression when you reach a relatively low
>>> threshold of uncompressed data in the cache. ie, as soon as you get
>>> 1MB of data in the cache, compress it then, rather than waiting till
>>> you have 200MB of data in the cache that needs to be compressed  
>>> RIGHT
>>> NOW.
>>>
>>
>> This is counter-productive.  ZFS's lazy compression approach ends up
>> doing a lot less compression in the common case where files are
>> updated multiple times before ZFS decides to write to disk.  If your
>> advice is followed, then every write will involve compression, rather
>> than the summation of perhaps thousands of writes.
>>
>>
> But gzip has a significant impact when doing a zfs receive. It would be
> interesting to see how an amortised compression scheme would work in
> this case.  Currently, writing to a filesystem with gzip compression
> takes more than twice as long as writing to one with lzjb compression
> on a quiet x4540.  There isn't any noticeable difference between lzjb
> and no compression.

I don't think lzjb needs big memory allocations like gzip does. Those
memory allocations cause a ton of xcalls on a system, which can't be good
for interactivity.
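
A quick way to test that theory on a live box, in case anyone wants to confirm
it (standard observability tools, nothing ZFS-specific):

# watch per-CPU cross-calls (xcal column) while writing to a gzip-compressed dataset
mpstat 1
# and aggregate where the cross-calls originate (kernel stacks)
dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }'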

dlg


Re: [zfs-discuss] gzip compression throttles system?

2008-11-04 Thread Ian Collins
Bob Friesenhahn wrote:
> On Wed, 5 Nov 2008, David Gwynne wrote:
>
>   
>> be done in a very short time. perhaps you can amortize that cost by
>> doing it when the data from userland makes it into the kernel. another
>> idea could be doing the compression when you reach a relatively low
>> threshold of uncompressed data in the cache. ie, as soon as you get
>> 1MB of data in the cache, compress it then, rather than waiting till
>> you have 200MB of data in the cache that needs to be compressed RIGHT
>> NOW.
>> 
>
> This is counter-productive.  ZFS's lazy compression approach ends up 
> doing a lot less compression in the common case where files are 
> updated multiple times before ZFS decides to write to disk.  If your 
> advice is followed, then every write will involve compression, rather 
> than the summation of perhaps thousands of writes.
>
>   
But gzip has a significant impact when doing a zfs receive. It would be
interesting to see how an amortised compression scheme would work in
this case.  Currently, writing to a filesystem with gzip compression
takes more than twice as long as writing to one with lzjb compression
on a quiet x4540.  There isn't any noticeable difference between lzjb
and no compression.
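
A crude way to measure that, assuming a saved send stream in /var/tmp/stream and
a scratch pool named tank (both assumptions):

# receive the same stream under lzjb- and gzip-compressed parents and compare
zfs create -o compression=lzjb tank/rx-lzjb
ptime zfs receive tank/rx-lzjb/fs < /var/tmp/stream
zfs create -o compression=gzip tank/rx-gzip
ptime zfs receive tank/rx-gzip/fs < /var/tmp/stream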

-- 
Ian.



[zfs-discuss] ZFS problem recognition script?

2008-11-04 Thread Vincent Fox
Has anyone done a script to check for filesystem problems?

On our existing UFS infrastructure we have a cron job run metacheck.pl 
periodically so we get email if an SVM setup has problems.

We can scratch something like this together, but I wanted to check whether someone else already has.
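
Something as small as the following, run from cron, would probably cover the
basic case; it's only a sketch, and it assumes mailx is available and that root
mail gets read:

#!/bin/sh
# mail the output of 'zpool status -x' whenever it is not the all-clear message
OUT=`/usr/sbin/zpool status -x 2>&1`
if [ "$OUT" != "all pools are healthy" ]; then
        echo "$OUT" | /usr/bin/mailx -s "zpool problem on `/usr/bin/hostname`" root
fi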


[zfs-discuss] ufs to zfs root file system migration

2008-11-04 Thread Krzys

I upgraded my Solaris and wanted to move from UFS to ZFS. I read about it 
a little, but I am not sure about all the steps...

Anyway, I understand that I cannot use a whole disk for the root zpool, so I 
cannot use c1t1d0 but have to use c1t1d0s0 instead; is that correct?

Also, all the documents that I've found say how to use LU to do this task, but I was 
wondering how to do my migration when I have a few partitions.

Those are my partitions:
/dev/dsk/c1t0d0s0    16524410  11581246   4777920    71%   /
/dev/dsk/c1t0d0s6    16524410   9073610   7285556    56%   /usr
/dev/dsk/c1t0d0s1    16524410   1997555  14361611    13%   /var
/dev/dsk/c1t0d0s7    81287957   1230221  79244857     2%   /export/home

When I create the root zpool, do I need to create a pool for all of those partitions?
Do I need to format my disk and give slice s0 all the space?

In the LU environment, how would I specify that / /usr /var and possibly 
/export/home should go to that one pool? How about swap and dump pools? I did not 
see examples or info on how that could be accomplished. I would appreciate some 
hints, or maybe there is already a document out there somewhere; I just was unable 
to locate it...
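
As I understand it, the usual path looks roughly like the sketch below; treat it
as an outline rather than a recipe (the pool name is a choice, the BE names are
from this post, and the /export/home handling is an assumption):

# 1. relabel the target disk so slice 0 gets (nearly) all of the space, then:
zpool create rpool c1t1d0s0
# 2. create the new BE; / /usr and /var from UFS are merged into the one ZFS root,
#    and LU creates swap and dump zvols inside the pool for you
lucreate -c ufsBE -n zfsBE -p rpool
# 3. shared file systems such as /export/home are not copied by lucreate;
#    they can stay on UFS or be migrated to a dataset later, e.g.:
#    zfs create rpool/export ; zfs create rpool/export/home ; (copy the data)
# 4. activate and boot the new BE
luactivate zfsBE
init 6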

Greatly appreciate your help in pointing me in the right direction.

Regards,

Chris




[zfs-discuss] change in zpool_get_prop_int

2008-11-04 Thread Thomas Maier-Komor
Hi,

I'm observing a change in the values returned by zpool_get_prop_int. In Solaris 
10 update 5 this function returned the values for ZPOOL_PROP_CAPACITY in bytes, 
but in update 6 (i.e. nv88?) it seems to be returning the value in kB.

Both Solaris versions shipped with libzfs.so.2, so how can one distinguish 
between those two variants?

Any comments on this change?

- Thomas
P.S.: I know this is a private interface, but it is quite handy for my system 
observation tool sysstat...


Re: [zfs-discuss] Files from the future are not accessible on ZFS

2008-11-04 Thread A Darren Dunham
On Tue, Nov 04, 2008 at 05:52:33AM -0800, Ivan Wang wrote:
> > $ /usr/bin/amd64/ls -l  .gtk-bookmarks
> > -rw-r--r--   1 user     opc       0 oct. 16  2057 .gtk-bookmarks
> > 
> > This is a bit absurd. I thought Solaris was fully 64
> > bit. I hope those tools will be integrated soon.

Solaris runs on non-64bit capable hardware, and it doesn't use fat
binaries, so both 32-bit and 64-bit binaries exist in some cases.

> I am not sure if this is expected, I thought ls should be actually a
> hard link to isaexec and system picks applicable ISA transparently?

I think that would work, but isaexec is not free in terms of execution time.  My
guess is that someone decided the execution cost wasn't worth it, but I have
no evidence that that's the real reason.

However, as a test, I created a 2038+ timestamp file on zfs, verified
that /usr/bin/ls would fail to display it, and I moved /usr/bin/ls to
/usr/bin/i86/ls and linked /usr/lib/isaexec to /usr/bin/ls.  (Move it to
/usr/bin/sparcv7/ls on a SPARC host).

/usr/bin/ls now automatically finds and runs the 64-bit version and
displays the file just fine.  I made no attempt to calculate how much
longer it takes to run now.
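
Spelled out, the relink described above is roughly the following; it modifies
/usr/bin, so try it on a scratch system first:

# the test described above (x86 paths; use sparcv7 instead of i86 on SPARC)
mkdir -p /usr/bin/i86
mv /usr/bin/ls /usr/bin/i86/ls
ln /usr/lib/isaexec /usr/bin/ls
ls -l .gtk-bookmarks        # isaexec now execs /usr/bin/amd64/ls on a 64-bit kernel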

-- 
Darren


Re: [zfs-discuss] Fwd: Re: FYI: s10u6 LU issues

2008-11-04 Thread Lori Alt


Would you send the messages that appeared with
the failed ludelete?

Lori

Dick Hoogendijk wrote:
> Newsgroups: comp.unix.solaris
> From: Dick Hoogendijk <[EMAIL PROTECTED]>
> Subject: Re: FYI: s10u6 LU issues
>
> quoting cindy (Mon, 3 Nov 2008 13:01:07 -0800 (PST)):
>> Besides the release notes, I'm collecting many issues here as well:
>>
>> http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
>
> Hi Cindy,
>
> As you are collecting weird things with ZFS I think I have one for you:
> I have s10u6 fully running off ZFS, including my sparse zones.
> Yesterday I did a "lucreate -n zfsBE" and the new snapshots/clones were
> created as expected. The clones were fully mountable too and seemed in
> order. That pleased me, because on UFS the zones never were handled OK
> with LU and now they were.
>
> However, today I did a ludelete zfsBE and it was/is refused.
> Here are some messages from lustatus, ludelete, zfs list and df.
> The BE cannot be deleted and I can't understand the errors.
> I do know how to get rid of zfsBE (/etc/lutab and /etc/lu/ICF.x) and I
> can remove the zfs data by removing the clone/snapshot, I guess. But
> ludelete should be able to do it too, don't you think?
> To be able to create == to be able to delete ;-)



Re: [zfs-discuss] Is there a baby thumper?

2008-11-04 Thread Bob Friesenhahn
On Tue, 4 Nov 2008, Gary Mills wrote:

> On Tue, Nov 04, 2008 at 03:31:16PM -0700, Carl Wimmi wrote:
>>
>> There isn't a de-populated version.
>>
>> Would X4540 with 250 or 500 GB drives meet your needs?

Other than the number of drives offered, it seems that the X4540 is a 
substantially different product.  It uses different backplane and 
drive interface hardware and technologies.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/



Re: [zfs-discuss] Is there a baby thumper?

2008-11-04 Thread Tim
Well, what's the end goal?  What are you testing for that you need from the
thumper?

I/O interfaces?  CPU?  Chipset?  If you need *everything* you don't have any
other choice.

--Tim

On Tue, Nov 4, 2008 at 5:11 PM, Gary Mills <[EMAIL PROTECTED]> wrote:

> On Tue, Nov 04, 2008 at 03:31:16PM -0700, Carl Wimmi wrote:
> >
> > There isn't a de-populated version.
> >
> > Would X4540 with 250 or 500 GB drives meet your needs?
>
> That might be our only choice.
>
> --
> -Gary Mills--Unix Support--U of M Academic Computing and
> Networking-


[zfs-discuss] QUESTIONS from EMC: EFI and SMI Disk Labels

2008-11-04 Thread Sharlene Wong





All,

my apologies in advance for the wide distribution - it
was recommended that I contact these aliases but if
there is a more appropriate one, please let me know...

I have received the following EFI disk-related
questions from the EMC PowerPath team who
would like to provide more complete support
for EFI disks on Sun platforms...

I would appreciate help in answering these questions...

thanks...
-sharlene
~~~

1. Reserving the minor numbers that would have been used for the "h" slice for
   the "wd" nodes instead creates a "hole". Is this the long-term design and
   therefore not likely to change?

2. The efi_alloc_and_read(3EXT) function provides a pointer to a dk_gpt_t
   structure that contains an array of efi_nparts dk_part structures
   (efi_partition.h). Will efi_nparts be 15 or 16?

3. Will dk_part[7] correspond to [EMAIL PROTECTED],0:h or to [EMAIL PROTECTED],0:wd, or is it
   somehow reserved?

4. Does the fact that efi_nparts is a member of the dk_gpt_t structure suggest
   that it may not always be { 15 | 16 }? (If so, we should probably query it
   and create the number of nodes indicated.)

5. If efi_nparts can be bigger than 16, what happens to the naming of the
   partition nodes ("q" through "t")? If smaller?

6. When changing label types (SMI -> EFI, EFI -> SMI), I assume the DKIOCSVTOC
   and DKIOCSETEFI ioctls are being trapped by the driver to cause the proper
   device nodes to be created. What is the preferred method for cleaning up
   the device nodes? E.g., is a reboot always required? Changing from an EFI
   to an SMI disk label leaves the wd node hanging around, and a reboot
   deletes it. Is this a bug? How are the /dev/dsk and /dev/rdsk links
   manipulated at label change time? Is format involved in some way other
   than calling the two aforementioned ioctls, or is it accomplished entirely
   within the driver? If so, using what mechanism?

7. For disks that are not immediately accessible (i.e. iSCSI, a not-ready disk
   or fabric), what is done to create the proper device nodes and links?
   Which of the following is SPARC/x86 doing, and how?

   a. The attach is failed, and no device nodes are created. The attach is
      reattempted at some point in the future, triggered by some mechanism
      (different for SPARC and x86?). If so, what event is used to force an
      attach?

   b. The attach is failed and a default set of nodes is created and adjusted
      when the device is accessible?

   c. The attach is successful and everything is cleaned up later when the
      device is available.

8. Is it possible to query the path_to_inst data via a system call to
   determine the minors created for a disk during early boot? This question
   is related to question 7: if the Solaris disk driver makes a default
   assumption about the label type before the device is available and creates
   nodes accordingly, how can this decision be determined? If the device
   configuration is delayed until the device is available, the question is
   not germane.

9. On Opteron, does a plug-and-play event trigger the attach, ensuring that
   the proper nodes are created as the label type will be available?

10. On Opteron, why do the cXtXdXp[1-4] links/nodes not address the
    corresponding fdisk partitions? Multiple fdisk partitions can be created
    but are not addressable unless each in turn is set as the active
    partition and then addressed via p0.

11. An EFI-labeled disk must constitute the whole disk, so there can only be
    one fdisk partition; any existing fdisk partitions are overwritten. Is
    this by design? It seems to violate the definition of fdisk partitions.
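
As a point of reference for questions 1 and 6, one quick way to see the extra
whole-disk ("wd") node on a live system; the device names are examples and the
format step is interactive and destroys the existing label:

ls -l /dev/dsk/c0t0d0*      # SMI label: slice (and, on x86, pN) nodes
format -e                   # select the disk, then: label -> EFI
ls -l /dev/dsk/c0t0d0*      # note the additional whole-disk /dev/dsk/c0t0d0 entry
prtvtoc /dev/rdsk/c0t0d0    # dump the EFI partition table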




Re: [zfs-discuss] Is there a baby thumper?

2008-11-04 Thread Gary Mills
On Tue, Nov 04, 2008 at 03:31:16PM -0700, Carl Wimmi wrote:
> 
> There isn't a de-populated version.
> 
> Would X4540 with 250 or 500 GB drives meet your needs?

That might be our only choice.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-


[zfs-discuss] Fwd: Re: FYI: s10u6 LU issues

2008-11-04 Thread Dick Hoogendijk
Newsgroups: comp.unix.solaris
From: Dick Hoogendijk <[EMAIL PROTECTED]>
Subject: Re: FYI: s10u6 LU issues

quoting cindy (Mon, 3 Nov 2008 13:01:07 -0800 (PST)):
> Besides the release notes, I'm collecting many issues here as well:
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Hi Cindy,

As you are collecting weird things with ZFS I think I have one for you:
I have s10u6 fully running off ZFS, including my sparse zones.
Yesterday I did a "lucreate -n zfsBE" and the new snapshots/clones were
created as expected. The clones were fully mountable too and seemed in
order. That pleased me, because on UFS the zones never were handled OK
with LU and now they were.

However, today I did a ludelete zfsBE and it was/is refused.
Here are some messages from lustatus, ludelete, zfs list and df.
The BE cannot be deleted and I can't understand the errors.
I do know how to get rid of zfsBE (/etc/lutab and /etc/lu/ICF.x) and I
can remove the zfs data by removing the clone/snapshot, I guess. But
ludelete should be able to do it too, don't you think?
To be able to create == to be able to delete ;-)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ | SunOS 10u6 10/08 ++


Re: [zfs-discuss] Is there a baby thumper?

2008-11-04 Thread Carl Wimmi
Hi Gary:

There isn't a de-populated version.

Would X4540 with 250 or 500 GB drives meet your needs?

- Carl

Gary Mills wrote:
> One of our storage guys would like to put a thumper into service, but
> he's looking for a smaller model to use for testing.  Is there something
> that has the same CPU, disks, and disk controller as a thumper, but
> fewer disks?  The ones I've seen all have 48 disks.
>
>   



[zfs-discuss] Is there a baby thumper?

2008-11-04 Thread Gary Mills
One of our storage guys would like to put a thumper into service, but
he's looking for a smaller model to use for testing.  Is there something
that has the same CPU, disks, and disk controller as a thumper, but
fewer disks?  The ones I've seen all have 48 disks.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-


Re: [zfs-discuss] gzip compression throttles system?

2008-11-04 Thread David Gwynne

On 05/11/2008, at 3:27 AM, Bob Friesenhahn wrote:

> On Wed, 5 Nov 2008, David Gwynne wrote:
>
>> be done in a very short time. perhaps you can amortize that cost by
>> doing it when the data from userland makes it into the kernel.  
>> another
>> idea could be doing the compression when you reach a relatively low
>> threshold of uncompressed data in the cache. ie, as soon as you get
>> 1MB of data in the cache, compress it then, rather than waiting till
>> you have 200MB of data in the cache that needs to be compressed RIGHT
>> NOW.
>
> This is counter-productive.  ZFS's lazy compression approach ends up  
> doing a lot less compression in the common case where files are  
> updated multiple times before ZFS decides to write to disk.  If your  
> advice is followed, then every write will involve compression,  
> rather than the summation of perhaps thousands of writes.

When the compression happens can be an arbitrary line in the sand. On one
side of the line you're mitigating compression per write by deferring the
compression as long as possible; on the other side you want to spread the
cost of compression out over time to avoid locking up the machine by doing
lots of little compressions more often.

This is a similar trade-off to the one ZFS makes with its cache and its
flush interval.

The flush interval is adjustable in ZFS, so why can't a "compression
interval/watermark" be invented and made adjustable as well?

dlg

>
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
>



Re: [zfs-discuss] S10U6 and x4500 thumper sata controller

2008-11-04 Thread Paul B. Henson
On Sat, 1 Nov 2008, Mertol Ozyoney wrote:

> I also need this information.
> Thanks a lot for keeping me on the loop also

I didn't hear anything back on this, so I went ahead and opened an SR on my
contract, it's #66126640. With an @sun.com address I think you'd have
better information sources than me though...


> > S10U6 was released this morning (whoo-hooo!), and I was wondering if
> > someone in the know could verify that it contains all the
> > fixes/patches/IDRs for the x4500 sata problems?

-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] Lost Disk Space

2008-11-04 Thread Matthew Ahrens
Ben Rockwood wrote:
> I've been struggling to fully understand why disk space seems to vanish.  
> I've dug through bits of code and reviewed all the mails on the subject that 
> I can find, but I still don't have a proper understanding of what's going on.  
> 
> I did a test with a local zpool on snv_97... zfs list, zpool list, and zdb 
> all seem to disagree on how much space is available.  In this case it's only a 
> discrepancy of about 20G or so, but I've got Thumpers that have a discrepancy 
> of over 6TB!
> 
> Can someone give a really detailed explanation about what's going on?
> 
> block traversal size 670225837056 != alloc 720394438144 (leaked 50168601088)
> 
> bp count:        15182232
> bp logical:      672332631040   avg: 44284
> bp physical:     669020836352   avg: 44066   compression: 1.00
> bp allocated:    670225837056   avg: 44145   compression: 1.00
> SPA allocated:   720394438144   used: 96.40%
> 
> Blocks  LSIZE   PSIZE   ASIZE     avg    comp   %Total  Type
>     12   120K   26.5K   79.5K   6.62K    4.53     0.00  deferred free
>      1    512     512   1.50K   1.50K    1.00     0.00  object directory
>      3  1.50K   1.50K   4.50K   1.50K    1.00     0.00  object array
>      1    16K   1.50K   4.50K   4.50K   10.67     0.00  packed nvlist
>      -      -       -       -       -       -        -  packed nvlist size
>     72  8.45M    889K   2.60M   37.0K    9.74     0.00  bplist
>      -      -       -       -       -       -        -  bplist header
>      -      -       -       -       -       -        -  SPA space map header
>    974  4.48M   2.65M   7.94M   8.34K    1.70     0.00  SPA space map
>      -      -       -       -       -       -        -  ZIL intent log
>  96.7K  1.51G    389M    777M   8.04K    3.98     0.12  DMU dnode
>     17  17.0K   8.50K   17.5K   1.03K    2.00     0.00  DMU objset
>      -      -       -       -       -       -        -  DSL directory
>     13  6.50K   6.50K   19.5K   1.50K    1.00     0.00  DSL directory child map
>     12  6.00K   6.00K   18.0K   1.50K    1.00     0.00  DSL dataset snap map
>     14  38.0K   10.0K   30.0K   2.14K    3.80     0.00  DSL props
>      -      -       -       -       -       -        -  DSL dataset
>      -      -       -       -       -       -        -  ZFS znode
>      2     1K      1K      2K      1K    1.00     0.00  ZFS V0 ACL
>  5.81M   558G    557G    557G   95.8K    1.00    89.27  ZFS plain file
>   382K   301M    200M    401M   1.05K    1.50     0.06  ZFS directory
>      9  4.50K   4.50K   9.00K      1K    1.00     0.00  ZFS master node
>     12   482K   20.0K   40.0K   3.33K   24.10     0.00  ZFS delete queue
>  8.20M  66.1G   65.4G   65.8G   8.03K    1.01    10.54  zvol object
>      1    512     512      1K      1K    1.00     0.00  zvol prop
>      -      -       -       -       -       -        -  other uint8[]
>      -      -       -       -       -       -        -  other uint64[]
>      -      -       -       -       -       -        -  other ZAP
>      -      -       -       -       -       -        -  persistent error log
>      1   128K   10.5K   31.5K   31.5K   12.19     0.00  SPA history
>      -      -       -       -       -       -        -  SPA history offsets
>      -      -       -       -       -       -        -  Pool properties
>      -      -       -       -       -       -        -  DSL permissions
>      -      -       -       -       -       -        -  ZFS ACL
>      -      -       -       -       -       -        -  ZFS SYSACL
>      -      -       -       -       -       -        -  FUID table
>      -      -       -       -       -       -        -  FUID table size
>      5  3.00K   2.50K   7.50K   1.50K    1.20     0.00  DSL dataset next clones
>      -      -       -       -       -       -        -  scrub work queue
>  14.5M   626G    623G    624G   43.1K    1.00   100.00  Total
> 
> 
> real21m16.862s
> user0m36.984s
> sys 0m5.757s
> 
> ===
> Looking at the data:
> [EMAIL PROTECTED] ~$ zfs list backup && zpool list backup
> NAME     USED  AVAIL  REFER  MOUNTPOINT
> backup   685G   237K    27K  /backup
> NAME     SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
> backup   696G   671G  25.1G   96%  ONLINE  -
> 
> So zdb says 626GB is used, zfs list says 685GB is used, and zpool list says 
> 671GB is used.  The pool was filled to 100% capacity via dd (this is 
> confirmed: I can't write any more data), yet zpool list says it's only at 96%. 

Unconsumed reservations would cause the space used according to "zfs list" to 
be more than according to "zpool list".  Also I assume you are not using 
RAID-Z.  As Jeff mentioned, zdb is not reliable on pools that are changing.

A percentage of the total space is reserved for pool overhead and is not 
allocatable, but shows up as available in "zpool list".
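
A quick way to check for the unconsumed-reservation case on the pool in
question (pool name taken from the post):

# any non-default reservations show up here
zfs get -r reservation backup
zfs list -o name,used,available,reservation -r backup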

--matt

Re: [zfs-discuss] problems with ludelete

2008-11-04 Thread Ian Collins
Hernan Freschi wrote:
> Hi, I'm not sure if this is the right place to ask. I'm having a little 
> trouble deleting old solaris installs:
>
> [EMAIL PROTECTED]:~]# lustatus
> Boot Environment           Is       Active Active    Can    Copy
> Name                       Complete Now    On Reboot Delete Status
> -------------------------- -------- ------ --------- ------ ----------
> b90                        yes      no     no        yes    -
> snv95                      yes      no     no        yes    -
> snv101                     yes      yes    yes       no     -
>
Which of these are ZFS and which UFS?

-- 
Ian.



Re: [zfs-discuss] L2ARC in S10 10/08

2008-11-04 Thread Robert Milkowski
Hello Richard,

Tuesday, November 4, 2008, 6:12:48 PM, you wrote:

RE> Robert Milkowski wrote:
>> Hello zfs-discuss,
>>
>>   Looks like it is not supported there - what are the current plan to
>>   bring L2ARC to Solaris 10?
>>   

RE> L2ARC did not make Solaris 10 10/08 (aka update 6).  I think the plans for
RE> update 7 are still being formed.

I had hoped it would be delivered as a patch before u7.

-- 
Best regards,
 Robert                          mailto:[EMAIL PROTECTED]
                                 http://milek.blogspot.com



Re: [zfs-discuss] L2ARC in S10 10/08

2008-11-04 Thread Richard Elling
Robert Milkowski wrote:
> Hello zfs-discuss,
>
>   Looks like it is not supported there - what are the current plan to
>   bring L2ARC to Solaris 10?
>   

L2ARC did not make Solaris 10 10/08 (aka update 6).  I think the plans for
update 7 are still being formed.
 -- richard



Re: [zfs-discuss] problems with ludelete

2008-11-04 Thread Cyril Plisko
On Tue, Nov 4, 2008 at 7:24 PM, Hernan Freschi <[EMAIL PROTECTED]> wrote:
> Hi, I'm not sure if this is the right place to ask. I'm having a little 
> trouble deleting old solaris installs:
>
> [EMAIL PROTECTED]:~]# lustatus
> Boot Environment           Is       Active Active    Can    Copy
> Name                       Complete Now    On Reboot Delete Status
> -------------------------- -------- ------ --------- ------ ----------
> b90                        yes      no     no        yes    -
> snv95                      yes      no     no        yes    -
> snv101                     yes      yes    yes       no     -
> [EMAIL PROTECTED]:~]# lu
> lu          lucancel    lucreate    ludelete    lufslist    lumount
> lustatus    luupgrade   luactivate  lucompare   lucurr      ludesc
> lumake      lurename    luumount    luxadm
> [EMAIL PROTECTED]:~]# lustatus
> Boot Environment           Is       Active Active    Can    Copy
> Name                       Complete Now    On Reboot Delete Status
> -------------------------- -------- ------ --------- ------ ----------
> b90                        yes      no     no        yes    -
> snv95                      yes      no     no        yes    -
> snv101                     yes      yes    yes       no     -
> [EMAIL PROTECTED]:~]# ludelete b90
> System has findroot enabled GRUB
> Checking if last BE on any disk...
> ERROR: lulib_umount: failed to umount BE: .
> ERROR: This boot environment  is the last BE on the above disk.
> ERROR: Deleting this BE may make it impossible to boot from this disk.
> ERROR: However you may still boot solaris if you have BE(s) on other disks.
> ERROR: You *may* have to change boot-device order in the BIOS to accomplish 
> this.
> ERROR: If you still want to delete this BE , please use the force option 
> (-f).
> Unable to delete boot environment.
> [EMAIL PROTECTED]:~]# ludelete snv95
> System has findroot enabled GRUB
> Checking if last BE on any disk...
> ERROR: lulib_umount: failed to umount BE: .
> ERROR: This boot environment  is the last BE on the above disk.
> ERROR: Deleting this BE may make it impossible to boot from this disk.
> ERROR: However you may still boot solaris if you have BE(s) on other disks.
> ERROR: You *may* have to change boot-device order in the BIOS to accomplish 
> this.
> ERROR: If you still want to delete this BE , please use the force 
> option (-f).
> Unable to delete boot environment.
>
> if anyone could help me I'd appreciate it.
>

Hernan,

Look at your /etc/lu/ICF.* files and find the one for the snv95 boot
environment (you'll easily identify it by looking into the file).
You'll see the list of the filesystems to be mounted with that boot
environment. Make sure that this list is in the correct order, i.e. a
parent directory is listed _before_ its children. Reorder if needed
and retry the ludelete.

Example:

incorrect order:

snv_98:/export/home/imp:rpool/export/home/imp:zfs:0
snv_98:/export/home/impux:rpool/export/home/impux:zfs:0
snv_98:/export/home:rpool/export/home:zfs:0

correct order:

snv_98:/export/home:rpool/export/home:zfs:0
snv_98:/export/home/imp:rpool/export/home/imp:zfs:0
snv_98:/export/home/impux:rpool/export/home/impux:zfs:0

Hope it helps.

-- 
Regards,
Cyril


Re: [zfs-discuss] gzip compression throttles system?

2008-11-04 Thread Bob Friesenhahn
On Wed, 5 Nov 2008, David Gwynne wrote:

> be done in a very short time. perhaps you can amortize that cost by
> doing it when the data from userland makes it into the kernel. another
> idea could be doing the compression when you reach a relatively low
> threshold of uncompressed data in the cache. ie, as soon as you get
> 1MB of data in the cache, compress it then, rather than waiting till
> you have 200MB of data in the cache that needs to be compressed RIGHT
> NOW.

This is counter-productive.  ZFS's lazy compression approach ends up 
doing a lot less compression in the common case where files are 
updated multiple times before ZFS decides to write to disk.  If your 
advice is followed, then every write will involve compression, rather 
than the summation of perhaps thousands of writes.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/



[zfs-discuss] problems with ludelete

2008-11-04 Thread Hernan Freschi
Hi, I'm not sure if this is the right place to ask. I'm having a little 
trouble deleting old Solaris installs:

[EMAIL PROTECTED]:~]# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
b90                        yes      no     no        yes    -
snv95                      yes      no     no        yes    -
snv101                     yes      yes    yes       no     -
[EMAIL PROTECTED]:~]# lu
lu          lucancel    lucreate    ludelete    lufslist    lumount
lustatus    luupgrade   luactivate  lucompare   lucurr      ludesc
lumake      lurename    luumount    luxadm
[EMAIL PROTECTED]:~]# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
b90                        yes      no     no        yes    -
snv95                      yes      no     no        yes    -
snv101                     yes      yes    yes       no     -
[EMAIL PROTECTED]:~]# ludelete b90
System has findroot enabled GRUB
Checking if last BE on any disk...
ERROR: lulib_umount: failed to umount BE: .
ERROR: This boot environment  is the last BE on the above disk.
ERROR: Deleting this BE may make it impossible to boot from this disk.
ERROR: However you may still boot solaris if you have BE(s) on other disks.
ERROR: You *may* have to change boot-device order in the BIOS to accomplish 
this.
ERROR: If you still want to delete this BE , please use the force option 
(-f).
Unable to delete boot environment.
[EMAIL PROTECTED]:~]# ludelete snv95
System has findroot enabled GRUB
Checking if last BE on any disk...
ERROR: lulib_umount: failed to umount BE: .
ERROR: This boot environment  is the last BE on the above disk.
ERROR: Deleting this BE may make it impossible to boot from this disk.
ERROR: However you may still boot solaris if you have BE(s) on other disks.
ERROR: You *may* have to change boot-device order in the BIOS to accomplish 
this.
ERROR: If you still want to delete this BE , please use the force option 
(-f).
Unable to delete boot environment.

if anyone could help me I'd appreciate it.

Thanks,
Hernan


[zfs-discuss] L2ARC in S10 10/08

2008-11-04 Thread Robert Milkowski
Hello zfs-discuss,

  Looks like it is not supported there - what are the current plan to
  bring L2ARC to Solaris 10?

-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
 http://milek.blogspot.com



Re: [zfs-discuss] Best method do securely wipe a zpool?

2008-11-04 Thread Darren J Moffat
Sean Alderman wrote:
> Greetings,
>   I have been evaluating an X4540 server.  I now have to return it.  I'm 
> curious about all of your thoughts on the best method for securely wiping the 
> data might be.

Use the purge command from the analyze submenu of format(1M) on each disk.
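
Roughly, per disk (interactive; as I recall, the analyze setup questions let you
confirm before anything is written):

format              (select the disk)
format> analyze
analyze> purge      (overwrites every block with patterns, then verifies)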

-- 
Darren J Moffat


[zfs-discuss] Best method do securely wipe a zpool?

2008-11-04 Thread Sean Alderman
Greetings,
  I have been evaluating an X4540 server.  I now have to return it.  I'm 
curious about all of your thoughts on the best method for securely wiping the 
data might be.

Thanks.


Re: [zfs-discuss] Help with simple (?) reconfigure of zpool

2008-11-04 Thread Robert Rodriguez
Hello Marc.  Thank you so much for your help with this.  Although the process 
took a full two days (I've got the Supermicro AOC 8 port PCI-X card in a PCI 
slot, dog-slow write speeds) it went off without a hitch thanks to your 
excellent guide.

# zpool list mp
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
mp    4.53T  2.18T  2.35T    48%  ONLINE  -

# zpool status mp
  pool: mp
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mp  ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c4d0ONLINE   0 0 0
c5d0ONLINE   0 0 0
c6d0ONLINE   0 0 0
c7t7d0  ONLINE   0 0 0
c7t6d0  ONLINE   0 0 0

errors: No known data errors

# df -h /Media
Filesystem             size   used  avail  capacity  Mounted on
mp/MP_Media            3.6T   1.7T   1.8T       49%  /Media


Re: [zfs-discuss] gzip compression throttles system?

2008-11-04 Thread David Gwynne

On 11/05/2007, at 4:54 AM, Bill Sommerfeld wrote:

> On Thu, 2007-05-10 at 10:10 -0700, Jürgen Keil wrote:
>> Btw: In one experiment I tried to boot the kernel under kmdb
>> control (-kd), patched "minclsyspri := 61" and used a
>> breakpoint inside spa_active() to patch the spa_zio_* taskq
>> to use prio 60 when importing the gzip compressed pool
>> (so that the gzip compressed pool was using prio 60 threads
>> and usb and other stuff was using prio >= 61 threads).
>> That didn't help interactive performance...
>
> oops.  sounds like cpu-intensive compression (and encryption/ 
> decryption
> once that's upon us) should ideally be handed off to worker threads  
> that
> compete on a "fair" footing with compute-intensive userspace  
> threads, or
> (better yet) are scheduled like the thread which initiated the I/O.

I just installed s10u6, which now has gzip compression on ZFS datasets.
Very exciting; my SAM-FS disk VSN can store a bit more now. However,
despite being able to store more, it does lock up a lot while it does
the compression.

Perhaps doing the compression at a different point in the I/O pipeline
would be better. At the moment it is done when ZFS decides to flush
the buffers to disk, which ends up being a lot of work that needs to
be done in a very short time. Perhaps you can amortize that cost by
doing it when the data from userland makes it into the kernel. Another
idea could be doing the compression when you reach a relatively low
threshold of uncompressed data in the cache, i.e., as soon as you get
1MB of data in the cache, compress it then, rather than waiting till
you have 200MB of data in the cache that needs to be compressed RIGHT
NOW.

Either way, my opinion is that you want to do all the compression work
spread over a longer period of time, rather than all at once when you
flush the cache to the spindles. I can tolerate 90% idle CPUs. I can't
tolerate CPUs that are 98% idle except for every 5 seconds, when they're
0% idle for 2 seconds and I can't even see keypresses on a serial
console because there's no time for userland to run.
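
Two quick ways to see where the time goes during those stalls, if anyone wants
to poke at it (standard tools, nothing gzip-specific):

mpstat 1                          # per-CPU sys time and xcal counts, once a second
lockstat -kIW -D 20 sleep 10      # profile kernel CPU time for 10 seconds, top 20 sites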

dlg


Re: [zfs-discuss] ufsrestore to ZFS

2008-11-04 Thread Joerg Schilling
Mark Shellenbaum <[EMAIL PROTECTED]> wrote:

> >>> You can, but I don't think ufsdump is ACL aware.
> >> ufsdump has been ACL aware for 12 years.
> >>
> >> The problem may be in ufsrestore that IIRC only supports POSIX draft
> >> ACLs.
> >>
> >> If ZFS is able to translate this to NFSv4 ACLs, you may have luck.
> > 
> > So it might be better to use something like:
> > 
> > # cd /dir1
> > # find . -print -depth | cpio -Ppdm /dir2
> > [dir1 on UFS; dir2 on ZFS]
> > 
>
> ufsrestore will translate a UFS ACL to a ZFS ACL.  Its all handled in 
> the acl_set() interface.

If acl_set() can be used as a direct replacement for:

acl(info->f_name, SETACL, aclcount, aclp) 

then this may be an interesting option for star.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] Files from the future are not accessible on ZFS

2008-11-04 Thread Ivan Wang
> I see, thanks.
> And as Jörg said, I only need a 64 bit binary. I
> didn't know, but there is one for ls, and it does
> work as expected:
> 
> $ /usr/bin/amd64/ls -l  .gtk-bookmarks
> -rw-r--r--   1 user     opc       0 oct. 16  2057 .gtk-bookmarks
> 
> This is a bit absurd. I thought Solaris was fully 64
> bit. I hope those tools will be integrated soon.
> 

I am not sure if this is expected; I thought ls was supposed to be a hard link 
to isaexec, with the system picking the applicable ISA transparently?

Indeed, weird.

Ivan.

> Thanks for the pointers!
> 
> Laurent


Re: [zfs-discuss] [storage-discuss] Help with bizarre S10U5 / zfs / iscsi / thumper / Oracle RAC problem

2008-11-04 Thread Jim Dunham
George,

> I'm looking for any pointers or advice on what might have happened
> to cause the following problem...

Running Oracle RAC on iSCSI target LUs accessible by three or more
iSCSI initiator nodes requires support for SCSI-3 Persistent
Reservations. This functionality was added to OpenSolaris at build
snv_74 and is currently being backported to Solaris 10; it will be
available in S10u7 next year.

The weird behavior seen below with 'dd' is likely Oracle's desire to
continually repair one of its many redundant header blocks.

- Jim

FWIW, you don't need a file that contains zeros, as /dev/zero works  
just fine, and it is infinitely large.
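
In other words, the dd test below could just as well be run as (device path
taken from the test case):

dd if=/dev/zero of=/dev/rdsk/c2t42d0s6 bs=1k count=1
dd if=/dev/rdsk/c2t42d0s6 bs=1k count=1 | od -c | head    # od makes the binary result readable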

>
>
> Setup:
> Two X4500 / Sol 10 U5 iSCSI servers, four T1000 S10 U4 -> U5 Oracle  
> RAC
> DB heads iSCSI clients.
>
> iSCSI set up using zfs volumes, set shareiscsi=on,
> (slightly wierd thing) partitioned disks to get max spindles
> available for "pseudo-RAID 10" performance zpools (500 gb disks,
> 465 usable, partitioned 115 GB for "fast" db, 345 for "archive" db,
> 5 gb for "utility" used for OCR and VOTE partitions in RAC).
> Disks on each server set up the same way, active zpool disks
> in 7 "fast" pools ("fast" partition on target 1 on each SATA
> controller all together in one pool, target 2 on each in second  
> pool, etc)
> 7 "archive" pools and 7 "utility" pools.  "fast" and "utility" are
> zpool pseudo-RAID 10  "archive" raid-Z.  Fixed size zfs volumes
> built to full capacity of each pool.
>
> The clients were S10U4 when we first spotted this, we upgraded them
> all to S10U5 as soon as we noticed that, but the problem happened
> again last week.  The X4500s have been S10U5 since they were  
> installed.
>
>
> Problem:
> Both servers have experienced a failure mode which initially
> manifested as a Oracle RAC crash and proved via testing to be
> an ignored iSCSI write to "fast" partitions.
>
> Test case:
> (/tmp/zero is a 1-k file full of zero)
> # dd if=/dev/rdsk/c2t42d0s6 bs=1k count=1
> nÉçORCLDISK
> FDATA_0008FDATAFDATA_0008ö*Én¨ö*íSô¼>Ú
> ö*5|1+0 records in
> 1+0 records out
> # dd of=/dev/rdsk/c2t42d0s6 if=/tmp/zero bs=1k count=1
> 1+0 records in
> 1+0 records out
> # dd if=/dev/rdsk/c2t42d0s6 bs=1k count=1
> nÉçORCLDISK
> FDATA_0008FDATAFDATA_0008ö*Én¨ö*íSô¼>Ú
> ö*5|1+0 records in
> 1+0 records out
> #
>
>
> Once this started happening, the same write behavior appears  
> immediately
> on all clients, including new ones which had not previously been
> connected to the iSCSI server.
>
> We can write a block of all 0's, or A's, out to any of the other iSCSI
> devices other than the problem one, and read it back fine.  But the
> misbehaving one consistently refuses to actually commit writes,
> though it takes the write and returns.  All reads get the old data.
>
> zpool status, zfs list, /var/adm/messages, everything else we look
> at on the servers say they're all happy and fine.  But obviously
> there's something very wrong with the particular volume / pool
> which is giving us problems.
>
> A coworker fixed it the first time by running a manual resilver,
> once that was underway writes did the right thing again.  But that
> was just a random shot in the dark - we saw no errors or clear
> reason to resilver.
>
> We saw it again, and it blew up the just-about-to-go-live database,
> and we had to cut over to SAN storage to hit the deploy window.
>
> It's happend on both the X4500s we were using for iSCSI, so it's
> not a single point hardware issue.
>
> I have preserved the second failed system in error mode in case
> someone has ideas for more diagnostics.
>
> I have an open support ticket, but so far no hint at a solution.
>
> Anyone on list have ideas?
>
>
> Thanks
>
> - -george william herbert
> [EMAIL PROTECTED]


[zfs-discuss] does disksize matter for a zfs mirror

2008-11-04 Thread dick hoogendijk

Right now s10u6 runs on ZFS off disk c1d0s0. A check proved that even
the zones are copied correctly at last! Hurray. So I have no more need of
c0d0s0. For data integrity it is very wise to use this disk in a
mirror, right?

Only problem: the system (ZFS) disk is 320 GB; the older c0d0s0 drive is
only 250 GB. Is this a problem?

If not, I'll attach the disk asap.
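
For reference, the attach itself would be something like the sketch below. The
pool name rpool is an assumption, and the crux of the size question is that
zpool refuses to attach a device smaller than the existing one:

# assumed pool/device names; zpool attach fails if c0d0s0 is smaller
# than the vdev it is being attached to
zpool attach rpool c1d0s0 c0d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0   # x86 boot blocks on the new half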

-- 
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
++ http://nagual.nl/ | SunOS 10u6 10/08 ++



Re: [zfs-discuss] TimeSlider and ZFS auto snapshot problem into indiana

2008-11-04 Thread Lars Timmann
I fixed it by setting the *LK* to *NL* in /etc/shadow so that zfssnap can 
execute cron jobs.
And I added this line to /etc/user_attr:

zfssnap::::type=role;auths=solaris.smf.manage.zfs-auto-snapshot;profiles=ZFS File System Management

to give zfssnap the rights to snapshot.
Maybe there is a smarter way, but it works.


Re: [zfs-discuss] ZFS wrecked our system...

2008-11-04 Thread Thomas Maier-Komor
Christiaan Willemsen schrieb:
> 
>> do the disks show up as expected in format?
>>
>> Is your root pool just a single disk or is it a mirror of mutliple
>> disks? Did you attach/detach any disks to the root pool before rebooting?
>>   
> No, we did nothing at all to the pools. The root pool is a hardware
> mirror, not a zfs mirror.
> 
> Actually, it looks like Opensolaris can't find any of the disk.

There was recently a thread where someone had an issue importing a
known-to-be-healthy pool after a BIOS update. It turned out that the new
BIOS had a different host protected area on the disks and therefore
delivered a different disk size to the OS. I'd check the controller and BIOS
settings that are concerned with disks. Any change in this area might
lead to this effect.

Additionally, I think it is not a good idea to use a RAID controller to
mirror disks for ZFS. That way, a silently corrupted sector cannot be
corrected by ZFS. In contrast, if you give ZFS both disks as individual
disks and create a zpool mirror, ZFS is able to detect corrupted sectors
and correct them from the healthy side of the mirror. A hardware mirror
will never know which side of the mirror is good and which is bad...
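
The self-healing Thomas describes is visible after the fact, e.g. (pool name is
a placeholder):

# a scrub repairs any silently corrupted blocks from the good side of the mirror
zpool scrub rpool
zpool status -v rpool      # CKSUM column and the "scrub: ... repaired" line show what was fixed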




Re: [zfs-discuss] ZFS wrecked our system...

2008-11-04 Thread Thomas Maier-Komor
Christiaan Willemsen schrieb:
> Since the last reboot, our system wont boot anymore. It hangs at the "Use is 
> subject to license terms." line for a few minutes, and then gives an error 
> that it can't find the device it needs for making the root pool, and 
> eventually reboots.
> 
> We did not change anything to the system or to the Adaptec controller
> 
> So I tried the OpenSolaris boot CD. It also takes a few minutes to boot (this 
> was never before the case), halting at the exact same line as the normal boot.
> 
> It also complains about drives being offline, but this actually cannot be the 
> case, all drives are working fine..
> 
> When I get to a console, and do a zpool import, it can't find any pool. There 
> should be two pools, one for booting, and another one for the data. 
> 
> This is all on SNV_98...

do the disks show up as expected in format?

Is your root pool just a single disk or is it a mirror of multiple
disks? Did you attach/detach any disks to the root pool before rebooting?


[zfs-discuss] ZFS wrecked our system...

2008-11-04 Thread Christiaan Willemsen
Since the last reboot, our system won't boot anymore. It hangs at the "Use is 
subject to license terms." line for a few minutes, then gives an error that it 
can't find the device it needs for the root pool, and eventually 
reboots.

We did not change anything to the system or to the Adaptec controller

So I tried the OpenSolaris boot CD. It also takes a few minutes to boot (this 
was never before the case), halting at the exact same line as the normal boot.

It also complains about drives being offline, but this actually cannot be the 
case, all drives are working fine..

When I get to a console, and do a zpool import, it can't find any pool. There 
should be two pools, one for booting, and another one for the data. 

This is all on SNV_98...


Re: [zfs-discuss] zfs compression - btrfs compression

2008-11-04 Thread Darren J Moffat
[EMAIL PROTECTED] wrote:
> On Mon, Nov 03, 2008 at 12:33:52PM -0600, Bob Friesenhahn wrote:
>> On Mon, 3 Nov 2008, Robert Milkowski wrote:
>>> Now, the good filter could be to use MAGIC numbers within files or
>>> approach btrfs come up with, or maybe even both combined.
>> You are suggesting that ZFS should detect a GIF or JPEG image stored 
>> in a database BLOB.  That is pretty fancy functionality. ;-)
> 
> Maybe some general approach (not strictly GIF- or JPEG-oriented)
> could be useful.
> 
> Give people a choice and they will love ZFS even more.

But what is the choice you guys are asking for ?

You can already control the compression setting on a per-dataset basis; 
what more do you really need?
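
For example (dataset names are made up):

# the per-dataset control that exists today
zfs set compression=off tank/media        # already-compressed data, not worth recompressing
zfs set compression=gzip-9 tank/text      # highly compressible data
zfs get -r compression tank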

Remembering that the more knobs there are to turn the harder it is to 
reason about what is going on and predict behaviour.

I just don't see what the problem is with the current situation.

You either want compression or you don't. This works at the block level 
in ZFS, and it already has checks and balances in place to ensure that it 
doesn't needlessly burn CPU (decompression is usually what is more 
expensive than compression).

Can someone who cares about this put a proposal together to show what 
this would look like from a ZFS properties view and what it would 
actually do, because I'm not getting it.

-- 
Darren J Moffat


[zfs-discuss] HP Smart Array and b99?

2008-11-04 Thread Fajar A. Nugraha
Per subject, has anyone successfully used b99 with HP hardware?

I've been using opensolaris for some time on HP Blade. Installing from
os200805 back in June works fine, with the caveat that I had to manually
add cpqary3 driver (v 1.90 was available back then).

After installation, I regularly upgrade with "pkg image-update". In
general it works fine, except for some warnings about cpqary3 on boot.
However, upgrading to b99 doesn't work (it hangs on boot, perhaps
something to do with cpqary3). I was able to revert to the previous
installation (b98).

After that, I decided to upgrade my zpool (b98 has zpool v13). Surprise,
surpise, the system now doesn't boot at all. Apparently I got hit by
this "bug" :
http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB

The recovery procedure involves booting from CD/DVD. Thinking that
perhaps HP's update would solve the cpqary3 problem, I tried adding cpqary3
v1.91 to osol-0811-99.iso. Still didn't work. So I tried the original
os200805 and b96. They don't work due to the zpool version (I should've
seen this one coming).

I'm trying b98 now, wish me luck :-P .

Regards,

Fajar

