Re: [zfs-discuss] import zpool error if use loop device as vdev

2007-09-18 Thread George Wilson
By default, 'zpool import' looks only in /dev/dsk. Since you are using 
/dev/lofi you will need to use 'zpool import -d /dev/lofi' to import 
your pool.
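
For example, something like this should work (a sketch, assuming the backing 
files are still attached as /dev/lofi/1 and /dev/lofi/2):

   # show which files are attached to which lofi devices
   lofiadm
   # search /dev/lofi instead of /dev/dsk for pool devices
   zpool import -d /dev/lofi pool_1and2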

Thanks,
George

sunnie wrote:
> Hey, guys,
>   I just did a test using loop devices as vdevs for a zpool.
> Procedure as follows:
> 1)  mkfile -v 100m disk1
>  mkfile -v 100m disk2
> 
> 2)  lofiadm -a disk1 /dev/lofi
>  lofiadm -a disk2 /dev/lofi
> 
> 3)  zpool create pool_1and2  /dev/lofi/1 and /dev/lofi/2
> 
> 4)  zpool export pool_1and2
> 
> 5) zpool  import pool_1and2
> 
> error info here:
> bash-3.00# zpool import pool1_1and2
> cannot import 'pool1_1and2': no such pool available
> 
> So, can anyone help explain what differs between loop devices and 
> physical block devices?
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] import zpool error if use loop device as vdev

2007-09-18 Thread sunnie
Hey, guys,
  I just did a test using loop devices as vdevs for a zpool.
Procedure as follows:
1)  mkfile -v 100m disk1
 mkfile -v 100m disk2

2)  lofiadm -a disk1 /dev/lofi
 lofiadm -a disk2 /dev/lofi

3)  zpool create pool_1and2  /dev/lofi/1 and /dev/lofi/2

4)  zpool export pool_1and2

5) zpool  import pool_1and2

error info here:
bash-3.00# zpool import pool1_1and2
cannot import 'pool1_1and2': no such pool available

So, can anyone help explain what differs between loop devices and 
physical block devices?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] what patches are needed to enable zfs_nocacheflush

2007-09-18 Thread George Wilson
Bernhard,

Here are the solaris 10 patches:

120011-14 - SunOS 5.10: kernel patch
120012-14 - SunOS 5.10_x86: kernel patch

See http://www.opensolaris.org/jive/thread.jspa?threadID=39951&tstart=0 
for more info.
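
Once the kernel patch is installed, the tunable is typically enabled by 
adding a line to /etc/system and rebooting (a sketch; please check the 
thread above before relying on it):

   # /etc/system -- stop ZFS from sending cache-flush requests to the array
   set zfs:zfs_nocacheflush = 1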

Thanks,
George

Bernhard Holzer wrote:
> Hi,
> 
> this parameter (zfs_nocacheflush) is now integrated into Solaris 10 U4. 
> Is it possible to "just install a few patches" to enable this?
> What patches are required?
> 
> Thanks
> Bernhard
> 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] what patches are needed to enable zfs_nocacheflush

2007-09-18 Thread Bernhard Holzer
Hi,

this parameter (zfs_nocacheflush) is now integrated into Solaris 10 U4. 
Is it possible to "just install a few patches" to enable this?
What patches are required?

Thanks
Bernhard

-- 
Bernhard Holzer
Sun Microsystems Ges.m.b.H.
Wienerbergstraße 3/7
A-1100 Vienna, Austria 
Phone x60983/+43 1 60563 11983
Mobile +43 664 60563 11983
Fax +43 1 60563  11920
Email [EMAIL PROTECTED]
Handelsgericht Wien, Firmenbuch-Nr. FN 186250 y


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Solaris 10 Update 4 Patches

2007-09-18 Thread George Wilson
The latest ZFS patches for Solaris 10 are now available:

120011-14 - SunOS 5.10: kernel patch
120012-14 - SunOS 5.10_x86: kernel patch

ZFS Pool Version available with patches = 4

These patches will provide access to all of the latest features and bug 
fixes:

Features:
PSARC 2006/288 zpool history
PSARC 2006/308 zfs list sort option
PSARC 2006/479 zfs receive -F
PSARC 2006/486 ZFS canmount property
PSARC 2006/497 ZFS create time properties
PSARC 2006/502 ZFS get all datasets
PSARC 2006/504 ZFS user properties
PSARC 2006/622 iSCSI/ZFS Integration
PSARC 2006/638 noxattr ZFS property

Go to http://www.opensolaris.org/os/community/arc/caselog/ for more 
details on the above.

See http://www.opensolaris.org/jive/thread.jspa?threadID=39903&tstart=0 
for complete list of CRs.
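
As a quick sanity check after patching (a sketch; patch IDs as listed above):

   showrev -p | grep 12001    # confirm 120011-14 (SPARC) or 120012-14 (x86)
   zpool upgrade -v           # list the pool versions this ZFS now supports
   zpool upgrade              # show which pools are still on older versions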


Thanks,
George
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] System hang caused by a "bad" snapshot

2007-09-18 Thread George Wilson
Ben,

Much of this code has been revamped as a result of:

6514331 in-memory delete queue is not needed

Although this may not fix your issue it would be good to try this test 
with more recent bits.

Thanks,
George

Ben Miller wrote:

> Hate to re-open something from a year ago, but we just had this problem 
> happen again.  We have been running Solaris 10u3 on this system for a while.  
> I searched the bug reports, but couldn't find anything on this.  I also think 
> I understand what happened a little more.  We take snapshots at noon and the 
> system hung up during that time.  When trying to reboot, the system would hang 
> on the ZFS mounts.  After I booted into single user and removed the snapshot from 
> the filesystem causing the problem, everything was fine.  The filesystem in 
> question was at 100% use with snapshots included.
> 
> Here's the back trace for the system when it was hung:
>> ::stack
> 0xf0046a3c(f005a4d8, 2a10004f828, 0, 181c850, 1848400, f005a4d8)
> prom_enter_mon+0x24(0, 0, 183b400, 1, 1812140, 181ae60)
> debug_enter+0x118(0, a, a, 180fc00, 0, 183d400)
> abort_seq_softintr+0x94(180fc00, 18a9800, 180c000, 2a10004fd98, 1, 1857c00)
> intr_thread+0x170(2, 30007b64bc0, 0, c001ed9, 110, 6000240)
> 0x985c8(300adca4c40, 0, 0, 0, 0, 30007b64bc0)
> dbuf_hold_impl+0x28(60008cd02e8, 0, 0, 0, 7b648d73, 2a105bb57c8)
> dbuf_hold_level+0x18(60008cd02e8, 0, 0, 7b648d73, 0, 0)
> dmu_tx_check_ioerr+0x20(0, 60008cd02e8, 0, 0, 0, 7b648c00)
> dmu_tx_hold_zap+0x84(60011fb2c40, 0, 0, 0, 30049b58008, 400)
> zfs_rmnode+0xc8(3002410d210, 2a105bb5cc0, 0, 60011fb2c40, 30007b3ff58, 
> 30007b56ac0)
> zfs_delete_thread+0x168(30007b56ac0, 3002410d210, 69a4778, 30007b56b28, 
> 2a105bb5aca, 2a105bb5ac8)
> thread_start+4(30007b56ac0, 0, 0, 489a48, d83a10bf28, 50386)
> 
> Has this been fixed in more recent code?  I can make the crash dump available.
> 
> Ben
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool history not found

2007-09-18 Thread George Wilson
You need to install patch 120011-14. After you reboot you will be able 
to run 'zpool upgrade -a' to upgrade to the latest version.
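
For example (a sketch, assuming the patch has already been downloaded and 
unpacked in the current directory):

   patchadd 120011-14    # use 120012-14 on x86
   init 6                # reboot onto the patched kernel
   zpool upgrade         # shows the pool versions currently in use
   zpool upgrade -a      # upgrades all pools to the latest supported version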

Thanks,
George

sunnie wrote:
> Hey, guys,
>  Since the current ZFS software only supports ZFS pool version 3, what should I 
> do to upgrade the ZFS software or package?
>  PS. my current OS: SunOS 5.10 Generic_118833-33 sun4u sparc
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool history not found

2007-09-18 Thread sunnie
Hey, guys,
 Since the current ZFS software only supports ZFS pool version 3, what should I 
do to upgrade the ZFS software or package?
 PS. my current OS: SunOS 5.10 Generic_118833-33 sun4u sparc
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS panic when trying to import pool

2007-09-18 Thread Geoffroy Doucet
Actually, here are the first panic messages:
Sep 13 23:33:22 netra2 unix: [ID 603766 kern.notice] assertion failed: 
dmu_read(os, smo->smo_object, offset, size, entry_map) == 0 (0x5 == 0x0), file: 
../../common/fs/zfs/space_map.c, line: 307
Sep 13 23:33:22 netra2 unix: [ID 10 kern.notice]
Sep 13 23:33:22 netra2 genunix: [ID 723222 kern.notice] 02a103e6b000 
genunix:assfail3+94 (7b7706d0, 5, 7b770710, 0, 7b770718, 133)
Sep 13 23:33:22 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
2000 0133  0186f800
Sep 13 23:33:22 netra2   %l4-7:  0183d400 
011eb400 
Sep 13 23:33:22 netra2 genunix: [ID 723222 kern.notice] 02a103e6b0c0 
zfs:space_map_load+1a4 (30007cc2c38, 70450058, 1000, 30007cc2908, 38000, 1)
Sep 13 23:33:22 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
1a60 03000ce3b000  7b73ead0
Sep 13 23:33:22 netra2   %l4-7: 7b73e86c 7fff 
7fff 1000
Sep 13 23:33:22 netra2 genunix: [ID 723222 kern.notice] 02a103e6b190 
zfs:metaslab_activate+3c (30007cc2900, 8000, c000, 
e75efe6c, 30007cc2900, c000)
Sep 13 23:33:23 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
02a103e6b308 0003 0002 006dd004
Sep 13 23:33:23 netra2   %l4-7: 7045 030010834940 
0300080eba40 0300106c9748
Sep 13 23:33:23 netra2 genunix: [ID 723222 kern.notice] 02a103e6b240 
zfs:metaslab_group_alloc+1bc (3fff, 400, 8000, 
32dc18000, 30003387d88, )
Sep 13 23:33:23 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
 0300106c9750 0001 030007cc2900
Sep 13 23:33:23 netra2   %l4-7: 8000  
000196e0c000 4000
Sep 13 23:33:23 netra2 genunix: [ID 723222 kern.notice] 02a103e6b320 
zfs:metaslab_alloc_dva+114 (0, 32dc18000, 30003387d88, 400, 300080eba40, 3fd0f1)
Sep 13 23:33:23 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0001  0003 030011c068e0
Sep 13 23:33:23 netra2   %l4-7:  0300106c9748 
 0300106c9748
Sep 13 23:33:23 netra2 genunix: [ID 723222 kern.notice] 02a103e6b3f0 
zfs:metaslab_alloc+2c (30010834940, 200, 30003387d88, 3, 3fd0f1, 0)
Sep 13 23:33:23 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
030003387de8 0300139e1800 704506a0 
Sep 13 23:33:23 netra2   %l4-7: 030013fca7be  
030010834940 0001
Sep 13 23:33:24 netra2 genunix: [ID 723222 kern.notice] 02a103e6b4a0 
zfs:zio_dva_allocate+4c (30010eafcc0, 7b7515a8, 30003387d88, 70450508, 
70450400, 20001)
Sep 13 23:33:24 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
70450400 07030001 07030001 
Sep 13 23:33:24 netra2   %l4-7:  018a5c00 
0003 0007
Sep 13 23:33:24 netra2 genunix: [ID 723222 kern.notice] 02a103e6b550 
zfs:zio_write_compress+1ec (30010eafcc0, 23e20b, 23e000, 10001, 3, 30003387d88)
Sep 13 23:33:24 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
  0001 0200
Sep 13 23:33:24 netra2   %l4-7:  0001 
fc00 0001
Sep 13 23:33:24 netra2 genunix: [ID 723222 kern.notice] 02a103e6b620 
zfs:zio_wait+c (30010eafcc0, 30010834940, 7, 30010eaff20, 3, 3fd0f1)
Sep 13 23:33:24 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
 7b7297d0 030003387d40 03000be9edf8
Sep 13 23:33:24 netra2   %l4-7: 02a103e6b7c0 0002 
0002 03000a799920
Sep 13 23:33:24 netra2 genunix: [ID 723222 kern.notice] 02a103e6b6d0 
zfs:dmu_objset_sync+12c (30003387d40, 3000a762c80, 1, 1, 3000be9edf8, 0)
Sep 13 23:33:24 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
030003387d88  0002 003be93a
Sep 13 23:33:24 netra2   %l4-7: 030003387e40 0020 
030003387e20 030003387ea0
Sep 13 23:33:25 netra2 genunix: [ID 723222 kern.notice] 02a103e6b7e0 
zfs:dsl_dataset_sync+c (30007609480, 3000a762c80, 30007609510, 30005c475b8, 
30005c475b8, 30007609480)
Sep 13 23:33:25 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0001 0007 030005c47638 0001
Sep 13 23:33:25 netra2   %l4-7: 030007609508  
030005c4caa8 
Sep 13 23:33:25 netra2 genunix: [ID 723222 kern.notice] 02a103e6b890 
zfs:dsl_pool_sync+64 (30005c47500, 3fd0f1, 30007609480, 3000f904380, 
300032bb7c0, 300032bb7e8)
Sep 13 23:33:25 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
 030010834d00 03000a762c80 030005c47698
Sep 13 23:33:25 netra2   %l4-7: 030005c47668 030005c47638 
03000

Re: [zfs-discuss] ZFS Solaris 10u5 Proposed Changes

2007-09-18 Thread John Plocher
Many/most of these are available at

http://www.opensolaris.org/os/community/arc/caselog/YYYY/CCC


replacing YYYY/CCC with the case numbers below, as in

http://www.opensolaris.org/os/community/arc/caselog/2007/171

for the 2nd one below.  I'm not sure why the first one (2007/142) isn't
there - I'll check tomorrow...


-John


Kent Watsen wrote:
> How does one access the PSARC database to lookup the description of 
> these features?
> 
> Sorry if this has been asked before! - I tried google before posting 
> this  :-[
> 
> Kent
> 
> 
> George Wilson wrote:
>> ZFS Fans,
>>
>> Here's a list of features that we are proposing for Solaris 10u5. Keep 
>> in mind that this is subject to change.
>>
>> Features:
>> PSARC 2007/142 zfs rename -r
>> PSARC 2007/171 ZFS Separate Intent Log
>> PSARC 2007/197 ZFS hotplug
>> PSARC 2007/199 zfs {create,clone,rename} -p
>> PSARC 2007/283 FMA for ZFS Phase 2
>> PSARC/2006/465 ZFS Delegated Administration
>> PSARC/2006/577 zpool property to disable delegation
>> PSARC/2006/625 Enhancements to zpool history
>> PSARC/2007/121 zfs set copies
>> PSARC/2007/228 ZFS delegation amendments
>> PSARC/2007/295 ZFS Delegated Administration Addendum
>> PSARC/2007/328 zfs upgrade
>>
>> Stay tuned for a finalized list of RFEs and fixes.
>>
>> Thanks,
>> George
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>>
>>   
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Solaris 10u5 Proposed Changes

2007-09-18 Thread Eric Schrock
http://www.opensolaris.org/os/community/arc/caselog/

- Eric

On Tue, Sep 18, 2007 at 09:39:51PM -0400, Kent Watsen wrote:
> 
> How does one access the PSARC database to lookup the description of 
> these features?
> 
> Sorry if this has been asked before! - I tried google before posting 
> this  :-[
> 
> Kent
> 
> 
> George Wilson wrote:
> > ZFS Fans,
> >
> > Here's a list of features that we are proposing for Solaris 10u5. Keep 
> > in mind that this is subject to change.
> >
> > Features:
> > PSARC 2007/142 zfs rename -r
> > PSARC 2007/171 ZFS Separate Intent Log
> > PSARC 2007/197 ZFS hotplug
> > PSARC 2007/199 zfs {create,clone,rename} -p
> > PSARC 2007/283 FMA for ZFS Phase 2
> > PSARC/2006/465 ZFS Delegated Administration
> > PSARC/2006/577 zpool property to disable delegation
> > PSARC/2006/625 Enhancements to zpool history
> > PSARC/2007/121 zfs set copies
> > PSARC/2007/228 ZFS delegation amendments
> > PSARC/2007/295 ZFS Delegated Administration Addendum
> > PSARC/2007/328 zfs upgrade
> >
> > Stay tuned for a finalized list of RFEs and fixes.
> >
> > Thanks,
> > George
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> >
> >
> >   
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS panic when trying to import pool

2007-09-18 Thread Jeff Bonwick
Basically, it is complaining that there aren't enough disks to read
the pool metadata.  This would suggest that in your 3-disk RAID-Z
config, either two disks are missing, or one disk is missing *and*
another disk is damaged -- due to prior failed writes, perhaps.

(I know there's at least one disk missing because the failure mode
is errno 6, which is ENXIO.)

Can you tell from /var/adm/messages or fmdump whether there were write
errors to multiple disks, or to just one?
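
For example, something along these lines (a sketch; the exact ereport fields 
vary by release):

   # look for device retries/errors around the time of the failure
   grep -i error /var/adm/messages
   # dump the FMA error telemetry (ereports), most recent events last
   fmdump -eV | tail -200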

Jeff

On Tue, Sep 18, 2007 at 05:26:16PM -0700, Geoffroy Doucet wrote:
> I have a raid-z zfs filesystem with 3 disks. The disks were starting to have 
> read and write errors.
> 
> The disks were so bad that I started to get trans_err. The server locked up 
> and was reset. Now when trying to import the pool, the system panics.
> 
> I installed the latest Recommended patch cluster on my Solaris 10 U3 system 
> and also installed the latest kernel patch (120011-14).
> 
> But it still panics when trying to do the zpool import.
> 
> I also dd'd the disks and tested on another server with OpenSolaris b72, and 
> it is still the same thing. Here is the panic backtrace:
> 
> Stack Backtrace
> -
> vpanic()
> assfail3+0xb9(f7dde5f0, 6, f7dde840, 0, f7dde820, 153)
> space_map_load+0x2ef(ff008f1290b8, c00fc5b0, 1, ff008f128d88,
> ff008dd58ab0)
> metaslab_activate+0x66(ff008f128d80, 8000)
> metaslab_group_alloc+0x24e(ff008f46bcc0, 400, 3fd0f1, 32dc18000,
> ff008fbeaa80, 0)
> metaslab_alloc_dva+0x192(ff008f2d1a80, ff008f235730, 200,
> ff008fbeaa80, 0, 0)
> metaslab_alloc+0x82(ff008f2d1a80, ff008f235730, 200, 
> ff008fbeaa80, 2
> , 3fd0f1)
> zio_dva_allocate+0x68(ff008f722790)
> zio_next_stage+0xb3(ff008f722790)
> zio_checksum_generate+0x6e(ff008f722790)
> zio_next_stage+0xb3(ff008f722790)
> zio_write_compress+0x239(ff008f722790)
> zio_next_stage+0xb3(ff008f722790)
> zio_wait_for_children+0x5d(ff008f722790, 1, ff008f7229e0)
> zio_wait_children_ready+0x20(ff008f722790)
> zio_next_stage_async+0xbb(ff008f722790)
> zio_nowait+0x11(ff008f722790)
> dmu_objset_sync+0x196(ff008e4e5000, ff008f722a10, ff008f260a80)
> dsl_dataset_sync+0x5d(ff008df47e00, ff008f722a10, ff008f260a80)
> dsl_pool_sync+0xb5(ff00882fb800, 3fd0f1)
> spa_sync+0x1c5(ff008f2d1a80, 3fd0f1)
> txg_sync_thread+0x19a(ff00882fb800)
> thread_start+8()
> 
> 
> 
> And here is the panic message buf:
> panic[cpu0]/thread=ff0001ba2c80:
> assertion failed: dmu_read(os, smo->smo_object, offset, size, entry_map) == 0 
> (0
> x6 == 0x0), file: ../../common/fs/zfs/space_map.c, line: 339
> 
> 
> ff0001ba24f0 genunix:assfail3+b9 ()
> ff0001ba2590 zfs:space_map_load+2ef ()
> ff0001ba25d0 zfs:metaslab_activate+66 ()
> ff0001ba2690 zfs:metaslab_group_alloc+24e ()
> ff0001ba2760 zfs:metaslab_alloc_dva+192 ()
> ff0001ba2800 zfs:metaslab_alloc+82 ()
> ff0001ba2850 zfs:zio_dva_allocate+68 ()
> ff0001ba2870 zfs:zio_next_stage+b3 ()
> ff0001ba28a0 zfs:zio_checksum_generate+6e ()
> ff0001ba28c0 zfs:zio_next_stage+b3 ()
> ff0001ba2930 zfs:zio_write_compress+239 ()
> ff0001ba2950 zfs:zio_next_stage+b3 ()
> ff0001ba29a0 zfs:zio_wait_for_children+5d ()
> ff0001ba29c0 zfs:zio_wait_children_ready+20 ()
> ff0001ba29e0 zfs:zio_next_stage_async+bb ()
> ff0001ba2a00 zfs:zio_nowait+11 ()
> ff0001ba2a80 zfs:dmu_objset_sync+196 ()
> ff0001ba2ad0 zfs:dsl_dataset_sync+5d ()
> ff0001ba2b40 zfs:dsl_pool_sync+b5 ()
> ff0001ba2bd0 zfs:spa_sync+1c5 ()
> ff0001ba2c60 zfs:txg_sync_thread+19a ()
> ff0001ba2c70 unix:thread_start+8 ()
> 
> syncing file systems...
> 
> 
> Is there a way to restore the data? Is there a way to "fsck" the zpool, and 
> correct the error manually?
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Solaris 10u5 Proposed Changes

2007-09-18 Thread Kent Watsen

How does one access the PSARC database to lookup the description of 
these features?

Sorry if this has been asked before! - I tried google before posting 
this  :-[

Kent


George Wilson wrote:
> ZFS Fans,
>
> Here's a list of features that we are proposing for Solaris 10u5. Keep 
> in mind that this is subject to change.
>
> Features:
> PSARC 2007/142 zfs rename -r
> PSARC 2007/171 ZFS Separate Intent Log
> PSARC 2007/197 ZFS hotplug
> PSARC 2007/199 zfs {create,clone,rename} -p
> PSARC 2007/283 FMA for ZFS Phase 2
> PSARC/2006/465 ZFS Delegated Administration
> PSARC/2006/577 zpool property to disable delegation
> PSARC/2006/625 Enhancements to zpool history
> PSARC/2007/121 zfs set copies
> PSARC/2007/228 ZFS delegation amendments
> PSARC/2007/295 ZFS Delegated Administration Addendum
> PSARC/2007/328 zfs upgrade
>
> Stay tuned for a finalized list of RFEs and fixes.
>
> Thanks,
> George
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

2007-09-18 Thread eric kustarz

On Sep 18, 2007, at 6:25 AM, Jill Duff wrote:

> Thanks for the feedback. I attempted to enter this bug into the  
> OpenSolaris
> Bug Database yesterday, 9/17. However, it looks as if it has either  
> been
> filtered out or I made an error during entry. I'm willing to re- 
> enter it if
> that's helpful.

Yes, please do.  Let me know if that doesn't work...

eric

>
> I can provide the source code for my test app and one crash dump if  
> anyone
> needs it. Yesterday, the crash was reproduced using bonnie++, an  
> open source
> storage benchmark utility, although the crash is not as frequent as  
> when
> using my test app.
>
> Duff
>
> -Original Message-
> From: eric kustarz [mailto:[EMAIL PROTECTED]
> Sent: Monday, September 17, 2007 6:58 PM
> To: J Duff; [EMAIL PROTECTED]
> Cc: ZFS Discussions
> Subject: Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash
>
> This actually looks like a sd bug... forwarding it to the storage
> alias to see if anyone has seen this...
>
> eric
>
> On Sep 14, 2007, at 12:42 PM, J Duff wrote:
>
>> I'd like to report the ZFS related crash/bug described below. How
>> do I go about reporting the crash and what additional information
>> is needed?
>>
>> I'm using my own very simple test app that creates numerous
>> directories and files of randomly generated data. I have run the
>> test app on two machines, both 64 bit.
>>
>> OpenSolaris crashes a few minutes after starting my test app. The
>> crash has occurred on both machines. On Machine 1, the fault occurs
>> in the SCSI driver when invoked from ZFS. On Machine 2, the fault
>> occurs in the ATA driver when invoked from ZFS. The relevant parts
>> of the message logs appear at the end of this post.
>>
>> The crash is repeatable when using the ZFS file system. The crash
>> does not occur when running the test app against a Solaris/UFS file
>> system.
>>
>> Machine 1:
>> OpenSolaris Community Edition,
>>  snv_72, no BFU (not DEBUG)
>> SCSI Drives, Fibre Channel
>> ZFS Pool is six drive stripe set
>>
>> Machine 2:
>> OpenSolaris Community Edition
>> snv_68 with BFU (kernel has DEBUG enabled)
>> SATA Drives
>> ZFS Pool is four RAIDZ sets, two disks in each RAIDZ set
>>
>> (Please forgive me if I have posted in the wrong place. I am new to
>> ZFS and this forum. However, this forum appears to be the best
>> place to get good quality ZFS information. Thanks.)
>>
>> Duff
>>
>> --
>>
>> Machine 1 Message Log:
>> . . .
>> Sep 13 14:13:22 cypress unix: [ID 836849 kern.notice]
>> Sep 13 14:13:22 cypress ^Mpanic[cpu5]/thread=ff000840dc80:
>> Sep 13 14:13:22 cypress genunix: [ID 683410 kern.notice] BAD TRAP:
>> type=e (#pf Page fault) rp=ff000840ce90 addr=ff01f2b0
>> Sep 13 14:13:22 cypress unix: [ID 10 kern.notice]
>> Sep 13 14:13:22 cypress unix: [ID 839527 kern.notice] sched:
>> Sep 13 14:13:22 cypress unix: [ID 753105 kern.notice] #pf Page fault
>> Sep 13 14:13:22 cypress unix: [ID 532287 kern.notice] Bad kernel
>> fault at addr=0xff01f2b0
>> . . .
>> Sep 13 14:13:22 cypress unix: [ID 10 kern.notice]
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840cd70 unix:die+ea ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840ce80 unix:trap+1351 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840ce90 unix:_cmntrap+e9 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840cfc0 scsi:scsi_transport+1f ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d040 sd:sd_start_cmds+2f4 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d090 sd:sd_core_iostart+17b ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d0f0 sd:sd_mapblockaddr_iostart+185 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d140 sd:sd_xbuf_strategy+50 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d180 sd:xbuf_iostart+103 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d1b0 sd:ddi_xbuf_qstrategy+60 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d1f0 sd:sdstrategy+ec ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d220 genunix:bdev_strategy+77 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d250 genunix:ldi_strategy+54 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d2a0 zfs:vdev_disk_io_start+219 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d2c0 zfs:vdev_io_start+1d ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d300 zfs:zio_vdev_io_start+123 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]
>> ff000840d320 zfs:zio_next_stage_async+

[zfs-discuss] ZFS panic when trying to import pool

2007-09-18 Thread Geoffroy Doucet
I have a raid-z zfs filesystem with 3 disks. The disks were starting to have 
read and write errors.

The disks were so bad that I started to get trans_err. The server locked up 
and was reset. Now when trying to import the pool, the system panics.

I installed the latest Recommended patch cluster on my Solaris 10 U3 system 
and also installed the latest kernel patch (120011-14).

But it still panics when trying to do the zpool import.

I also dd'd the disks and tested on another server with OpenSolaris b72, and 
it is still the same thing. Here is the panic backtrace:

Stack Backtrace
-
vpanic()
assfail3+0xb9(f7dde5f0, 6, f7dde840, 0, f7dde820, 153)
space_map_load+0x2ef(ff008f1290b8, c00fc5b0, 1, ff008f128d88,
ff008dd58ab0)
metaslab_activate+0x66(ff008f128d80, 8000)
metaslab_group_alloc+0x24e(ff008f46bcc0, 400, 3fd0f1, 32dc18000,
ff008fbeaa80, 0)
metaslab_alloc_dva+0x192(ff008f2d1a80, ff008f235730, 200,
ff008fbeaa80, 0, 0)
metaslab_alloc+0x82(ff008f2d1a80, ff008f235730, 200, ff008fbeaa80, 2
, 3fd0f1)
zio_dva_allocate+0x68(ff008f722790)
zio_next_stage+0xb3(ff008f722790)
zio_checksum_generate+0x6e(ff008f722790)
zio_next_stage+0xb3(ff008f722790)
zio_write_compress+0x239(ff008f722790)
zio_next_stage+0xb3(ff008f722790)
zio_wait_for_children+0x5d(ff008f722790, 1, ff008f7229e0)
zio_wait_children_ready+0x20(ff008f722790)
zio_next_stage_async+0xbb(ff008f722790)
zio_nowait+0x11(ff008f722790)
dmu_objset_sync+0x196(ff008e4e5000, ff008f722a10, ff008f260a80)
dsl_dataset_sync+0x5d(ff008df47e00, ff008f722a10, ff008f260a80)
dsl_pool_sync+0xb5(ff00882fb800, 3fd0f1)
spa_sync+0x1c5(ff008f2d1a80, 3fd0f1)
txg_sync_thread+0x19a(ff00882fb800)
thread_start+8()



And here is the panic message buf:
panic[cpu0]/thread=ff0001ba2c80:
assertion failed: dmu_read(os, smo->smo_object, offset, size, entry_map) == 0 (0
x6 == 0x0), file: ../../common/fs/zfs/space_map.c, line: 339


ff0001ba24f0 genunix:assfail3+b9 ()
ff0001ba2590 zfs:space_map_load+2ef ()
ff0001ba25d0 zfs:metaslab_activate+66 ()
ff0001ba2690 zfs:metaslab_group_alloc+24e ()
ff0001ba2760 zfs:metaslab_alloc_dva+192 ()
ff0001ba2800 zfs:metaslab_alloc+82 ()
ff0001ba2850 zfs:zio_dva_allocate+68 ()
ff0001ba2870 zfs:zio_next_stage+b3 ()
ff0001ba28a0 zfs:zio_checksum_generate+6e ()
ff0001ba28c0 zfs:zio_next_stage+b3 ()
ff0001ba2930 zfs:zio_write_compress+239 ()
ff0001ba2950 zfs:zio_next_stage+b3 ()
ff0001ba29a0 zfs:zio_wait_for_children+5d ()
ff0001ba29c0 zfs:zio_wait_children_ready+20 ()
ff0001ba29e0 zfs:zio_next_stage_async+bb ()
ff0001ba2a00 zfs:zio_nowait+11 ()
ff0001ba2a80 zfs:dmu_objset_sync+196 ()
ff0001ba2ad0 zfs:dsl_dataset_sync+5d ()
ff0001ba2b40 zfs:dsl_pool_sync+b5 ()
ff0001ba2bd0 zfs:spa_sync+1c5 ()
ff0001ba2c60 zfs:txg_sync_thread+19a ()
ff0001ba2c70 unix:thread_start+8 ()

syncing file systems...


Is there a way to restore the data? Is there a way to "fsck" the zpool, and 
correct the error manually?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Solaris 10u5 Proposed Changes

2007-09-18 Thread George Wilson
ZFS Fans,

Here's a list of features that we are proposing for Solaris 10u5. Keep 
in mind that this is subject to change.

Features:
PSARC 2007/142 zfs rename -r
PSARC 2007/171 ZFS Separate Intent Log
PSARC 2007/197 ZFS hotplug
PSARC 2007/199 zfs {create,clone,rename} -p
PSARC 2007/283 FMA for ZFS Phase 2
PSARC/2006/465 ZFS Delegated Administration
PSARC/2006/577 zpool property to disable delegation
PSARC/2006/625 Enhancements to zpool history
PSARC/2007/121 zfs set copies
PSARC/2007/228 ZFS delegation amendments
PSARC/2007/295 ZFS Delegated Administration Addendum
PSARC/2007/328 zfs upgrade

Stay tuned for a finalized list of RFEs and fixes.

Thanks,
George
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs snapshot timestamp info

2007-09-18 Thread Tim Spriggs

I think they are listed in order with "zfs list".
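
For example (a sketch, assuming a pool named tank):

   zfs list -r -t snapshot -o name,creation -s creation tank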


Fred Oliver wrote:
> Along these lines, the creation time is provided to the nearest second. 
> It is possible to have two snapshots (of the same file system) with the 
> same time stamp.
>
> In this case, is there any way to determine which snapshot was created 
> earlier?
>
> This would be helpful to know in order to predict the effect of a 
> rollback or promote command.
>
> Fred Oliver
>
>
> Tim Spriggs wrote:
>   
>> zfs get creation pool|filesystem|snapshot
>>
>> Poulos, Joe wrote:
>> 
>>> Hello,
>>>
>>>  
>>>
>>> Is there a way to find out what the timestamp is of a specific 
>>> snapshot?  Currently, I have a system with 5 snapshots, and would like 
>>> to know the timestamp as to when it was created.  Thanks JOr
>>>
>>> This message and its attachments may contain legally privileged or 
>>> confidential information. It is intended solely for the named 
>>> addressee. If you are not the addressee indicated in this message (or 
>>> responsible for delivery of the message to the addressee), you may not 
>>> copy or deliver this message or its attachments to anyone. Rather, you 
>>> should permanently delete this message and its attachments and kindly 
>>> notify the sender by reply e-mail. Any content of this message and its 
>>> attachments that does not relate to the official business of News 
>>> America Incorporated or its subsidiaries must be taken not to have 
>>> been sent or endorsed by any of them. No warranty is made that the 
>>> e-mail or attachment(s) are free from computer virus or other defect.
>>>
>>> 
>>>
>>> ___
>>> zfs-discuss mailing list
>>> zfs-discuss@opensolaris.org
>>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>>   
>>>   
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs snapshot timestamp info

2007-09-18 Thread Fred Oliver

Along these lines, the creation time is provided to the nearest second. 
It is possible to have two snapshots (of the same file system) with the 
same time stamp.

In this case, is there any way to determine which snapshot was created 
earlier?

This would be helpful to know in order to predict the effect of a 
rollback or promote command.

Fred Oliver


Tim Spriggs wrote:
> zfs get creation pool|filesystem|snapshot
> 
> Poulos, Joe wrote:
>> Hello,
>>
>>  
>>
>> Is there a way to find out what the timestamp is of a specific 
>> snapshot?  Currently, I have a system with 5 snapshots, and would like 
>> to know the timestamp as to when it was created.  Thanks JOr
>>
>> This message and its attachments may contain legally privileged or 
>> confidential information. It is intended solely for the named 
>> addressee. If you are not the addressee indicated in this message (or 
>> responsible for delivery of the message to the addressee), you may not 
>> copy or deliver this message or its attachments to anyone. Rather, you 
>> should permanently delete this message and its attachments and kindly 
>> notify the sender by reply e-mail. Any content of this message and its 
>> attachments that does not relate to the official business of News 
>> America Incorporated or its subsidiaries must be taken not to have 
>> been sent or endorsed by any of them. No warranty is made that the 
>> e-mail or attachment(s) are free from computer virus or other defect.
>>
>> 
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>   
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs snapshot timestamp info

2007-09-18 Thread Eric Schrock
Or 'zfs list -t snapshot -o name,creation'

- Eric

On Tue, Sep 18, 2007 at 01:41:19PM -0700, Richard Elling wrote:
> Try zpool history
>   -- richard
> 
> Poulos, Joe wrote:
> > 
> > 
> > Hello,
> > 
> >  
> > 
> > Is there a way to find out what the timestamp is of a specific 
> > snapshot?  Currently, I have a system with 5 snapshots, and would like 
> > to know the timestamp as to when it was created.  Thanks JOr
> > 
> > This message and its attachments may contain legally privileged or 
> > confidential information. It is intended solely for the named addressee. 
> > If you are not the addressee indicated in this message (or responsible 
> > for delivery of the message to the addressee), you may not copy or 
> > deliver this message or its attachments to anyone. Rather, you should 
> > permanently delete this message and its attachments and kindly notify 
> > the sender by reply e-mail. Any content of this message and its 
> > attachments that does not relate to the official business of News 
> > America Incorporated or its subsidiaries must be taken not to have been 
> > sent or endorsed by any of them. No warranty is made that the e-mail or 
> > attachment(s) are free from computer virus or other defect.
> > 
> > 
> > 
> > 
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs snapshot timestamp info

2007-09-18 Thread Tim Spriggs

zfs get creation pool|filesystem|snapshot

Poulos, Joe wrote:
>
> Hello,
>
>  
>
> Is there a way to find out what the timestamp is of a specific 
> snapshot?  Currently, I have a system with 5 snapshots, and would like 
> to know the timestamp as to when it was created.  Thanks JOr
>
> This message and its attachments may contain legally privileged or 
> confidential information. It is intended solely for the named 
> addressee. If you are not the addressee indicated in this message (or 
> responsible for delivery of the message to the addressee), you may not 
> copy or deliver this message or its attachments to anyone. Rather, you 
> should permanently delete this message and its attachments and kindly 
> notify the sender by reply e-mail. Any content of this message and its 
> attachments that does not relate to the official business of News 
> America Incorporated or its subsidiaries must be taken not to have 
> been sent or endorsed by any of them. No warranty is made that the 
> e-mail or attachment(s) are free from computer virus or other defect.
>
> 
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs log device (zil) ever coming to Sol10?

2007-09-18 Thread Neil Perrin


Matty wrote:
> On 9/18/07, Neil Perrin <[EMAIL PROTECTED]> wrote:
> 
>> Separate log devices (slogs) didn't make it into S10U4 but will be in U5.
> 
> This is awesome! Will the SYNC_NV support that was integrated this
> week be added to update 5 as well? That would be super useful,
> assuming the major array vendors support it.

I believe it will. So far we have just batched up all the
bug fixes and enhancements in ZFS and all of them are integrated
into the next update. It's easier for us that way as well.

Actually the part of "we" is not usually played by me!

Neil.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs snapshot timestamp info

2007-09-18 Thread Richard Elling
Try zpool history
  -- richard

Poulos, Joe wrote:
> 
> 
> Hello,
> 
>  
> 
> Is there a way to find out what the timestamp is of a specific 
> snapshot?  Currently, I have a system with 5 snapshots, and would like 
> to know the timestamp as to when it was created.  Thanks JOr
> 
> This message and its attachments may contain legally privileged or 
> confidential information. It is intended solely for the named addressee. 
> If you are not the addressee indicated in this message (or responsible 
> for delivery of the message to the addressee), you may not copy or 
> deliver this message or its attachments to anyone. Rather, you should 
> permanently delete this message and its attachments and kindly notify 
> the sender by reply e-mail. Any content of this message and its 
> attachments that does not relate to the official business of News 
> America Incorporated or its subsidiaries must be taken not to have been 
> sent or endorsed by any of them. No warranty is made that the e-mail or 
> attachment(s) are free from computer virus or other defect.
> 
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs log device (zil) ever coming to Sol10?

2007-09-18 Thread Matty
On 9/18/07, Neil Perrin <[EMAIL PROTECTED]> wrote:

> Separate log devices (slogs) didn't make it into S10U4 but will be in U5.

This is awesome! Will the SYNC_NV support that was integrated this
week be added to update 5 as well? That would be super useful,
assuming the major array vendors support it.

Thanks,
- Ryan
-- 
UNIX Administrator
http://prefetch.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs snapshot timestamp info

2007-09-18 Thread Poulos, Joe
Hello,

 

Is there a way to find out what the timestamp is of a specific snapshot?
Currently, I have a system with 5 snapshots, and would like to know the
timestamp as to when it was created.  Thanks JOr



This message and its attachments may contain legally privileged or confidential 
information.  It is intended solely for the named addressee.  If you are not 
the addressee indicated in this message (or responsible for delivery of the 
message to the addressee), you may not copy or deliver this message or its 
attachments to anyone.  Rather, you should permanently delete this message and 
its attachments and kindly notify the sender by reply e-mail.  Any content of 
this message and its attachments that does not relate to the official business 
of News America Incorporated or its subsidiaries must be taken not to have been 
sent or endorsed by any of them.  No warranty is made that the e-mail or 
attachment(s) are free from computer virus or other defect.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] arcstat - a tool to print ARC statistics

2007-09-18 Thread Neelakanth Nadgir
I wrote a simple tool to print out the ARC statistics exported via
kstat. Details at
http://blogs.sun.com/realneel/entry/zfs_arc_statistics
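
The same counters can also be pulled straight from kstat, e.g. (a sketch; 
statistic names may differ slightly between releases):

   kstat -m zfs -n arcstats
   kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses zfs:0:arcstats:size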

-neel

-- 
---
Neelakanth Nadgir  PAE Performance And Availability Eng 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] System hang caused by a "bad" snapshot

2007-09-18 Thread Ben Miller
> > Hello Matthew,
> > Tuesday, September 12, 2006, 7:57:45 PM, you wrote:
> > MA> Ben Miller wrote:
> > >> I had a strange ZFS problem this morning.  The entire system would
> > >> hang when mounting the ZFS filesystems.  After trial and error I
> > >> determined that the problem was with one of the 2500 ZFS filesystems.
> > >> When mounting that users' home the system would hang and need to be
> > >> rebooted.  After I removed the snapshots (9 of them) for that
> > >> filesystem everything was fine.
> > >>
> > >> I don't know how to reproduce this and didn't get a crash dump.  I
> > >> don't remember seeing anything about this before so I wanted to
> > >> report it and see if anyone has any ideas.
> >
> > MA> Hmm, that sounds pretty bizarre, since I don't think that mounting a
> > MA> filesystem really interacts with snapshots at all.
> > MA> Unfortunately, I don't think we'll be able to diagnose this without a
> > MA> crash dump or reproducibility.  If it happens again, force a crash
> > MA> dump while the system is hung and we can take a look at it.
> >
> > Maybe it wasn't hung after all. I've seen similar behavior here
> > sometimes. Were the disks used in the pool actually working?
>
> There was lots of activity on the disks (iostat and status LEDs) until it
> got to this one filesystem and everything stopped.  'zpool iostat 5'
> stopped running, the shell wouldn't respond and activity on the disks
> stopped.  This fs is relatively small (175M used of a 512M quota).
>
> > Sometimes it takes a lot of time (30-50 minutes) to mount a file system
> > - it's rare, but it happens. And during this ZFS reads from those
> > disks in a pool. I did report it here some time ago.
>
> In my case the system crashed during the evening and it was left hung up
> when I came in during the morning, so it was hung for a good 9-10 hours.
>
> The problem happened again last night, but for a different users'
> filesystem.  I took a crash dump with it hung and the back trace looks
> like this:
> > ::status
> debugging crash dump vmcore.0 (64-bit) from hostname
> operating system: 5.11 snv_40 (sun4u)
> panic message: sync initiated
> dump content: kernel pages only
> > ::stack
> 0xf0046a3c(f005a4d8, 2a100047818, 181d010, 18378a8, 1849000, f005a4d8)
> prom_enter_mon+0x24(2, 183c000, 18b7000, 2a100046c61, 1812158, 181b4c8)
> debug_enter+0x110(0, a, a, 180fc00, 0, 183e000)
> abort_seq_softintr+0x8c(180fc00, 18abc00, 180c000, 2a100047d98, 1, 1859800)
> intr_thread+0x170(600019de0e0, 0, 6000d7bfc98, 600019de110, 600019de110,
> 600019de110)
> zfs_delete_thread_target+8(600019de080, , 0, 600019de080,
> 6000d791ae8, 60001aed428)
> zfs_delete_thread+0x164(600019de080, 6000d7bfc88, 1, 2a100c4faca,
> 2a100c4fac8, 600019de0e0)
> thread_start+4(600019de080, 0, 0, 0, 0, 0)
>
> In single user I set the mountpoint for that user to be none and then
> brought the system up fine.  Then I destroyed the snapshots for that user
> and their filesystem mounted fine.  In this case the quota was reached
> with the snapshots and 52% used without.
>
> Ben

Hate to re-open something from a year ago, but we just had this problem happen 
again.  We have been running Solaris 10u3 on this system for a while.  I 
searched the bug reports, but couldn't find anything on this.  I also think I 
understand what happened a little more.  We take snapshots at noon and the 
system hung up during that time.  When trying to reboot, the system would hang 
on the ZFS mounts.  After I booted into single user and removed the snapshot from 
the filesystem causing the problem, everything was fine.  The filesystem in 
question was at 100% use with snapshots included.

Here's the back trace for the system when it was hung:
> ::stack
0xf0046a3c(f005a4d8, 2a10004f828, 0, 181c850, 1848400, f005a4d8)
prom_enter_mon+0x24(0, 0, 183b400, 1, 1812140, 181ae60)
debug_enter+0x118(0, a, a, 180fc00, 0, 183d400)
abort_seq_softintr+0x94(180fc00, 18a9800, 180c000, 2a10004fd98, 1, 1857c00)
intr_thread+0x170(2, 30007b64bc0, 0, c001ed9, 110, 6000240)
0x985c8(300adca4c40, 0, 0, 0, 0, 30007b64bc0)
dbuf_hold_impl+0x28(60008cd02e8, 0, 0, 0, 7b648d73, 2a105bb57c8)
dbuf_hold_level+0x18(60008cd02e8, 0, 0, 7b648d73, 0, 0)
dmu_tx_check_ioerr+0x20(0, 60008cd02e8, 0, 0, 0, 7b648c00)
dmu_tx_hold_zap+0x84(60011fb2c40, 0, 0, 0, 30049b58008, 400)
zfs_rmnode+0xc8(3002410d210, 2a105bb5cc0, 0, 60011fb2c40, 30007b3ff58, 
30007b56ac0)
zfs_delete_thread+0x168(30007b56ac0, 3002410d210, 69a4778, 30007b56b28, 
2a105bb5aca, 2a105bb5ac8)
thread_start+4(30007b56ac0, 0, 0, 489a48, d83a10bf28, 50386)

Has this been fixed in more recent code?  I can make the crash dump available.

Ben
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-

Re: [zfs-discuss] Zfs log device (zil) ever coming to Sol10?

2007-09-18 Thread Andy Lubel
On 9/18/07 2:26 PM, "Neil Perrin" <[EMAIL PROTECTED]> wrote:

> 
> 
> Andy Lubel wrote:
>> On 9/18/07 1:02 PM, "Bryan Cantrill" <[EMAIL PROTECTED]> wrote:
>> 
>>> Hey Andy,
>>> 
>>> On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
 I think we are very close to using zfs in our production environment..  Now
 that I have snv_72 installed and my pools set up with NVRAM log devices
 things are hauling butt.
>>> Interesting!  Are you using a MicroMemory device, or is this some other
>>> NVRAM concoction?
>>> 
>> 
>> RAMSAN :)
>> http://www.superssd.com/products/ramsan-400/
> 
> May I ask roughly what you paid for it.
> I think perhaps we ought to get one in-house and check it out as well.
> 
> Thanks: Neil.

~80k for the 128gb model.  But we didn't pay anything for it, it was a
customer return that the vendor wouldn't take back.

Being that they have a fancy (we love) sun logo on the homepage I'm willing
to bet that they would send you a demo unit.  Let me know if I can help at
all with that.

-Andy Lubel
-- 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and encryption

2007-09-18 Thread Robert Milkowski
Hello zfs-discuss,

  I wonder if ZFS will be able to take any advantage of Niagara's
  built-in crypto?
  

-- 
Best regards,
 Robert Milkowskimailto:[EMAIL PROTECTED]
 http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs log device (zil) ever coming to Sol10?

2007-09-18 Thread Andy Lubel
On 9/18/07 1:02 PM, "Bryan Cantrill" <[EMAIL PROTECTED]> wrote:

> 
> Hey Andy,
> 
> On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
>> I think we are very close to using zfs in our production environment..  Now
>> that I have snv_72 installed and my pools set up with NVRAM log devices
>> things are hauling butt.
> 
> Interesting!  Are you using a MicroMemory device, or is this some other
> NVRAM concoction?
> 

RAMSAN :)
http://www.superssd.com/products/ramsan-400/

>> I've been digging to find out whether this capability would be put into
>> Solaris 10, does anyone know?
> 
> I would say it's probably unlikely, but I'll let Neil and the ZFS team
> speak for that.  Do you mind if I ask what you're using ZFS for?
> 
> - Bryan

Today's answer:
We want to use it for nearline backups (via NFS); eventually we would like
to use zvols+iSCSI to serve up storage for Oracle databases.

My future answer:
What cant we use ZFS for?


If anyone wants to see my iozones just let me know.

-Andy Lubel

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs log device (zil) ever coming to Sol10?

2007-09-18 Thread Neil Perrin
Separate log devices (slogs) didn't make it into S10U4 but will be in U5.

Andy Lubel wrote:
> I think we are very close to using zfs in our production environment..  Now
> that I have snv_72 installed and my pools set up with NVRAM log devices
> things are hauling butt.
> 
> I've been digging to find out whether this capability would be put into
> Solaris 10, does anyone know?
> 
> If not, then I guess we can probably be OK using SXCE (as Joyent did).
> 
> Thanks,
> 
> Andy Lubel
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs log device (zil) ever coming to Sol10?

2007-09-18 Thread Cindy . Swearingen
The log device feature was integrated into snv_68.

You can read about them here:

http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on

And starting on page 18 of the ZFS Admin Guide, here:

http://opensolaris.org/os/community/zfs/docs
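
For example, a minimal sketch (device names are placeholders):

   # create a pool with a separate intent log device
   zpool create tank mirror c0t0d0 c0t1d0 log c0t2d0
   # or add a log device to an existing pool
   zpool add tank log c0t2d0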



Albert Chin wrote:
> On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
> 
>>I think we are very close to using zfs in our production environment..  Now
>>that I have snv_72 installed and my pools set up with NVRAM log devices
>>things are hauling butt.
> 
> 
> How did you get NVRAM log devices?
> 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs log device (zil) ever coming to Sol10?

2007-09-18 Thread Albert Chin
On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
> I think we are very close to using zfs in our production environment..  Now
> that I have snv_72 installed and my pools set up with NVRAM log devices
> things are hauling butt.

How did you get NVRAM log devices?

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and small files

2007-09-18 Thread Claus Guttesen
> > I have many small - mostly jpg - files where the original file is
> > approx. 1 MB and the thumbnail generated is approx. 4 KB. The files
> > are currently on vxfs. I have copied all files from one partition onto
> > a zfs-ditto. The vxfs-partition occupies 401 GB and zfs 449 GB. Most
> > files uploaded are in jpg and all thumbnails are always jpg.
>
> Is there a problem?

Not the disk usage by itself. But if zfs takes up 12% more space than
vxfs, our current 80 TB of storage will become 89 TB instead and add
cost.

> Also, how are you measuring this (what commands)?

I did a 'df -h'.

> > Will a different volblocksize (during creation of the partition) make
> > better use of the available diskspace? Will (meta)data require less
> > space if compression is enabled?
>
> volblocksize won't have any affect on file systems, it is for zvols.
> Perhaps you mean recordsize?  But recall that recordsize is the maximum limit,
> not the actual limit, which is decided dynamically.
>
> > I read http://www.opensolaris.org/jive/thread.jspa?threadID=37673&tstart=105
> > which is very similar to my case except for the file type. But no
> > clear pointers otherwise.
>
> A good start would be to find the distribution of file sizes.

The files are approx. 1 MB with a thumbnail of approx. 4 KB.

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and small files

2007-09-18 Thread Richard Elling
Claus Guttesen wrote:
> I have many small - mostly jpg - files where the original file is
> approx. 1 MB and the thumbnail generated is approx. 4 KB. The files
> are currently on vxfs. I have copied all files from one partition onto
> a zfs-ditto. The vxfs-partition occupies 401 GB and zfs 449 GB. Most
> files uploaded are in jpg and all thumbnails are always jpg.

Is there a problem?
Also, how are you measuring this (what commands)?

> Will a different volblocksize (during creation of the partition) make
> better use of the available diskspace? Will (meta)data require less
> space if compression is enabled?

volblocksize won't have any affect on file systems, it is for zvols.
Perhaps you mean recordsize?  But recall that recordsize is the maximum limit,
not the actual limit, which is decided dynamically.
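
For reference, a sketch of how those properties are checked and set (the 
dataset name is a placeholder):

   zfs get recordsize,compression,used tank/images
   zfs set compression=on tank/images   # only affects newly written blocks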

> I read http://www.opensolaris.org/jive/thread.jspa?threadID=37673&tstart=105
> which is very similar to my case except for the file type. But no
> clear pointers otherwise.

A good start would be to find the distribution of file sizes.
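
A rough sketch of one way to gather that, bucketing file sizes into powers 
of two (assumes find supports -exec ... {} +):

   find /pool/fs -type f -exec ls -ln {} + | \
     awk '{ b = 512; while ($5 > b) b *= 2; n[b]++ }
          END { for (s in n) print s, n[s] }' | sort -n
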
  -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Zfs log device (zil) ever coming to Sol10?

2007-09-18 Thread Andy Lubel
I think we are very close to using zfs in our production environment..  Now
that I have snv_72 installed and my pools set up with NVRAM log devices
things are hauling butt.

I've been digging to find out whether this capability would be put into
Solaris 10, does anyone know?

If not, then I guess we can probably be OK using SXCE (as Joyent did).

Thanks,

Andy Lubel

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs-discuss Digest, Vol 23, Issue 34

2007-09-18 Thread Russ Petruzzelli




Check out this webpage:
http://www.wizy.org/wiki/ZFS_on_FUSE

Tushar Pardeshi wrote:

  Hello,

I am a final-year computer engineering student and I am planning to
implement ZFS on Linux.

I have gone through the articles posted on OpenSolaris. Please let me
know about the feasibility of implementing ZFS on Linux.

Waiting for valuable replies.

Thanks in advance.


Re: [zfs-discuss] zfs-discuss Digest, Vol 23, Issue 34

2007-09-18 Thread Tushar Pardeshi
Hello,

I am a final year computer engineering student and I am planning to
implement ZFS on Linux.

I have gone through the articles posted on Solaris. Please let me know
about the feasibility of implementing ZFS on Linux.

Waiting for valuable replies.

Thanks in advance.

On 9/14/07, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> Send zfs-discuss mailing list submissions to
>   zfs-discuss@opensolaris.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>   http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> or, via email, send a message with subject or body 'help' to
>   [EMAIL PROTECTED]
>
> You can reach the person managing the list at
>   [EMAIL PROTECTED]
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of zfs-discuss digest..."
>
>
> Today's Topics:
>
>1. Re: How do I get my pool back? (Solaris)
>2. Re: ZFS/WAFL lawsuit (Rob Windsor)
>3. Re: How do I get my pool back? (Peter Tribble)
>4. Re: How do I get my pool back? (Mike Lee)
>5. Re: How do I get my pool back? (Peter Tribble)
>6. Re: How do I get my pool back? (Eric Schrock)
>7. Re: How do I get my pool back? (Marion Hakanson)
>8. Re: How do I get my pool back? (Solaris)
>
>
> --
>
> Message: 1
> Date: Thu, 13 Sep 2007 13:27:22 -0400
> From: Solaris <[EMAIL PROTECTED]>
> Subject: Re: [zfs-discuss] How do I get my pool back?
> To: zfs-discuss@opensolaris.org
> Message-ID:
>   <[EMAIL PROTECTED]>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Try exporting the pool then import it.  I have seen this after moving disks
> between systems, and on a couple of occasions just rebooting.
>
> On 9/13/07, [EMAIL PROTECTED] <
> [EMAIL PROTECTED]> wrote:
> >
> > Date: Thu, 13 Sep 2007 15:19:02 +0100
> > From: "Peter Tribble" <[EMAIL PROTECTED]>
> > Subject: [zfs-discuss] How do I get my pool back?
> > To: zfs-discuss@opensolaris.org
> > Message-ID:
> > <[EMAIL PROTECTED]>
> > Content-Type: text/plain; charset=ISO-8859-1
> >
> > After having to replace an internal raid card in an X2200 (S10U3 in
> > this case), I can see the disks just fine - and can boot, so the data
> > isn't completely missing.
> >
> > However, my zpool has gone.
> >
> > # zpool status -x
> >   pool: storage
> > state: FAULTED
> > status: One or more devices could not be opened.  There are insufficient
> > replicas for the pool to continue functioning.
> > action: Attach the missing device and online it using 'zpool online'.
> >see: http://www.sun.com/msg/ZFS-8000-D3
> > scrub: none requested
> > config:
> >
> > NAMESTATE READ WRITE CKSUM
> > storage UNAVAIL  0 0 0  insufficient replicas
> >   c1t0d0s7  UNAVAIL  0 0 0  cannot open
> >
> > Note that this is just a slice on my boot drive. So the device
> > is physically accessible.
> >
> > How do I get my pool back?
> >
> > (And how can this sort of thing happen?)
> >
> > # zpool import
> > no pools available to import
> >
> > # zdb
> > storage
> > version=3
> > name='storage'
> > state=0
> > txg=4
> > pool_guid=2378224617566178223
> > vdev_tree
> > type='root'
> > id=0
> > guid=2378224617566178223
> > children[0]
> > type='disk'
> > id=0
> > guid=12723054067535078074
> > path='/dev/dsk/c1t0d0s7'
> > devid='id1,[EMAIL PROTECTED]/h'
> > whole_disk=0
> > metaslab_array=13
> > metaslab_shift=32
> > ashift=9
> > asize=448412778496
> >
> >
> > --
> > -Peter Tribble
> > http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
> >
> >
> >
> -- next part --
> An HTML attachment was scrubbed...
> URL:
> http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20070913/0b7d0682/attachment-0001.html
>
> --
>
> Message: 2
> Date: Thu, 13 Sep 2007 12:30:30 -0500
> From: Rob Windsor <[EMAIL PROTECTED]>
> Subject: Re: [zfs-discuss] ZFS/WAFL lawsuit
> To: zfs-discuss@opensolaris.org
> Message-ID: <[EMAIL PROTECTED]>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> [EMAIL PROTECTED] wrote:
>
> >   From what I gather about the East Texas venue they tend to
> repeatedly
> > dismiss very competent technical testimony (prior art/non-infringement) --
> > instead relying more on the lawyer's arguments, lay conjecture and soft
> > fact.  This seems to be why the venue is so valued by patent sharks
> holding
> > potentially unsupportable patents (they face little risk of losing
> > enforceability of the patent and better odds of winning).
>
> Well, another way I've heard it is "East Texas courts are much more pro-IP."
>
> What you describe above are the means to be pro-IP.  ;)
>
> Rob++
> --
> Internet:

Re: [zfs-discuss] Bugs fixed in update 4?

2007-09-18 Thread Matty
On 9/18/07, Larry Wake <[EMAIL PROTECTED]> wrote:
> Matty wrote:
> > George Wilson put together a list of ZFS enhancements and bug fixes
> > that were integrated into Solaris 10 update 3, and I was curious if
> > there was something similar for update 4? There have been a bunch of
> > reliability and performance enhancements to the ZFS code over the past
> > few months, and I am curious which ones were integrated into the
> > latest Solaris 10 update (I can't seem to find a full list of bugs
> > fixed -- if there is one, please let me know where the bug fixes are
> > documented).
> >
> > Thanks for any insight,
> > - Ryan
> >
>
>
> This isn't specific to ZFS, but the Solaris 10 8/07 release notes doc
> has a list of all patches included and the bugs they address:
> http://docs.sun.com/app/docs/doc/820-1259/ .

Hi Larry,

I only see 3 - 4 CRs related to ZFS in that list. Did any additional
bug fixes make it into update 4 that are not included in the patch
list?

Thanks,
- Ryan
-- 
UNIX Administrator
http://prefetch.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Bugs fixed in update 4?

2007-09-18 Thread Larry Wake
Matty wrote:
> George Wilson put together a list of ZFS enhancements and bug fixes
> that were integrated into Solaris 10 update 3, and I was curious if
> there was something similar for update 4? There have been a bunch of
> reliability and performance enhancements to the ZFS code over the past
> few months, and I am curious which ones were integrated into the
> latest Solaris 10 update (I can't seem to find a full list of bugs
> fixed -- if there is one, please let me know where the bug fixes are
> documented).
>
> Thanks for any insight,
> - Ryan
>   


This isn't specific to ZFS, but the Solaris 10 8/07 release notes doc 
has a list of all patches included and the bugs they address: 
http://docs.sun.com/app/docs/doc/820-1259/ .

Regards,
Larry


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] S10U4 ZFS Improvements / Bug Fixes

2007-09-18 Thread Prabahar Jeyaram
Attached is the list of ZFS improvements and bug fixes in S10U4.


--
Prabahar.
4894692 caching data in heap inflates crash dump
6269805 properties should be set via an nvlist.
6276925 option to sort 'zfs list' output
6281585 user defined properties
6341569 zio_alloc_blk() vdev distribution performs badly
6343741 want to store a command history on disk
6349494 'zfs list' output annoying for even moderately long dataset names
6349987 lzjb.c lived longer than expected?
6351954 zfs missing noxattr mount flag
6366244 'canmount' option for container-like functionality
6367103 create-time properties
6368751 libzfs interface for mount/umounting all the file systems for a given 
pool
6385349 zpool import -d /dev hangs
6393525 vdev_reopen() should verify that it's still the same device
6397052 unmounting datasets should process /etc/mnttab instead of traverse DSL
6403510 zfs remount,noatime option broken
6413510 zfs: writing to ZFS filesystem slows down fsync() on other files in the 
same FS
6414648 zfs allows overlapping devices to be added
6416639 RFE: provide zfs get -a
6420135 zfs(1m) should display properties of snapshots that affect their 
behavior
6421992 'zfs send -i' requires redundant input
6423412 Two spaces lines are unnecessary after 'zpool import -a'
6424466 "panic: data after EOF" when unmounting abused pool
6428639 large writes to zvol synchs too much, better cut down a little
6431818 ZFS receive should print a better error message on failure
6434054 'zfs destroy' core dumps if clone is namespace-parent of origin
6435943 assertion failed: spare != 0L, file: ../../common/fs/zfs/spa_misc.c
6436000 import of actively spared device returns EBUSY
6437472 want 'zfs recv -F' to force rollback
6437808 ZFS module version should match on-disk version
6438643 'zfs rename  ' requires redundant arguments
6438702 error handling in zfs_getpage() can trigger "page not locked" panic
6438947 znode_t's z_active should be removed
6440515 namespace_reload() can leak memory on allocation faiure
6440592 nvlist_dup() should not fill in destination pointer on error
6444692 Need to flush disk write cache for dmu_sync buffers
6445680 Having write_acl allowed in an ACL doesn't give the ability to set the 
mode via chmod
6446060 zfs get does not consistently report temporary properties
6446512 zfs send does not catch malformed snapshot name
6447701 ZFS hangs when iSCSI Target attempts to initialize its backing store
6448326 zfs(1) 'list' command crashes if hidden property createtxg is requested
6450653 get_dependents() has poor error semantics
6450672 chmod acl parse error gave poor help for incorrect delimeter (,) 
instead of (/)
6453026 typo in zfs clone error message
6453172 ztest turns into a sloth due to massive arc_min_prefetch_lifespan
6454551 'zfs create -b blocksize filesystem' should fail.
6455228 zpool_mount_datasets() should take an additional flag
6455234 libzfs_error_description() should not abort if no error
6456642 Bug in libzfs_init()
6457478 unrecognized character in error message with 'zpool create -R' command
6457865 missing device name in the error message of 'zpool clear' command
6458058 invalid snapshot error message
6458571 zfs_ioc_set_prop() doesn't validate input
6458614 zfs ACL #defines should use prefix
6458638 get_configs() accesses bogus memory
6458678 zvol functions should be moved out of zfs_ioctl.h
6458683 zfs_cmd_t could use more cleanup
6458691 common routines to manage zfs_cmd_t nvlists
6460043 zfs_write()/zfs_getpage() range lock deadlock
6460059 zfs destroy  leaves behind kruft
6460398 zpool import cores on zfs_prop_get
6461029 zpool status -x noexisting-pool has incorrect error message.
6461223 index translations should live with property definitions
6461424 zpool_unmount_datasets() has some busted logic
6461427 zfs_realloc() would be useful
6461438 zfs send/recv code should live in its own file
6461609 zfs delete permissions are not working correctly
6461757 'zpool status' can report the wrong number of persistent errors
6461784 recursive zfs_snapshot() leaks memory
6462174 zap_update() likes to return 0
6463140 zfs recv with a snapshot name that has 2 @@ in a row succeeds
6463348 ZFS code could be more portable
6463349 error message from zpool(1M) is missing a newline
6463350 getcomponent() has spurious and wrong check
6463788 'zfs recv -d' fails if some ancestors already exist
6464310 Stressing trunc can induce file corruption
6464897 assertion failed: "BP_GET_COMPRESS(bp) == compress" zio.c, line:897
6465634 zvol: dmu_sync() should be issued in parallel
6468453 Incorrect usage of rw_enter() in zfs_getpage()
6468731 lwb_state_t can be nuked
6468748 assertion failure in dnode_sync
6469119 race between arc_buf_clone() and arc_buf_add_ref() results in NULL 
pointer dereference
6469385 zfs_set_prop_nvlist range checking is busted
6469830 'zfs set' panics non-debug systems
6470042 parallel dmu_sync() isn't being used
6470764 zfs_clone() confuses filesystems and volumes
6471679 stash blocksize

Re: [zfs-discuss] PLOGI errors

2007-09-18 Thread Gino
> What led you to the assumption it's ONLY those switches?  Just because
> the patch is ONLY for those switches doesn't mean that the bug is only
> for them.  The reason you only see the patch for 3xxx and newer is
> because the 2xxx was EOL before the patch was released...
> 
> FabOS is FabOS, the nature of the issue is not hardware related, it's
> software related.  2850 or 3850 makes no difference.

I think the same.

What other problems can make these errors happen?

gino
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS panic in space_map.c line 125

2007-09-18 Thread Gino
> Tomas Ögren wrote:
> > On 18 September, 2007 - Gino sent me these 0,3K bytes:
> > 
> >> Hello,
> >> upgrade to snv_60 or later if you care about your data :)
> > 
> > If there are known serious data loss bug fixes that have gone into
> > snv60+, but not into s10u4, then I would like to tell Sun to "backport"
> > those into s10u4 if they care about keeping customers..
> > 
> > Any specific bug fixes you know about that one really wants? (so we can
> > poke support)..
> 
> I think it is bug
> 
> 6458218 assertion failed: ss == NULL
> 
> which is fixed in Solaris 10 8/07.

Yes, it was 6458218.  Then Solaris 10 8/07 will be fine.

Gino
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Bugs fixed in update 4?

2007-09-18 Thread Matty
George Wilson put together a list of ZFS enhancements and bug fixes
that were integrated into Solaris 10 update 3, and I was curious if
there was something similar for update 4? There have been a bunch of
reliability and performance enhancements to the ZFS code over the past
few months, and I am curious which ones were integrated into the
latest Solaris 10 update (I can't seem to find a full list of bugs
fixed -- if there is one, please let me know where the bug fixes are
documented).

Thanks for any insight,
- Ryan
-- 
UNIX Administrator
http://prefetch.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

2007-09-18 Thread Larry Liu

> I can provide the source code for my test app and one crash dump if anyone
> needs it. Yesterday, the crash was reproduced using bonnie++, an open source
> storage benchmark utility, although the crash is not as frequent as when
> using my test app.
>
>   
Yes, it would be appreciated if you could provide a link to download the corefile.
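In the meantime, the panic stack and the messages leading up to it can
usually be pulled out of the saved dump with mdb. A minimal sketch,
assuming the dump was saved by savecore as unix.0/vmcore.0 under
/var/crash/<hostname>:

  # cd /var/crash/`hostname`
  # mdb unix.0 vmcore.0
  > ::status
  > ::msgbuf
  > $C
  > $q

::status prints the panic summary, ::msgbuf the console messages leading
up to it, and $C the stack backtrace of the panicking thread.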

Thanks,
Larry


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

2007-09-18 Thread Jill Duff
Thanks for the feedback. I attempted to enter this bug into the OpenSolaris
Bug Database yesterday, 9/17. However, it looks as if it has either been
filtered out or I made an error during entry. I'm willing to re-enter it if
that's helpful.

I can provide the source code for my test app and one crash dump if anyone
needs it. Yesterday, the crash was reproduced using bonnie++, an open source
storage benchmark utility, although the crash is not as frequent as when
using my test app.

Duff

-Original Message-
From: eric kustarz [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 17, 2007 6:58 PM
To: J Duff; [EMAIL PROTECTED]
Cc: ZFS Discussions
Subject: Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

This actually looks like a sd bug... forwarding it to the storage  
alias to see if anyone has seen this...

eric

On Sep 14, 2007, at 12:42 PM, J Duff wrote:

> I'd like to report the ZFS related crash/bug described below. How  
> do I go about reporting the crash and what additional information  
> is needed?
>
> I'm using my own very simple test app that creates numerous  
> directories and files of randomly generated data. I have run the  
> test app on two machines, both 64 bit.
>
> OpenSolaris crashes a few minutes after starting my test app. The  
> crash has occurred on both machines. On Machine 1, the fault occurs  
> in the SCSI driver when invoked from ZFS. On Machine 2, the fault  
> occurs in the ATA driver when invoked from ZFS. The relevant parts  
> of the message logs appear at the end of this post.
>
> The crash is repeatable when using the ZFS file system. The crash  
> does not occur when running the test app against a Solaris/UFS file  
> system.
>
> Machine 1:
> OpenSolaris Community Edition,
>  snv_72, no BFU (not DEBUG)
> SCSI Drives, Fibre Channel
> ZFS Pool is six drive stripe set
>
> Machine 2:
> OpenSolaris Community Edition
> snv_68 with BFU (kernel has DEBUG enabled)
> SATA Drives
> ZFS Pool is four RAIDZ sets, two disks in each RAIDZ set
>
> (Please forgive me if I have posted in the wrong place. I am new to  
> ZFS and this forum. However, this forum appears to be the best  
> place to get good quality ZFS information. Thanks.)
>
> Duff
>
> --
>
> Machine 1 Message Log:
> . . .
> Sep 13 14:13:22 cypress unix: [ID 836849 kern.notice]
> Sep 13 14:13:22 cypress ^Mpanic[cpu5]/thread=ff000840dc80:
> Sep 13 14:13:22 cypress genunix: [ID 683410 kern.notice] BAD TRAP:  
> type=e (#pf Page fault) rp=ff000840ce90 addr=ff01f2b0
> Sep 13 14:13:22 cypress unix: [ID 10 kern.notice]
> Sep 13 14:13:22 cypress unix: [ID 839527 kern.notice] sched:
> Sep 13 14:13:22 cypress unix: [ID 753105 kern.notice] #pf Page fault
> Sep 13 14:13:22 cypress unix: [ID 532287 kern.notice] Bad kernel  
> fault at addr=0xff01f2b0
> . . .
> Sep 13 14:13:22 cypress unix: [ID 10 kern.notice]
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840cd70 unix:die+ea ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840ce80 unix:trap+1351 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840ce90 unix:_cmntrap+e9 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840cfc0 scsi:scsi_transport+1f ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d040 sd:sd_start_cmds+2f4 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d090 sd:sd_core_iostart+17b ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d0f0 sd:sd_mapblockaddr_iostart+185 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d140 sd:sd_xbuf_strategy+50 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d180 sd:xbuf_iostart+103 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d1b0 sd:ddi_xbuf_qstrategy+60 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d1f0 sd:sdstrategy+ec ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d220 genunix:bdev_strategy+77 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d250 genunix:ldi_strategy+54 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d2a0 zfs:vdev_disk_io_start+219 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d2c0 zfs:vdev_io_start+1d ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d300 zfs:zio_vdev_io_start+123 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d320 zfs:zio_next_stage_async+bb ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d340 zfs:zio_nowait+11 ()
> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
> ff000840d380 zfs:vdev_mi

Re: [zfs-discuss] ZFS panic in space_map.c line 125

2007-09-18 Thread Victor Latushkin
Tomas Ögren wrote:
> On 18 September, 2007 - Gino sent me these 0,3K bytes:
> 
>> Hello,
>> upgrade to snv_60 or later if you care about your data :)
> 
> If there are known serious data loss bug fixes that have gone into
> snv60+, but not into s10u4, then I would like to tell Sun to "backport"
> those into s10u4 if they care about keeping customers..
> 
> Any specific bug fixes you know about that one really wants? (so we can
> poke support)..

I think it is bug

6458218 assertion failed: ss == NULL

which is fixed in Solaris 10 8/07.

Hth,
victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS panic in space_map.c line 125

2007-09-18 Thread Tomas Ögren
On 18 September, 2007 - Gino sent me these 0,3K bytes:

> Hello,
> upgrade to snv_60 or later if you care about your data :)

If there are known serious data loss bug fixes that have gone into
snv60+, but not into s10u4, then I would like to tell Sun to "backport"
those into s10u4 if they care about keeping customers..

Any specific bug fixes you know about that one really wants? (so we can
poke support)..

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question about uberblock blkptr

2007-09-18 Thread [EMAIL PROTECTED]
Jim Mauro wrote:
>
> Hey Max - Check out the on-disk specification document at
> http://opensolaris.org/os/community/zfs/docs/.
>
> Page 32 illustration shows the rootbp pointing to a dnode_phys_t
> object (the first member of a objset_phys_t data structure).
>
> The source code indicates ub_rootbp is a blkptr_t, which contains
> a 3 member array of dva_t 's called blk_dva (blk_dva[3]).
> Each dva_t is a 2 member array of 64-bit unsigned ints (dva_word[2]).
>
> So it looks like each blk_dva contains 3 128-bit DVA's
>
> You probably figured all this out alreadydid you try using
> a objset_phys_t to format the data?
>
> Thanks,
> /jim
Ok.  I think I know what's wrong.  I think the information (most likely,
an objset_phys_t) is compressed with lzjb compression.  Is there a way to
turn this off entirely (not just for file data, but for all metadata as
well) when a pool is created?  Or do I need to figure out how to hack the
lzjb_decompress() function into my modified mdb?  (Also, I figured out
that zdb is already doing the left shift by 9 before dumping DVA values,
for anyone following this...)

thanks,
max
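As a cross-check while decoding things by hand, zdb can dump the same
structures directly. A minimal sketch reusing the pool and device names
from this thread (how much detail the repeated -u flags print varies by
build, so treat the exact output as an assumption):

  # print the vdev labels (nvlist config) straight from the device
  zdb -l /dev/rdsk/c4t0d0s0

  # print the active uberblock; at higher verbosity this includes the rootbp
  zdb -uuu usbhard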

>
>
>
> [EMAIL PROTECTED] wrote:
>> Hi All,
>> I have modified mdb so that I can examine data structures on disk 
>> using ::print.
>> This works fine for disks containing ufs file systems.  It also works 
>> for zfs file systems, but...
>> I use the dva block number from the uberblock_t to print what is at 
>> the block
>> on disk.  The problem I am having is that I can not figure out what 
>> (if any) structure to use.
>> All of the xxx_phys_t types that I try do not look right.  So, the 
>> question is, just what is
>> the structure that the uberblock_t dva's refer to on the disk?
>>
>> Here is an example:
>>
>> First, I use zdb to get the dva for the rootbp (should match the 
>> value in the uberblock_t(?)).
>>
>> # zdb - usbhard | grep -i dva
>> Dataset mos [META], ID 0, cr_txg 4, 1003K, 167 objects, rootbp [L0 
>> DMU objset] 400L/200P DVA[0]=<0:111f79000:200> 
>> DVA[1]=<0:506bde00:200> DVA[2]=<0:36a286e00:200> fletcher4 lzjb LE 
>> contiguous birth=621838 fill=167 
>> cksum=84daa9667:365cb5b02b0:b4e531085e90:197eb9d99a3beb
>> bp = [L0 DMU objset] 400L/200P 
>> DVA[0]=<0:111f6ae00:200> DVA[1]=<0:502efe00:200> 
>> DVA[2]=<0:36a284e00:200> fletcher4 lzjb LE contiguous birth=621838 
>> fill=34026 cksum=cd0d51959:4fef8f217c3:10036508a5cc4:2320f4b2cde529
>> Dataset usbhard [ZPL], ID 5, cr_txg 4, 15.7G, 34026 objects, rootbp 
>> [L0 DMU objset] 400L/200P DVA[0]=<0:111f6ae00:200> 
>> DVA[1]=<0:502efe00:200> DVA[2]=<0:36a284e00:200> fletcher4 lzjb LE 
>> contiguous birth=621838 fill=34026 
>> cksum=cd0d51959:4fef8f217c3:10036508a5cc4:2320f4b2cde529
>> first block: [L0 ZIL intent log] 9000L/9000P 
>> DVA[0]=<0:36aef6000:9000> zilog uncompressed LE contiguous 
>> birth=263950 fill=0 cksum=97a624646cebdadb:fd7b50f37b55153b:5:1
>> ^C
>> #
>>
>> Then I run my modified mdb on the vdev containing the "usbhard" pool
>> # ./mdb /dev/rdsk/c4t0d0s0
>>
>> I am using the DVA[0} for the META data set above.  Note that I have 
>> tried all of the xxx_phys_t structures
>> that I can find in zfs source, but none of them look right.  Here is 
>> example output dumping the data as a objset_phys_t.
>> (The shift by 9 and adding 40 is from the zfs on-disk format 
>> paper, I have tried without the addition, without the shift,
>> in all combinations, but the output still does not make sense).
>>
>>  > (111f79000<<9)+40::print zfs`objset_phys_t
>> {
>> os_meta_dnode = {
>> dn_type = 0x4f
>> dn_indblkshift = 0x75
>> dn_nlevels = 0x82
>> dn_nblkptr = 0x25
>> dn_bonustype = 0x47
>> dn_checksum = 0x52
>> dn_compress = 0x1f
>> dn_flags = 0x82
>> dn_datablkszsec = 0x5e13
>> dn_bonuslen = 0x63c1
>> dn_pad2 = [ 0x2e, 0xb9, 0xaa, 0x22 ]
>> dn_maxblkid = 0x20a34fa97f3ff2a6
>> dn_used = 0xac2ea261cef045ff
>> dn_pad3 = [ 0x9c2b4541ab9f78c0, 0xdb27e70dce903053, 
>> 0x315efac9cb693387, 0x2d56c54db5da75bf ]
>> dn_blkptr = [
>> {
>> blk_dva = [
>> {
>> dva_word = [ 0x87c9ed7672454887, 
>> 0x760f569622246efe ]
>> }
>> {
>> dva_word = [ 0xce26ac20a6a5315c, 
>> 0x38802e5d7cce495f ]
>> }
>> {
>> dva_word = [ 0x9241150676798b95, 
>> 0x9c6985f95335742c ]
>> }
>> ]
>> None of this looks believable.  So, just what is the rootbp in the 
>> uberblock_t referring to?
>>
>> thanks,
>> max
>>
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>   
>

___
zfs-discuss mailing list
zfs-discuss@op

Re: [zfs-discuss] ZFS panic in space_map.c line 125

2007-09-18 Thread Gino
Hello,
upgrade to snv_60 or later if you care about your data :)

Gino
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] change uid/gid below 100

2007-09-18 Thread Darren J Moffat
Paul Kraus wrote:
> On 9/17/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> 
>> Why not use the already assigned webservd/webserved 80/80 uid/gid pair ?
>>
>> Note that ALL uid and gid values below 100 are explicitly reserved for
>> use by the operating system itself and should not be used by end admins.
>>   This is why smc failed to make the change.
> 
> Calling the Sun ONE Web Server (the reservation of UID/GID 80)
> part of the operating system is a stretch.

I didn't mention anything about Sun ONE Web Server (in fact I didn't 
name any webserver at all).   It happens that I ran the ARC case to add 
that uid/gid pair for the "Sun Java Enterprise System Web Server" first 
but they aren't reserved just for that webserver.  If you look at the 
Apache2 packages in Solaris Express you will see that the httpd.conf is 
configured to use that uid/gid.

 > Is there a definitive list
> of what users and services all of the UID/GIDs below 100 are reserved
> for anywhere ?

The master reference for those that are currently allocated is the ARC[1]
case log and the /etc/passwd source[2] file in the ON consolidation.


[1] http://opensolaris.org/os/community/arc/
[2] 
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/Adm/sun/passwd

ALL values below 100 are reserved and may be allocated by an ARC case at 
any time.

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and small files

2007-09-18 Thread Claus Guttesen
> Will a different volblocksize (during creation of the partition) make
> better use of the available diskspace? Will (meta)data require less
> space if compression is enabled?

Just re-read the evil tuning guide and metadata is already compressed
(http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Disabling_Metadata_Compression).

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs and small files

2007-09-18 Thread Claus Guttesen
Hi.

I have many small - mostly jpg - files where the original file is
approx. 1 MB and the generated thumbnail is approx. 4 KB. The files are
currently on vxfs. I have copied all the files from one partition onto a
zfs counterpart. The vxfs partition occupies 401 GB and the zfs one 449 GB.
Most uploaded files are jpg and all thumbnails are always jpg.

Will a different volblocksize (set when the partition is created) make
better use of the available disk space? Will (meta)data require less
space if compression is enabled?

I read http://www.opensolaris.org/jive/thread.jspa?threadID=37673&tstart=105
which is very similar to my case except for the file type. But no
clear pointers otherwise.

This is a fresh install of opensolaris nevada b64a on a Dell PE840
(test-machine) and three 750 GB sata disks dedicated to zfs as a raidz
pool.
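One note, for reference: volblocksize only applies to zvols; for a
filesystem the corresponding property is recordsize, and compression is
set per dataset. A minimal sketch, assuming a hypothetical dataset name
tank/photos (both properties only affect data written after the change,
and jpg data is already compressed, so enabling compression will not
shrink it much):

  # check the current values
  zfs get recordsize,compression tank/photos

  # recordsize is the filesystem analogue of volblocksize
  zfs set recordsize=8k tank/photos

  # lzjb compression for newly written data
  zfs set compression=on tank/photos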

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss