Re: [zfs-discuss] Corrupt meta data, the coredump

2009-06-11 Thread Timh Bergström
Hi,

It does indeed. I am running a really old version of ZFS (3?), so I
figured a newer release would at least not panic, but the bug report
shows exactly what I saw.

I'll give it a shot, thanks.

//Timh

On 11 June 2009 at 17:35, Richard Elling wrote:
> This sounds like
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6587723
> which was fixed a long time ago.  You might check that bug against your
> stack trace (which was not included in this post).
>
> You may be able to boot from a later OS release and import/export the pool
> to repair.
> -- richard
>
> Timh Bergström wrote:
>>
>> Hi all,
>>
>> I've encountered a not-so-fun problem with one of our pools. The pool
>> was built with raidz1 according to the ZFS manual; the discs were
>> imported through an ERQ 16x750GB FC array (exported as JBOD) via
>> (QLogic) FC HBAs to Solaris 10u3 (x86). Everything had worked fine
>> and dandy until this morning, when the disc enclosure "crashed" (reason
>> unknown) and subsequently dragged the whole system down with it. I didn't
>> get the coredump at the time, but now that I've restarted, reattached
>> the enclosure and tried to import the zpool again, I got the following:
>>
>> # zpool status -vx
>> pool: migrated_data
>> state: FAULTED
>> status: The pool metadata is corrupted and the pool cannot be opened.
>> action: Destroy and re-create the pool from a backup source.
>> see: http://www.sun.com/msg/ZFS-8000-CS
>> scrub: none requested
>> config:
>> ...
>>
>> And just a couple of seconds after zpool status -vx the machine coredumps
>> with:
>>
>> panic[cpu0]/thread=fe80fcd34ba0: BAD TRAP: type=e (#pf Page fault)
>> rp=fe
>> 800138cb10 addr=0 occurred in module "zfs" due to a NULL pointer
>> dereference
>> zpool: #pf Page fault
>> Bad kernel fault at addr=0x0
>> pid=1116, pc=0xf0663b45, sp=0xfe800138cc00, eflags=0x10202
>> cr0: 8005003b cr4: 6f0
>> cr2: 0 cr3: e5f2000 cr8: c
>>         rdi: 80039200 rsi: 89d883c0 rdx:                0
>>         rcx: fe80e3667000  r8:                1  r9:                0
>>         rax:                0 rbx:                1 rbp: fe800138cc10
>>         r10: 938eb920 r11:                3 r12: b0bc4080
>>         r13: b0bc42f0 r14:                1 r15:                0
>>         fsb: 8000 gsb: fbc240e0  ds:               43
>>         es:               43  fs:                0  gs:              1c3
>>         trp:                e err:                0 rip: f0663b45
>>          cs:               28 rfl:            10202 rsp: fe800138cc00
>>          ss:               30
>> ...
>>
>> This occurs a couple of seconds after the system is fully booted. I've
>> tried several times to be fast enough to unconfigure the
>> FC controllers, but... too slow :-). So I shut the path from the machine
>> to the FC enclosure, and of course the pool is now "UNAVAIL", which is
>> OK since my other pools work fine.
>>
>> I'm curious though - how can metadata be corrupted like that? Why does
>> the system panic? Can it be repaired?
>>
>> I know I should have backups but I don't, and if it's a lost cause that's
>> fine; the data itself is not important.
>>
>>
>



-- 
Timh Bergström
System Operations Manager
Diino AB - www.diino.com
:wq
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot send/recv "hangs" X4540 servers

2009-06-11 Thread Brent Jones
>>
>>
> After examining the dump we got from you (thanks again), we're relatively
> sure you are hitting
>
> 6826836 Deadlock possible in dmu_object_reclaim()
>
> This was introduced in nv_111 and fixed in nv_113.
>
> Sorry for the trouble.
>
> -tim
>
>

Do you know when new builds will show up on pkg.opensolaris.org/dev?


-- 
Brent Jones
br...@servuhome.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] [Fwd: Re: [osol-discuss] Possible ZFS corruption?]

2009-06-11 Thread Jim Walker
 --- Begin Message ---
Some data I forgot to add:

I have tried installing OpenSolaris 2009.06 and importing the pool; it yields the
same results as Solaris 10 U7.

The array is configured to use all 24 disks in a raidz2 configuration with 2
hot spares; this gives me about 16TB of usable space.  The recordsize has
remained the default of 128k.
-- 
This message posted from opensolaris.org
___
opensolaris-discuss mailing list
opensolaris-disc...@opensolaris.org
--- End Message ---
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] [Fwd: [osol-discuss] Possible ZFS corruption?]

2009-06-11 Thread Jim Walker

Joel,

Welcome to the community.

I'm forwarding this to zfs-discuss where you may get
more help, but this isn't the best place to get
help with s10.

Cheers,
Jim
--- Begin Message ---
Hi everyone,

This is my first post to these forums, but I must first say that there is a lot
of very useful data here and I am very glad to see such a large community of
contributors. I have been working with Solaris 10 now for about 3 years; coming
from the Linux world, there is no reason to look back.

My problem is this (no, no transition material, sorry):

For a while now I have been using Solaris 10 to store TBs of data on ZFS. This
file system has been great at handling large datasets (no running crazy tools
like diskpart to get ext* to recognize a disk > 1.8T and then worrying about
losing it because the journal gets corrupted); however, I recently ran into an
issue with some storage enclosures that I was starting to verify to be put into
production.

They are Supermicro storage enclosures that hold 24 1TB disks (Seagate ES2
SATA), connected via an LSI Logic 1068-based card.  I had been copying data
from other sources (lots of large files) to this new enclosure to test; the
enclosure was about 80% full and then the machine panicked.  It then proceeded
to reboot and then got stuck in a reboot loop, each time giving me core dump
information:

storage01# mdb unix.0 vmcore.0   
Loading modules: [ unix krtld genunix specfs dtrace cpu.generic uppc pcplusmp 
zfs ip hook neti sctp arp usba uhci fcp fctl md lofs mpt fcip random crypto 
logindmux ptm ufs nfs ]

> ::status
debugging crash dump vmcore.0 (64-bit) from storage01
operating system: 5.10 Generic_139556-08 (i86pc)
panic message: BAD TRAP: type=e (#pf Page fault) rp=fe80010fe7a0 addr=0 
occurred in module "unix" due to a NULL pointer dereference
dump content: kernel pages only
> 

> ::ps
SPID   PPID   PGIDSIDUID  FLAGS ADDR NAME
R  0  0  0  0  0 0x0001 fbc25800 sched
R  3  0  0  0  0 0x00020001 82b14a78 fsflush
R  2  0  0  0  0 0x00020001 82b156e0 pageout
R  1  0  0  0  0 0x4a004000 82b16348 init
R945  1945945  0 0x4200 8b2a1c88 rcm_daemon
R846  1846846 25 0x5201 8b2a7360 sendmail
R847  1847847  0 0x5201 8b2a66f8 sendmail
R841  1841841  0 0x4200 8866ce18 fmd
R836  1836836  0 0x4200 8b2a4e28 syslogd
R816  1816816  0 0x4200 88669010 automountd
R818816816816  0 0x4200 89e94c80 automountd
R666  1666666  0 0x4200 89e94018 inetd
R651  1651651  0 0x4200 89e96550 utmpd
R631  1631631  1 0x4200 89e97e20 lockd
R619  1616616  1 0x4200 8866e6e8 nfs4cbd
R617  1617617  1 0x4200 89e996f0 statd
R618  1618618  1 0x5200 89e98a88 nfsmapid
R610  1610610  1 0x4200 82b118d8 rpcbind
R593  1593593  0 0x4201 82b13e10 cron
R525  1525525  0 0x4200 8866da80 nscd
R504  1504504  0 0x4200 89e9a358 picld
R487  1487487  1 0x4200 8866b548 kcfd
R464  1464464  0 0x4200 82b10008 syseventd
R 69  1 69 69  0 0x4200 82b10c70 devfsadm
R  9  1  9  9  0 0x4200 82b12540 svc.configd
R  7  1  7  7  0 0x4200 82b131a8 svc.startd
R661  7661661  0 0x4a004000 8866c1b0 sh
R954661661661  0 0x4a004000 89e971b8 zpool
R645  7645645  0 0x4a014000 8866a8e0 sac
R648645645645  0 0x4a014000 89e958e8 ttymon


::msgbuf
MESSAGE   
sd15 at mpt0: target d lun 0
sd15 is /p...@0,0/pci8086,2...@1/pci1000,3...@0/s...@d,0
/p...@0,0/pci8086,2...@1/pci1000,3...@0/s...@d,0 (sd15) online
sd16 at mpt0: target e lun 0
sd16 is /p...@0,0/pci8086,2...@1/pci1000,3...@0/s...@e,0
/p...@0,0/pci8086,2...@1/pci1000,3...@0/s...@e,0 (sd16) online
sd17 at mpt0: target f lun 0
sd17 is /p...@0,0/pci8086,2...@1/pci1000,3...@0/s...@f,0
/p...@0,0/pci8086,2...@1/pci1000,3...@0/s...@f,0 (sd17) online
sd18 at mpt0: target 10 lun 0
sd18 is /p...@0,0/pci8086,2...@1/pci1000,3...@0/s...@10,0
/p...@0,0/pci8086,2...@1/pci1000,3...@0/s...@10,0 (sd18) online
sd19 at mpt0: target 11 lun 0
sd19 is /p...@0,0/pci8086,2...@1/pci1000,3...@0/s...@11,0
/p...@0,0/pci8086,2...@1/pci1000,3...@0/s...@11,0 (sd19) online
sd20 at 

Re: [zfs-discuss] LUN expansion

2009-06-11 Thread James Hess
> What you could do is to write a program which calls 
> efi_use_whole_disk(3EXT) to re-write the label for you. Once you have a 
> new label you will be able to export/import the pool
Awesome.

Worked for me, anyways; .c file attached.
Although I did a "zpool export" before opening the device and calling that
function.

I'm generally not one to mess with labels on a live filesystem..
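
For the archives, a rough sketch of the sequence (the pool name and device
path below are just examples, and the build line assumes a compiler plus
libefi on the box):

# build the helper against libefi, which provides efi_use_whole_disk(3EXT)
cc -o uwd uwd.c -lefi
# quiesce the pool, rewrite the EFI label to cover the whole (grown) LUN,
# then bring the pool back
zpool export mypool
./uwd /dev/rdsk/c1t0d0
zpool import mypool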
-- 
This message posted from opensolaris.org

uwd.c
Description: Binary data
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-11 Thread David Magda


On Jun 11, 2009, at 05:44, Paul van der Zwan wrote:

Strange thing I noticed in the keynote is that they claim the disk  
usage of Snow Leopard is 6 GB less than Leopard mostly because of  
compression.


It's probably 6 GB because Leopard (10.5) ran on both Intel and  
PowerPC chips ("Universal" binaries) but Snow Leopard (10.6) only runs  
on Intel; they probably stripped all the PowerPC bits.


Even things like ls(1) are universal binaries, so if you take every  
program and library, and cut its use by half, it can add up to quite  
a bit.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-11 Thread Wes Felter

Paul van der Zwan wrote:

Strange thing I noticed in the keynote is that they claim the disk usage 
of Snow Leopard

is 6 GB less than Leopard mostly because of compression.
Either they have implemented compressed binaries or they use filesystem 
compression.

Neither feature is present in Leopard AFAIK..
Filesystem compression is a ZFS feature, so 


HFS+ now has filesystem compression.

http://www.appleinsider.com/articles/08/10/25/new_snow_leopard_seed_leak_confirms_cocoa_finder_more.html

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] ZFS snapshot send/recv "hangs" X4540 servers

2009-06-11 Thread Robert Milkowski
Hello Ian,

Saturday, June 6, 2009, 12:29:48 AM, you wrote:

IC> Tim Haley wrote:
>> Brent Jones wrote:
>>>
>>> On the sending side, I CAN kill the ZFS send process, but the remote
>>> side leaves its processes going, and I CANNOT kill -9 them. I also
>>> cannot reboot the receiving system, at init 6, the system will just
>>> hang trying to unmount the file systems.
>>> I have to physically cut power to the server, but a couple days later,
>>> this issue will occur again.
>>>
>>>
>> A crash dump from the receiving server with the stuck receives would 
>> be highly useful, if you can get it.  Reboot -d would be best, but it 
>> might just hang. You can try savecore -L.
>>
IC> I tried a reboot -d (I even had kmem-flags=0xf set), but it did hang.  I
IC> didn't try savecore.

mdb -KF

and then $<systemdump

--
Robert Milkowski
http://milek.blogspot.com
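
For the archives, a sketch of the whole sequence (assuming default dumpadm
settings; the [0]> prompt is just what kmdb prints):

# on the console of the hung box, break into the kernel debugger
mdb -KF
# at the kmdb prompt, force a panic and a crash dump
[0]> $<systemdump
# after the reboot, savecore extracts the dump (by default under
# /var/crash/<hostname>); alternatively, take a live dump without panicking:
savecore -L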

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recover data after zpool create

2009-06-11 Thread Kees Nuyt
On Tue, 09 Jun 2009 17:51:25 PDT, stephen bond
 wrote:

>is it possible to recover a file system that existed prior to
>
>zpool create pool2 device
>
>I had a mirror on device which I detached and then issued
>the create command hoping it would give me my old file system.

That's close to impossible using that device alone; all
labels and uberblocks have been overwritten.

Your best chance is to destroy pool2 and attach the device
to the original pool again as a mirror device.
It should resilver by itself.

If the original pool is lost, your data is lost.

Then, you can detach it and import it in some other system
as an unmirrored pool. 

In other words: you don't have to create a pool to access
one side of a mirror. After all, it's a mirror, so the pool
is already in place.
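
Spelled out, a minimal sketch - the pool and device names here are made up,
so substitute your own:

zpool destroy pool2
# attach the wiped disk (c1t1d0) back to its old partner (c1t0d0) in the
# original pool "tank"; add -f if zpool complains the disk belonged to the
# destroyed pool2
zpool attach tank c1t0d0 c1t1d0
zpool status tank   # wait for the resilver to finish before touching anything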

>thank you all.

Good luck.
-- 
  (  Kees Nuyt
  )
c[_]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Help, I need a simple way how to update zpool label "devid and phys_path"

2009-06-11 Thread Rudolf Kutina

Hello ZDB Experts,

To be able to move virtual disks with OpenSolaris between virtualization platforms
in an automated way, I need to be able to update the zpool label's devid and
phys_path without needing to do an actual "import" on the target platform, as I
describe in:


[Bug 5785] Document procedure how to FIX boot of OpenSolaris manually for P2V or
V2V
http://defect.opensolaris.org/bz/show_bug.cgi?id=5785

So I need a fast fix in the form of this functionality:

1. Get the zpool devid and phys_path; that's easy (I can even store this info
in a file for later use):


zdb -e rpool -l | egrep "devid|phys_path"
    devid='id1,s...@sata_seagate_st32500n9qe6b3ff/a'
    phys_path='/p...@0,0/pci108e,5...@5/d...@0,0:a'
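
For reference, the same two fields can also be read straight off a single
device's label (the device path here is only an example):

zdb -l /dev/rdsk/c0d0s0 | egrep "devid|phys_path"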

2. Update the devid and phys_path on a given zpool (even on an imported one!?):

zpool_update_label rpool '/p...@0,0/pci108e,5...@5/d...@0,0:a' \
'id1,s...@sata_seagate_st32500n9qe6b3ff/a'

Notes:
- In some cases, like USB disks, phys_path can be empty.
- For simplicity, only a zpool with one disk will be supported; if a
mirror or the like is detected, the command will fail with no modifications.


So what can provide me this magical "zpool_update_label" functionality?

Nice
Rudolf  (VirtualGuru)




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Asymmetric mirroring

2009-06-11 Thread Richard Elling

Monish Shah wrote:

Hello,

Thanks to everyone who replied.

Dan, your suggestions (quoted below) are excellent and yes, I do want 
to make this work with SSDs, as well.  However, I didn't tell you one 
thing.  I want to compress the data on the drive.  This would be 
particularly important if an SSD is used, as the cost per GB is high.  
This is why I wanted to put it in a zpool.


Before somebody points out that compression will increase the CPU 
utilization, I'd like to mention that we have hardware accelerated 
gzip compression technology already working with ZFS, so the CPU will 
not be loaded.


I'm also hoping that write IOPS will improve with compression, because 
more writes can be combined into a single block of storage.  I don't 
know enough about ZFS allocation policies to be sure, but we'll try to 
run some tests.


Please share what you find.  It seems counterintuitive to me that compression
would increase IOPS for small-block, random workloads.  But real data is better
than intuition :-)
-- richard



It looks like, for now, the mirror disks will also have to be SSDs. 
(Perhaps raidz1 will be OK, instead.)  Eventually, we will look into 
modifying ZFS to support the kind of asymmetric mirroring I mentioned 
in the original post.  The other alternative is to modify ZFS to 
compress L2ARC, but that sounds much more complicated to me.  Any 
insights from ZFS developers would be appreciated.


Monish

Monish Shah
CEO, Indra Networks, Inc.
www.indranetworks.com


Use the SAS drives as l2arc for a pool on sata disks.   If your l2arc 
is the full size of your pool, you won't see reads from the pool 
(once the cache is primed).


If you're purchasing all the gear from new, consider whether SSD in 
this mode would be better than 15k sas.

--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Are ARC memory pages relocatable?

2009-06-11 Thread Robert Milkowski

Hi,

 Are ZFS ARC memory pages relocatable, so that if a UE or too many CEs happen
in a page being used by the ZFS ARC it will be handled nicely in most
cases? Would the data in a page be automatically re-read from the dataset if
the page wasn't dirty, or would it just be gone from the cache and the page
retired? What if the data was dirty?


Or are ARC memory pages non-relocatable, so the system would just panic?


--
Robert Milkowski
http://milek.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Corrupt meta data, the coredump

2009-06-11 Thread Richard Elling

This sounds like
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6587723
which was fixed a long time ago.  You might check that bug against your
stack trace (which was not included in this post).

You may be able to boot from a later OS release and import/export the pool
to repair.
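A rough sketch of that path (the pool name is taken from the report below;
-f may be needed if the pool still thinks it is in use):

# booted from a later release (live CD or a newer boot environment)
zpool import                  # list the pools visible on the attached devices
zpool import -f migrated_data
zpool export migrated_data
# then boot the original system and import the pool again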
-- richard

Timh Bergström wrote:

Hi all,

I've encountered a not-so-fun problem with one of our pools. The pool
was built with raidz1 according to the ZFS manual; the discs were
imported through an ERQ 16x750GB FC array (exported as JBOD) via
(QLogic) FC HBAs to Solaris 10u3 (x86). Everything had worked fine
and dandy until this morning, when the disc enclosure "crashed" (reason
unknown) and subsequently dragged the whole system down with it. I
didn't get the coredump at the time, but now that I've restarted,
reattached the enclosure and tried to import the zpool again, I got the
following:

# zpool status -vx
pool: migrated_data
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
see: http://www.sun.com/msg/ZFS-8000-CS
scrub: none requested
config:
...

And just a couple of seconds after zpool status -vx the machine coredumps with:

panic[cpu0]/thread=fe80fcd34ba0: BAD TRAP: type=e (#pf Page fault) rp=fe800138cb10 addr=0 occurred in module "zfs" due to a NULL pointer dereference
zpool: #pf Page fault
Bad kernel fault at addr=0x0
pid=1116, pc=0xf0663b45, sp=0xfe800138cc00, eflags=0x10202
cr0: 8005003b cr4: 6f0
cr2: 0 cr3: e5f2000 cr8: c
        rdi: 80039200 rsi: 89d883c0 rdx:                0
        rcx: fe80e3667000  r8:                1  r9:                0
        rax:                0 rbx:                1 rbp: fe800138cc10
        r10: 938eb920 r11:                3 r12: b0bc4080
        r13: b0bc42f0 r14:                1 r15:                0
        fsb: 8000 gsb: fbc240e0  ds:               43
        es:               43  fs:                0  gs:              1c3
        trp:                e err:                0 rip: f0663b45
         cs:               28 rfl:            10202 rsp: fe800138cc00
         ss:               30
...

This occurs a couple of seconds after the system is fully booted. I've
tried several times to be fast enough to unconfigure the FC controllers,
but... too slow :-). So I shut the path from the machine to the FC
enclosure, and of course the pool is now "UNAVAIL", which is OK since my
other pools work fine.

I'm curious though - how can metadata be corrupted like that? Why does
the system panic? Can it be repaired?

I know I should have backups but I don't, and if it's a lost cause that's
fine; the data itself is not important.

  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Corrupt meta data, the coredump

2009-06-11 Thread Timh Bergström
Hi all,

I've encountered a not-so-fun problem with one of our pools. The pool
was built with raidz1 according to the ZFS manual; the discs were
imported through an ERQ 16x750GB FC array (exported as JBOD) via
(QLogic) FC HBAs to Solaris 10u3 (x86). Everything had worked fine
and dandy until this morning, when the disc enclosure "crashed" (reason
unknown) and subsequently dragged the whole system down with it. I
didn't get the coredump at the time, but now that I've restarted,
reattached the enclosure and tried to import the zpool again, I got the
following:

# zpool status -vx
pool: migrated_data
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
see: http://www.sun.com/msg/ZFS-8000-CS
scrub: none requested
config:
...

And just a couple of seconds after zpool status -vx the machine coredumps with:

panic[cpu0]/thread=fe80fcd34ba0: BAD TRAP: type=e (#pf Page fault) rp=fe800138cb10 addr=0 occurred in module "zfs" due to a NULL pointer dereference
zpool: #pf Page fault
Bad kernel fault at addr=0x0
pid=1116, pc=0xf0663b45, sp=0xfe800138cc00, eflags=0x10202
cr0: 8005003b cr4: 6f0
cr2: 0 cr3: e5f2000 cr8: c
        rdi: 80039200 rsi: 89d883c0 rdx:                0
        rcx: fe80e3667000  r8:                1  r9:                0
        rax:                0 rbx:                1 rbp: fe800138cc10
        r10: 938eb920 r11:                3 r12: b0bc4080
        r13: b0bc42f0 r14:                1 r15:                0
        fsb: 8000 gsb: fbc240e0  ds:               43
        es:               43  fs:                0  gs:              1c3
        trp:                e err:                0 rip: f0663b45
         cs:               28 rfl:            10202 rsp: fe800138cc00
         ss:               30
...

This occurs a couple of seconds after the system is fully booted. I've
tried several times to be fast enough to unconfigure the FC controllers,
but... too slow :-). So I shut the path from the machine to the FC
enclosure, and of course the pool is now "UNAVAIL", which is OK since my
other pools work fine.

I'm curious though - how can metadata be corrupted like that? Why does
the system panic? Can it be repaired?

I know I should have backups but I don't, and if it's a lost cause that's
fine; the data itself is not important.

-- 
Timh Bergström
System Operations Manager
Diino AB - www.diino.com
:wq
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-11 Thread Arthur Bundo
> what does the present /export/home folder contain ?

It contains nothing; it is empty.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-11 Thread Chris Ridd


On 11 Jun 2009, at 10:52, Paul van der Zwan wrote:



On 11 jun 2009, at 11:48, Sami Ketola wrote:



On 11 Jun 2009, at 12:44, Paul van der Zwan wrote:


Strange thing I noticed in the keynote is that they claim the disk  
usage of Snow Leopard

is 6 GB less than Leopard mostly because of compression.
Either they have implemented compressed binaries or they use  
filesystem compression.

Neither feature is present in Leopard AFAIK..
Filesystem compression is a ZFS feature, so 


I think this is because they are removing PowerPC support from the  
binaries.




I really doubt the PPC specific code is 6GB. A few 100 MB perhaps.
Most of a fat binary or an .app folder is architecture independent  
and will remain.
And Phil Schiller specifically mentioned it was because of  
compression.


They might just have changed the localized resources format from a  
directory (English.lproj) containing loads of files into a zip file.


There's probably a better place to discuss this.

Cheers,

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-11 Thread Paul van der Zwan


On 11 jun 2009, at 11:48, Sami Ketola wrote:



On 11 Jun 2009, at 12:44, Paul van der Zwan wrote:


Strange thing I noticed in the keynote is that they claim the disk  
usage of Snow Leopard

is 6 GB less than Leopard mostly because of compression.
Either they have implemented compressed binaries or they use  
filesystem compression.

Neither feature is present in Leopard AFAIK..
Filesystem compression is a ZFS feature, so 


I think this is because they are removing PowerPC support from the  
binaries.




I really doubt the PPC specific code is 6GB. A few 100 MB perhaps.
Most of a fat binary or an .app folder is architecture independent and  
will remain.

And Phil Schiller specifically mentioned it was because of compression.

Paul

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-11 Thread Sami Ketola


On 11 Jun 2009, at 12:44, Paul van der Zwan wrote:


Strange thing I noticed in the keynote is that they claim the disk  
usage of Snow Leopard

is 6 GB less than Leopard mostly because of compression.
Either they have implemented compressed binaries or they use  
filesystem compression.

Neither feature is present in Leopard AFAIK..
Filesystem compression is a ZFS feature, so 


I think this is because they are removing PowerPC support from the  
binaries.


Sami
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-11 Thread Paul van der Zwan


On 11 jun 2009, at 10:48, Jerry K wrote:

There is a pretty active apple ZFS sourceforge group that provides  
RW bits for 10.5.


Things are oddly quiet concerning 10.6.  I am curious about how this  
will turn out myself.


Jerry




Strange thing I noticed in the keynote is that they claim the disk  
usage of Snow Leopard

is 6 GB less than Leopard mostly because of compression.
Either they have implemented compressed binaries or they use  
filesystem compression.

Neither feature is present in Leopard AFAIK..
Filesystem compression is a ZFS feature, so 

Paul

Disclaimer: even though I work for Sun, I have no idea what's going on  
regarding Apple and ZFS.




Rich Teer wrote:

It's not pertinent to this sub-thread, but zfs (albeit read-only)
is already in currently shipping MacOS 10.5.  So presumably it'll
be in MacOS 10.6...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Asymmetric mirroring

2009-06-11 Thread Monish Shah

Hello,

Thanks to everyone who replied.

Dan, your suggestions (quoted below) are excellent and yes, I do want to 
make this work with SSDs, as well.  However, I didn't tell you one thing.  I 
want to compress the data on the drive.  This would be particularly 
important if an SSD is used, as the cost per GB is high.  This is why I 
wanted to put it in a zpool.


Before somebody points out that compression will increase the CPU 
utilization, I'd like to mention that we have hardware accelerated gzip 
compression technology already working with ZFS, so the CPU will not be 
loaded.


I'm also hoping that write IOPS will improve with compression, because more 
writes can be combined into a single block of storage.  I don't know enough 
about ZFS allocation policies to be sure, but we'll try to run some tests.


It looks like, for now, the mirror disks will also have to be SSDs. 
(Perhaps raidz1 will be OK, instead.)  Eventually, we will look into 
modifying ZFS to support the kind of asymmetric mirroring I mentioned in the 
original post.  The other alternative is to modify ZFS to compress L2ARC, 
but that sounds much more complicated to me.  Any insights from ZFS 
developers would be appreciated.


Monish

Monish Shah
CEO, Indra Networks, Inc.
www.indranetworks.com


Use the SAS drives as l2arc for a pool on sata disks.   If your l2arc is 
the full size of your pool, you won't see reads from the pool (once the 
cache is primed).


If you're purchasing all the gear from new, consider whether SSD in this 
mode would be better than 15k sas.

--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-11 Thread Jerry K
There is a pretty active apple ZFS sourceforge group that provides RW 
bits for 10.5.


Things are oddly quiet concerning 10.6.  I am curious about how this 
will turn out myself.


Jerry


Rich Teer wrote:

It's not pertinent to this sub-thread, but zfs (albeit read-only)
is already in currently shipping MacOS 10.5.  So presumably it'll
be in MacOS 10.6...


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] UCD-SNMP-MIB::dskPercent not returned for ZFS filesystems?

2009-06-11 Thread Alexander Skwar
Hello.

On a Solaris 10 10/08 (137137-09) SPARC system, I set up SMA to also return 
values for disk usage, by adding the following to snmpd.conf:

disk / 5%
disk /tmp 10% 
disk /apps 4%
disk /data 3%

/data and /apps are on ZFS. But when I do "snmpwalk -v2c -c public 10.0.1.26 
UCD-SNMP-MIB::dskPercent", it only returns values for the first two directories 
(/ and /tmp), which are on UFS and Swap, respectively. But for /apps and /data, 
it just returns 0, which is not correct.
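
For what it's worth, walking the whole dskTable shows which paths the agent
actually monitors, alongside the raw byte counts (same host and community as
above):

snmpwalk -v2c -c public 10.0.1.26 UCD-SNMP-MIB::dskTable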

Question now: Is it supposed to be that way? Or is Solaris 10 supposed to 
return a sensible value?

Thanks,
Alexander
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss