Re: [zfs-discuss] ZFS Usability issue : improve means of finding ZFS<->physdevice(s) mapping

2006-10-13 Thread Noel Dellofano
I don't understand why you can't use 'zpool status'.  That will show the pools and the physical devices in each, and it's also a pretty basic command.  Examples are given in the sysadmin docs and manpages for ZFS on the OpenSolaris ZFS community page.


I realize it's not quite the same command as with UFS, and it's easier when things stay the same, but it's a different filesystem, so it needs some different commands that make more sense for how it's structured.  The hope is that the zpool and zfs commands will soon become just as 'intuitive' for people :)
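
For example, on a hypothetical two-disk mirror the mapping is right there in the default output (pool and device names made up, output abridged):

# zpool status tank
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors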


Noel

(p.s. not to mention, am I the only person who thinks that 'zpool status' (in human speak, not geek speak) makes more sense than 'df'? wtf )


On Oct 13, 2006, at 1:55 PM, Bruce Chapman wrote:


ZFS is supposed to be much easier to use than UFS.

For creating a filesystem, I agree it is, as I could do that easily  
without a man page.


However, I found it rather surprising that I could not see the physical device(s) a ZFS filesystem was attached to using either the "df" command (which shows the physical device mount points for all other file systems), or even the "zfs" command.


Even after moving on to the "zpool" command, it took a few minutes to stumble across the only two subcommands that will give you that information, as it is not exactly intuitive.


Ideally, I'd think "df" should show the physical device connections of ZFS pools, though I can imagine there may be some circumstances where that is not desirable, so perhaps a new argument would be needed to control whether that detail is shown.


If this is not done, I think "zfs list -v"  (-v is not currently an  
option to the zfs list command) should show the physical devices in  
use by the pools.


In any case, I think it is clear "zpool list" should have a "-v"  
argument added that will show the device associations, so that  
people don't have to stumble blindly until they run into the "zpool  
iostat -v" or "zpool status -v" commands to finally accomplish this  
rather simple task.


Any comments on the above?  I'm using S10 06/06, so perhaps I'll get lucky and someone has already added one or all of the above improvements. :)


Cheers,

   Bruce


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] incorrect link for dmu_tx.c in ZFS Source Code tour

2006-09-27 Thread Noel Dellofano

Fixed.  Thank you for the heads up on that.

Noel
On Sep 27, 2006, at 1:04 AM, Victor Latushkin wrote:


Hi All,

I've noticed that the link to dmu_txg.c from the ZFS Source Code tour is broken.  It looks like dmu_txg.c should be changed to dmu_tx.c.

Please take care of this.

- Victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] panic during recv

2006-09-26 Thread Noel Dellofano

I can also reproduce this on my test machines and have opened up CR
6475506 panic in dmu_recvbackup due to NULL pointer dereference
to track this problem.  This is most likely due to recent changes  
made in the snapshot code for -F.  I'm looking into it...


thanks for testing!
Noel

On Sep 26, 2006, at 6:21 AM, Mark Phalan wrote:


Hi,

I'm using b48 on two machines.  When I issued the following, I got a panic on the receiving machine:

$ zfs send -i data/[EMAIL PROTECTED] data/[EMAIL PROTECTED] | ssh machine2
zfs recv -F data

doing the following caused no problems:

zfs send -i data/[EMAIL PROTECTED] data/[EMAIL PROTECTED] | ssh machine2 zfs recv data/[EMAIL PROTECTED]


Is this a known issue? I reproduced it twice. I have core files.

from the log:

Sep 26 14:52:21 dhcp-eprg06-19-134 savecore: [ID 570001 auth.error]
reboot after panic: BAD TRAP: type=e (#pf Page fault) rp=d0965c34  
addr=4

occurred in module "zfs" due to a NULL pointer dereference


from the core:

echo '$C' | mdb 0

d0072ddc dmu_recvbackup+0x85b(d0562400, d05629d0, d0562828, 1, ea5ff9c0, 138)
d0072e18 zfs_ioc_recvbackup+0x4c()
d0072e40 zfsdev_ioctl+0xfc(2d8, 5a1b, 8046c0c, 13, d5478840, d0072f78)
d0072e6c cdev_ioctl+0x2e(2d8, 5a1b, 8046c0c, 13, d5478840, d0072f78)
d0072e94 spec_ioctl+0x65(d256f9c0, 5a1b, 8046c0c, 13, d5478840, d0072f78)
d0072ed4 fop_ioctl+0x27(d256f9c0, 5a1b, 8046c0c, 13, d5478840, d0072f78)
d0072f84 ioctl+0x151()
d0072fac sys_sysenter+0x100()

-Mark


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




[zfs-discuss] new ZFS links page

2006-08-29 Thread Noel Dellofano

Hey everybody,
I'd like to announce the addition of a "ZFS Links" page on the  
Opensolaris ZFS community page.  If you have any links to articles  
that pertain to ZFS that you find useful or should be shared with the  
community as a whole, please let us know and we'll add it to the page.


http://www.opensolaris.org/os/community/zfs/links/

thanks,
Noel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and very large directories

2006-08-24 Thread Noel Dellofano
ZFS actually uses the ZAP to handle directory lookups.  The ZAP is not a btree but a specialized hash table, where a hash for each directory entry is generated from that entry's name.  Hence you won't be doing any sort of linear search through the entire directory for a file: a hash is generated from the file name and that hash is looked up in the ZAP.  This is nice and speedy, even with 100,000 files in a directory.
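
As a rough illustration only (hypothetical pool/dataset name; actual timings will vary by machine), looking up a single name in a huge directory stays cheap:

# sketch only -- creates 100,000 empty files, then looks one up by name
zfs create tank/bigdir
cd /tank/bigdir
i=0; while [ $i -lt 100000 ]; do touch f$i; i=$((i+1)); done
time ls -d f50000    # the name is hashed and looked up in the ZAP, not scanned linearly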




Noel

On Aug 24, 2006, at 8:02 AM, Patrick Narkinsky wrote:

Due to legacy constraints, I have a rather complicated system that  
is currently using Sun QFS (actually the SAM portion of it.) For a  
lot of reasons, I'd like to look at moving to ZFS, but would like a  
"sanity check" to make sure ZFS is suitable to this application.


First of all, we are NOT using the cluster capabilities of SAMFS.   
Instead, we're using it as a way of dealing with one directory that  
contains approximately 100,000 entries.


The question is this: I know from the specs that ZFS can handle a directory with this many entries, but what I'm actually wondering is how directory lookups are handled.  That is, if I do a "cd foo05" in a directory with foo01 through foo10, will the filesystem have to scan through all the directory contents to find foo05, or does it use a btree or something to handle this?


This directory is, in turn, shared out over NFS.  Are there any  
issues I should be aware of with this sort of installation?


Thanks for any advice or input!

Patrick Narkinsky
Sr. System Engineer
EDS


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] Re: Issue with zfs snapshot replication from version2 to version3 pool.

2006-08-23 Thread Noel Dellofano

I've filed a bug for the problem Tim mentions below.
6463140 zfs recv with a snapshot name that has 2 @@ in a row succeeds

This is most likely due to the order in which we call  
zfs_validate_name in the zfs recv code, which would explain why other  
snapshot commands like 'zfs snapshot' will fail out and refuse to  
create a snapshot with 2 @@ in a row.  I'll look into it and update  
the bug further.


Noel

On Aug 22, 2006, at 11:45 AM, Shane Milton wrote:

Just updating the discussion with some email chains.  After more digging, this does not appear to be a version 2 or 3 replication issue.  I believe it to be an invalidly named snapshot that causes zpool and zfs commands to dump core.


Tim mentioned it may be similar to bug 6450219.  I agree it seems similar to 6450219, but I'm not so sure it's the same as the related bug 6446512.  At least the description of "...mistakenly trying to copy a file or directory..." does not, I believe, apply in this case.  However, I'm still testing things, so it may very well produce the same error.


-Shane


--

To: Tim Foster , Eric Schrock
Date: Aug 22, 2006 10:37 AM
Subject: Re: [zfs-discuss] Issue with zfs snapshot replication from  
version2 to version3 pool.



Looks like the problem is that 'zfs receive' will accept invalid snapshot names, in this case a name with two @ signs.  This causes most other zfs and zpool commands that look up the snapshot object type to core dump.


Reproduced on x64 Build44 system with the following command.
"zfs send t0/[EMAIL PROTECTED] | zfs recv t1/fs0@@snashot_in"


[EMAIL PROTECTED]:/var/tmp/]
$ zfs list -r t1
internal error: Invalid argument
Abort(coredump)


dtrace output

1  51980   zfs_ioc_objset_stats:entry   t1
  1  51981  zfs_ioc_objset_stats:return 0
  1  51980   zfs_ioc_objset_stats:entry   t1/fs0
  1  51981  zfs_ioc_objset_stats:return 0
  1  51980   zfs_ioc_objset_stats:entry   t1/fs0
  1  51981  zfs_ioc_objset_stats:return 0
  1  51980   zfs_ioc_objset_stats:entry   t1/fs0@@snashot_in
  1  51981  zfs_ioc_objset_stats:return22



This may need to be filed as a bug against zfs recv.

Thank you for your time,

-Shane




From: Tim Foster
To: shane milton
Cc: Eric Schrock
Date: Aug 22, 2006 10:56 AM
Subject: Re: [zfs-discuss] Issue with zfs snapshot replication from  
version2 to version3 pool.



Hi Shane,

On Tue, 2006-08-22 at 10:37 -0400, shane milton wrote:

Looks like the problem is that 'zfs receive' will accept invalid snapshot names, in this case a name with two @ signs.  This causes most other zfs and zpool commands that look up the snapshot object type to core dump.


Thanks for that! I believe this is the same as
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6450219

(but I'm open to corrections :-)

   cheers,
   tim


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] zpool iostat, scrubbing increases used disk space

2006-08-20 Thread Noel Dellofano
Thanks for the heads up.  I've fixed them to point to the right documents.

Noel

On Aug 20, 2006, at 11:38 AM, Ricardo Correia wrote:

By the way, the manpage links in http://www.opensolaris.org/os/community/zfs/docs/ are not correct; they link to the wrong documents.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz -> raidz2

2006-08-02 Thread Noel Dellofano
Your suspicions are correct: it's not possible to upgrade an existing raidz pool to raidz2.  You'll actually have to create the raidz2 pool from scratch.
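
One rough way to move over, assuming you have spare disks to build the second pool on (pool, dataset, and device names below are made up), is something like:

zpool create tank2 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs recv tank2/data
# repeat per filesystem, verify the copies, then destroy the old raidz pool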


Noel
On Aug 2, 2006, at 10:02 AM, Frank Cusack wrote:

Will it be possible to update an existing raidz to a raidz2?  I wouldn't think so, but maybe I'll be pleasantly surprised.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] ZFS support for USB disks 120GB Western Digital

2006-07-19 Thread Noel Dellofano
We don't have a specific supported configuration method for USB devices with ZFS.  Most people are using them as mirrors or backups for their laptop data.  It's really up to you.  There are a few threads in the discuss archives where people have talked about different possible configs for USB storage and the ways they've used it.  One is here:


http://www.opensolaris.org/jive/thread.jspa?messageID=25144

where David Bustos also mentions Artem's blog as a go-to.  Perhaps we should also add something to this effect to the FAQ.


Also, depending on how you intend to use the disk, a known issue is  
this:

6424510 usb ignores DKIOCFLUSHWRITECACHE
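
If you just want to try the mirror-of-the-laptop-disk idea, something along these lines works (device names here are made up; find yours with rmformat or format):

zpool attach tank c1t0d0s7 c2t0d0p0    # mirror an existing single-disk pool onto the USB disk
zpool create usbpool c2t0d0p0          # or build a separate pool that lives on the USB disk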



Noel

On Jul 18, 2006, at 11:53 PM, Stefan Parvu wrote:


Hey,

I have a portable 120GB Western Digital USB hard disk.  I'm running Nevada b42a on a ThinkPad T43.  Is this a supported configuration for setting up ZFS on portable disks?


I found some old blog posts about this topic: http://blogs.sun.com/roller/page/artem?entry=zfs_on_the_go and some other info under: http://www.sun.com/io_technologies/USB-Faq.html


Is this information still valid?  The ZFS FAQ has no mention of this topic; it would be a good idea to add a section about ZFS on mobile devices.


Thanks,
Stefan

# rmformat
Looking for devices...
 1. Volmgt Node: /vol/dev/aliases/cdrom0
Logical Node: /dev/rdsk/c1t0d0s2
Physical Node: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
Connected Device: MATSHITA UJDA765 DVD/CDRW 1.70
Device Type: DVD Reader
Bus: IDE
Size: 
Label: 
Access permissions: 
 2. Volmgt Node: /vol/dev/aliases/rmdisk0
Logical Node: /dev/rdsk/c2t0d0p0
Physical Node: /[EMAIL PROTECTED],0/pci1014,[EMAIL PROTECTED],7/[EMAIL PROTECTED]/[EMAIL PROTECTED]/ 
[EMAIL PROTECTED],0

Connected Device: WDC WD12 00VE-00KWT0  
Device Type: Removable
Bus: USB
Size: 114.5 GB
Label: 
Access permissions: Medium is not write protected.


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] disk evacuate

2006-06-28 Thread Noel Dellofano

Hey Robert,

Well, not yet.  Right now our top two priorities are improving performance in multiple areas of ZFS (soon there will be a performance page tracking progress on the ZFS community page), and also getting ZFS boot done.  Hence, we're not currently working on heaps of brand new features.  So this is definitely on our list, but it is not being worked on yet.


Noel

Robert Milkowski wrote:

Hello Noel,

Wednesday, June 28, 2006, 5:59:18 AM, you wrote:

ND> a zpool remove/shrink type function is on our list of features we want
ND> to add.
ND> We have RFE
ND> 4852783 reduce pool capacity
ND> open to track this.

Is there someone actually working on this right now?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] disk evacuate

2006-06-27 Thread Noel Dellofano
a zpool remove/shrink type function is on our list of features we want 
to add.

We have RFE
4852783 reduce pool capacity
open to track this.


Noel

Dick Davies wrote:

Just wondered if there'd been any progress in this area?

Correct me if I'm wrong, but as it stands, there's no way to remove a device you accidentally 'zpool add'ed without destroying the pool.

On 12/06/06, Gregory Shaw <[EMAIL PROTECTED]> wrote:


Yes, if zpool remove works like you describe, it does the same
thing.  Is there a time frame for that feature?

Thanks!

On Jun 11, 2006, at 10:21 AM, Eric Schrock wrote:

> This only seems valuable in the case of an unreplicated pool.  We
> already have 'zpool offline' to take a device and prevent ZFS from
> talking to it (because it's in the process of failing, perhaps).  This
> gives you what you want for mirrored and RAID-Z vdevs, since
> there's no
> data to migrate anyway.
>
> We are also planning on implementing 'zpool remove' (for more than
> just
> hot spares), which would allow you to remove an entire toplevel vdev,
> migrating the data off of it in the process.  This would give you what
> you want for the case of an unreplicated pool.
>
> Does this satisfy the usage scenario you described?
>
> - Eric
>
> On Sun, Jun 11, 2006 at 07:52:37AM -0600, Gregory Shaw wrote:
>> Pardon me if this scenario has been discussed already, but I haven't
>> seen anything as yet.
>>
>> I'd like to request a 'zpool evacuate pool ' command.
>> 'zpool evacuate' would migrate the data from a disk device to other
>> disks in the pool.
>>
>> Here's the scenario:
>>
>> Say I have a small server with 6x146g disks in a jbod
>> configuration.   If I mirror the system disk with SVM (currently) and
>> allocate the rest as a non-raidz pool, I end up with 4x146g in a pool
>> of approximately 548gb capacity.
>>
>> If one of the disks is starting to fail, I would need to use 'zpool
>> replace new-disk old-disk'.  However, since I have no more slots in
>> the machine to add a replacement disk, I'm stuck.
>>
>> This is where a 'zpool evacuate pool ' would come in handy.
>> It would allow me to evacuate the failing device so that it could be
>> replaced and re-added with 'zpool add pool '.
>>
>> What does the group think?






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] status question regarding sol10u2

2006-06-26 Thread Noel Dellofano

Solaris 10u2 was released today.  You can now download it from here:

http://www.sun.com/software/solaris/get.jsp


Noel



Joe Little wrote:


So, if I recall from this list, a mid-June release to the web was expected for S10U2.  I'm about to do some final production testing, and I was wondering if S10U2 was near term or more of a July thing now.  This may not be the perfect venue for the question, but the subject was previously covered with authority here, so it seems appropriate to ask here when ZFS backports to Sol10 arrive.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





[zfs-discuss] zfs man pages on Open Solaris

2006-06-19 Thread Noel Dellofano
The links off the documentation page on the zfs open solaris site were  
mysteriously pointing to the wrong subcommands on docs.sun.com.  So if  
you requested the man page for zfs, you actually got the man page for  
zdump.  Not cool :)


So I've gone through all the links and fixed them to point to the  
correct thing.  If you have any problems let me know.



Noel


 
**


"Question all the answers"

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] panic in buf_hash_remove

2006-06-16 Thread Noel Dellofano
I have filed 6439484 panic in buf_hash_remove due to a NULL pointer  
dereference to track this issue.


thanks,
Noel :-)


 
**


"Question all the answers"
On Jun 12, 2006, at 3:45 PM, Daniel Rock wrote:


Hi,

I recently had this panic during some I/O stress tests:

> $BAD TRAP: type=e (#pf Page fault) rp=fe80005c3980 addr=30 occurred  
in module "zfs" due to a NULL pointer dereference



sched:
#pf Page fault
Bad kernel fault at addr=0x30
pid=0, pc=0xf3ee322e, sp=0xfe80005c3a70, eflags=0x10206
cr0: 8005003b cr4: 6f0
cr2: 30 cr3: a49a000 cr8: c
rdi: fe80f0aa2b40  rsi: 89c3a050  rdx: 6352
rcx: 2f  r8: 0  r9: 30
rax: 64f2  rbx: 2  rbp: fe80005c3aa0
r10: fe80f0c979  r11: bd7189449a7087  r12: 89c3a040
r13: 89c3a040  r14: 32790  r15: 0
fsb: 8000  gsb: 8149d800  ds: 43
 es: 43  fs: 0  gs: 1c3
trp: e  err: 0  rip: f3ee322e
 cs: 28  rfl: 10206  rsp: fe80005c3a70
 ss: 30

fe80005c3870 unix:die+eb ()
fe80005c3970 unix:trap+14f9 ()
fe80005c3980 unix:cmntrap+140 ()
fe80005c3aa0 zfs:buf_hash_remove+54 ()
fe80005c3b00 zfs:arc_change_state+1bd ()
fe80005c3b70 zfs:arc_evict_ghost+d1 ()
fe80005c3b90 zfs:arc_adjust+10f ()
fe80005c3bb0 zfs:arc_kmem_reclaim+d0 ()
fe80005c3bf0 zfs:arc_kmem_reap_now+30 ()
fe80005c3c60 zfs:arc_reclaim_thread+108 ()
fe80005c3c70 unix:thread_start+8 ()

syncing file systems...
 done
dumping to /dev/md/dsk/swap, offset 644874240, content: kernel
> $c
buf_hash_remove+0x54(89c3a040)
arc_change_state+0x1bd(c0099370, 89c3a040, c0098f30)
arc_evict_ghost+0xd1(c0099470, 14b5c0c4)
arc_adjust+0x10f()
arc_kmem_reclaim+0xd0()
arc_kmem_reap_now+0x30(0)
arc_reclaim_thread+0x108()
thread_start+8()
> ::status
debugging crash dump vmcore.0 (64-bit) from server
operating system: 5.11 snv_39 (i86pc)
panic message:
BAD TRAP: type=e (#pf Page fault) rp=fe80005c3980 addr=30 occurred  
in module "zfs" due to a NULL pointer dereference

dump content: kernel pages only



Daniel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] panic in buf_hash_remove

2006-06-13 Thread Noel Dellofano
Out of curiosity, is this panic reproducible? A bug should be filed on  
this for more investigation. Feel free to open one or I'll open it if  
you forward me info on where the crash dump is and information on the  
I/O stress test you were running.


thanks,
Noel :-)


 
**


"Question all the answers"
On Jun 12, 2006, at 3:45 PM, Daniel Rock wrote:


Hi,

I recently had this panic during some I/O stress tests:

> $BAD TRAP: type=e (#pf Page fault) rp=fe80005c3980 addr=30 occurred  
in module "zfs" due to a NULL pointer dereference



sched:
#pf Page fault
Bad kernel fault at addr=0x30
pid=0, pc=0xf3ee322e, sp=0xfe80005c3a70, eflags=0x10206
cr0: 8005003b cr4: 6f0
cr2: 30 cr3: a49a000 cr8: c
rdi: fe80f0aa2b40  rsi: 89c3a050  rdx: 6352
rcx: 2f  r8: 0  r9: 30
rax: 64f2  rbx: 2  rbp: fe80005c3aa0
r10: fe80f0c979  r11: bd7189449a7087  r12: 89c3a040
r13: 89c3a040  r14: 32790  r15: 0
fsb: 8000  gsb: 8149d800  ds: 43
 es: 43  fs: 0  gs: 1c3
trp: e  err: 0  rip: f3ee322e
 cs: 28  rfl: 10206  rsp: fe80005c3a70
 ss: 30

fe80005c3870 unix:die+eb ()
fe80005c3970 unix:trap+14f9 ()
fe80005c3980 unix:cmntrap+140 ()
fe80005c3aa0 zfs:buf_hash_remove+54 ()
fe80005c3b00 zfs:arc_change_state+1bd ()
fe80005c3b70 zfs:arc_evict_ghost+d1 ()
fe80005c3b90 zfs:arc_adjust+10f ()
fe80005c3bb0 zfs:arc_kmem_reclaim+d0 ()
fe80005c3bf0 zfs:arc_kmem_reap_now+30 ()
fe80005c3c60 zfs:arc_reclaim_thread+108 ()
fe80005c3c70 unix:thread_start+8 ()

syncing file systems...
 done
dumping to /dev/md/dsk/swap, offset 644874240, content: kernel
> $c
buf_hash_remove+0x54(89c3a040)
arc_change_state+0x1bd(c0099370, 89c3a040, c0098f30)
arc_evict_ghost+0xd1(c0099470, 14b5c0c4)
arc_adjust+0x10f()
arc_kmem_reclaim+0xd0()
arc_kmem_reap_now+0x30(0)
arc_reclaim_thread+0x108()
thread_start+8()
> ::status
debugging crash dump vmcore.0 (64-bit) from server
operating system: 5.11 snv_39 (i86pc)
panic message:
BAD TRAP: type=e (#pf Page fault) rp=fe80005c3980 addr=30 occurred  
in module "zfs" due to a NULL pointer dereference

dump content: kernel pages only



Daniel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] Status of zfs create time options

2006-05-17 Thread Noel Dellofano

Hey Darren,
I can't seem to find an RFE/bug open on anything that looks like this.   
I'm behind in my email so could have missed the a thread containing  
this explanation and info, but exactly what kind of option are you  
looking for?


thanks!
Noel :-)


 
**


"Question all the answers"
On May 16, 2006, at 10:27 PM, Darren J Moffat wrote:


What's the status of the zfs create-time option setting?
Is someone working on it yet?

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] zfs mirror/raidz: can they use different types of disks

2006-05-03 Thread Noel Dellofano
ZFS doesn't demand that the drives have the same capacity.  If you  
mirror a 30GB drive and a 40GB drive then ZFS will treat them as two  
30GB drives, so you're effectively wasting 10GB of space.
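
So a 30GB slice on the new drive is fine as the other half of the mirror; for example (made-up device names, and assuming the existing 30GB is either a free slice or already a single-device pool):

zpool create tank mirror c0d0s7 c1d0s3   # build a new mirrored pool from the two 30GB slices
zpool attach tank c0d0s7 c1d0s3          # or attach the new slice to an existing single-device pool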


Noel


 
**


"Question all the answers"
On May 3, 2006, at 2:16 PM, James Foronda wrote:


Hi,

I just got an Ultra 20 with the default 80GB internal disk.  Right now, I'm using around 30GB of it for zfs.  I will be getting a new 250GB drive.


Question: If I create a 30GB slice on the 250GB drive, will that be okay to use as a mirror (or raidz) of the current 30GB that I now have on the 80GB drive?  I ask because, if I remember correctly, some volume managers require(?) that mirrors use drives with the same geometry.


Thanks.

James
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

