so, anyone have any ideas? I'm obviously hitting a bug here. I'm happy to help
anyone solve this, I DESPERATELY need this data. I can post dtrace results if
you send them to me. I wish I could solve this myself, but I'm not a C
programmer, I don't know how to program filesystems, much less an adv
Hello, thanks for your suggestion. I tried setting zfs_arc_max to 0x30000000
(768MB, out of 3GB). The system ran for almost 45 minutes before it froze.
Here's an interesting piece of arcstat.pl output, which I noticed just as it
was passing by:
Time read miss miss% dmis dm% pmis pm% mmis
Hello Hernan,
Friday, May 23, 2008, 6:08:34 PM, you wrote:
HF> The question is still, why does it hang the machine? Why can't I
HF> access the filesystems? Isn't it supposed to import the zpool,
HF> mount the ZFSs and then do the destroy, in background?
HF>
Try to limit ARC size to 1/4 of yo
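For reference, on OpenSolaris the ARC cap is normally set in /etc/system and takes effect after a reboot. A minimal sketch, assuming a 768MB cap on a 3GB machine (the exact value here is an illustration, not from this thread; /etc/system comments start with `*`):

```
* Cap the ZFS ARC at 768MB (0x30000000 bytes)
set zfs:zfs_arc_max = 0x30000000
```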
So, I think I've narrowed it down to two things:
* ZFS tries to destroy the dataset every time it's called because the last time
it didn't finish destroying
* In this process, ZFS makes the kernel run out of memory and die
So I thought of two options, but I'm not sure if I'm right:
Option 1: "D
No, this is a 64-bit system (athlon64) with 64-bit kernel of course.
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
2008/5/24 Hernan Freschi <[EMAIL PROTECTED]>:
> I let it run while watching TOP, and this is what I got just before it hung.
> Look at free mem. Is this memory allocated to the kernel? can I allow the
> kernel to swap?
No, the kernel will not use swap for this.
But most of the memory used by th
oops. replied too fast.
I ran it without -n, and space was added successfully... but it didn't work.
It died out of memory again.
Hernan Freschi writes:
> I tried the mkfile and swap, but I get:
> [EMAIL PROTECTED]:/]# mkfile -n 4g /export/swap
> [EMAIL PROTECTED]:/]# swap -a /export/swap
> "/export/swap" may contain holes - can't swap on it.
You should not use -n for creating files for additional swap. This is
mentioned in
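A minimal sketch of the fix being suggested here, assuming a 4GB swap file at /export/swap (mkfile without -n preallocates every block, so the file has no holes and swap will accept it):

```
# preallocate the file (note: no -n), then add it as swap
mkfile 4g /export/swap
swap -a /export/swap
swap -l        # verify the new swap file is listed
```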
Hi, Hernan
You should not use '-n' with mkfile; that will make swap complain.
Hernan Freschi wrote:
> I forgot to post arcstat.pl's output:
>
> Time      read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz  c
> 22:32:37  556K  525K  94     515K  94   9K    98   515K  97   1G     1G
> 22:3
I forgot to post arcstat.pl's output:
Time      read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz  c
22:32:37  556K  525K  94     515K  94   9K    98   515K  97   1G     1G
22:32:38  63    63    100    63    100  0     0    63    100  1G     1G
22:32:39  74    74    100    74    100
> Memory: 3072M phys mem, 31M free mem, 2055M swap, 1993M free swap
perhaps this might help..
mkfile -n 4g /usr/swap
swap -a /usr/swap
http://blogs.sun.com/realneel/entry/zfs_arc_statistics
Rob
I let it run while watching TOP, and this is what I got just before it hung.
Look at free mem. Is this memory allocated to the kernel? can I allow the
kernel to swap?
last pid: 7126; load avg: 3.36, 1.78, 1.11; up 0+01:01:11 21:16:49
88 pr
I let it run for about 4 hours. when I returned, still the same: I can ping the
machine but I can't SSH to it, or use the console. Please, I need urgent help
with this issue!
I got more info. I can run zpool history and this is what I get:
2008-05-23.00:29:40 zfs destroy tera/[EMAIL PROTECTED]
2008-05-23.00:29:47 [internal destroy_begin_sync txg:3890809] dataset = 152
2008-05-23.01:28:38 [internal destroy_begin_sync txg:3891101] dataset = 152
2008-05-23.07:01:36 zpool
Hello, I'm having a big problem here, disastrous maybe.
I have a zpool consisting of 4x500GB SATA drives, this pool was born on S10U4
and was recently upgraded to snv85 because of iSCSI issues with some initiator.
Last night I was doing housekeeping, deleting old snapshots. One snapshot
failed