Thanks for your help.
I will check this out.
Hi, yes. No new support plans have been available for a while.
Yours
Markus Kovero
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 11.11.10 11:51, Ville Ojamo wrote:
Some generic ideas:
Looks like the zdb output is cut off. Is it truncated by the mail reader, or
did zdb die for some reason? If it died, run pstack on the core.
What does panic stack trace say?
Does zpool import work any better with the -f or -F option? Some have in
I am trying to bring in my zpool from build 121 into build 134 and every time I
do a zpool import the system crashes.
I have read other posts about this and have tried setting zfs_recover = 1 and
aok = 1 in /etc/system. I have used mdb to verify that they are set in the
kernel, but the system still
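For reference, the two tunables described above would look like this in
/etc/system (a sketch; a reboot is required, and the mdb one-liners only
confirm the live kernel values afterwards):

set zfs:zfs_recover = 1
set aok = 1

# echo 'zfs_recover/D' | mdb -k
# echo 'aok/D' | mdb -k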
Big thanks! I think I'll also buy one before long. The
power savings alone should be worth it over lifetime.
On Wed, Nov 10, 2010 at 11:03:21PM -0800, Krist van Besien wrote:
I just bought one. :-)
My impressions:
- Installed Nexentastor community edition in it. All hardware was recognized
Did you try NexentaCore? You gain full control over the command line, as in
OpenSolaris. However, it seems faster and has more bugs fixed than OpenSolaris
b134.
I have already had OpenSolaris b134 crash one of my disk systems; I would never
install it
again... Besides, I would never get full 1
I would also add that you should try the NexentaStor Enterprise demo - fully
functional for 45 days. If you find a partner they will most likely be able
to provide you a managed trial. I'd be interested to hear what parts of the
GUI didn't work for you.
---
W. A. Khushil Dep -
On 11.11.10 14:26, Steve Gonczi wrote:
dumpadm should tell you how your
dumps are set up.
Also, you could load mdb before importing.
I have located the dump, it's called vmdump.0. I also loaded mdb before
I imported the pool, but that didn't help. Actually I tried it this way:
mdb -K -F
:c
Hi,
# savecore -vf vmdump.0
This should produce two files: unix.0 and vmcore.0
Now we use mdb on these as follows:
# mdb unix.0 vmcore.0
Now, when presented with the mdb prompt, type ::status and send us the full
output, please.
---
W. A. Khushil Dep - khushil@gmail.com - 07905374843
Visit
The vmdump.0 is a compressed crash dump. You will need to convert it to
a format that can be read.
# savecore -f ./vmdump.0 ./
This will create a couple of files, but the ones you will need next are
unix.0 and vmcore.0. Use mdb to print out the stack.
# mdb unix.0 vmcore.0
run the
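Putting the steps from this thread together, a full session might look like
this (a sketch; ::status and ::stack are standard mdb dcmds, and the file
names assume savecore wrote into the current directory):

# savecore -vf vmdump.0 .
# mdb unix.0 vmcore.0
> ::status
> ::stack
> $q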
David,
thanks so much (and of course to all other helpful souls here as well)
for providing such great guidance!
Here we go:
On 11.11.10 16:17, David Blasingame Oracle wrote:
The vmdump.0 is a compressed crash dump. You will need to convert it
to a format that can be read.
# savecore
In this function, the second argument is a pointer to the osname
(mount). You can dump out the string it points to.
ff0023b7db50 zfs_domount+0x17c(ff0588aaf698, ff0580cb3d80)
mdb unix.0 vmcore.0
ff0580cb3d80/S
This should print out the offending FS. You could then try to import
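Concretely, that amounts to a session like this (a sketch; the address comes
from the zfs_domount frame in the panic stack, and /S prints the
NUL-terminated string at that address):

> $C
ff0023b7db50 zfs_domount+0x17c(ff0588aaf698, ff0580cb3d80)
> ff0580cb3d80/S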
Did you try NexentaCore? You gain full control over
command line, like in OpenSolaris. However it
seems faster and got more bugs fixed than OpenSolaris
b134.
I'll give it a try. I'm quite familiar with Ubuntu, so that looks like a good
option.
--
This message posted from opensolaris.org
David,
thanks a lot for your support. I have been able to get both of my zpools up
again by checking which zfs fs caused these problems.
And... today I also learned at least a bit about zpool troubleshooting.
Thanks
Stephan
--
Sent from my iPhone (iOS 4).
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
I'm shopping for an SSD for a ZIL.
Looking around on NewEgg, at the claimed (not sure I believe them)
IOPS, these caught my attention:
Corsair Force 80GB CSSD-F80GBP2-BRKT: 50K 4K-aligned random
write IOPS
OCZ Vertex 2 120GB
Any opinions? stories? other models I missed?
I was a speaker at the recent OpenStorage Summit;
my presentation "ZIL Accelerator: DRAM or Flash?"
might be of interest:
http://www.ddrdrive.com/zil_accelerator.pdf
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
On Nov 11, 2010, at 15:08, Kyle McDonald wrote:
Any opinions? stories? other models I missed?
Other questions:
1) The ZIL will be small compared to the size of these; can I use
the rest as L2ARC, or is that not such a good idea?
2) Will ZFS align the ZIL writes in such a way that those
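On question 1, a common approach is to slice the SSD and hand each piece to
ZFS explicitly (a sketch; tank and the c1t0d0s* slice names are placeholders):

# zpool add tank log c1t0d0s0      (a few GB is plenty for a slog)
# zpool add tank cache c1t0d0s1    (the rest as L2ARC)

One caveat: a slog and L2ARC on the same device will compete for the SSD's
write bandwidth.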
I'm still trying to find a fix/workaround for the problem described in
Unable to mount root pool dataset
http://opensolaris.org/jive/thread.jspa?messageID=492460
Since the Blade 1500's rpool is mirrored, I've decided to detach the
second half of the mirror, relabel the disk,
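The detach-and-relabel step described would look something like this (a
sketch; c0t1d0 is a placeholder for the second half of the mirror, and the
relabeling inside format is interactive):

# zpool detach rpool c0t1d0s0
# format -e c0t1d0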
I will be setting up a NexentaStor Community Edition based ZFS file server. I
will be serving some zvols over iSCSI to some FreeBSD machines to host jails
in.
1) The ZFS box offers a single iSCSI target that exposes all the zvols as
individual disks. When the FreeBSD initiator finds it,
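For the setup described, the COMSTAR side of exporting one zvol would go
roughly like this (a sketch; pool and zvol names are placeholders, and the
LU name printed by create-lu is what gets fed to add-view):

# zfs create -V 20G tank/jail-disk0
# stmfadm create-lu /dev/zvol/rdsk/tank/jail-disk0
# stmfadm add-view 600144F0...     (LU name from the previous step)
# itadm create-target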
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
On 11/11/10 17:57, Chad Leigh -- Shire.Net LLC wrote:
I will be setting up a NexentaStor Community Edition based ZFS file
server. I will be serving some zvols over iSCSI to some FreeBSD
machines to host jails in.
1) The ZFS box offers a
On Nov 11, 2010, at 7:18 PM, Xin LI wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
On 11/11/10 17:57, Chad Leigh -- Shire.Net LLC wrote:
I will be setting up a NexentaStor Community Edition based ZFS file
server. I will be serving some zvols over iSCSI to some FreeBSD
machines