Hi there
I posted this problem in the Xen discussion before, but with a different title; I
thought it had something to do with the memory, so guys, can you read that
thread first?
http://www.opensolaris.org/jive/thread.jspa?threadID=76870&tstart=0
I tried this yesterday, I brought my friend
On Sun, Oct 05, 2008 at 09:07:31PM -0400, Brian Hechinger wrote:
> On Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote:
> > I'm not sure I could survive a crash of both nodes, going to try and
> > test some more.
>
> Ok, so taking my idea above, maybe a pair of 15K SAS disks in those
> box
> So what are the downsides to this? If both nodes were to crash and
> I used the same technique to recreate the ramdisk, I would lose any
> transactions in the slog at the time of the crash, but the physical
> disk image is still in a consistent state, right (just not from my
> apps' point of view)?
> It would be trivial to make the threshold a tunable, but we're
> trying to avoid this sort of thing. I don't want there to be a
> ZFS tuning guide, ever. That would mean we failed.
>
> Jeff
harumph... http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
:-)
Well now that
On Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote:
>
> So I tried this experiment this week...
> On each host (OpenSolaris 2008.05), I created an 8GB ramdisk with ramdiskadm.
> I shared this ramdisk on each host via the iscsi target and initiator over a
> 1Gb crossconnect cable (jumbo frames enabled).
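For anyone trying to reproduce that experiment, a rough sketch of the commands
involved might look like the following (pool, ramdisk, address, and device names
here are hypothetical, and this assumes the iscsitadm target daemon shipped with
2008.05):

  # on each host: create an 8GB ramdisk and export it as an iSCSI target
  ramdiskadm -a slogrd 8g
  iscsitadm create target -b /dev/ramdisk/slogrd slogrd

  # on each host: discover the other host's target over the crossconnect link
  iscsiadm add discovery-address 192.168.128.2
  iscsiadm modify discovery --sendtargets enable
  devfsadm -i iscsi        # create device nodes for the new LUN

  # mirror the local ramdisk with the remote iSCSI LUN as the pool's slog
  zpool add tank log mirror /dev/ramdisk/slogrd c3t2d0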
Erik:
> > (2) a SAS drive has better throughput and IOPs than a SATA drive
Richard:
> Disagree. We proved that the transport layer protocol has no bearing
> on throughput or iops. Several vendors offer drives which are
> identical in all respects except for transport layer protocol: SAS or
> SATA.
/usr/openwin/bin/xterm returns 'Could not set destroy callback to IM' but does
open an xterm. I have never seen this one before. Any ideas?
--ron
zpool destroy -f usb1
--ron
Reading through the post, the error message didn't come through properly. It is
"tank/mail:<0x0>" (that is, with less-than and greater-than signs on either side
of the 0x0). Also, the 4 disks (2 vdevs x 2 for raid-z) are physical SATA disks
dedicated to the VMware image.
Thanks.
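For what it's worth, errors of that "dataset:<0x0>" form are listed in the
permanent-errors section of zpool status; a minimal sketch of how to re-check
them (pool name 'tank' taken from the post above):

  # show pool health plus the list of datasets/objects with permanent errors
  zpool status -v tank

  # re-read every block, then see whether the errors persist or clear
  zpool scrub tank
  zpool status -v tank     # after the scrub completes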
Hi
I am looking for guidance on the following ZFS setup and error:
- OpenSolaris 2008.05 running as a guest in VMware Server (Ubuntu host)
- the system has run flawlessly as an NFS file server for some months now. Single
zpool (called 'tank'), 2 vdevs each as raid-z, about 10 filesystems (one of
them
Hi
I have one USB hard drive that still shows up in "zpool import" as a zpool, even
though the pool no longer exists on the disk:
# zpool import
pool: usb1
id: 8159001826765429865
state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The p