On 19 nov. 2010, at 03:53, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
SAS Controller
and all ZFS Disks/ Pools are passed-through to Nexenta to have full
ZFS-Disk
control like on real hardware.
This is precisely the thing I'm interested in.
hmmm
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

Disabling the ZIL (Don't)
Caution: Disabling the ZIL on an NFS server can lead to client-side corruption.
The ZFS pool integrity itself is not compromised by this tuning.

so especially with NFS I won't
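For reference, on builds recent enough to have the per-dataset `sync` property (an assumption about the reader's build; older releases only had the global `zil_disable` tunable), the trade-off in the Evil Tuning Guide can be confined to datasets that are never exported over NFS. A minimal sketch, with a hypothetical pool/dataset name:

```shell
# 'tank/scratch' is a hypothetical dataset. Disable synchronous
# semantics only where losing the last few seconds of writes on power
# failure is acceptable -- never on datasets exported over NFS:
zfs set sync=disabled tank/scratch

# Verify the setting:
zfs get sync tank/scratch

# Everything else keeps the default, fully synchronous behaviour:
zfs set sync=standard tank/nfs-exports
```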
On 19/11/2010 00:39, David Magda wrote:
On Nov 16, 2010, at 05:09, Darren J Moffat wrote:
Both CCM [1] and GCM [2] are provided so that if one turns out to have
flaws, the other will hopefully still be available for safe use, even
though the two are roughly similar styles of modes.
On systems
The design for ZFS crypto was done in the open via opensolaris.org and
versions of the source (though not the final version at this time) are
available on opensolaris.org.
It was reviewed by people internal and external to Sun/Oracle who have
considerable crypto experience. Important parts
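The mode choice Darren describes is exposed at dataset creation time; a hedged sketch using the Solaris 11 Express syntax (dataset names hypothetical):

```shell
# Create an encrypted dataset with AES-256 in CCM mode; 'tank/secure'
# is a hypothetical dataset name.
zfs create -o encryption=aes-256-ccm tank/secure

# GCM is selected the same way, so if one mode is ever found flawed
# the other can be used without code changes:
zfs create -o encryption=aes-256-gcm tank/secure2
```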
From: Saxon, Will [mailto:will.sa...@sage.com]
In order to do this, you need to configure passthrough for the device at
the
host level (host - configuration - hardware - advanced settings). This
Awesome. :-)
The only problem is that once a device is configured to pass-thru to the
guest VM,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of VO
How to accomplish ESXi 4 raw device mapping with SATA at least:
http://www.vm-help.com/forum/viewtopic.php?f=14&t=1025
It says:
You can pass-thru individual disks, if you have SCSI, but
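The forum recipe linked above boils down to creating a raw device mapping by hand with vmkfstools; a hedged sketch (device and datastore paths are examples, not real identifiers):

```shell
# Create a physical-mode RDM pointing at a local SATA disk so the guest
# (e.g. a ZFS appliance VM) sees the whole device. Paths are examples;
# find the real device ID under /vmfs/devices/disks/.
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
    /vmfs/volumes/datastore1/zfsguest/disk1-rdm.vmdk

# Then add disk1-rdm.vmdk to the VM as an existing disk.
```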
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of VO
This sounds interesting, as I have been thinking of something similar but
never implemented it because all the eggs would be in the same basket. If you
don't mind me asking for more
From: Gil Vidals [mailto:gvid...@gmail.com]
connected to my ESXi hosts using 1-gigabit switches and network cards. The
speed is very good, as can be seen from the IOZONE tests:
     KB  reclen   write  rewrite    read  reread
 512000      32   71789    76155   94382  101022
 512000    1024   75104
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Günther
Disabling the ZIL (Don't)
This is relative. There are indeed situations where it's acceptable to
disable ZIL. To make your choice, you need to understand a few things...
On 19 nov. 2010, at 15:04, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Günther
Disabling the ZIL (Don't)
This is relative. There are indeed situations where it's acceptable to
disable ZIL. To
Disclaimer: Solaris 10 U8.
I had an SSD die this morning and am in the process of replacing the 1GB
partition which was part of a log mirror. The SSDs do nothing else.
The resilver has been running for ~30m, and suggests it will finish sometime
before Elvis returns from Andromeda, though perhaps
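For context, swapping out the dead half of a mirrored slog is an ordinary zpool replace; a minimal sketch with hypothetical pool and device names:

```shell
# Hypothetical names: replace the dead SSD slice in the mirrored log
# device with the new one:
zpool replace tank c4t2d0s0 c4t3d0s0

# Watch the resilver; log devices hold only seconds of in-flight
# writes, so this normally finishes quickly:
zpool status -v tank
```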
i have the same problem with my 2HE supermicro server (24x2,5, connected via
6x mini SAS 8087) and no additional mounting possibilities for 2,5 or 3,5
drives.

on those machines i use one sas port (4 drives) of an old adaptec 3805 (i have
used them in my pre zfs-times) to build a raid-1 +
On Fri, 19 Nov 2010 07:16:20 PST, Günther wrote:
i have the same problem with my 2HE supermicro server (24x2,5,
connected via 6x mini SAS 8087) and no additional mounting
possibilities for 2,5 or 3,5 drives.
on those machines i use one sas port (4 drives) of an old adaptec
3805 (i have
I'm not sure that leaving the ZIL enabled whilst replacing the log devices
is a good idea?
Also - I had no idea Elvis was coming back tomorrow! Sweet. ;-)
---
W. A. Khushil Dep - khushil@gmail.com - 07905374843
Visit my blog at http://www.khushil.com/
On 19 November 2010 14:57, Bryan
-Original Message-
From: Edward Ned Harvey [mailto:sh...@nedharvey.com]
Sent: Friday, November 19, 2010 8:03 AM
To: Saxon, Will; 'Günther'; zfs-discuss@opensolaris.org
Subject: RE: [zfs-discuss] Faster than 1G Ether... ESX to ZFS
From: Saxon, Will [mailto:will.sa...@sage.com]
Hi,
Here is a small script to test deduped zfs send stream:
=
#!/bin/bash
ZFSPOOL=rpool
ZFSDATASET=zfs-send-dedup-test
dd if=/dev/random of=/var/tmp/testfile1 bs=512 count=10
zfs create $ZFSPOOL/$ZFSDATASET
cp /var/tmp/testfile1 /$ZFSPOOL/$ZFSDATASET/testfile1
zfs snapshot
Sorry, the script was cut off; the ending part is:
mp/ddtest-snap2.zfs
=
It works in OpenSolaris b134, but not in OpenIndiana b147 or Solaris 11 Express,
where zfs receive exits on the second incremental snapshot with the error message:
cannot receive incremental stream: invalid backup stream
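For anyone trying to reproduce this, the failing sequence reduces to a deduplicated (`zfs send -D`) incremental send/receive; a hedged sketch with hypothetical pool and dataset names:

```shell
# 'rpool/ddtest' is a hypothetical dataset.
zfs create rpool/ddtest
zfs snapshot rpool/ddtest@snap1
zfs send -D rpool/ddtest@snap1 > /var/tmp/ddtest-snap1.zfs

touch /rpool/ddtest/changed
zfs snapshot rpool/ddtest@snap2
zfs send -D -i @snap1 rpool/ddtest@snap2 > /var/tmp/ddtest-snap2.zfs

# On b147 / Solaris 11 Express the second receive reportedly fails
# with "invalid backup stream":
zfs receive rpool/ddtest-copy < /var/tmp/ddtest-snap1.zfs
zfs receive rpool/ddtest-copy < /var/tmp/ddtest-snap2.zfs
```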
Also, most of the big name vendors have a USB or SD
option for booting ESXi. I believe this is the 'ESXi
Embedded' flavor vs. the typical 'ESXi Installable'
that we're used to. I don't think it's a bad idea at
all. I've got a not-quite-production system I'm
booting off USB right now, and