Hello Mike,
Saturday, April 12, 2008, 4:17:30 PM, you wrote:
EM Could someone kindly provide some details on using a zvol in sparse mode?
EM Wouldn't the COW nature of ZFS (assuming COW still applies to
EM zvols) quickly erode the sparse nature of the zvol?
COW does apply to zvols. If you
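For reference, a minimal sketch of creating a sparse zvol (tank/vol is a
placeholder name):

  # -s makes the volume sparse: no reservation is set, so only
  # blocks actually written consume pool space
  zfs create -s -V 10g tank/vol
  # refreservation shows 'none' for a sparse volume
  zfs get volsize,refreservation,used tank/vol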
Hello Mario,
Saturday, April 12, 2008, 3:02:18 PM, you wrote:
MG How can I set up a ZVOL that's accessible by non-root users, too?
MG The intent is to use sparse ZVOLs as raw disks in virtualization
MG (reducing overhead compared to file-based virtual volumes).
change zvol permissions to whatever you want?
Hi, I'm doing the following actions on my Solaris 10 system. Please let me know
whether ZFS will do the following things:
Question 1: Will ZFS employ ordinary RAID-0 stripes while creating the file
dust?
Question 2: Since most of my file /exp/dust1 (~73% = 1 - 400MB/1500MB) resides
on
I'm a bit late replying to this, but I'd take the quick-and-dirty approach
personally. When the server is running fine, unplug one disk and see
which one is reported as faulted in ZFS.
A couple of minutes doing that and you've tested that your RAID array is
working fine and you know exactly
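For example, after pulling the disk:

  zpool status -x    # reports only pools with problems; the pulled
                     # disk shows up as UNAVAIL or FAULTED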
Stuart Anderson wrote:
As an artificial test, I created a filesystem with compression enabled
and ran mkfile 1g and the reported compressratio for that filesystem
is 1.00x, even though this 1GB file actually uses only 1kB.
ZFS seems to treat files filled with zeroes as sparse files, regardless
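A minimal sketch to reproduce this, assuming a pool named tank (mkfile writes
blocks of zeroes, and with compression on ZFS stores all-zero blocks as holes,
so they never enter the byte counts behind compressratio):

  zfs create -o compression=on tank/ctest
  mkfile 1g /tank/ctest/bigzero
  ls -l /tank/ctest/bigzero            # logical size: 1 GB
  du -h /tank/ctest/bigzero            # on-disk size: roughly 1K
  zfs get compressratio tank/ctest     # still reports 1.00x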
On Mon, Apr 14, 2008 at 09:59:48AM -0400, Luke Scharf wrote:
Stuart Anderson wrote:
As an artificial test, I created a filesystem with compression enabled
and ran mkfile 1g and the reported compressratio for that filesystem
is 1.00x, even though this 1GB file actually uses only 1kB.
ZFS
The only supported controller I've found is the Areca ARC-1280ML. I want to
put it in one of the 24-disk Supermicro chassis that Silicon Mechanics builds.
Has anyone had success with this card and this kind of chassis/number of drives?
cheers,
Blake
Mario Goebbels (Webmail) wrote:
MG How can I set up a ZVOL that's accessible by non-root users, too?
MG The intent is to use sparse ZVOLs as raw disks in virtualization
MG (reducing overhead compared to file-based virtual volumes).
change zvol permissions to whatever you want?
The nodes
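For example, a sketch (tank/vol is a placeholder, and ownership set with
chown on the device nodes may not survive a reboot, so you may need to
reapply or script it):

  # zvol device nodes live under /dev/zvol/{dsk,rdsk}/<pool>/<volume>
  chown mario /dev/zvol/dsk/tank/vol
  chown mario /dev/zvol/rdsk/tank/vol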
Stuart Anderson wrote:
On Mon, Apr 14, 2008 at 09:59:48AM -0400, Luke Scharf wrote:
Stuart Anderson wrote:
As an artificial test, I created a filesystem with compression enabled
and ran mkfile 1g and the reported compressratio for that filesystem
is 1.00x even though this 1GB file
On Fri, Apr 11, 2008 at 12:36 PM, kristof [EMAIL PROTECTED] wrote:
A colleague told me IET is no longer an ongoing project, so it's obsolete.
The latest release on SourceForge was Mar 17, 2008, so I think IET is still alive.
see http://scst.sourceforge.net/
iSCSI-SCST is a fork of the IET
On Mon, Apr 14, 2008 at 05:22:03PM -0400, Luke Scharf wrote:
Stuart Anderson wrote:
On Mon, Apr 14, 2008 at 09:59:48AM -0400, Luke Scharf wrote:
Stuart Anderson wrote:
As an artificial test, I created a filesystem with compression enabled
and ran mkfile 1g and the reported
No, that is definitely not expected.
One thing that can hose you is having a single disk that performs
really badly. I've seen disks as slow as 5 MB/sec due to vibration,
bad sectors, etc. To see if you have such a disk, try my diskqual.sh
script (below). On my desktop system, which has 8
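A sketch of the kind of per-disk sequential-read test diskqual.sh performs
(a reconstruction, not necessarily the exact original):

  #!/bin/ksh
  # every disk format(1M) knows about
  disks=`format </dev/null | grep c.t.d | nawk '{print $2}'`

  getspeed1()
  {
          # time a 64 MB sequential read from the raw device
          ptime dd if=/dev/rdsk/${1}s0 of=/dev/null bs=64k count=1024 2>&1 |
              nawk '$1 == "real" { printf("%.0f\n", 67.108864 / $2) }'
  }

  getspeed()
  {
          # median of three runs, to smooth out noise
          for iter in 1 2 3
          do
                  getspeed1 $1
          done | sort -n | tail -2 | head -1
  }

  for disk in $disks
  do
          echo $disk `getspeed $disk` MB/sec
  done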
Not at present, but it's a good RFE. Unfortunately it won't be
quite as simple as just adding an ioctl to report the dnode checksum.
To see why, consider a file with one level of indirection: that is,
it consists of a dnode, a single indirect block, and several data blocks.
The indirect block
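Each block pointer in that tree embeds the checksum of the block it points
to, so the checksum in the dnode covers only the top of the tree, and its
value depends on block size and indirection level, not just file contents.
You can inspect the tree with zdb; a sketch, assuming dataset tank/fs and
the object number from ls -i (output details vary by release):

  ls -i /tank/fs/somefile    # object (inode) number, say 8
  zdb -ddddd tank/fs 8       # dumps the dnode, indirect blocks, and
                             # the block pointers with their checksums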
On Mon, 14 Apr 2008, Blake Irvin wrote:
The only supported controller I've found is the Areca ARC-1280ML.
I want to put it in one of the 24-disk Supermicro chassis that
Silicon Mechanics builds.
For obvious reasons (redundancy and throughput), it makes more sense
to purchase two 12 port
On Mon, 14 Apr 2008, Jeff Bonwick wrote:
disks=`format </dev/null | grep c.t.d | nawk '{print $2}'`
I had to change the above line to
disks=`format </dev/null | grep ' c.t' | nawk '{print $2}'`
in order to match my multipathed devices.
./diskqual.sh
c1t0d0 130 MB/sec
c1t1d0 13422 MB/sec
On Mon, Apr 14, 2008 at 3:43 AM, Bhaskar Jayaraman
[EMAIL PROTECTED] wrote:
Question 1: Will ZFS employ ordinary RAID-0 stripes while creating the file
dust?
Sort of, though it's not RAID-0. It will balance the writes across the
members of its storage pool. So in your 3-disk zpool, the writes
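A quick way to watch that balancing, sketched with placeholder disk names:

  zpool create dust c0t0d0 c0t1d0 c0t2d0
  mkfile 400m /dust/dust1
  zpool iostat -v dust 5    # per-vdev rows show the write bandwidth
                            # spread across all three disks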
On Tue, Apr 15, 2008 at 1:25 AM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
For obvious reasons (redundancy and throughput), it makes more sense
to purchase two 12 port cards. I see that there is an option to
populate more cache RAM.
More RAM always helps ;)
I would be interested to know
On Mon, Apr 14, 2008 at 11:34 PM, Will Murnane [EMAIL PROTECTED]
wrote:
On Tue, Apr 15, 2008 at 1:25 AM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
For obvious reasons (redundancy and throughput), it makes more sense
to purchase two 12 port cards. I see that there is an option to