Phil,
Recently, we built a large configuration on a 4-way Xeon server with 8 4U
24-bay JBODs. We are using 2x LSI 6160 SAS switches so we can easily expand
the storage in the future.
1) If you are planning to expand your storage, you should consider
using an LSI SAS switch for easy
Bullshit. I just got an OCZ Vertex 3, and the first fill was 450-500MB/s.
Second and subsequent fills are at half that speed. I'm quite confident
that it's due to the flash erase cycle that's needed, and if stuff can
be TRIMed (and thus flash-erased as well), speed would be regained.
Overwriting an
Does anyone know if it's OK to do zfs send/receive between zpools with
different ashift values?
--
Andrew Gabriel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Tue, Jul 26, 2011 at 3:28 PM, casper@oracle.com wrote:
Bullshit. I just got an OCZ Vertex 3, and the first fill was 450-500MB/s.
Second and subsequent fills are at half that speed. I'm quite confident
that it's due to the flash erase cycle that's needed, and if stuff can
be TRIMed (and thus
Shouldn't modern SSD controllers be smart enough already that they know:
- if there's a request to overwrite a sector, then the old data on
that sector is no longer needed
- allocate a clean sector from the pool of available sectors (part of
the wear-leveling mechanism)
- clear the old sector, and add
On 07/26/11 10:14, Andrew Gabriel wrote:
Does anyone know if it's OK to do zfs send/receive between zpools with
different ashift values?
The ZFS send stream is at the DMU layer; at this layer the data is
uncompressed and decrypted, i.e. exactly how the application wants it.
The ashift is a vdev
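For what it's worth, since the send stream carries DMU-level data, a send/receive between pools with different ashift values can be sketched like this (pool and dataset names are hypothetical):

```shell
# Check the ashift of each pool (a top-level vdev property):
zdb -C oldpool | grep ashift
zdb -C newpool | grep ashift

# The send stream is ashift-agnostic; the receiving pool
# re-allocates blocks at its own sector alignment.
zfs snapshot oldpool/data@migrate
zfs send oldpool/data@migrate | zfs receive newpool/data
```

Note that space usage on the receiving pool may differ, because blocks are re-allocated at the destination pool's alignment.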
The ZFS send stream is at the DMU layer; at this layer the data is
uncompressed and decrypted, i.e. exactly how the application wants it.
Even data compressed/encrypted by ZFS will be decrypted? If so,
will there be any CPU overhead?
And ZFS send/receive tunneled over ssh becomes the
On 07/26/11 11:28, Fred Liu wrote:
The ZFS send stream is at the DMU layer; at this layer the data is
uncompressed and decrypted, i.e. exactly how the application wants it.
Even data compressed/encrypted by ZFS will be decrypted?
Yes, which is exactly what I said.
All data as seen by the
Yes, which is exactly what I said.
All data as seen by the DMU is decrypted and decompressed; the DMU layer
is what the ZPL layer is built on top of, so it has to be that way.
Understand. Thank you. ;-)
There is always some overhead for doing decryption and decompression,
the
On 26-07-11 12:56, Fred Liu wrote:
Any alternatives, if you don't mind? ;-)
vpn's, openssl piped over netcat, a password-protected zip file,... ;)
ssh would be the most practical, probably.
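A minimal sketch of the usual ssh pipeline, with host and dataset names hypothetical; a weaker cipher such as arcfour was commonly chosen on trusted networks to cut the ssh CPU overhead:

```shell
# Incremental send over ssh; -F on the receive side rolls the
# destination back to the last common snapshot if needed.
zfs send -i tank/fs@snap1 tank/fs@snap2 | \
    ssh -c arcfour backuphost zfs receive -F backup/fs

# The netcat alternative (no encryption, trusted LAN only):
#   on backuphost:  nc -l 9000 | zfs receive -F backup/fs
#   on the sender:  zfs send -i tank/fs@snap1 tank/fs@snap2 | nc backuphost 9000
```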
--
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means,
On 07/26/11 11:56, Fred Liu wrote:
It depends on how big the delta is. It does matter if the backup cannot
be finished within the required backup window when people use zfs send/receive
to do mass data backup.
The only way you will know if decrypting and decompressing causes a
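One way to find out where the time goes is to time the raw send against the full pipeline (dataset and host names are hypothetical):

```shell
# Raw send throughput, no network or ssh involved:
time zfs send tank/fs@snap > /dev/null

# Same stream through the actual pipeline:
time sh -c 'zfs send tank/fs@snap | ssh backuphost "cat > /dev/null"'
```

If the second run is much slower, the bottleneck is ssh or the network rather than the DMU-level decompress/decrypt.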
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
Shouldn't modern SSD controllers be smart enough already that they know:
- if there's a request to overwrite a sector, then the old data on
that sector is no longer needed
G'Day,
- zfs pool with 4 disks (from Clariion A)
- must migrate to Clariion B (so I created 4 disks with the same size,
available for the zfs)
The zfs pool has no mirrors, my idea was to add the new 4 disks from
the Clariion B to the 4 disks which are still in the pool - and later
remove the
Bernd W. Hennig wrote:
G'Day,
- zfs pool with 4 disks (from Clariion A)
- must migrate to Clariion B (so I created 4 disks with the same size,
available for the zfs)
The zfs pool has no mirrors, my idea was to add the new 4 disks from
the Clariion B to the 4 disks which are still in the pool
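The attach-then-detach idea above can be sketched as follows; device names are hypothetical, and each attach turns a plain vdev into a two-way mirror:

```shell
# Attach one Clariion B LUN to each existing Clariion A disk:
zpool attach tank c1t0d0 c2t0d0
zpool attach tank c1t1d0 c2t1d0
zpool attach tank c1t2d0 c2t2d0
zpool attach tank c1t3d0 c2t3d0

# Wait for the resilver to complete:
zpool status tank

# Then detach the old Clariion A disks, leaving plain vdevs on B:
zpool detach tank c1t0d0
zpool detach tank c1t1d0
zpool detach tank c1t2d0
zpool detach tank c1t3d0
```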
On Mon, July 25, 2011 10:03, Orvar Korvar wrote:
There is at least a common perception (misperception?) that devices
cannot process TRIM requests while they are 100% busy processing other
tasks.
Just to confirm; SSD disks can do TRIM while processing other tasks?
Processing the request just
Hi all-
We've been experiencing a very strange problem for two days now.
We have three client (Linux boxes) connected to a ZFS box (Nexenta) via iSCSI.
Every few seconds (seemingly at random), iostat shows the clients go from a normal
80K+ IOPS to zero. It lasts up to a few seconds and things
Subject: Re: [zfs-discuss] Adding mirrors to an existing zfs-pool
Date: Tue, 26 Jul 2011 08:54:38 -0600
From: Cindy Swearingen cindy.swearin...@oracle.com
To: Bernd W. Hennig consult...@hennig-consulting.com
Hi Bernd,
If you are
To add to that... iostat on the client boxes shows the connection to always be
around 98% util, topping at 100% whenever it hangs. The same clients are
connected to another ZFS server with much lower specs and a smaller number of
slower disks; it performs much better and rarely gets past 5%
Ian,
Did you enable DeDup?
Rocky
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ian D
Sent: Tuesday, July 26, 2011 7:52 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Entire client hangs every few
Hi
It is better to just create a new pool in array B.
Then use cpio to copy the data.
On 7/26/11, Bernd W. Hennig consult...@hennig-consulting.com wrote:
G'Day,
- zfs pool with 4 disks (from Clariion A)
- must migrate to Clariion B (so I created 4 disks with the same size,
available for the zfs)
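The create-and-copy approach can be sketched like this (pool names and mount points are hypothetical; zfs send/receive or rsync would work as well):

```shell
# New pool on the Clariion B LUNs:
zpool create newtank c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Copy the data with cpio in pass-through mode,
# preserving directories and modification times:
cd /tank && find . -depth -print | cpio -pdmv /newtank
```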
No dedup.
The hiccups started around 2am on Sunday while (obviously) nobody was
interacting with either the clients or the server. It had been running for
months (as is) without any problem.
My guess is that it's a defective hard drive that, instead of failing
completely, just stutters. Or
Hi all,
I lost my storage because rpool doesn't boot. I tried to recover, but
OpenSolaris says to destroy and re-create.
My rpool is installed on a flash drive, and my pool (with my info) is on
other disks.
My question is: is it possible to reinstall OpenSolaris on a new flash drive,
without disturbing
On Tue, Jul 26, 2011 at 7:51 AM, David Dyer-Bennet d...@dd-b.net wrote:
Processing the request just means flagging the blocks, though, right?
And the actual benefits only accrue if the garbage collection / block
reshuffling background tasks get a chance to run?
I think that's right. TRIM just
I'm on S11E 150.0.1.9 and I replaced one of the drives, and the pool seems to be
stuck in a resilvering loop. I performed a 'zpool clear' and 'zpool scrub' and
it just complains that the drives I didn't replace are degraded because of too
many errors. Oddly, the replaced drive is reported as being
This is actually a recently identified problem, and a fix for it is in the
3.1 version, which should be available any minute now, if it isn't
already.
The problem has to do with some allocations which are sleeping, and jobs
in the ZFS subsystem get backed behind some other work.
If you have
Hi Garrett-
Is it something that could happen at any time on a system that has been working
fine for a while? That system has 256G of RAM; I think adequate memory is not a
concern here :)
We'll try 3.1 as soon as we can download it.
Ian
Hi Roberto,
Yes, you can reinstall the OS on another disk and as long as the
OS install doesn't touch the other pool's disks, your
previous non-root pool should be intact. After the install
is complete, just import the pool.
Thanks,
Cindy
On 07/26/11 10:49, Roberto Scudeller wrote:
Hi all,
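Cindy's "just import the pool" step sketches out like this (pool name hypothetical):

```shell
# List pools visible on the attached disks:
zpool import

# Import the non-root pool by name:
zpool import tank

# If it was not cleanly exported before the reinstall,
# a force may be needed:
#   zpool import -f tank
```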
On Tue, Jul 26, 2011 at 1:33 PM, Bernd W. Hennig
consult...@hennig-consulting.com wrote:
G'Day,
- zfs pool with 4 disks (from Clariion A)
- must migrate to Clariion B (so I created 4 disks with the same size,
available for the zfs)
The zfs pool has no mirrors, my idea was to add the new 4
Are the disk active lights typically ON when this happens?
On Tue, Jul 26, 2011 at 3:27 PM, Garrett D'Amore garr...@damore.org wrote:
This is actually a recently known problem, and a fix for it is in the
3.1 version, which should be available any minute now, if it isn't
already available.
On Tue, Jul 26, 2011 at 1:14 PM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
Yes, you can reinstall the OS on another disk and as long as the
OS install doesn't touch the other pool's disks, your
previous non-root pool should be intact. After the install
is complete, just import the
On 2011-Jul-26 17:24:05 +0800, Fajar A. Nugraha w...@fajar.net wrote:
Shouldn't modern SSD controllers be smart enough already that they know:
- if there's a request to overwrite a sector, then the old data on
that sector is no longer needed
ZFS never does update-in-place and UFS only does