iozone doesn't vary the block size during a test; it's a fairly
artificial benchmark, but it's useful for gauging performance under
different scenarios.
So for this test, all of the writes in a given step would have been 64k
blocks, then 128k blocks, and so on for each step.
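For reference, a run of that shape could look like the following (the
exact flags from the original run aren't shown, so the record size,
file size, and test file path here are illustrative):

  # Write/rewrite and read/reread passes with a fixed 64k record size
  # against a 1 GB test file; rerun with -r 128k for the next step.
  iozone -i 0 -i 1 -r 64k -s 1g -f /tank/iozone.tmp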
Just as another point of reference, I reran the test with:
vfs.zfs.txg.synctime_ms: 1000
vfs.zfs.txg.timeout: 5
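Both of those are plain FreeBSD sysctls, so anyone wanting to experiment
can adjust them at runtime; for example (the values shown are just the
ones above):

  # Force transaction groups out at least every 5 seconds,
  # targeting a 1-second sync time:
  sysctl vfs.zfs.txg.timeout=5
  sysctl vfs.zfs.txg.synctime_ms=1000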
On Thu, Jul 19, 2012 at 8:47 PM, John Martin wrote:
> On 07/19/12 19:27, Jim Klimov wrote:
>
>> However, if the test file was written in 128K blocks and then
>> is rewritten with 64K blocks, then Bob's answer is probably
>> valid - the block would have to be re-read once for the first
>> rewrite of its half; it might be taken from cache for the
>> second half's rewrite.
On 07/19/12 19:27, Jim Klimov wrote:
However, if the test file was written in 128K blocks and then
is rewritten with 64K blocks, then Bob's answer is probably
valid - the block would have to be re-read once for the first
rewrite of its half; it might be taken from cache for the
second half's rewrite.
On Fri, 20 Jul 2012, Jim Klimov wrote:
I am not sure if I misunderstood the question or Bob's answer,
but I have a gut feeling it is not fully correct: ZFS block
sizes for files (filesystem datasets) are, at least by default,
dynamically sized depending on the contiguous write size as
queued by ...
This is normal. The problem is that with zfs 128k block sizes, zfs
needs to re-read the original 128k block so that it can compose and
write the new 128k block. With sufficient RAM, this is normally avoided
because the original block is already cached in the ARC.
If you were to reduce the zfs blocksize ...
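That tuning is a one-liner; a sketch, assuming a test dataset named
tank/test (note recordsize only affects blocks written after the
change):

  # Match the dataset recordsize to the 64k writes so a partial
  # rewrite of a 128k block no longer forces a read-modify-write:
  zfs set recordsize=64k tank/test
  zfs get recordsize tank/test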
On Thu, 19 Jul 2012, Gordon Ross wrote:
On Thu, Jul 19, 2012 at 5:38 AM, sol wrote:
Other than Oracle do you think any other companies would be willing to take
over support for a clustered 7410 appliance with 6 JBODs?
(Some non-Oracle names which popped out of google:
Joyent/Coraid/Nexenta/Greenbytes/NAS/RackTop/EraStor/Illumos/???)
On Wed, 18 Jul 2012, Michael Traffanstead wrote:
I have an 8 drive ZFS array (RAIDZ2 - 1 Spare) using 5900rpm 2TB SATA drives
with an hpt27xx controller under FreeBSD 10
(but I've seen the same issue with FreeBSD 9).
The system has 8 GB of RAM and I'm letting FreeBSD auto-size the ARC.
Running iozone ...
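To rule ARC sizing in or out, the limit FreeBSD auto-picked and the
current ARC size are both visible via sysctl (FreeBSD; values are in
bytes):

  # Upper bound chosen for the ARC, and what it's using right now:
  sysctl vfs.zfs.arc_max
  sysctl kstat.zfs.misc.arcstats.size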
On Thu, Jul 19, 2012 at 5:29 AM, Hans J. Albertsson wrote:
> I think the problem is with disks that are 4k organised, but report their
> blocksize as 512.
>
> If the disk reports its blocksize correctly as 4096, then ZFS should not
> have a problem.
> At least my 2TB Seagate Barracuda disks seemed to report their
> blocksizes as 4096, and my zpools on those machines ...
On Thu, Jul 19, 2012 at 5:38 AM, sol wrote:
> Other than Oracle do you think any other companies would be willing to take
> over support for a clustered 7410 appliance with 6 JBODs?
>
> (Some non-Oracle names which popped out of google:
> Joyent/Coraid/Nexenta/Greenbytes/NAS/RackTop/EraStor/Illumos/???)
On Thu, Jul 19, 2012 at 02:29:38PM +0200, Hans J. Albertsson wrote:
> I think the problem is with disks that are 4k organised, but report
> their blocksize as 512.
>
> If the disk reports its blocksize correctly as 4096, then ZFS should
> not have a problem.
> At least my 2TB Seagate Barracuda disks seemed to report their
> blocksizes as 4096, and my zpools on those machines ...
hi,
You have two issues here:
1) one is the HW support
2) one is the SW support
No one but Oracle can provide SW support, even if you find someone for HW
support.
regards
The others you mentioned are all OpenSolaris forks; some do provide a GUI,
but the pricing models are very different.
AFAIK Nexenta is ch...
I think the problem is with disks that are 4k organised, but report
their blocksize as 512.
If the disk reports its blocksize correctly as 4096, then ZFS should
not have a problem.
At least my 2TB Seagate Barracuda disks seemed to report their
blocksizes as 4096, and my zpools on those machines ...
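For disks that do misreport 512, the usual FreeBSD workaround of that
era was to force ashift=12 at pool creation with a gnop(8) shim; a
minimal sketch, with placeholder device and pool names:

  # Expose one member as a 4k-sector device so the vdev gets ashift=12:
  gnop create -S 4096 /dev/ada0
  zpool create tank raidz2 ada0.nop ada1 ada2 ada3
  zdb tank | grep ashift   # should report ashift: 12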
Other than Oracle do you think any other companies would be willing to take
over support for a clustered 7410 appliance with 6 JBODs?
(Some non-Oracle names which popped out of google:
Joyent/Coraid/Nexenta/Greenbytes/NAS/RackTop/EraStor/Illumos/???)