On Thu, Jun 16, 2011 at 09:15:44PM -0400, Edward Ned Harvey wrote:
> My personal preference, assuming 4 disks, since the OS is mostly reads and
> only a little bit of writes, is to create a 4-way mirrored 100G partition
> for the OS, and the remaining 900G of each disk (or whatever) becomes either
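For concreteness, that layout might be built roughly as follows on Solaris/illumos; the device names are placeholders, the slices would be carved with format(1M) first, and the data-pool geometry shown is just one of several possibilities:

    # slice 0 (~100G) on each disk: the OS, as a 4-way mirror
    zpool create rpool mirror c0t0d0s0 c0t1d0s0 c0t2d0s0 c0t3d0s0

    # slice 1 (the remaining space) on each disk: the data pool,
    # here as two mirrored pairs (one option among several)
    zpool create data mirror c0t0d0s1 c0t1d0s1 mirror c0t2d0s1 c0t3d0s1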
> From: Daniel Carosone [mailto:d...@geek.com.au]
> Sent: Thursday, June 16, 2011 10:27 PM
>
> Is it still the case, as it once was, that allocating anything other
> than whole disks as vdevs forces NCQ / write cache off on the drive
> (either or both, forget which, guess write cache)?
I will onl
On Thu, Jun 16, 2011 at 10:40:25PM -0400, Edward Ned Harvey wrote:
> > From: Daniel Carosone [mailto:d...@geek.com.au]
> > Sent: Thursday, June 16, 2011 10:27 PM
> >
> > Is it still the case, as it once was, that allocating anything other
> > than whole disks as vdevs forces NCQ / write cache off
On 06/16/11 20:26, Daniel Carosone wrote:
On Thu, Jun 16, 2011 at 09:15:44PM -0400, Edward Ned Harvey wrote:
My personal preference, assuming 4 disks, since the OS is mostly reads and
only a little bit of writes, is to create a 4-way mirrored 100G partition
for the OS, and the remaining 900G
> From: Daniel Carosone [mailto:d...@geek.com.au]
> Sent: Thursday, June 16, 2011 10:27 PM
>
> Is it still the case, as it once was, that allocating anything other
> than whole disks as vdevs forces NCQ / write cache off on the drive
> (either or both, forget which, guess write cache)?
I will onl
> From: Daniel Carosone [mailto:d...@geek.com.au]
> Sent: Thursday, June 16, 2011 11:05 PM
>
> the [sata] channel is idle, blocked on command completion, while
> the heads seek.
I'm interested in proving this point. Because I believe it's false.
Just hand waving for the moment ... Presenting th
On 2011-06-17 15:41, Edward Ned Harvey wrote:
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Thursday, June 16, 2011 11:05 PM
the [sata] channel is idle, blocked on command completion, while
the heads seek.
I'm interested in proving this point. Because I believe it's false.
Just hand wavi
On 2011-06-17 15:06, Edward Ned Harvey wrote:
When it comes to reads: The OS does readahead more intelligently than the
disk could ever hope. Hardware readahead is useless.
Here's another (lame?) question to the experts, partly as a
followup to my last post about large arrays and essentially
a
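If you want to see what that OS-level readahead is actually doing, ZFS exports prefetch counters through kstat; the statistic name below is from memory and may differ between releases:

    # DMU file-level prefetch (zfetch) statistics: hits, misses, ...
    kstat -p zfs:0:zfetchstats

Comparing hits against misses before and after a streaming-read run gives a rough idea of how much read traffic the prefetcher is absorbing, independent of anything the drive's own cache does.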
On Jun 17, 2011, at 7:06 AM, Edward Ned Harvey
wrote:
> I will only say, that regardless of whether or not that is or ever was true,
> I believe it's entirely irrelevant. Because your system performs read and
> write caching and buffering in ram, the tiny little ram on the disk can't
> possibly
On Jun 16, 2011, at 8:05 PM, Daniel Carosone wrote:
> On Thu, Jun 16, 2011 at 10:40:25PM -0400, Edward Ned Harvey wrote:
>>> From: Daniel Carosone [mailto:d...@geek.com.au]
>>> Sent: Thursday, June 16, 2011 10:27 PM
>>>
>>> Is it still the case, as it once was, that allocating anything other
>>>
Richard Elling wrote:
Actually, all of the data I've gathered recently shows that the number of
IOPS does not significantly increase for HDDs running random workloads.
However the response time does :-( My data is leading me to want to restrict
the queue depth to 1 or 2 for HDDs.
Thinking
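A simple way to watch that trade-off on a live system is iostat's extended device statistics, where actv is the average number of commands outstanding at the device and asvc_t is the average service time in milliseconds (see iostat(1M)):

    # one-second samples, extended statistics, descriptive device names
    iostat -xn 1

If actv climbs under load while asvc_t grows roughly in step and r/s + w/s stays flat, that is consistent with the observation above: deeper queues on an HDD mostly buy latency, not IOPS.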
> From: Richard Elling [mailto:richard.ell...@gmail.com]
> Sent: Saturday, June 18, 2011 7:47 PM
>
> Actually, all of the data I've gathered recently shows that the number of
> IOPS does not significantly increase for HDDs running random workloads.
> However the response time does :-(
Could you
On Jun 19, 2011, at 6:28 AM, Edward Ned Harvey wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>> Sent: Saturday, June 18, 2011 7:47 PM
>>
>> Actually, all of the data I've gathered recently shows that the number of
>> IOPS does not significantly increase for HDDs running random
On Jun 19, 2011, at 6:04 AM, Andrew Gabriel wrote:
> Richard Elling wrote:
>> Actually, all of the data I've gathered recently shows that the number of
>> IOPS does not significantly increase for HDDs running random workloads.
>> However the response time does :-( My data is leading me to want to
On Fri, Jun 17, 2011 at 07:41:41AM -0400, Edward Ned Harvey wrote:
> > From: Daniel Carosone [mailto:d...@geek.com.au]
> > Sent: Thursday, June 16, 2011 11:05 PM
> >
> > the [sata] channel is idle, blocked on command completion, while
> > the heads seek.
>
> I'm interested in proving this point.
> From: Richard Elling [mailto:richard.ell...@gmail.com]
> Sent: Sunday, June 19, 2011 11:03 AM
>
> > I was planning, in the near
> > future, to go run iozone on some system with, and without the disk cache
> > enabled according to format -e. If my hypothesis is right, it shouldn't
> > significan
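For reference, toggling the on-disk write cache from format's expert mode looks roughly like this (the cache menu only appears for disks whose driver exposes it, and the exact menu names may vary by release), and a simple iozone run before and after is one way to measure the effect; the file size, record size and path below are only examples:

    # format -e, then select the disk
    format> cache
    cache> write_cache
    write_cache> display
    write_cache> enable        (or disable)

    # example benchmark: 4 GB file, 128 KB records, write/rewrite and read/reread
    iozone -s 4g -r 128k -i 0 -i 1 -f /testpool/iozone.tmp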
On Sun, Jun 19, 2011 at 08:03:25AM -0700, Richard Elling wrote:
> On Jun 19, 2011, at 6:28 AM, Edward Ned Harvey wrote:
> >> From: Richard Elling [mailto:richard.ell...@gmail.com]
> >> Sent: Saturday, June 18, 2011 7:47 PM
> >>
> >> Actually, all of the data I've gathered recently shows that the n
On Jun 20, 2011, at 6:31 AM, Gary Mills wrote:
> On Sun, Jun 19, 2011 at 08:03:25AM -0700, Richard Elling wrote:
>> On Jun 19, 2011, at 6:28 AM, Edward Ned Harvey wrote:
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Saturday, June 18, 2011 7:47 PM
Actually, all
Richard Elling wrote:
On Jun 19, 2011, at 6:04 AM, Andrew Gabriel wrote:
Richard Elling wrote:
Actually, all of the data I've gathered recently shows that the number of IOPS
does not significantly increase for HDDs running random workloads. However the
response time does :-( My data i
For SSDs we have code in illumos that disables disksort. Ultimately, we believe
that the cost of disksort is in the noise for performance.
-- Garrett D'Amore
On Jun 20, 2011, at 8:38 AM, "Andrew Gabriel" wrote:
> Richard Elling wrote:
>> On Jun 19, 2011, at 6:04 AM, Andrew Gabriel wrote:
>>
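On illumos the same thing can also be requested per device through the sd driver, which accepts a disksort property in sd-config-list; the vendor/product string below is a placeholder (the vendor field is padded to 8 characters) and the exact syntax is described in sd(7D):

    # /kernel/drv/sd.conf -- placeholder device string; match your drive's inquiry data
    sd-config-list = "ATA     ExampleSSD", "disksort:false";

After editing, update_drv -vf sd (or a reboot) makes the driver re-read its configuration.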
On Sun, Jun 19, 2011 at 08:03:25AM -0700, Richard Elling wrote:
> Yes. I've been looking at what the value of zfs_vdev_max_pending should be.
> The old value was 35 (a guess, but a really bad guess) and the new value is
> 10 (another guess, but a better guess). I observe that data from a fast,
>
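For anyone experimenting with that value, the tunable can be set persistently in /etc/system or adjusted on a running kernel with mdb; the usual caveats about unsupported tunables apply:

    # /etc/system (takes effect at next boot)
    set zfs:zfs_vdev_max_pending = 10

    # or live, on a running system
    echo zfs_vdev_max_pending/W0t10 | mdb -kw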
On Sun, 19 Jun 2011, Richard Elling wrote:
Yes. I've been looking at what the value of zfs_vdev_max_pending should be.
The old value was 35 (a guess, but a really bad guess) and the new value is
10 (another guess, but a better guess). I observe that data from a fast, modern
I am still using 5
On Jun 21, 2011, at 8:18 AM, Garrett D'Amore wrote:
>>
>> Does that also go through disksort? Disksort doesn't seem to have any
>> concept of priorities (but I haven't looked in detail where it plugs in to
>> the whole framework).
>>
>>> So it might make better sense for ZFS to keep the disk qu
On 2011-06-19 3:47, Richard Elling wrote:
On Jun 16, 2011, at 8:05 PM, Daniel Carosone wrote:
On Thu, Jun 16, 2011 at 10:40:25PM -0400, Edward Ned Harvey wrote:
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Thursday, June 16, 2011 10:27 PM
Is it still the case, as it once was, that alloc
> From: Ross Walker [mailto:rswwal...@gmail.com]
> Sent: Friday, June 17, 2011 9:48 PM
>
> The on-disk buffer is there so data is ready when the hard drive head
> lands,
> without it the drive's average rotational latency will trend higher due to
> missed landings because the data wasn't in buffer a
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> Conclusion: Yes it matters to enable the write_cache.
Now the question of whether or not it matters to use the whole disk versus
partitioning, and how to enable the wr
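The whole-disk half of the question has a fairly simple rule of thumb on Solaris/illumos (device names below are just examples): give zpool the whole disk and ZFS labels it EFI and turns the drive's write cache on itself; give it a slice and ZFS leaves the cache alone, so you would enable it by hand with format -e, and only if every slice on that disk belongs to ZFS.

    # whole disk: ZFS applies an EFI label and enables the write cache itself
    zpool create tank mirror c0t0d0 c0t1d0

    # slices: the write cache is left as-is; enable it manually only if
    # ZFS owns the entire disk
    zpool create tank mirror c0t0d0s0 c0t1d0s0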
On Jul 2, 2011, at 6:39 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> Conclusion: Yes it matters to enable the write_cache.
>
> Now the question of whether or not it matters to us