On Thu, Oct 17, 2013 at 01:47:15AM -0400, Richard Yao wrote:
>
>
> On Oct 16, 2013, at 8:13 PM, Saso Kiselkov wrote:
>
> > On 10/17/13 12:42 AM, Richard Yao wrote:
> >> One of these days, I will learn to include the CC list when using my
> >> iPad. Anyway, I like being able to tell people that
On Oct 16, 2013, at 8:13 PM, Saso Kiselkov wrote:
> On 10/17/13 12:42 AM, Richard Yao wrote:
>> One of these days, I will learn to include the CC list when using my
>> iPad. Anyway, I like being able to tell people that ZFS is self
>> tuning. :)
>
> Self-tuning is good, but you can't possibly
On 10/17/13 12:42 AM, Richard Yao wrote:
> One of these days, I will learn to include the CC list when using my
> iPad. Anyway, I like being able to tell people that ZFS is self
> tuning. :)
Self-tuning is good, but you can't possibly self-tune everything every
time. I consider running a large dat
One of these days, I will learn to include the CC list when using my iPad.
Anyway, I like being able to tell people that ZFS is self tuning. :)
On Oct 16, 2013, at 7:36 PM, Saso Kiselkov wrote:
> On 10/17/13 12:34 AM, Richard Yao wrote:
>> It is worth mentioning that a system with 1TB worth of
On 10/17/13 12:34 AM, Richard Yao wrote:
> It is worth mentioning that a system with 1TB worth of 512-byte ARC buffers
> would again favor an AVL tree implementation unless you again increase the
> size of the hash table even more.
Come on, do you seriously think we should tune for a system so h
It is worth mentioning that a system with 1TB worth of 512-byte ARC buffers
would again favor an AVL tree implementation unless you again increase the size
of the hash table even more.
On Oct 16, 2013, at 5:56 PM, Saso Kiselkov wrote:
> Just to be on the record on the list, I'd like to add one
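[Editor's note: the 512-byte worst case above can be put in rough numbers. This is a back-of-envelope sketch, assuming the table is sized at one bucket per 64 KiB of physical memory, as the sizing heuristic quoted later in the thread describes; the exact constants are illustrative assumptions, not the shipped code.]

```python
# Rough arithmetic behind the 512-byte worst case; the physmem/64k bucket
# sizing rule and all constants here are assumptions for illustration.
import math

ram = 1 << 40                  # 1 TiB of ARC buffers
buf_size = 512                 # smallest possible buffer size
nbufs = ram // buf_size        # 2**31 buffers
nbuckets = ram // (64 * 1024)  # 2**24 buckets under a physmem/64k rule

avg_chain = nbufs // nbuckets               # 128 entries per bucket on average
list_cmps = avg_chain // 2                  # ~64 compares to walk a chain
avl_cmps = math.ceil(math.log2(avg_chain))  # ~7 compares in a balanced tree

print(nbufs, nbuckets, avg_chain, list_cmps, avl_cmps)
```

With chains averaging 128 entries, a per-bucket AVL tree would indeed win; the thread's counter-argument is that real systems never reach this pathological all-512-byte state.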
On Oct 16, 2013, at 1:11 PM, Richard Yao wrote:
> On 10/16/2013 04:01 PM, Prakash Surya wrote:
>> On Wed, Oct 16, 2013 at 11:55:10AM -0700, garrett.dam...@gmail.com wrote:
>>> In a nutshell Richard -- you're looking at this from a computer science
>>> theory perspective, without understanding t
On Wed, Oct 16, 2013 at 10:34:24PM +0100, Saso Kiselkov wrote:
> On 10/16/13 8:57 PM, Prakash Surya wrote:
> > Well, we're just increasing the size of the hash. So, given enough RAM,
> > the problem of having enough buffers in the cache to cause large chains
> > still exists. Right? Although, "enou
Just to be on the record on the list, I'd like to add one more thing I
wrote to Richard which didn't make it to the mailing list (Richard: I
updated some numbers after running some more calculations):
For instance, let's say you're trying to represent 1TB worth of 4k ARC
buffers, i.e. 268435456 bu
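[Editor's note: the 268435456 figure is 1 TiB divided by 4 KiB. A quick sketch of what a bucket-per-buffer table would cost in pointer memory alone; the 8-byte pointer size is an assumption.]

```python
# Reproducing the 268435456 figure and the memory cost of giving every
# buffer its own hash bucket; the pointer size is an assumed 8 bytes.
nbufs = (1 << 40) // 4096       # 1 TiB of 4 KiB ARC buffers
ptr_size = 8                    # 64-bit bucket head pointer
table_bytes = nbufs * ptr_size  # 2 GiB just for the bucket heads

print(nbufs, table_bytes // (1 << 30))
```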
On 10/16/13 8:57 PM, Prakash Surya wrote:
> Well, we're just increasing the size of the hash. So, given enough RAM,
> the problem of having enough buffers in the cache to cause large chains
> still exists. Right? Although, "enough RAM" might not be practically
> achievable.
It would only be a prob
On 10/16/2013 04:13 PM, Prakash Surya wrote:
> Actually, I think I've convinced myself I was wrong here. It should
> scale well with increased amounts of RAM, due to the table's size being
> proportional to the RAM size. It's the case when using more/smaller
> buffers with the same amount of RAM wh
On 10/16/2013 04:06 PM, Prakash Surya wrote:
> On Wed, Oct 16, 2013 at 04:01:16PM -0400, Richard Yao wrote:
>> On 10/16/2013 03:57 PM, Prakash Surya wrote:
>>> On Wed, Oct 16, 2013 at 06:40:12PM +0100, Saso Kiselkov wrote:
On 10/16/13 6:27 PM, Prakash Surya wrote:
> If the completely dynam
On Wed, Oct 16, 2013 at 12:57:18PM -0700, Prakash Surya wrote:
> On Wed, Oct 16, 2013 at 06:40:12PM +0100, Saso Kiselkov wrote:
> > On 10/16/13 6:27 PM, Prakash Surya wrote:
> > > On Wed, Oct 16, 2013 at 02:57:08AM +0100, Saso Kiselkov wrote:
> > >> On 10/16/13 2:42 AM, Prakash Surya wrote:
> > >>>
On 10/16/2013 04:01 PM, Prakash Surya wrote:
> On Wed, Oct 16, 2013 at 11:55:10AM -0700, garrett.dam...@gmail.com wrote:
>> In a nutshell Richard -- you're looking at this from a computer science
>> theory perspective, without understanding the pragmatic issues. The
>> pragmatic engineering solu
On Wed, Oct 16, 2013 at 04:01:16PM -0400, Richard Yao wrote:
> On 10/16/2013 03:57 PM, Prakash Surya wrote:
> > On Wed, Oct 16, 2013 at 06:40:12PM +0100, Saso Kiselkov wrote:
> >> On 10/16/13 6:27 PM, Prakash Surya wrote:
> >>> If the completely dynamic approach isn't tractable, why split the table
On 10/16/2013 03:57 PM, Prakash Surya wrote:
> On Wed, Oct 16, 2013 at 06:40:12PM +0100, Saso Kiselkov wrote:
>> On 10/16/13 6:27 PM, Prakash Surya wrote:
>>> If the completely dynamic approach isn't tractable, why split the table
>>> into a 2D array? Why not just increase the size of it, and keep
On Wed, Oct 16, 2013 at 11:55:10AM -0700, garrett.dam...@gmail.com wrote:
> In a nutshell Richard -- you're looking at this from a computer science
> theory perspective, without understanding the pragmatic issues. The
> pragmatic engineering solution is to keep the chains small, and optimize a
On Wed, Oct 16, 2013 at 06:40:12PM +0100, Saso Kiselkov wrote:
> On 10/16/13 6:27 PM, Prakash Surya wrote:
> > On Wed, Oct 16, 2013 at 02:57:08AM +0100, Saso Kiselkov wrote:
> >> On 10/16/13 2:42 AM, Prakash Surya wrote:
> >>> OK, that is where I assumed the speed up was coming from (shorter
> >>>
On 16 October, 2013 - garrett.dam...@gmail.com sent me these 9,3K bytes:
> Almost nobody will ever need to tune these hash sizes. I'd argue that
> we could auto tune on boot based on RAM size -- that would be a good
> solution. (Not very many people have systems that allow dynamic RAM
> addition
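[Editor's note: the boot-time auto-tuning suggested here could look something like the following. This is a hypothetical sketch, not the shipped code; the 64 KiB average-buffer constant and the round-up-to-power-of-two rule are assumptions.]

```python
def arc_hash_buckets(physmem_bytes, avg_buf_size=64 * 1024):
    """Pick a power-of-two bucket count once, at boot, from RAM size.

    Hypothetical sketch of the boot-time auto-tuning suggested above;
    avg_buf_size is an assumed placeholder constant.
    """
    target = max(1, physmem_bytes // avg_buf_size)
    nbuckets = 1
    while nbuckets < target:   # round up to the next power of two
        nbuckets <<= 1
    return nbuckets

print(arc_hash_buckets(16 << 30))   # 16 GiB of RAM -> 262144 buckets
```

Since the table is sized once from a quantity fixed at boot, no resize locking is needed later, which is the pragmatic appeal of this approach.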
On Oct 16, 2013, at 2:55 PM, garrett.dam...@gmail.com wrote:
> In a nutshell Richard -- you're looking at this from a computer science
> theory perspective, without understanding the pragmatic issues. The
> pragmatic engineering solution is to keep the chains small, and optimize a
> solution
In a nutshell Richard -- you're looking at this from a computer science theory
perspective, without understanding the pragmatic issues. The pragmatic
engineering solution is to keep the chains small, and optimize a solution for
small chains. That's what Sašo has done. This is the difference
On Oct 16, 2013, at 3:25 AM, Saso Kiselkov wrote:
> On 10/16/13 6:04 AM, sanjeev bagewadi wrote:
>> Joining in late into this discussion
>>
>> I would tend to agree with Richard. I think it is worth trying out AVL
>> for the entries in a bucket.
>> That should make the search O(log (n)) ins
On 10/16/2013 01:27 PM, Prakash Surya wrote:
> On Wed, Oct 16, 2013 at 02:57:08AM +0100, Saso Kiselkov wrote:
>> On 10/16/13 2:42 AM, Prakash Surya wrote:
>>> OK, that is where I assumed the speed up was coming from (shorter
>>> chains leading to faster lookups).
>>>
>>> I also assumed there would b
On 10/16/13 6:27 PM, Prakash Surya wrote:
> On Wed, Oct 16, 2013 at 02:57:08AM +0100, Saso Kiselkov wrote:
>> On 10/16/13 2:42 AM, Prakash Surya wrote:
>>> OK, that is where I assumed the speed up was coming from (shorter
>>> chains leading to faster lookups).
>>>
>>> I also assumed there would be
On Wed, Oct 16, 2013 at 02:57:08AM +0100, Saso Kiselkov wrote:
> On 10/16/13 2:42 AM, Prakash Surya wrote:
> > OK, that is where I assumed the speed up was coming from (shorter
> > chains leading to faster lookups).
> >
> > I also assumed there would be a "bucket lock" that needs to be acquired
>
On 10/16/13 6:04 AM, sanjeev bagewadi wrote:
> Joining in late into this discussion
>
> I would tend to agree with Richard. I think it is worth trying out AVL
> for the entries in a bucket.
> That should make the search O(log (n)) instead of O(n) where 'n' is the
> chain length.
Not quite. Fo
On 10/16/13 4:30 AM, Richard Yao wrote:
>
>
> On Oct 15, 2013, at 9:57 PM, Saso Kiselkov wrote:
>
>> On 10/16/13 2:42 AM, Prakash Surya wrote:
>>>
>>> So, if this simply comes down to a hash collision issue, can't we try
>>> and take this a bit further.. Can we make the hash size be completely
Joining in late into this discussion
I would tend to agree with Richard. I think it is worth trying out AVL for
the entries in a bucket.
That should make the search O(log (n)) instead of O(n) where 'n' is the
chain length.
Thanks and regards,
Sanjeev
On Wed, Oct 16, 2013 at 9:00 AM, Richar
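[Editor's note: the per-bucket AVL idea trades an O(n) chain walk for an O(log n) tree search. A small simulation of bucket occupancy, with arbitrary assumed parameters, shows how the two costs compare at a given load factor.]

```python
# Simulated bucket occupancy at load factor 4; the bucket count, item
# count, and seed are arbitrary assumptions for illustration only.
import math
import random

random.seed(1)
nbuckets = 1 << 16
nitems = nbuckets * 4            # load factor of 4

chains = [0] * nbuckets
for _ in range(nitems):
    chains[random.randrange(nbuckets)] += 1

longest = max(chains)
# A list walk costs up to `longest` compares; an AVL search in the same
# bucket costs about log2(longest).
print(longest, math.ceil(math.log2(longest)))
```

At small load factors the longest chain stays in the low teens, so the constant-factor overhead of tree rebalancing can outweigh the asymptotic win, which is the counter-argument made elsewhere in this thread.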
On Oct 15, 2013, at 9:57 PM, Saso Kiselkov wrote:
> On 10/16/13 2:42 AM, Prakash Surya wrote:
>>
>> So, if this simply comes down to a hash collision issue, can't we try
>> and take this a bit further.. Can we make the hash size be completely
>> dynamic? Instead of using another heuristic, can
On 10/16/13 2:42 AM, Prakash Surya wrote:
> OK, that is where I assumed the speed up was coming from (shorter
> chains leading to faster lookups).
>
> I also assumed there would be a "bucket lock" that needs to be acquired
> during this lookup similar to the dbuf hash (which would affect
> concurr
On Tue, Oct 15, 2013 at 10:47:22PM +0100, Saso Kiselkov wrote:
> On 10/15/13 10:26 PM, Prakash Surya wrote:
> > On Sun, Oct 13, 2013 at 12:21:05AM +0100, Saso Kiselkov wrote:
> >> The performance gains from this are pretty substantial in my testing.
> >> The time needed to rebuild 93 GB worth of AR
On 10/15/13 10:26 PM, Prakash Surya wrote:
> On Sun, Oct 13, 2013 at 12:21:05AM +0100, Saso Kiselkov wrote:
>> The performance gains from this are pretty substantial in my testing.
>> The time needed to rebuild 93 GB worth of ARC buffers consisting of a
>> mixture of 128k and 8k blocks (total numbe
On Sun, Oct 13, 2013 at 12:21:05AM +0100, Saso Kiselkov wrote:
> The performance gains from this are pretty substantial in my testing.
> The time needed to rebuild 93 GB worth of ARC buffers consisting of a
> mixture of 128k and 8k blocks (total number of ARC buffers: 6.4 million,
> or ~15.4k avera
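[Editor's note: the quoted average buffer size follows from the other figures in the same message. A quick consistency check; whether "93 GB" means binary or decimal gigabytes is an assumption, so only rough agreement with the quoted ~15.4k is expected.]

```python
# Sanity-checking the quoted average buffer size from the quoted totals;
# binary-gigabyte interpretation of "93 GB" is an assumption.
total_bytes = 93 * (1 << 30)   # 93 GiB of ARC buffers
nbufs = 6_400_000              # "6.4 million" buffers
avg = total_bytes / nbufs
print(round(avg))              # ~15.6 KB, near the quoted ~15.4k given rounding
```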
On 10/15/13 3:34 AM, Richard Yao wrote:
> On 10/12/2013 07:21 PM, Saso Kiselkov wrote:
>> The current implementation is hardcoded guesswork as to the correct hash
>> table size. In principle, it works by taking the amount of physical
>> memory and dividing it by a 64k block size. The result is the