Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> I believe Itagaki-san's motivation for tackling this in the LDC patch 
> was the fact that it can fsync the same file many times, and in the 
> worst case get into an endless loop, and adding delays inside the loop 
> makes it much more likely. After that is fixed, I doubt any of the 
> optimizations of trying to avoid extra fsyncs make any difference in 
> real applications, and we should just keep it simple, especially if we 
> back-patch it.

I looked at the dynahash code and noticed that new entries are attached
to the *end* of their hashtable chain.  While this should perhaps be
changed to link them at the front, the implication at the moment is that
without a cycle counter it would still be possible to loop indefinitely
because we'd continue to revisit the same file(s) after removing their
hashtable entries.  I think you'd need a constant stream of requests for
more than one file falling into the same hash chain, but it certainly
seems like a potential risk.  I'd prefer a solution that adheres to the
dynahash API's statement that it's unspecified whether newly-added
entries will be visited by hash_seq_search, and will in fact not loop
even if they always are visited.
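To illustrate the termination argument: if each entry is stamped with the
number of the scan cycle in which it was inserted, the scan can skip any
entry stamped with the current cycle, so requests that arrive mid-scan are
simply deferred to the next pass.  Here is a minimal sketch of that idea in
plain C, with a fixed-size array standing in for the dynahash table; all
names (`PendingEntry`, `remember_request`, `sync_all`) are invented for
illustration and are not PostgreSQL source.

```c
/* Sketch only: a cycle counter makes a scan over a mutable table
 * terminate even if entries added during the scan are always visited. */
#include <assert.h>

#define MAX_ENTRIES 16

typedef struct
{
    int  file_id;
    int  cycle_ctr;     /* scan cycle in which this entry was inserted */
    int  in_use;
} PendingEntry;

static PendingEntry table[MAX_ENTRIES];
static int scan_cycle = 0;

/* Record a pending fsync request (first free slot; no-op if full). */
static void
remember_request(int file_id)
{
    for (int i = 0; i < MAX_ENTRIES; i++)
    {
        if (!table[i].in_use)
        {
            table[i].in_use = 1;
            table[i].file_id = file_id;
            table[i].cycle_ctr = scan_cycle;  /* stamp with current cycle */
            return;
        }
    }
}

/* Process all entries that existed when the scan began; returns how many. */
static int
sync_all(void)
{
    int processed = 0;

    scan_cycle++;   /* anything remembered from here on gets the new stamp */

    for (int i = 0; i < MAX_ENTRIES; i++)
    {
        if (!table[i].in_use)
            continue;
        if (table[i].cycle_ctr >= scan_cycle)
            continue;   /* added during this scan: defer to the next one */

        /* "fsync" the file; simulate a new request arriving concurrently */
        remember_request(table[i].file_id);

        table[i].in_use = 0;
        processed++;
    }
    return processed;
}
```

Each call to `sync_all` processes exactly the entries that predate it, no
matter how many requests stream in while it runs, so the endless-loop risk
goes away without any assumption about where new entries land in the chain.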

> That said, I'm getting tired of this piece of code :).

Me too.

                        regards, tom lane
