To be quite clear: I see the danger of hash collisions and also
understand the risk of higher CPU consumption when hashes collide,
since behind each hash value there is a linear list in the bucket.

But since "hashofheaders" currently has only 32 buckets
(i.e. a 5-bit hash), with a large number of headers we will get many
collisions anyway. So I don't quite see the point. Where is my
thinking wrong?
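
For illustration, here is a minimal sketch of a 32-bucket chained
hash table (hypothetical names, not tinyproxy's actual code). No
matter how good the hash function is, with only 32 buckets a lookup
still walks a linear chain of roughly N/32 entries once N headers
have been inserted:

    /* Sketch only: 32-bucket chained hash table, linear chains. */
    #include <stdlib.h>
    #include <string.h>

    #define NBUCKETS 32

    struct entry {
            char *key;
            char *value;
            struct entry *next;   /* linear chain within one bucket */
    };

    static struct entry *buckets[NBUCKETS];

    static unsigned int hash5(const char *s)
    {
            unsigned int h = 0;
            while (*s)
                    h = h * 31 + (unsigned char)*s++;
            return h % NBUCKETS;  /* only 32 distinct values */
    }

    static void insert(const char *key, const char *value)
    {
            unsigned int b = hash5(key);
            struct entry *e = malloc(sizeof(*e));
            e->key = strdup(key);
            e->value = strdup(value);
            e->next = buckets[b];     /* prepend to the chain */
            buckets[b] = e;
    }

    static const char *lookup(const char *key)
    {
            struct entry *e = buckets[hash5(key)];
            /* worst case: every colliding header is compared one by one */
            for (; e != NULL; e = e->next)
                    if (strcmp(e->key, key) == 0)
                            return e->value;
            return NULL;
    }

    int main(void)
    {
            insert("Host", "example.com");
            insert("Connection", "close");
            return lookup("Host") ? 0 : 1;
    }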

Maybe we should use a tree implementation instead, e.g. an rbtree.
That would keep lookups fast even with a huge number of headers.
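
As a rough sketch of what that could look like (assuming we may use
the POSIX tsearch()/tfind() interface from <search.h>, which glibc
implements with a red-black tree; the names below are made up for
illustration, not tinyproxy code):

    /* Sketch: headers stored in a balanced binary tree via tsearch(). */
    #include <search.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <strings.h>

    struct header {
            char *key;
            char *value;
    };

    static int cmp_header(const void *a, const void *b)
    {
            const struct header *ha = a, *hb = b;
            return strcasecmp(ha->key, hb->key); /* header names are case-insensitive */
    }

    static void *root;   /* tree root, managed by tsearch() */

    static void add_header(const char *key, const char *value)
    {
            struct header *h = malloc(sizeof(*h));
            h->key = strdup(key);
            h->value = strdup(value);
            tsearch(h, &root, cmp_header);
    }

    static const char *get_header(const char *key)
    {
            struct header probe = { .key = (char *)key };
            void *node = tfind(&probe, &root, cmp_header);
            if (node == NULL)
                    return NULL;
            return (*(struct header **)node)->value;
    }

    int main(void)
    {
            add_header("Host", "example.com");
            add_header("Content-Length", "42");
            printf("%s\n", get_header("host"));
            return 0;
    }

With something like that, insert and lookup stay O(log n) no matter
how many headers a malicious response sends.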

Cheers - Michael

https://bugs.launchpad.net/bugs/1036985

Title:
  denial of service of too many headers in response

