On Thu, 14 Oct 2010 06:04:24 -0400, Stewart Gordon <smjg_1...@yahoo.com> wrote:

> On 13/10/2010 21:11, Steven Schveighoffer wrote:
> <snip>
>> No, what is suggested is that we build the complexity guarantees into
>> the interface. For example, if you had a List interface, and it could
>> only guarantee O(n) performance for get, you'd call it e.g. slowGet(),
>> to signify it's a slow operation, and define get to signify a fast
>> operation. Similarly, 'in' is defined to be fast, so it has no business
>> being a function on an array.
> <snip>

> And make get assert(false) if a fast get isn't implementable? I think
> this is bug-prone: somebody may use it, forgetting that it won't work
> on all List implementations, and then the algorithm will refuse at
> runtime to work at all rather than degrading gracefully to something
> that does.
>
> But something else that would be useful is methods in the interface to
> tell which operations are fast and which are slow.

No, not really. By "define get to be a fast operation" I meant defining the term "get" itself to mean a fast operation, not declaring get on the List interface and letting each implementation provide whichever it wants. You'd define only slowGet as a method on the List interface, so compile-time checks that require fast operations wouldn't be able to use List. If it's acceptable for a compile-time-checked function to use slowGet, then it could use either.
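
A rough sketch of what I mean, in D (the names List, slowGet, ArrayList and middle here are illustrative, not dcollections' actual API):

interface List(T)
{
    // O(n) access: the "slow" prefix advertises the cost up front
    T slowGet(size_t idx);
}

class ArrayList(T) : List!T
{
    private T[] data;

    // satisfies the interface; happens to be O(1) here anyway
    T slowGet(size_t idx) { return data[idx]; }

    // *not* part of List: the bare name "get" promises a fast operation
    T get(size_t idx) { return data[idx]; }
}

// compile-time checked: the constraint requires a fast get, so a plain
// List!T won't compile here, but ArrayList!T (or any type providing a
// fast get) will
auto middle(L)(L list, size_t len)
    if (is(typeof(list.get(size_t.init))))
{
    return list.get(len / 2);
}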

Essentially, in your interface you encode the operation's complexity in its name, and avoid terms that imply a fast operation. This allows both compile-time interfaces and runtime interfaces.
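
And a runtime consumer that is happy with O(n) access just takes the interface; the slow name keeps the cost visible at the call site (again an illustrative sketch, continuing the one above):

bool containsValue(T)(List!T list, size_t len, T needle)
{
    // calling slowGet in a loop: the name makes the O(n)-per-lookup
    // cost obvious right where it's paid
    foreach (i; 0 .. len)
        if (list.slowGet(i) == needle)
            return true;
    return false;
}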

For example, dcollections' List interface doesn't even define something like get. Why? Because Lists aren't meant to be storage containers for quick access to elements. It used to have a contains method, but I eliminated that because the value-to-cost ratio was too low.

-Steve
