On Thu, 19 Nov 2009 18:33:22 +0300, gzp <ga...@freemail.hu> wrote:
Bartosz Milewski wrote:
dsimcha Wrote:
The one thing that I think has been missing from this discussion is:
what would be the alternative if we didn't have this "non-deterministic"
reallocation? How else could you **efficiently** implement dynamic arrays?
In the long run (D3), I proposed using the "unique" type modifier. If
an array is unique, the compiler knows that there are no slices to
worry about, and it can use in-place reallocation to its heart's
content. That pretty much solves the performance problem. In the short
run (D2), I would suggest sticking to "reallocate on every extension"
semantics (especially in SafeD) and provide a library solution (a la
C++ std::vector) where the performance of appending is an issue.
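The library solution Bartosz alludes to can be sketched roughly as below. This is only an illustration under assumptions: the type name Vector and its layout are invented here, not anything Bartosz specified (std.array.Appender in Phobos serves a broadly similar purpose). The point is that the container owns its storage, so growth policy is explicit rather than decided by the runtime:

```d
// Hypothetical sketch of a library vector type, a la C++ std::vector.
// Names and growth policy are invented for illustration.
struct Vector(T)
{
    private T[] data;    // backing storage; never exposed for appending
    private size_t len;  // number of elements actually in use

    // Append with explicit amortized growth, owned by this container.
    void opOpAssign(string op : "~")(T value)
    {
        if (len == data.length)
            data.length = data.length ? data.length * 2 : 4;
        data[len++] = value;
    }

    // Hand out a view; resizing the Vector is what invalidates it,
    // exactly as with std::vector and its iterators.
    inout(T)[] opSlice() inout { return data[0 .. len]; }
}
```

With such a type, the invalidation rule is the familiar C++ one: views obtained from opSlice are valid until the next operation that may reallocate.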
It seems we cannot persuade Andrei and Walter that this is a
really, really bad and dangerous design. Everybody knows what is
going on when an int[] "array" is resized. It doesn't really matter
what we call it: indeterminism, undefined behaviour, implementation-
dependent behaviour. We all know it is a source of hard-to-detect,
dangerous bugs.
As I've said before, it's worse than a buffer overrun, because those
can be detected using memory patterns and memory-guarding techniques.
This bug cannot be detected; it just occurs once in a while (like a
parallel program missing the proper critical sections). It's hard
enough to write good (parallel) programs as it is, so don't make the
work harder by laying such a trap.
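The hazard gzp describes can be shown in a few lines of D. A minimal sketch under the append semantics being debated in this 2009 thread (later D2 runtimes added stomping checks that change this behaviour):

```d
// Demonstrates the aliasing hazard of appending to a slice.
// Behaviour here is the nondeterminism the post complains about.
void main()
{
    int[] a = [1, 2, 3, 4];
    int[] b = a[0 .. 2]; // b shares memory with a

    b ~= 99;             // may write in place, overwriting a[2],
                         // or may reallocate b elsewhere -- it
                         // depends on the capacity the runtime
                         // happens to find in the block

    // Whether a[2] is still 3 at this point is exactly the
    // "occurs once in a while" bug described above.
}
```

Nothing in the source marks the append as potentially destructive, which is why the post compares it to a missing critical section rather than to an ordinary buffer overrun.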
So please take a look at the language with fresh eyes. I know
you've worked a lot on it, but I'm afraid such a mistake could ruin the
whole language. Sorry. Or at least consider why there are so many
posts about this feature.
int[] is not a range; it is a range + random_copy_on_resize (replace the
word "random" with any word of your choice, it doesn't matter).
I think we can agree that passing an "array" int[] as a (non-const)
value type should be strongly avoided. Then why do we allow it?
With std::vector + iterator you always know when an iterator becomes
invalid (when the vector is resized). But here you don't have an object
whose modification (for certain) invalidates the ranges. I miss having
a reference to the actual array.
The exact semantics are not so important to me (for now), but
keep the two objects distinct:
1. an array that owns the elements + structure and can be
resized: RandomAccessArray!(int)
2. slices (ranges) of an array, through which only the elements, not
the structure, can be altered: Range!(int)
3. decide whether int[] is shorthand for either
RandomAccessArray!(int) or Range!(int), but DO NOT give it a mixed meaning.
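The split gzp proposes could look roughly like this. The type names follow the post; the bodies are invented here purely to illustrate which operations each type would permit:

```d
// Hedged sketch of the owner/view split from the list above.
// Only the names RandomAccessArray and Range come from the post.
struct RandomAccessArray(T)
{
    private T[] payload;

    // Structural changes live only here, on the owning type.
    void resize(size_t n) { payload.length = n; }

    // Hand out a non-owning view over the current storage.
    Range!T all() { return Range!T(payload); }
}

struct Range(T)
{
    private T[] view;
    this(T[] v) { view = v; }

    // Elements may be read and written, but the structure cannot
    // be altered through a Range: no append, no resize.
    ref T opIndex(size_t i) { return view[i]; }
    @property size_t length() const { return view.length; }
}
```

Under this scheme, only a resize on the RandomAccessArray can invalidate a Range, which restores the std::vector-style rule the post asks for.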
Gzp
Same here. FWIW, I strongly believe the demise of T[new] was a step in the
wrong direction, and I feel highly frustrated about it. It was one of the
most anticipated features, asked for over *years*! And it's gone just when
it was so close to being implemented...
The only more or less reasonable answer as to why T[new] "sucked" was:
On Mon, 19 Oct 2009 01:55:28 +0400, Andrei Alexandrescu
<seewebsiteforem...@erdani.org> wrote:
Returning a distinct type from .dup and ~ makes slices not closed over
these operations, a source of complication, confusion, and bloating.
I see no problem returning T[] when a slice is dup'ed or concatenated.
That's what has always happened anyway.