Right, maybe my explanation was unclear, so I'll try to restate it (and
ask other people to check my math).

On 2/10/07, Leonard Mada <[EMAIL PROTECTED]> wrote:
>
> So, even in the worst scenario, this algorithm should work faster than
> n* log(n). Also, (n/2 -1)! is worst case, when all residual elements
> would fall inside the sorted array, so we can NOT drop them. So,
No, log((n/2 -1)!) is the worst case for finding where to insert the
elements. It is not the cost of actually inserting them.
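For what it's worth, here is how I read that term (my own sketch,
assuming the log((n/2 -1)!) comes from binary-searching each of the
~n/2 insertion points; please check the math):

\sum_{i=1}^{n/2} \log i = \log((n/2)!) = \Theta(n \log n)

by Stirling's approximation, \log(m!) = m \log m - m + O(\log m). So the
searches alone already cost Theta(n log n); the actual inserts come on
top of that.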

In order to insert an element into the middle of an N-sized array, you
must move about N/2 elements. Are you assuming that moving N/2 elements
can be done at the same cost as moving 1 element?
This might be possible, but it means you need a second N-sized array as
scratch space.
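To make that concrete, here is a minimal C sketch of an in-place insert
(a hypothetical helper, not actual Gnumeric code); the memmove is where
the linear cost hides:

#include <string.h>

/* Hypothetical helper: insert `value' at index `pos' of a sorted
   array currently holding `len' doubles (capacity >= len + 1).
   The memmove shifts (len - pos) elements one slot to the right,
   so a single insert costs O(len - pos), not O(1).  */
static void
insert_at (double *a, size_t len, size_t pos, double value)
{
        memmove (a + pos + 1, a + pos,
                 (len - pos) * sizeof (double));
        a[pos] = value;
}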

> T(n) <
> O(n log(n)), even IF I penalize for inserting one value into the sorted
> array as there would be maximum n/2 inserts and that should cancel out
> with (-n/2) [or add another n/2].
Again, this assumes that you can shift N elements of an array at a cost
of 1. Are you convinced that can be done?

> Also note, that as x[0] and x[last]
> are dropped, some of the new x[i] will be either < x[1], so replacing
> x[0] (NO or minimal cost) or > x[last -1], so replacing x[last], without
> cost.
Wrong assumption - in a worst-case analysis you never assume a gain
from operations that cost less than the worst case.

> Because the insert would mean
> moving a contiguous memory segment (NO need to create new array, because
> array is shrinking: insert 1, drop another 2 => array still shrink by
> one), this operation would be very fast.
Yeah, your basic assumption is that moving a contiguous memory segment
costs O(1). If the cost is proportional to the size of the segment (N),
then the repeated moves add up to O(N*N).
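Spelled out under that cost model (my arithmetic, please double-check):
with roughly n/2 inserts into an array of size Theta(n), each shifting
Theta(n) elements,

\sum_{i=1}^{n/2} \Theta(n) = \Theta(n^2),

so the shifting dominates the O(n log n) spent on the binary searches.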

Yours,

Uri David