2008/12/27 J. Andrew Rogers <and...@ceruleansystems.com>:
>
> I think many people greatly underestimate how many gaping algorithm holes
> there are in computer science for even the most important and mundane tasks.
> The algorithm coverage of computer science is woefully incomplete,

Is it? In all my time as a programmer, it's never occurred to me to
think "I wish there were an algorithm to do X". Maybe that's just me.
And there are vast numbers of useful algorithms that people use every
day.

> which is
> why after a half century people are *still* finding general, elegant
> algorithms for basic problems, many of which are bloody obvious in
> hindsight.

I wonder (thinking out loud here): are there any statistics for this?
For example, if you plot the number of such algorithms that've been
found over time, what sort of curve would you get? (Of course, you'd
have to define "general, elegant algorithm for basic problem", which
might be tricky.)

> In short, we have no idea what important and fundamental
> algorithms will be discovered from one year to the next that change the
> boundaries of what is practically possible with computer science.

Is this true? It doesn't seem right to me. AIUI the current state of
the art in operating systems, compilers, garbage collectors, etc. is
only slightly more efficient than it was 10 or 20 years ago. (In fact,
most practical programs are a good deal less efficient, because faster
processors mean they don't have to be.)

> For example, there is no general indexing algorithm described in computer
> science.  In fact, the only useful indexing algorithms index points on a
> line. Not points in arbitrary space, not intervals on lines, not
> hyper-rectangles in high-dimensionality space, never mind more complex
> relations.  Oddly enough, most computer scientists are ignorant of the fact
> that no useful indexing algorithm exists for most data representations or
> that a vast number of software applications are not tractably implementable
> as a result.

I don't think I understand you. To me, "indexing" means what the Google
search engine or an SQL database does -- but you're using the word
with a different meaning, aren't you?
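
Though now that I write that down, my mental model of an index is
essentially "keep the keys in sorted order" -- which only works
because points on a line have a total order. A quick Python sketch of
what I mean (my own illustration, not your definitions):

    import bisect

    # Points on a line can be totally ordered, so a range query
    # [lo, hi] is just two binary searches plus a slice.
    points = sorted([3, 7, 11, 19, 23, 42])

    def range_query(lo, hi):
        left = bisect.bisect_left(points, lo)
        right = bisect.bisect_right(points, hi)
        return points[left:right]

    print(range_query(5, 25))  # -> [7, 11, 19, 23]

Intervals and rectangles don't have an obvious total order, so I can
see how that trick fails to carry over -- is that the gap you're
talking about?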

> The ability to tractably index almost nothing has consequences. Relational
> database theory describes the manipulation of hyper-rectangles, but we fake
> it very badly with indexes we actually have algorithms for.

Sorry, you've lost me again -- I've never heard of the term
"hyper-rectangles" in relation to relational databases.

> Did you ever
> wonder why no one has built a massively distributed SQL database despite the
> obvious value?

It's not something that has ever occurred to me, no. (If I were going
to build a distributed database, it wouldn't be SQL or, for that
matter, relational -- as it happens, I do have an ongoing project to
build an OO programming language that comes with an OO database.)

> It is not because it is theoretically impossible, but
> because it is only possible if someone discovers a general algorithm for
> indexing hyper-rectangles -- faking it is not distributable.

How do we know that there is such an algorithm?

> Several other
> big limitations in software are actually based in (the absence of) this
> algorithm.  It is utterly trivial to describe, and there are literally
> several dozen algorithms that come close, but after 40 years no one has
> published such an algorithm.  When such an algorithm is finally published,
> it will completely reset everything we think we know about many algorithms
> and data structures.

That's something I didn't know. (I probably don't know as much as I
ought to about fundamental computer science).

> Spatial indexing, for example, currently uses "insanely, infeasibly much
> computation resource", so no one implements it beyond uselessly trivial
> systems.  But as most people familiar with the minutiae of the related
> theoretical computer science will tell you, not only is it very probable
> that a broadly general algorithm exists, but it will almost certainly scale
> like Google does.  We will go from "intractable" to "insanely cheap" in one
> day.

Well, that'll be cool if it happens.
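
In the meantime, I'd guess the "several dozen algorithms that come
close" are things like this toy grid index for 2-D points (entirely my
own sketch; the cell size and coordinates are invented):

    from collections import defaultdict

    CELL = 10.0               # grid cell size (arbitrary choice)
    grid = defaultdict(list)  # cell coordinates -> points in that cell

    def insert(x, y):
        grid[(int(x // CELL), int(y // CELL))].append((x, y))

    def query(xlo, ylo, xhi, yhi):
        # Visit only the cells overlapping the query box, then do an
        # exact check on the points in each.
        hits = []
        for cx in range(int(xlo // CELL), int(xhi // CELL) + 1):
            for cy in range(int(ylo // CELL), int(yhi // CELL) + 1):
                for (x, y) in grid[(cx, cy)]:
                    if xlo <= x <= xhi and ylo <= y <= yhi:
                        hits.append((x, y))
        return hits

    insert(3, 4); insert(15, 22); insert(48, 9)
    print(query(0, 0, 20, 25))  # -> [(3, 4), (15, 22)]

It falls over as soon as the points clump into a few cells or the
dimensionality goes up, so I can believe the general problem is much
harder than the toy.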

>> An AGI written by humans would hopefully be a lot more nicely
>> structured than this, but I think it would still consist of large
>> number of modules, none of which was intelligent in itself. How big
>> would it be? The human genome is 750 MB so intelligence could
>> presumably be coded in less than that. I'd guess an AGI could be
>> written in about a tenth that, say 75 MB.
>
> The human genome size has no meaningful relationship to the complexity of
> coding AGI.

Yes, it does -- it's an existence proof that it's possible to do it in 750 MB.

> And what ever happened to Machine is Software is Data?

Sorry, I don't understand you.

-- 
Philip Hunt, <cabala...@googlemail.com>
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html

