Hey, look what my alma mater is up to. The Humanities and Social Sciences
department, no less. It was common for undergrads to be in economics
experiments, though, and this 'test' looks pretty similar. No hard language stuff.
http://turing.ssel.caltech.edu/
-xx- Damien X-)
On Sun, Jan 12, 2003 at 03:25:08PM -0500, Ben Goertzel wrote:
> It is clear that traditional formal computing theory, though in principle
> *applicable* to AGI programs, is pretty much useless to the AGI theorist...
Is this anything unique to AGI? Does computing theory have much relevance for
Li
On Sun, Jan 12, 2003 at 11:05:15AM -0500, Pei Wang wrote:
> another related topic: the final state. In my paper I said that my system is
> not a TM, also because it doesn't have a set of predetermined final states
A system using S and K combinators isn't a TM at all; totally different
mechanism o
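A minimal sketch of the point, with S and K written out as ordinary curried
functions (Python here purely for illustration; this is my own toy example,
not anything from the systems being discussed):

    # S and K as curried higher-order functions.
    S = lambda x: lambda y: lambda z: x(z)(y(z))   # S x y z = x z (y z)
    K = lambda x: lambda y: x                      # K x y = x

    # The identity combinator is derivable from S and K alone: I = S K K.
    I = S(K)(K)
    assert I(42) == 42

    # "Running" a term is just nested application: there is no tape,
    # no head, and no distinguished machine state to halt in.
    assert K(1)(2) == 1

A combinator term is finished when it reaches normal form, not when some
machine enters a designated final state, which is why the final-state framing
doesn't map onto it directly.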
On Sun, Jan 12, 2003 at 10:37:13AM -0500, Pei Wang wrote:
> See my replies to Ben. As soon as the final answer (not intermediate
> answer) depends on internal state, we are not talking about the same Turing
> Machine anymore. Of course you can build a theory in this way, but it is
> already not
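For reference, the textbook formulation being invoked (my gloss, not from the
thread): a Turing machine is a 7-tuple

    M = (Q, \Gamma, b, \Sigma, \delta, q_0, F)

where Q is a finite set of states, \Gamma the tape alphabet containing the
blank b, \Sigma \subseteq \Gamma \setminus \{b\} the input alphabet,
\delta : (Q \setminus F) \times \Gamma \to Q \times \Gamma \times \{L, R\}
the transition function, q_0 \in Q the start state, and F \subseteq Q the set
of final (halting) states. F is fixed before the machine ever runs; that
fixed set is the "predetermined final states" being contrasted here.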
On Sun, Jan 12, 2003 at 09:38:26AM -0500, Ben Goertzel wrote:
> To me, the question of what a computational model can do with moderately
> small space and time resource constraints is at least equally "fundamental"
Computability theory: can this be computed?
Complexity theory: does it take polynomial time?
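A toy illustration of the split (Python, my own example, not from the thread):
subset sum is decidable but the obvious decider is exponential, while the
halting problem isn't decidable at all.

    from itertools import combinations

    # Complexity question: this always terminates, so subset sum is
    # computable, but it checks up to 2^n subsets of the input.
    def subset_sum(nums, target):
        return any(sum(combo) == target
                   for r in range(len(nums) + 1)
                   for combo in combinations(nums, r))

    # Computability question: no analogous halts(program, input) can be
    # written at all -- a program that loops exactly when halts() says it
    # halts, run on itself, contradicts any such decider (Turing, 1936).

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5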
On Fri, Jan 10, 2003 at 07:44:36PM -0800, Alan Grimes wrote:
> The User is PO'd about all these programs that write configurations to
> his ~/ directory. He wants _ALL_ configurations to be stored in
> ~/configuration/app (or in some equivalent system) so that he can easily
> keep track of them. H
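For what it's worth, what's being asked for is close to what the XDG Base
Directory convention does with ~/.config. A rough sketch (Python, my own
illustration; "someapp" is just a placeholder name) of how a program could
resolve one per-app directory instead of dropping dotfiles into ~/:

    import os
    from pathlib import Path

    def config_dir(app_name: str) -> Path:
        # Use the user's override if set, otherwise the conventional
        # ~/.config, so per-app files live under one tree instead of
        # being scattered across the home directory.
        base = Path(os.environ.get("XDG_CONFIG_HOME",
                                   Path.home() / ".config"))
        return base / app_name

    print(config_dir("someapp"))   # e.g. /home/you/.config/someapp

Everything an app writes then lands under a single tree the user can inspect
or back up in one place.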
On Fri, Jan 10, 2003 at 04:09:08PM -0800, James Rogers wrote:
> There really isn't any other way to put this, but it is painfully
> obvious that you don't know very much about systems and software
> engineering. How do you expect to develop AI if questions like this
> stump you? Real-world syste
On Fri, Jan 10, 2003 at 04:52:00PM -0500, Ben Goertzel wrote:
> Well, designing an OS conceptually is a LOT easier than designing one
> pragmatically. In the real world of OS design, efficiency is a prime
> consideration. Making an OS that's both natural for AI and efficient on
I've thought
On Thu, Jan 09, 2003 at 11:18:36PM -0800, Alan Grimes wrote:
> Damien Sullivan wrote:
> > Quite possibly. But my point is that the evolutionary root _and_
> > guiding principle would be that of a (Unix, ahem) shell.
>
> Are you nuts?
> Unix is the most user-hostile sy
On Thu, Jan 09, 2003 at 10:57:41PM -0800, Alan Grimes wrote:
> It would be a service-driven motivation system, but I would expect a much
> more sophisticated implementation of agency beyond a windows shell or
> something.
Quite possibly. But my point is that the evolutionary root _and_ guiding
p
On Thu, Jan 09, 2003 at 10:23:07PM -0800, Alan Grimes wrote:
> You _MIGHT_ be able to produce a proof of concept that way... However, a
> practical working AI, such as the one which could help me design my
> next body, would need to be quite a bit more. =\
Why? Why should such a thing require
On Thu, Jan 09, 2003 at 11:24:14AM -0500, Ben Goertzel wrote:
> I think the issues that are problematic have to do with the emotional
> baggage that humans attach to the self/other distinction. Which an AGI will
> most likely *not* have, due to its lack of human evolutionary wiring...
Simplistic
> Gary Miller wrote:
> > That being said, other than Cyc I am at a loss to name any serious AI
> > efforts which are over a few years in duration and have more than 5 man-
> > years' worth of effort (not counting promotional and fundraising).
No offense, but I suspect you need to read more of the li
On Thu, Dec 26, 2002 at 01:44:25PM -0800, Alan Grimes wrote:
> A human-level intelligence requires arbitrary access to
> visual/phonetic/other "faculties" in order to be intelligent.
I'm sure all those blind and deaf people appreciate being considered
unintelligent.
-xx- Damien X-)
On Thu, Dec 12, 2002 at 01:10:27PM -0500, Michael Roy Ames wrote:
> The idea of putting a baby AI in a simulated world where it might learn
> cognitive skills is appealing. But I suspect that it will take a huge
> number of iterations for the baby AI to learn the needed lessons in that
> situation.
Hi! I joined this list recently, figured I'd say who I am. Well, some of you
may know already, from extropians, where I used to post a fair bit :) or from
my Vernor Vinge page. But now I'm a first-year comp sci/cog sci PhD student
at Indiana University, hoping to work on extending Jim Marshall's