's useful to say so and make your assumptions concrete.
--
Philip Hunt,
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
---
agi
Archives: https://www.listbox.com/member/archive
how much processing power you need: if processing is very
expensive, it makes less sense to re-run an extensive test suite
whenever you make a change.
2008/12/29 Matt Mahoney :
> --- On Mon, 12/29/08, Philip Hunt wrote:
>
>> Incidently, reading Matt's posts got me interested in writing a
>> compression program using Markov-chain prediction. The prediction bit
>> was a piece of piss to write; the compression code is
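An order-1 Markov predictor of the kind Philip describes is indeed short to write. A minimal Python sketch (my illustration, not his code):

```python
from collections import defaultdict, Counter

class MarkovPredictor:
    """Order-1 Markov model: predicts the next character from the current one."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, text):
        # Count each observed (current char -> next char) transition.
        for cur, nxt in zip(text, text[1:]):
            self.counts[cur][nxt] += 1

    def predict(self, cur):
        # Most frequently observed successor of `cur`, or None if unseen.
        if not self.counts[cur]:
            return None
        return self.counts[cur].most_common(1)[0][0]

m = MarkovPredictor()
m.train("abababac")
print(m.predict("a"))  # "b" follows "a" more often than "c" does
```

The compression half is the harder part precisely because the model's probabilities then have to drive an entropy coder, not just pick a single best guess.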
2008/12/29 Philip Hunt :
> 2008/12/29 Matt Mahoney :
>>
>> Please remember that I am not proposing compression as a solution to the AGI
>> problem. I am proposing it as a measure of progress in an important
>> component (prediction).
>
>[...]
> Turning a p
at prediction. Whereas all programs that're good at compression are
guaranteed to be good at prediction.
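The link Matt relies on: a predictor can always be scored as a compressor, because a symbol assigned probability p costs about -log2(p) bits under arithmetic coding, so better prediction means fewer bits. A toy sketch (the probabilities here are invented purely for illustration):

```python
import math

def ideal_compressed_bits(probs, text):
    """Ideal code length if each symbol s is coded in -log2(probs[s]) bits,
    which an arithmetic coder approaches. Better prediction => shorter code."""
    return sum(-math.log2(probs[ch]) for ch in text)

# A model confident in 'a' versus a know-nothing uniform model.
good = ideal_compressed_bits({"a": 0.9, "b": 0.1}, "aaaaaaaaab")
uniform = ideal_compressed_bits({"a": 0.5, "b": 0.5}, "aaaaaaaaab")
print(good < uniform)  # True: the better predictor compresses further
```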
2008/12/28 Philip Hunt :
>
> Now, consider if I build a program that can predict how some sequences
> will continue. For example, given
>
> ABACADAEA
>
> it'll predict the next letter is "F", or given:
>
> 1 2 4 8 16 32
>
> it'll predict
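A guesser hard-coded for just these two example patterns shows how little the toy cases demand (`predict_next` is my hypothetical helper, not a general predictor):

```python
def predict_next(seq):
    """Toy continuation guesser for the two example patterns:
    - constant-ratio number sequences, e.g. 1 2 4 8 16 32 -> 64
    - 'A?A?A...' interleavings with ascending letters, e.g. ABACADAEA -> F
    Purely illustrative; a real predictor needs far more generality."""
    if all(isinstance(x, int) for x in seq):
        ratio = seq[1] // seq[0]
        if all(b == a * ratio for a, b in zip(seq, seq[1:])):
            return seq[-1] * ratio
        return None
    # Letters: even positions must all be 'A', odd positions count upward.
    fillers = seq[1::2]
    if seq[::2] == ["A"] * len(seq[::2]) and all(
        ord(b) == ord(a) + 1 for a, b in zip(fillers, fillers[1:])
    ):
        return chr(ord(fillers[-1]) + 1)
    return None

print(predict_next([1, 2, 4, 8, 16, 32]))   # 64
print(predict_next(list("ABACADAEA")))      # F
```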
2008/12/27 Matt Mahoney :
> --- On Fri, 12/26/08, Philip Hunt wrote:
>
>> > Humans are very good at predicting sequences of
>> > symbols, e.g. the next word in a text stream.
>>
>> Why not have that as your problem domain, instead of text
>> compression?
source code.
GI could be
>> written in about a tenth that, say 75 MB.
>
> The human genome size has no meaningful relationship to the complexity of
> coding AGI.
Yes it does -- it's an existence proof that it's possible to do it in 750 MB.
> And what ever happened to Machine is Software is
changes, your problem domain would be a more useful one.
While you're at it you may want to change the size of the "chunks" in
each item of prediction, from characters to either strings or
s-expressions. Though doing so doesn't fundamentally alter the
problem.
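One way to change the chunk size is simply to tokenize before counting, so the same transition table ranges over word-sized strings or s-expression atoms instead of characters (a sketch; the tokenizer regex is my assumption):

```python
import re
from collections import defaultdict, Counter

def tokens(text, chunk="word"):
    """Split text into prediction units: characters, or word-sized strings.
    Parens are kept as their own tokens so s-expression structure survives."""
    if chunk == "char":
        return list(text)
    return re.findall(r"\(|\)|[^\s()]+", text)

# Same order-1 counting as before, just over bigger chunks.
counts = defaultdict(Counter)
toks = tokens("(add x 1) (add x 2) (add x 3)")
for cur, nxt in zip(toks, toks[1:]):
    counts[cur][nxt] += 1
print(counts["add"].most_common(1)[0][0])  # "x" always follows "add" here
```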
osen training sets which would
bulk up its code and data to many times that (I'm assuming a model
where an AI stores the results of learning in additions to its source
code).
By way of comparison, the Linux kernel is about 60 MB; the Copycat
program is about 0.4 MB (not including g
ining
intelligence this way. Care to enlighten me?
to do tasks better than they can (e.g.
play chess) and I see no reason why it shouldn't be possible for self
awareness. Indeed it would be rather trivial to give an AGI access to
its source code.
ing serve any use in this field?
I've never used formal proofs of correctness of software, so can't
comment. I use software testing (unit tests) on pretty much all
non-trivial software that I write -- I find doing so makes things
much easier.
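In Python, the kind of unit test Philip means is only a few lines with the standard `unittest` module (a hypothetical example, not his code):

```python
import unittest

def next_letter(c):
    """Tiny helper of the kind worth pinning down with a test."""
    return chr(ord(c) + 1)

class TestNextLetter(unittest.TestCase):
    def test_advances_one_letter(self):
        self.assertEqual(next_letter("E"), "F")

# Run the suite programmatically (avoids unittest.main()'s sys.exit).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNextLetter)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```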
eels like to the touch. For me, it was the former. So I don't
think touch is clearly more fundamental, in terms of how it interacts
with our internal model of the world, than vision is.
> Is the reason just that AI researchers spend all day staring at screens and
> ignoring their physical
re was a difference
in capacitance when the wires were further apart or closer together.
umans -- most (if not all) mammalian
species can do it.
Until an AI can do this, there's no point in trying to get it to play
at making cakes, etc.
could move around and manipulate a blocks
world. My understanding is that all, or nearly all, the difficulty
comes in programming it. Which is where AI comes in.
> Actually, $$ aside, we don't even **know how** to make a decent humanoid
> robot.
>
> Or, a decently functional m
umanoid, since it's
more obviously a machine).
> On the other hand, making a virtual world such as I envision, is more than a
> spare-time project, but not more than the project of making a single
> high-quality video game.
GTA IV cost $5 million, so we're not talking about peanuts.
taste would probably help too.
ords, will the simulation be
deep enough to allow that).
ces that are safe
to sit on, and others that are too wobbly, even if they look the same.
An animal's intuitive physics is a complex system. I expect that in
humans a lot of this machinery is re-used to create intelligence. (It
may be true, and IMO probably is true, that it's not necessary to
re-c
there's no reason why a toy domain needs to
be anything like a virtual world, it could for example be a "software
modality" that can see/understand source code as easily and fluently
as humans interpret visual input.)
AIUI you're mostly thinking in terms of 2 or 3. Fair comm
robably
need to train it in the real world (at least some of the time).
If you don't care whether your AGI can use a screwdriver, why have one
in the virtual world?
t; to occupy.
Having said that, I'm not aware that nanotechnology or AI are
specifically prohibited by any of the major religions. And if one
society forgoes science, they'll just get outcompeted by their
neighbours.
for laying down long-term memories and for
short-term thinking over the order of a few seconds.
ently, my understanding is[*] that DNA in various cells in the
mammalian immune system does change as the immune system evolves to
cope with infectious agents; but these changes aren't passed along to
the next generation.)
* if there are any molecular biologists reading, feel free to correct
That was helpful. Thanks.
2008/12/1 Matt Mahoney <[EMAIL PROTECTED]>:
> --- On Sun, 11/30/08, Philip Hunt <[EMAIL PROTECTED]> wrote:
>
>> Can someone explain AIXI to me?
>
> AIXI models an intelligent agent interacting with an environment as a pair of
> interact
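Matt's truncated definition continues, in Hutter's formulation, with the agent choosing actions to maximize expected future reward summed over all environment programs consistent with the history so far. For reference, the action-selection rule from Hutter's *Universal Artificial Intelligence* (stated here as background, not part of Matt's message):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       (r_k + \cdots + r_m)
       \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

where U is a universal Turing machine, q ranges over environment programs producing the observed observation/reward history, and ℓ(q) is q's length in bits, so shorter (simpler) environments get exponentially more weight.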
hey are not two separate
theories, they are merely rewordings of the same theory. And choosing
between them is arbitrary; you may prefer one to the other because
human minds can visualise it more easily, or it's easier to calculate,
or you have an aesthetic preference for it.
be more useful to
>> the advancement of AI, since the Loebner prize is silly.
>>
>> --
>> Philip Hunt, <[EMAIL PROTECTED]>
>
> How does that differ from what is generally called "transfer learning" ?
I don't think it does differ. ("Transfer learnin
could be practically written or is it purely a
theoretical construct?
In short, is there something to AIXI or is it something I can safely ignore?
imilar domain).
A bit like the Loebner Prize, except that it would be more useful to
the advancement of AI, since the Loebner prize is silly.
and
> 10^8 seconds in the cortex.
10^8 seconds is 3 years! I think that number's wrong.
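For the record, the conversion itself checks out; it's the underlying figure Philip is doubting:

```python
# 10^8 seconds expressed in years (using a Julian year of 365.25 days).
seconds = 10**8
years = seconds / (365.25 * 24 * 3600)
print(round(years, 2))  # ~3.17
```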