Scratch my statement about it being useless :) It's useful, but nowhere
near sufficient for AGI-like understanding.
On Tue, Jun 29, 2010 at 4:58 PM, David Jones wrote:
> notice how you said *context* of the conversation. The context is the real
> world, and is completely missing.
st.
>
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
> ------
> *From:* David Jones
> *To:* agi
> *Sent:* Tue, June 29, 2010 3:43:53 PM
>
> *Subject:* Re: [agi] A Primary Distinction for an AGI
>
> the purpose of text is to convey something.
sn't logically
follow anything. How does having a point of view in example problems prove
that anything learned or developed isn't applicable to general vision?
> Get thee to a roboticist, & make contact with the real world.
>
Get yourself to a psychologist so that they can show you h
t clearly wrong. These examples don't really show
anything.
Dave
On Tue, Jun 29, 2010 at 3:15 PM, Matt Mahoney wrote:
> David Jones wrote:
> > I really don't think this is the right way to calculate simplicity.
>
> I will give you an example, because examples are more c
the purpose of text is to convey something. It has to be interpreted. who
cares about predicting the next word if you can't interpret a single bit of
it.
On Tue, Jun 29, 2010 at 3:43 PM, David Jones wrote:
> People do not predict the next words of text. We anticipate them, but when
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
> ------
> *From:* David Jones
> *To:* agi
> *Sent:* Tue, June 29, 2010 3:22:33 PM
>
> *Subject:* Re: [agi] A Primary Distinction for an AGI
>
> I certainly agree that the techniques and explanation generating algori
29, 2010 at 3:19 PM, Matt Mahoney wrote:
> David Jones wrote:
> > The knowledge for interpreting language though should not be
> pre-programmed.
>
> I think that human brains are wired differently than other animals to make
> language learning easier. We have not been success
2:51 PM, Matt Mahoney wrote:
> David Jones wrote:
> > I wish people understood this better.
>
> For example, animals can be intelligent even though they lack language
> because they can see. True, but an AGI with language skills is more useful
> than one without.
>
>
not in fact connected to the real world any more
> than the verbal/letter signals involved in NLP are.
>
> What you need to do - what anyone in your situation with anything like your
> aspirations needs to do - is to hook up with a roboticist. Everyone here
> should be doing that.
rned
> weighted combinations of simpler patterns. I am more familiar with language.
> The top ranked programs can be found at
> http://mattmahoney.net/dc/text.html
>
state of the art in explanatory reasoning is what I'm looking for.
>
>
> -- Matt Mahoney, matmaho...@yaho
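The next-word-prediction point Matt raises can be made concrete with a toy statistical model. This is only a sketch of the general idea behind the models ranked on that benchmark page; the corpus, function names, and bigram simplification are my own illustration, not code from this thread or from any of those programs:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count word-pair frequencies: a crude stand-in for the learned
    weighted combinations of patterns used by real language models."""
    words = text.split()
    model = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model

def predict_next(model, word):
    """Most frequent follower of `word`, or None if the word is unseen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # cat
```

Note this model predicts without interpreting anything, which is exactly the distinction being argued about: good prediction scores do not imply any grounding in the real world.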
plain but it
requires explanatory reasoning to determine the correct real world
interpretation
On Jun 29, 2010 10:58 AM, "Matt Mahoney" wrote:
David Jones wrote:
> Natural language requires more than the words on the page in the real
world. Of...
Any knowledge that can be demonstrate
es they've had.
Dave
On Tue, Jun 29, 2010 at 10:29 AM, Matt Mahoney wrote:
> David Jones wrote:
> > If anyone has any knowledge of or references to the state of the art in
> explanation-based reasoning, can you send me keywords or links?
>
> The simplest explanation of
If anyone has any knowledge of or references to the state of the art in
explanation-based reasoning, can you send me keywords or links? I've read
some through google, but I'm not really satisfied with anything I've found.
Thanks,
Dave
On Sun, Jun 27, 2010 at 1:31 AM, David Jon
hand, do not require
more info because that is where the experience comes from.
On Jun 28, 2010 8:52 PM, "Matt Mahoney" wrote:
David Jones wrote:
> I also want to mention that I develop solutions to the toy problems with
the re...
A little research will show you the folly of this app
n computer vision sucks. It is
great at certain narrow applications, but nowhere near where it needs to be
for AGI.
Dave
On Mon, Jun 28, 2010 at 4:00 PM, Russell Wallace
wrote:
> On Mon, Jun 28, 2010 at 8:56 PM, David Jones
> wrote:
> > Having experience with the full problem is
my
strategy to the alternatives.
Dave
On Mon, Jun 28, 2010 at 3:56 PM, David Jones wrote:
> That does not have to be the case. Yes, you need to know what problems you
> might have in more complicated domains to avoid developing completely
> useless theories on toy problems. But, as yo
ub
problem at once is not a better strategy at all. You may think my strategy
has flaws, but I know that and still chose it because the alternative
strategies are worse.
Dave
On Mon, Jun 28, 2010 at 3:41 PM, Russell Wallace
wrote:
> On Mon, Jun 28, 2010 at 4:54 PM, David Jones
> wrote:
In case anyone missed it... Problems are not "AGI". Solutions are. And "AGI"
is not the right adjective anyway. The correct word is "general". In other
words, generally applicable to other problems. I repeat, Mike, you are *
wrong*. Did anyone miss that?
To recap, it has nothing to do with what pr
Yeah. I forgot to mention that robots are not "alive" yet could act
indistinguishably from what is alive. The concept of alive is likely
something that requires inductive-type reasoning and generalization to
learn. Categorization, similarity analysis, etc. could assist in making such
distinctions a
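One way similarity analysis could assist that kind of categorization is a nearest-neighbour sketch over feature vectors. Everything here is hypothetical illustration: the feature names, the example values, and the 1-NN choice are my own, not a claim about how the brain or any proposed system does it:

```python
import math

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def categorize(item, examples):
    """Assign the label of the most similar labeled example (1-NN)."""
    return max(examples, key=lambda ex: similarity(item, ex[0]))[1]

# hypothetical features: (self_propelled_motion, responds_to_stimuli, metabolism)
examples = [
    ((1.0, 1.0, 1.0), "alive"),      # an animal
    ((0.0, 0.1, 0.0), "not alive"),  # a rock, essentially inert
]
robot = (1.0, 0.9, 0.0)  # moves and reacts, but no metabolism
print(categorize(robot, examples))  # alive
```

Note the robot lands in the "alive" category: a similarity-based learner with these behavioral features would misclassify it, which is the point made above about robots acting indistinguishably from what is alive.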
Mike,
Alive vs. dead? As I've said before, there is no actual difference. It is
not a qualitative difference that makes something alive or dead. It is a
quantitative difference. They are both controlled by physics. I don't mean
the nice clean physics rules that we approximate things with; I mean t
Crime has its purpose just like many other unpleasant behaviors. When
government is reasonably good, crime causes problems. But, when government
is bad, crime is good. Given the chance, I might have tried to assassinate
Hitler. Yet, assassination is a crime.
On Mon, Jun 28, 2010 at 10:51 AM, Steve
Mike,
you are mixing multiple issues. Just like my analogy of the Rubik's Cube, full
AGI problems involve many problems at the same time. The problem I wrote
this email about was not about how to solve them all at the same time. It
was about how to solve one of those problems. After solving the prob
Jim,
Two things.
1) If the method I have suggested works for the most simple case, it is
quite straightforward to add complexity and then ask, how do I solve it
now. If you can't solve that case, there is no way in hell you will solve
the full AGI problem. This is how I intend to figure out how
d
> that does not start with a rule that eliminates the very potential of
> possibilities as a *general* rule of intelligence) shows that you
> don't fully understand the problem.
> Jim Bromer
>
>
>
> On Sun, Jun 27, 2010 at 1:31 AM, David Jones wrote:
>
>>
lol.
Mike,
What I was trying to express by the word *expect* is NOT predict [some exact
outcome]. Expect means that the algorithm has a way of comparing
observations to what the algorithm considers to be consistent with an
"explanation". This is something I struggled to solve for a long time
rega
> So yeah, this is the right idea... and your simple examples of it are
> nice...
>
> Eric Baum's whole book "What Is Thought?" is sort of an explanation of this
> idea in a human biology and psychology and AI context ;)
>
> ben
>
> On Sun, Jun 27, 2010 at 1:31
A method for comparing hypotheses in explanatory-based reasoning:
We prefer the hypothesis or explanation that *expects* more observations.
If both explanations expect the same observations, then the simpler of the
two is preferred (because the unnecessary terms of the more complicated
explana
e with your particular goals,
> your overall philosophy seems to be broadly consistent with this idea) -
> then you can learn from your mistakes, and make your targets more realistic
> still.
>
>
>
> *From:* David Jones
> *Sent:* Thursday, June 24, 2010 6:22 PM
> *To:* agi
Mike, I think your idealistic view of how AGI should be pursued does not
work in reality. What is your approach that fits all your criteria? I'm sure
that any such approach would be severely flawed as well.
Dave
On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner wrote:
> [BTW Sloman's quote is a mo
I have to agree that a big problem with the field is a lack of understanding
of the problems and how they should be solved. I see too many people
pursuing solutions to poorly defined problems and without defining why the
solution solves the problem. I even see people pursuing solutions to the
wrong
I get the impression from this question that you think an AGI is some sort
of all-knowing, idealistic invention. It is sort of like asking "if you
could ask the internet anything, what would you ask it?". Uhhh, lots of
stuff, like how do I get wine stains out of white carpet :). AGIs will not
be a
Rob,
Real evolution had full freedom to evolve. Genetic algorithms usually don't.
If they did, the number of calculations needed to really simulate evolution
on the scale that created us would be so astronomical that it would not be
possible. So, what Matt said is absolutely correct. Th
er in the semicircular canals in your ears. This is all very
> > complicated of course. You are more likely to detect motion in objects
> that
> > you recognize and expect to move, like people, animals, cars, etc.
> >
> > -- Matt Mahoney, matmaho...@yahoo.com
> >
>
>
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
> --
> *From:* David Jones
> *To:* agi
> *Sent:* Mon, June 21, 2010 9:39:30 AM
> *Subject:* [agi] Re: High Frame Rates Reduce Uncertainty
>
> Ignoring Steve because we are simply going to have to
ink this is a very significant discovery regarding how the brain is able
to learn in such an ambiguous world with so many variables that are
difficult to disambiguate, interpret and understand.
Dave
On Fri, Jun 18, 2010 at 2:19 PM, David Jones wrote:
> I just came up with an awesome idea. I just r
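The idea in that subject line can be framed as a correspondence problem: at high frame rates, objects move only slightly between frames, so a nearest-neighbour match across frames is rarely ambiguous. A toy sketch under my own assumptions (the object names, coordinates, and nearest-match strategy are invented for illustration, not taken from the thread):

```python
def match_objects(prev, curr):
    """Pair each object in the previous frame with the nearest object in
    the current frame. Small inter-frame motion (i.e. a high frame rate)
    makes the nearest match almost always the correct correspondence."""
    pairs = {}
    for name, (x, y) in prev.items():
        nearest = min(
            curr,
            key=lambda n: (curr[n][0] - x) ** 2 + (curr[n][1] - y) ** 2,
        )
        pairs[name] = nearest
    return pairs

# two objects; at a high frame rate each moves only ~1 unit per frame
prev = {"a": (0.0, 0.0), "b": (10.0, 0.0)}
curr = {"a2": (1.0, 0.0), "b2": (11.0, 0.0)}
print(match_objects(prev, curr))  # {'a': 'a2', 'b': 'b2'}
```

At a low frame rate the per-frame displacements grow, the nearest neighbour can become the wrong object, and the matcher would need extra disambiguating evidence — which is one way to read the claim that high frame rates reduce uncertainty.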