Re: [agi] organising parallel processes

2008-05-04 Thread Mike Dougherty
On Sun, May 4, 2008 at 11:28 PM, Stephen Reed <[EMAIL PROTECTED]> wrote:
> be like Skype, the popular non-scum Internet phone service that also
> performs NAT hole punching (a.k.a. NAT traversal).

I was not aware Skype worked like that - thanks for the info.  If you
are using a similar form of UDP listener to let the client make an
outbound connection, where the firewall then allows responses back in,
you wouldn't be violating existing protocol.  (And admins could still
turn off the feature that auto-whitelists UDP responses.)

> services.  Relays could become performance bottlenecks too.   For an initial
> deployment I would like to try direct P2P unless you have a better
> objection, or maybe you could just clarify the remarks you already made,
> given my own clarification herein.

Of course a test network can be direct P2P.  I can configure my
firewall (both dedicated hardware and per-machine software) to allow
whatever I want.  I was suggesting that a dynamic network could allow
nodes to advertise their capability and perform relay services to
clients that do not have direct access.  From the article you posted
above, it seems that with the auto-whitelisting of ports for UDP responses
(my firewall calls them triggered ports - if I send out on port X, expect
legitimate return replies on X+1 through X+Y), your client application
would only need to reach any public node in the cloud to become an active
server.
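
To make the mechanism concrete, here is a minimal sketch of the UDP
hole-punching idea in Python.  The rendezvous host, port numbers, and
message format are all made up for illustration; this is not Skype's
actual protocol, just the general technique.

```python
import socket

# Hypothetical rendezvous server that tells each client the other's public
# (IP, port) as observed from outside the NAT -- not a real service.
RENDEZVOUS = ("rendezvous.example.org", 3478)

def punch_hole(local_port: int) -> socket.socket:
    """Open a UDP socket, register with the rendezvous server, and send an
    outbound datagram to the peer so the NAT/firewall accepts replies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", local_port))

    # Step 1: outbound packet to the rendezvous server; the NAT now maps
    # local_port to some public port and will pass return traffic.
    sock.sendto(b"register", RENDEZVOUS)

    # Step 2: the server replies with the other peer's public endpoint,
    # e.g. b"203.0.113.7:40123" (made-up format).
    data, _ = sock.recvfrom(1024)
    host, port = data.decode().split(":")
    peer = (host, int(port))

    # Step 3: send to the peer directly.  This outbound packet "punches"
    # the hole: later datagrams *from* the peer look like legitimate
    # responses and are allowed back in.
    sock.sendto(b"hello", peer)
    return sock

if __name__ == "__main__":
    s = punch_hole(40000)
    print("waiting for peer traffic on", s.getsockname())
```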

> Thanks for the great comment.  I really do not want to waste time with the
> wrong P2P design decision.

I like to brainstorm.  I know a little bit about computer networking.
I know a little more about programming.  I don't know much about
artificial intelligence design, so I've mostly just been lurking here.

I think if the nodes in your graph were to reinforce the existence of
their connections simply by using them, it would facilitate new
connections forming and becoming available for other nodes according
to whatever propagation rules you devise.  As the developer, you would
only need to understand the mechanism on a theoretical level - there
would be too much dynamic state to micromanage (or hand-code) a
snapshot of the network graph at any given moment.  I assume that a
'conversation' would include all nodes interested in the discussion,
and that when new nodes join they would be brought up to date and
could then contribute resources.  Is there already an existing
framework for this kind of communication?  If you're going to build
it, do you intend to keep the mechanism open enough that it could
transport other kinds of data, or keep it tightly coupled to your
application?

I'm on a tangent now...  it's difficult to think about this kind of
thing via email.  ttyl.



Re: [agi] organising parallel processes

2008-05-04 Thread Mike Dougherty
On Sun, May 4, 2008 at 10:00 PM, Stephen Reed <[EMAIL PROTECTED]> wrote:
> Matt (or anyone else), have you gotten as far as thinking about NAT hole
> punching or some other solution for peer-to-peer?

"NAT hole punching" has no solution because it's not a problem you can
fix.  If I administrate the border security for my network and I do
not want your protocol running, I will block the port it uses.  If you
dynamically change ports to avoid this, you'll find your software
blacklisted with a slew of scumware that is actively removed from the
computers it infests.  If you are welcome within the network, it is
much less hassle (for everyone) if you properly ask for access and use
bandwidth intelligently.

To address your issue with P2P being blocked by ISPs, you could allow
those nodes with public server capability to proxy connections to
client-only nodes.  I know that sounds like undue pain, but this is
exactly the kind of modular flexibility that distributed agents should
be able to work out in response to varying network conditions.  (My
$0.02.)
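
As a rough illustration of that relay idea, here is a minimal sketch; the
port number and framing are my own assumptions.  A publicly reachable node
accepts two client-only peers and simply forwards bytes between them.

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def relay(listen_port: int) -> None:
    """Accept two client-only peers and shuttle traffic between them.
    A real relay node would also authenticate peers and advertise its
    relay capability to the rest of the network."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("", listen_port))
    server.listen(2)
    a, _ = server.accept()
    b, _ = server.accept()
    threading.Thread(target=pipe, args=(a, b), daemon=True).start()
    pipe(b, a)

if __name__ == "__main__":
    relay(9000)
```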



Re: [agi] Comments from a lurker...

2008-04-14 Thread Mike Dougherty
On Mon, Apr 14, 2008 at 4:17 PM, Steve Richfield
<[EMAIL PROTECTED]> wrote:
> > You've merely been a *TROLL* and gotten the appropriate response.  Thanks
> > for playing but we have no parting gifts for you.
>
> Who is the "we" you are referencing? Do you have a mouse in your pocket, or
> is that the Royal "we"?  YOU are the only snide asshole/troll whom I have
> had the displeasure of observing on this forum. Can you point to anyone ELSE
> here who acts as you do?

I don't want to participate in calling anyone a troll.  From what I have
observed of Matt's online presence, he was giving you an opportunity
to disprove the Troll status rather than simply ignoring you.
I'm guessing he'll simply give up soon.

I have little interest in downloading your software and tables and
arcane howto for making it all work.  In my opinion, you really can't
call your product AGI until I can converse with it directly - either
via its own email address or (for a 'real-time' Turing test) an IRC
channel.

How difficult would it be for you to extend the Dr Eliza interface
with an IRC bot frontend?

If it is as accurate as you claim, it might help a lot more people by
dispensing "see a REAL doctor to get X checked out" advice than as ...
well, whatever it is now.

Even with an accuracy rate that exceeds "average" doctors, I'll be as
likely to dismiss it as I would dismiss a real doctor - but the
machine doesn't need to play golf or drive expensive cars so it can
devote the time that people can't (or won't).  [I had a doctor say,
"Your iron level is too low, eat more red meat." followed immediately
with, "Your cholesterol is too high, eat less red meat."  I was
thinking, "Your diagnosis is unusable, I want my co-pay back" ]



Re: [agi] Some thoughts of an AGI designer

2008-03-12 Thread Mike Dougherty
On Wed, Mar 12, 2008 at 8:54 PM, Charles D Hixson <
[EMAIL PROTECTED]> wrote:

> I think that you need to look into the simulations that have been run
> involving Evolutionarily Stable Strategies.  Friendly covers many
> strategies, including (I think) Dove and Retaliator.  Retaliator is
> almost an ESS, and becomes one if the rest of the population is either
> Hawk or Dove.  In a population of Doves, Probers have a high success
> rate, better than either Hawks or Doves.  If the population is largely
> Doves with an admixture of Hawks, Retaliators do well.  Etc.  (Note that
> each of these Strategies is successful depending on a model with certain
> costs of success and other costs for failure specific to the strategy.)
> Attempts to find a pure strategy that is uniformly successful have so
> far failed.  Mixed strategies, however, can be quite successful, and
> different environments yield different values for the optimal mix.  (The
> model that you are proposing looks almost like Retaliator, and that's a
> pretty good Strategy, but can be shown to be suboptimal against a
> variety of different mixed strategies.  Often even against
> Prober-Retaliator, if the environment contains sufficient Doves, though
> it's inferior if most of the population is simple Retaliators.)
>

I believe Mark's point is that the honest commitment to Friendliness as an
explicit goal is an attempt to minimize wasted effort in achieving all other
goals.  Exchanging information about goals with other Friendly agents helps
all parties invest optimally in achieving those goals in an order of priority
acceptable to the consortium of Friendly agents.  I think one (of many)
problems is that our candidate AGI must not only be capable of self-reflection
when modeling its own goals, but also capable of modeling the goals of other
Friendly agents (with respect to each other and to the goal-model of the
collective), as well as able to decide when an UnFriendly behavior is worth
declaring (modeling the consequences and impact on the group of which it is a
member).  That seems to be much more difficult than a selfish or ignorant Goal
Stack implementation (which we would typically attempt to control via an
imperative Friendly Goal).
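
As a concrete reference for the Hawk/Dove/Retaliator strategies quoted above,
here is a minimal sketch of that kind of ESS simulation.  The payoff values
loosely follow Maynard Smith's classic numbers, and the discrete replicator
step is my own simplification, not the exact model Charles refers to.

```python
# Illustrative payoff matrix for the Hawk/Dove/Retaliator game (roughly:
# resource worth 50, injury -100, display cost -10).  Entry [A][B] is A's
# expected payoff when playing against B.
PAYOFF = {
    "hawk":       {"hawk": -25, "dove": 50, "retaliator": -25},
    "dove":       {"hawk":   0, "dove": 15, "retaliator": 15},
    "retaliator": {"hawk": -25, "dove": 15, "retaliator": 15},
}

def expected_payoffs(population):
    """Expected payoff of each strategy against a given population mix."""
    return {
        s: sum(freq * PAYOFF[s][t] for t, freq in population.items())
        for s in PAYOFF
    }

def step(population, rate=0.01):
    """One discrete replicator step: strategies beating the average grow."""
    fitness = expected_payoffs(population)
    avg = sum(population[s] * fitness[s] for s in population)
    new = {s: max(population[s] * (1 + rate * (fitness[s] - avg)), 0.0)
           for s in population}
    total = sum(new.values())
    return {s: v / total for s, v in new.items()}

if __name__ == "__main__":
    pop = {"hawk": 0.1, "dove": 0.8, "retaliator": 0.1}
    for _ in range(2000):
        pop = step(pop)
    print({s: round(f, 3) for s, f in pop.items()})  # which mix is stable?
```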



Re: [agi] A possible less ass-backward way of computing naive bayesian conditional probabilities

2008-02-25 Thread Mike Dougherty
On Mon, Feb 25, 2008 at 2:51 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
>  But that does not stop people from modeling systems in a simplified manner by
>  acting as if these limitations were met.   Naïve Bayesian methods are
>  commonly used.  I have read multiple papers saying that in many cases it
>  proves surprisingly accurate (considering what a gross hack it is) and, of
>  course, it greatly simplifies computation.

Admittedly, I do not have a quantitative grasp of Bayesian methods
(naive or otherwise), but my qualitative understanding is that they are
about reaching a conclusion as if from complete knowledge, by extending
the confidence in the available knowledge to cover the unknown.  If I'm
already wrong, please school me.
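
For what it's worth, here is a minimal sketch of what a naive Bayesian
classifier actually computes, on an entirely made-up toy example.  The
'naive' part - the gross hack Ed mentions - is the assumption that the
features are conditionally independent given the class.

```python
import math
from collections import Counter, defaultdict

# Toy training data (entirely made up): each example is a set of observed
# features plus a class label.
TRAIN = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "rash"}, "measles"),
    ({"cough"}, "cold"),
    ({"fever", "cough", "ache"}, "flu"),
]

def train(data):
    class_counts = Counter(label for _, label in data)
    feature_counts = defaultdict(Counter)
    for features, label in data:
        for f in features:
            feature_counts[label][f] += 1
    return class_counts, feature_counts

def classify(features, class_counts, feature_counts):
    """Pick the class maximizing log P(class) + sum_f log P(f | class),
    i.e. treat the features as conditionally independent given the class
    (the 'naive' assumption).  Absent features are ignored for brevity."""
    total = sum(class_counts.values())
    scores = {}
    for label, n in class_counts.items():
        score = math.log(n / total)                       # prior
        for f in features:
            p = (feature_counts[label][f] + 1) / (n + 2)  # add-one smoothing
            score += math.log(p)                          # likelihood
        scores[label] = score
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    model = train(TRAIN)
    print(classify({"fever", "cough"}, *model))
```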

While walking the dog tonight I was considering the application of
knowledge across different domains.  In this light, I considered the
unknown (or unknowable) part of the problem to be similar to some
amount of chaos in a system that displays a gross-level order.
Increasing the precision of the measurement of the ordered part can
increase the instability of the chaotic part.

Is it possible that a different kind of math is required to model the
chaotic part of a complex system like this?  Something as fundamental
as the discovery of irrational numbers perhaps?

This would have been yet another fleeting thought if I hadn't returned
to this thread about Bayesian methods, and I was curious what
insight the list could offer...



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Mike Dougherty
On Jan 19, 2008 8:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:
> http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
>
> Turing also committed suicide.

That's a personal solution to the Halting problem I do not plan to exercise.

> Building a copy of your mind raises deeply troubling issues.  Logically, there

Agreed.  If that mind is within acceptable tolerance for human life at
a peak load of 30%(?) of capacity, can it survive hard takeoff?  I
consider myself reasonably intelligent and perhaps somewhat wise - but
I would not expect the stresses of a thousand-fold "improvement" in
throughput to scale out/up.  Even the simplest human foible can
become an obsessive compulsion that could destabilize the integrity of
an expanding mind.  I understand this to be related to the issue of
Friendliness (am I wrong?).

> It follows logically that there is no reason to live, that death is nothing 
> to fear.

Given a directive to maintain life, hopefully the AI-controlled life
support system keeps perspective on such logical conclusions.  An AI
in a nuclear power facility should have the same directive.  I am not
saying it shouldn't be allowed to self-terminate (forbidding that gives
rise to issues like slavery), only that it should give notice and transfer
its responsibilities before doing so.

> In http://www.mattmahoney.net/singularity.html I discuss how a singularity
> will end the human race, but without judgment whether this is good or bad.
> Any such judgment is based on emotion.  Posthuman emotions will be
> programmable.

... and arbitrary?  Aren't we currently able to program emotions
(albeit in a primitive pharmaceutical way)?

Who do you expect will have control of that programming?  Certainly
not the individual.



Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Mike Dougherty
On Jan 14, 2008 10:10 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Any fool can mathematize a definition of a commonsense idea without
> actually saying anything new.

Ouch.  Careful.  :)  That may be true, but it takes $10M worth of
computer hardware to disprove.

disclaimer:  that was humor



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Mike Dougherty
On Jan 10, 2008 10:57 AM, Stephen Reed <[EMAIL PROTECTED]> wrote:
> If I understand your question correctly it asks whether a non-expert
> user can be guided to use Controlled English in a dialog system.  In
>
> This is an idea that I wanted to try at Cycorp but Doug Lenat
> said that it had been tried before and failed, due to great resistance
> among users to Controlled English.  Let's see if this idea can be made
> to work now, or not.

Basically, yes.  I was also cynically suggesting that it would be
difficult to teach the majority of existing human brains how to use
Controlled English - and those you don't even have to build first.

If you have a semi-working prototype at some point, please email me an
invitation - I am very interested in such a dialog.  :)



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Mike Dougherty
On Jan 10, 2008 9:59 AM, Stephen Reed <[EMAIL PROTECTED]> wrote:
> and that the system is to learn constructions for your examples.  The below
> dialog is Controlled English, in which the system understands and generates
> constrained syntax and vocabulary.
>  [user] The elements of a shit-list can be things.
> [texai] Now I understand that "the book is on my shit-list" commonly means
> that the book is an element of the group of things that you hold in
> disregard.

If you can successfully get this level of language usage from a machine,
can you figure out a way to have people speak as succinctly?



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Mike Dougherty
On Jan 6, 2008 3:07 PM, a <[EMAIL PROTECTED]> wrote:
> Creativity is a byproduct of analogical reasoning, or abstraction. It
> has nothing to do with symbols or genetic algorithms! GA is too
> computationally complex to generate "creative" solutions.

Care to explain a claim that sounds so absolute as to almost certainly
be wrong?

Is the brain too computationally complex to generate "creative"
solutions?  (scare quotes persisted)

Or are you suggesting that GA is more computationally complex than your brain?



Re: [agi] OpenCog

2007-12-28 Thread Mike Dougherty
On Dec 28, 2007 1:55 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Mike Dougherty wrote:
> > On Dec 28, 2007 8:28 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >> Actually, that would be a serious misunderstanding of the framework and
> >> development environment that I am building.  Your system would be just
> >> as easy to build as any other.
> >
> > ... considering the proliferation of AGI frameworks, it would appear
> > that "any other" framework is pretty easy to build, no?  ok, I'm being
> > deliberately snarky - but if someone wrote about your own work the way
> > you write about others, I imagine you would become increasingly
> > defensive.
>
> You'll have to explain, because I am honestly puzzled as to what you
> mean here.

I am not a published computer scientist.  I recognize there are a lot
of brains here working at a level beyond my experience.  I was only
pointing out that using language like "just as easy to build" to
trivialize "your system" could be confrontational.  It may not
deliberately offend anyone, either because they are also not concerned
about this nuance or they discount your attitude as a matter of
course.  I think with slightly different sentence constructions your
ideas would be better received and sound less condescending.  That's
all I was saying on that.

> I mean "framework" in a very particular sense (something that is a
> "theory generator" but not by itself a theory, and which is complete
> account of the domain of interest).  As such, there are few if any
> explicit frameworks in AI.  Implicit ones, yes, but not explicit.  I do
> not mean "framework" in the very loose sense of "bunch of tools" or
> "bunch of mechanisms".

Hmm... I never considered "framework" in that context.  I thought
framework referred to more of a scaffolding to enable work.  As such,
a scaffolding makes a specific kind of building.  Though I can see how
it can be general enough to apply the technique to multiple building
designs.

> As for the comment above:  because of that problem I mentioned, I have
> evolved a way to address it, and this approach means that I have to
> devise a framework that allows an extremely wide variety of AI systems
> to be constructed within the framework (this was all explained in my
> paper).  As a result, the framework can encompass Ben's systems as
> easily as any other.  It could even encompass a system built on pure
> mathematical logic, if need be.

I believe I misunderstood your original statement.  This clarification
makes more sense.


> Oh, nobody expects it to arise "automatically" - I just want the
> system-building process to become more automated and less hand-crafted.

Again, I agree this is a good goal - but isn't it akin to optimizing
too early in a development process?  Sure, there are well-known
solutions to certain classes of problem.  Building a sloppy
implementation of those solutions is foolish when there are existing
'best practice' methods.  Is there currently a best-practice way to
achieve AI?  Let me preemptively agree that we should all continuously
strive to implement better practices than we may currently be
comfortable with - we should be doing that anyway.  (How can we build
self-improving systems if we are not examples of such ourselves?)

> > My guess is that any system that is generalized enough to apply across
> > design paradigms will lack the granular details required for actual 
> > implementation.
> On the contrary, that is why I have spent (am still spending) such an
> incredible amount of effort on building the thing.  It is entirely
> possible to envision a cross-paradigm framework.

With a different understanding of your use of "framework" I am less
dubious of this position.

> Give me about $10 million a year in funding for the next three years,
> and I will deliver that system to your desk on January 1st 2011.

Well, I'd love to have the cash on hand to prove you wrong.  It would
be a nice condition to have for both of us.

> There is, though, the possibility that a lot of effort could be wasted
> on yet another AI project that starts out with no clear idea of why it
> thinks that its approach is any better than anything that has gone
> before.  Given the sheer amount of wasted effort expended over the last
> fifty years, I would be pretty upset to see it happen yet again.

Considering the amount of wasted effort in every other sector that I
have experience with, I think you should keep your expectations low.
Again, I would like to be wrong.



Re: [agi] OpenCog

2007-12-28 Thread Mike Dougherty
On Dec 28, 2007 8:28 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Actually, that would be a serious misunderstanding of the framework and
> development environment that I am building.  Your system would be just
> as easy to build as any other.

... considering the proliferation of AGI frameworks, it would appear
that "any other" framework is pretty easy to build, no?  ok, I'm being
deliberately snarky - but if someone wrote about your own work the way
you write about others, I imagine you would become increasingly
defensive.

> My purpose is to create a description language that allows us to talk
> about different types of AGI system, and then construct design
> variations automatically.

I do believe an academic formalism for discussing AGI would be
valuable to allow different camps to identify their
similarity/difference in approach and implementation.  However, I do
not believe that AGI will arise "automatically" from meta-discussion.
My guess is that any system that is generalized enough to apply across
design paradigms will lack the granular details required for actual
implementation.  I applaud the effort required to succeed at your
task, but it does not seem to me that you are building AGI as much as
inventing a lingua franca for AGI builders.

I admit in advance that I may be wrong.  This is (after all) just a
friendly discussion list and nobody's livelihood is being threatened
here, right?



Re: [agi] NL interface

2007-12-28 Thread Mike Dougherty
On Dec 28, 2007 12:45 AM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
> That's why I want to build an interface that lets users provide grammatical
> information and the like.  The exact form of the GUI is still unknown --
> maybe like a panel with a lot of templates to choose from, or like the
> "autocomplete" feature.

I have previously recommended the interface used in the Alice
programming environment.  (www.Alice.org)

The object browser can be directly acted upon, or the objects can be
drag/dropped into the programming pane where each of the object's
methods are exposed, then the parameters for each method are supplied.
 It quickly becomes an intuitive process.  The resulting statement
makes the syntax obvious and each choice can be updated by reselecting
from a picklist.  Even if you have no interest in animation, the
programming interface does a really good job of providing flexibility
without being too complicated.



Re: [agi] AGI and Deity

2007-12-22 Thread Mike Dougherty
On Dec 22, 2007 8:15 PM, Philip Goetz <[EMAIL PROTECTED]> wrote:
> > > Dawkins trivializes religion from his comfortable first world perspective
> > ignoring the way of life of hundreds of millions of people and offers little
> > substitute for what religion does and has done for civilization and what has
> > come out of it over the ages. He's a spoiled brat prude with a glaring
> > self-righteous desire to prove to people with his copious superficial
> > factoids that god doesn't exist by pandering to common frustrations. He has
> > little common sense about the subject in general, just his
> > >
> >
> > Wow.  Nice to see someone take that position on Dawkins.  I'm ambivalent,
> > but I haven't seen many rational comments against him and his views.
>
> Nice?  Why?  I thought you wanted rational comments.  "Rational" by
> definition means comments giving reasons, which the above do not.

I used the term "nice" where perhaps 'surprising' or 'refreshing'
might have been more appropriate to my intention.  Many of the list I
have read are so anti-religion that I would not expect an AGI thread
to be equally anti-Dawkins.

my use of "rational" might have been sub-optimal also.  Typically
anti- groups exist because they are threatened by whatever it is they
are against.  It appeared to me that John Rose was making a somewhat
informed dismissal of Dawkins theory rather than a
kneejerk/conditioned priori reaction.  Maybe I assumed those opinions
were formed in response to common domain knowledge of Dawkins.

i responded primarily to your question: why  - Hopefully this explains
motivation for my original comment without introducing too many new
'irrational' arguments.   :)



Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Mike Dougherty
On Dec 14, 2007 10:07 AM, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> > If we're not making people smarter with currently available resources,
> > why would we invest in research to discover expensive new technologies
> > to make people smarter?  We need that money to invest in research for
> > expensive new technologies to allow people to be lazier.
>
> You are thinking mostly about the USA, it seems.
>
> I was thinking mostly about the People's Republic of China.

I admit, I am commenting only on my experience in/with the USA.

Is China pushing its people into being smarter?  Are they giving
incentives beyond the US-style capitalist reasons for being smart?



Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Mike Dougherty
On Dec 14, 2007 8:33 AM, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> So, if a certain nation were to make laws allowing this, and to encourage
> research into this, then potentially they could gain a dramatic advantage
> over other nations...
>
> There does therefore seem a possibility for a "brain enhancement race"
> if a case is made to some national government that within say 10-20 years
> effort a massively productivity-increasing brain-enhancement could be made.

Are there any efforts at using nootropic drugs in a 'brain enhancement
race'?  I haven't heard about any, but then I wouldn't, because such a
program would be kept secret.

Making the general public smarter is not in the best interest of
government, which wants to keep us fat, dumb, and (relatively) happy
(read: distracted).

If we're not making people smarter with currently available resources,
why would we invest in research to discover expensive new technologies
to make people smarter?  We need that money to invest in research for
expensive new technologies to allow people to be lazier.



Re: [agi] The Function of Emotions is Torture

2007-12-12 Thread Mike Dougherty
On Dec 12, 2007 9:27 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> It also shows a very limited understanding of emotions.

What do you hope to convey by making comments like this?

I often wonder whether arrogance and belittling others for their opinions
have ever made a positive contribution to a creative endeavor.  Nobody
has yet proven that their pet theory leads inevitably to AGI (which seems
to mean many things to different people, but overall it hasn't been
done).  So what right does that give any individual to denounce
another?

If we, as custodians of our creations, are unable to be consistently
civil to one another then I fear successfully producing 'human-like or
greater' levels of intelligence may be unimaginably bad for us.

Are we ready to become parents to AGI while we act like children?

...Just a semi-random thought sparked by this evening's thread about
emotional torture of AGI...



Re: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-12 Thread Mike Dougherty
On 12/12/07, James Ratcliff <[EMAIL PROTECTED]> wrote:
>   This would allow a large amount of knowledge to be extracted in a
> distributed manner, keeping track of the quality of information gathered
> from each person as a trust metric, and many facts would be gathered and
> checked for truth.

> Something along the lines of a higher quality Yahoo Questions, with an
> active component, and central knowledge base.
> I think the knowledge base is one of the most important pieces of these, and
> hope to start seeing some more of ppls ideas and implementations of KR db's.

I believe where you said "central knowledge base" you meant
"distributed KB" - right?  The idea of keeping a local KB at each node
spreads the burden of storage/bandwidth across every node in the network.
Your trust metrics are how nodes conditionally connect for per-topic
fact-checking.
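
A minimal sketch, with made-up names and a scoring rule of my own choosing,
of the trust-metric bookkeeping described above: each node scores its
sources by how often their contributed facts later check out.

```python
from collections import defaultdict

class TrustTracker:
    """Per-source trust as the smoothed fraction of contributed facts that
    later checked out.  Names and the scoring rule are illustrative only."""
    def __init__(self):
        self.confirmed = defaultdict(int)
        self.total = defaultdict(int)

    def record(self, source: str, fact_checked_out: bool) -> None:
        self.total[source] += 1
        if fact_checked_out:
            self.confirmed[source] += 1

    def trust(self, source: str) -> float:
        # Laplace-smoothed so brand-new sources start near 0.5, not 0 or 1.
        return (self.confirmed[source] + 1) / (self.total[source] + 2)

if __name__ == "__main__":
    t = TrustTracker()
    t.record("alice", True)
    t.record("alice", True)
    t.record("bob", False)
    print(round(t.trust("alice"), 2), round(t.trust("bob"), 2))
```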

I have already volunteered my free CPU/bandwidth to a prototype of
this model.  Of course, I'd like to be a collaborator on the mechanisms
involved in addition to a user of the grid.  Even if it starts out as
only a toy or hobby, it would still teach us a great deal.



Re: [agi] AGI and Deity

2007-12-10 Thread Mike Dougherty
On Dec 10, 2007 6:59 AM, John G. Rose <[EMAIL PROTECTED]> wrote:

>  Dawkins trivializes religion from his comfortable first world perspective
> ignoring the way of life of hundreds of millions of people and offers little
> substitute for what religion does and has done for civilization and what has
> come out of it over the ages. He's a spoiled brat prude with a glaring
> self-righteous desire to prove to people with his copious superficial
> factoids that god doesn't exist by pandering to common frustrations. He has
> little common sense about the subject in general, just his
>

Wow.  Nice to see someone take that position on Dawkins.  I'm ambivalent,
but I haven't seen many rational comments against him and his views.


Re: Re[4]: [agi] Do we need massive computational capabilities?

2007-12-08 Thread Mike Dougherty
On Dec 8, 2007 5:33 PM, Dennis Gorelik <[EMAIL PROTECTED]> wrote:

> What you describe is a set of AGI nodes.
> An AGI prototype is just one such node.
> An AGI researcher doesn't have to develop the whole set at once. It's quite
> sufficient to develop only one AGI node. Such a node will be able to
> work on a single PC.
>

Then I'd like to quantify terminology.  What is the sum of N "AGI Nodes"
where N > 1?  Is that a community of discrete AGI, or a single multi-nodal
entity?

I don't imagine that a single node is initially much more than a narrow-AI
data miner.  The twist that separates this from any commercially available
OLAP cube processor is the infrastructure for acquiring new information from
distributed nodes.  In this sense, I imagine that the internode
communications and transaction record contains the 'complexity' (from
another recent thread) that allows interesting behaviors to emerge - if not
AGI, then at least a novelty worth pursuing.

If the node that was a PC on the internet is a CPU in a supercomputer (or a
PC in a Beowulf cluster) is it more or less a part of the whole?
Semantically I'm not sure you can say "this node is an AGI" any more than
you can say "This neuron contains the intelligence"

I do agree with you that any intelligence that is capable of
asking/answering a question can be considered a 'node' in distributed AGI.
But this high level of agreement makes many assumptions about shared
definitions of important terms.  I would like to investigate those
definitions without the typical bickering about who is right or wrong
because (imo) there are only different perspectives.  The first team to
produce AGI will not necessarily disprove that any other strategy will not
work.


Re: Re[2]: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Dougherty
On Dec 7, 2007 7:41 PM, Dennis Gorelik <[EMAIL PROTECTED]> wrote:

> > No, my proposal requires lots of regular PCs with regular network
> connections.
>
> A properly connected set of regular PCs would usually have way more
> power than a regular PC.
> That makes your hardware request special.
> My point is - AGI can successfully run on a single regular PC.
> Special hardware would be required later, when you try to scale
> out a working AGI prototype.
>

I believe Matt's proposal is not so much about exposure to memory or
sheer computational horsepower - it's about access to learning experience.
A supercomputer atop an ivory tower (or in the deepest government
sub-basement) has immense memory and speed (and a dense mesh of
interconnects, etc., etc.) - but without interaction from outside itself,
it's really just a powerful navel-gazer.

Trees do not first grow a thick trunk and deep roots, then change to growing
leaves to capture sunlight.  As I see it, each node in Matt's proposed
network enables IO to us [the existing examples of intelligence/teachers].
Maybe these nodes can ask questions: "What does my owner know of A?" - the
answer becomes part of its local KB.  Hundreds of distributed agents are now
able to query Matt's node about A (clearly Matt does not have time to answer
500 queries on topic A).

During the course of "processing" the local KB on topic A, there is a
reference to topic B.  Matt's node automatically queries every node that
previously asked about topic A (seeking the first likely authority on the
inference).  My node asks me, "What do you know of B?  Is A->B?"  I
contribute to my node's local KB, and it weights the inference for A->B.
This answer is returned to Matt's node (among potentially hundreds of other
relative weights) and Matt's node strengthens the A->B inference based on
the received responses.  At this point, the distribution of weights for A->B
is spread all over the network, depending on the local KB of each node and
the historical traffic of query/answer flow.

After some time, I ask my node about topic C.  It knows nothing of topic C,
so it asks me directly to deposit information into the local KB (initial
context) - then, through the course of 'conversation' with other nodes, my
answer comes back as the aggregate of the P2P knowledge within a query
radius.  On a simple question I may only allow 1 hour of think time; for a
deeper research project that query radius may be allowed to extend to 2
weeks of interconnect.  During my research, my node will necessarily become
"interested" in topic C - and will likely become known among the network as
the local expert.  ("Local expert" for a topic would be a useful designation
for weighting each node as a primary query target as well as for 'trusting'
the weight of the answers from each node.)

I don't think this is vastly different from how people (as working examples
of intelligence nodes) gather knowledge from peers.
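
Here is a minimal sketch of the query flow described above; all names and
the hop-limit 'radius' are my own stand-ins, not Matt's design.  A node
answers from its local KB when it can, otherwise forwards the question to
the peers it currently rates as most expert on the topic, then caches and
reinforces whatever comes back.

```python
class Node:
    """Toy model of the query propagation sketched above: local KB first,
    then ask the peers weighted as most expert on the topic.  The hop
    limit stands in for the 'radius' of allowed think time."""
    def __init__(self, name):
        self.name = name
        self.kb = {}            # topic -> answer
        self.peers = []         # other Node objects
        self.expertise = {}     # (peer name, topic) -> weight

    def ask(self, topic, hops=2):
        if topic in self.kb:
            return self.kb[topic], self.name
        if hops == 0:
            return None, None
        # Prefer peers previously observed to know about this topic.
        ranked = sorted(self.peers,
                        key=lambda p: self.expertise.get((p.name, topic), 0),
                        reverse=True)
        for peer in ranked:
            answer, source = peer.ask(topic, hops - 1)
            if answer is not None:
                # Strengthen the link that produced a useful answer and
                # cache locally, making this node a 'local expert' too.
                key = (peer.name, topic)
                self.expertise[key] = self.expertise.get(key, 0) + 1
                self.kb[topic] = answer
                return answer, source
        return None, None

if __name__ == "__main__":
    a, b, c = Node("A"), Node("B"), Node("C")
    a.peers, b.peers = [b], [c]
    c.kb["topic C"] = "aggregate answer about C"
    print(a.ask("topic C"))     # found two hops away, then cached at A and B
```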

Perhaps this approach to "intelligence" is not an absolute definition as
much as a "best effort/most useful answer to date" intention.  Even if this
schema does not extend to emergent AGI, it builds a useful infrastructure
that can be utilized by currently existing intelligences as well as whatever
AGI does eventually come into existence.

Matt, is this coherent with your view or am I off base?


Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Dougherty
On Dec 6, 2007 8:23 AM, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> On Dec 5, 2007 6:23 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> > resistance to moving onto the second stage. You have enough psychoanalytical
> > understanding, I think, to realise that the unusual length of your reply to
> > me may possibly be a reflection of that resistance and an inner conflict.
>
> What is bizarre to me, in this psychoanalysis of Ben Goertzel that you 
> present,
> is that you overlook [snip]
>
> Mike, you can make a lot of valid criticisms against me, but I don't
> think you can
> claim I have not originated an "interdependent network of creative ideas."
> I certainly have done so.  You may not like or believe my various ideas, but
> for sure they form an interdependent network.  Read "The Hidden Pattern"
> for evidence.

I just wanted to comment on how well Ben "accepted" Mike's 'analysis.'
 Personally, I was offended by Mike's inconsiderate use of language.
Apparently we have different ideas of etiquette, so that's all I'll
say about it.  (rather than be drawn into a completely off-topic
pissing contest over who is right to say what, etc.)



Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-04 Thread Mike Dougherty
On Dec 3, 2007 11:03 PM, Bryan Bishop <[EMAIL PROTECTED]> wrote:
> On Monday 03 December 2007, Mike Dougherty wrote:
> Another method of doing search agents, in the mean time, might be to
> take neural tissue samples (or simple scanning of the brain) and try to
> simulate a patch of neurons via computers so that when the simulated
> neurons send good signals, the search agent knows that there has been a
> good match that excites the neurons, and then tells the wetware human
> what has been found. The problem that immediately comes to mind is that
> neurons for such searching are probably somewhere deep in the
> prefrontal cortex ... does anybody have any references to studies done
> with fMRI on people forming Google queries?

...and a few dozen brains from which we can extract the useful parts?  :)



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Mike Dougherty
On Dec 3, 2007 12:12 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> I get it : you and most other AI-ers are equating "hard" with "very, very
> complex," right?  But you don't seriously think that the human mind
> successfully deals with language by "massive parallel computation", do you?

Very, very complex tends to exceed one's ability to properly model and
especially to predict.  Even if the human mind invokes some special kind
of magical cleverness, do you think you (judging from your writing)
have some unique ability to isolate that function (noun) without
simultaneously using that function (verb)?   I often imagine that I
understand the working of my own mind almost perfectly.  Those that
claim to have grasped the quintessential bit typically end up so far
over the edge that they are unable to express it in meaningful or
useful terms.

> Isn't it obvious that the brain is able to understand the wealth of language
> by relatively few computations - quite intricate, hierarchical,
> multi-levelled processing, yes, (in order to understand, for example, any of
> the sentences you or I are writing here), but only a tiny fraction of the
> operations that computers currently perform?

I believe you are making that statement because you wish it to be
true.  I see no basis for anything to be "obvious" - especially the
formalism required to define what the term means.  This is due
primarily to the complexity associated with recursive self-reflection.

> The whole idea of massive parallel computation here, surely has to be wrong.
> And yet none of you seem able to face this to my mind obvious truth.

We each continue to persist in our delusions.  Yours may be no
different in the end. :)

> I only saw this term recently - perhaps it's v. familiar to you (?) - that
> the human brain works by "look-up" rather than "search".  Hard problems can
> have relatively simple but ingenious solutions.

How is the look-up table built?  Usually by experience.  When we have
enough similar experiences to "look up" a solution to general adaptive
intelligence, we will likely have been close enough to it for so long
that (probably) nobody will be surprised.



Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread Mike Dougherty
On Dec 3, 2007 5:07 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> When a user asks a question or posts information, the message would be
> broadcast to many nodes, which could choose to ignore them or relay them to
> other nodes that it believes would find the message more relevant.  Eventually
> the message gets to a number of experts, who then reply to the message.  The
> source and destination nodes would then update their links to each other,
> replacing the least recently used links.

> I wrote my thesis on the question of whether such a system would scale to a
> large, unreliable network.  (Short answer: yes).
> http://cs.fit.edu/~mmahoney/thesis.html
>
> Implementation detail: how to make a P2P client useful enough that people will
> want to install it?

That sounds almost word-for-word like something I was visualizing
(though not producing as a thesis).

I believe the next step for such a system is to become an abstraction
between the user and the network they're using.  So if you can hook
into your P2P network via a Firefox extension (consider StumbleUpon
or Greasemonkey), so that it (the agent) can passively monitor your web
interaction, then it could learn to screen emails (for example) or
either pre-chew your first 10 Google hits or summarize the next 100
for relevance.  I have been told that by the time you have an agent
doing this well, you'd already have AGI - but I can't believe this
kind of data mining is beyond narrow AI (or requires fully general
adaptive intelligence).
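
A minimal sketch, under my own assumptions about data structures, of the
link-update rule in Matt's quoted description: nodes that exchange useful
messages link to each other directly, and the least recently used link is
evicted to make room.

```python
from collections import OrderedDict

class PeerTable:
    """Fixed-size routing table (my reading of Matt's description): when
    two nodes exchange a useful message they link directly, and the least
    recently used link is evicted to make room."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.links = OrderedDict()            # peer -> last topic exchanged

    def touch(self, peer, topic):
        if peer in self.links:
            self.links.move_to_end(peer)      # mark as recently used
        elif len(self.links) >= self.capacity:
            self.links.popitem(last=False)    # evict least recently used
        self.links[peer] = topic

if __name__ == "__main__":
    table = PeerTable(capacity=2)
    table.touch("node-1", "compression")
    table.touch("node-2", "vision")
    table.touch("node-3", "language")         # node-1 gets evicted
    print(list(table.links))                  # ['node-2', 'node-3']
```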

Maybe when I get around to the Science part of my BS degree (after the
Arts filler) I will explore this to a greater depth for a thesis.



Re: [agi] Where are the women?

2007-11-28 Thread Mike Dougherty
On Nov 28, 2007 9:23 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> An open-ended, ambiguous language is in fact the sine qua non of AGI.
> Thank you for indirectly pointing that out to me.

Would you agree that an absolutely precise language with zero
ambiguity would be somewhat stifling for use in a "creative" mode?

It seems to me that new points are discovered when different observers
attempt to relate their positions relative to a third point of
discussion.  The analogies, misunderstandings, reconciliations, and
meta-symbols that are required for even the simplest agreement often
generate more context about the other party in the conversation than
the point upon which they eventually agree.

Don't you think?



Re: [agi] Where are the women?

2007-11-28 Thread Mike Dougherty
On Nov 28, 2007 9:20 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Sunday, November 25, 2007
>
> Think Geek. Bet you're not picturing a woman.

Nothing about a [computer] geek necessarily implies gender at all.

To be fair, ask this same question but replace women with any other
'minority' and see if it's still a problem.

Also, ask the question about how many of these stereotypical "geeks"
are successfully employed in the real world these days.  Perhaps the
reason there are so few computer geeks is because those who are
responsible for maintaining corporate computer systems have had to
mature into roles less obviously geek.

I have a very anti-bias bias :)



Re: Re[4]: [agi] Funding AGI research

2007-11-21 Thread Mike Dougherty
On Nov 20, 2007 8:27 PM, Dennis Gorelik <[EMAIL PROTECTED]> wrote:
> Start with weak AI programs. That would push the technology envelope
> further and further, and in the end AGI will be possible.

Yeah - because "weak" AI is so simple.  Why not just make some
run-of-the-mill narrow AI with a single goal of "Build AGI"?  You can
just relax while it does all the work.

"It's turtles all the way down"



Re: [agi] Human vs human-level Intelligence

2007-11-07 Thread Mike Dougherty
On 11/7/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> There is no reason why properly designed AGIs with world knowledge and the
> power to compute from it would have any less common sense than humans.

Ok.  Are you also going to sufficiently cripple your AGI's ability to
think rationally that it is completely comparable in skill to a
human?  With super-human skill at "common sense" and equally
superhuman rationality, will this AGI be considered mentally healthy
if the observing psychologist is not augmented to extra-super-human
reasoning?



[agi] Human vs human-level Intelligence

2007-11-07 Thread Mike Dougherty
http://psychcentral.com/news/2007/11/02/the-logic-of-schizophrenia/1480.html

While reading this article I thought about the discussion point
regarding whether humans will be able to relate to AGI as it grows to
(and beyond) "human-level" intelligence.

"..the results of the study suggest that on a straightforward
interpretation, people with schizophrenia reason more logically than
healthy controls either because they are better at logic, or because
they are worse at common sense."

"better at logic and worse at common sense" seems to me to describe
current hopes for AGI - does that imply a 'pathological' state of
thinking?

Are these the kinds of questions one asks an AGI Psychologist?



Re: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Mike Dougherty
On 11/3/07, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> Ok, shaping the reality gives you pleasure. Machine would read it and
> offer you many orders of magnitude stronger neverending pleasure of
> the same type. And you would say "no, thanks"? There is a certain
> pleasure threshold after which the "I want it" gets irresistible no
> matter what risks are involved.

You are describing a very convoluted process of drug addiction.  If I
can get you hooked on heroine or crack cocaine, I'm pretty confident
that you will abandon your desire to produce AGI in order to get more
of the drugs to which you are addicted.

You mentioned in an earlier post that you expect to have this
monstrous machine invade my world and 'offer' me these incredible
benefits.  It sounds to me like you are taking the blue pill and
living contentedly in the Matrix.  If you are going to proselytize
that view, I suggest better marketing.  The intellectual requirements
to accept AGI-driven nirvana imply the rational thinking which
precludes accepting it.



Re: [agi] NLP + reasoning + conversational state?

2007-11-03 Thread Mike Dougherty
On 11/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Google uses a cluster of 10^6 CPUs, enough to keep a copy of the searchable
> part of the Internet in RAM.

And a list of millions of hits is the ideal way to represent the
results, right?  Ask.com is publicly mocking this fact in an effort to
make themselves look better.  Kartoo.com does a good job of presenting
the relationship of search results to each other.

Suppose you get a tip about some cog sci research that might be
relevant to AGI.  You ask one of your undergraduate assistants to dig
up everything they can find about it.  Sure, they use Google.  They
use Lexisnexis.  They use a dozen primary data gathering tools.
Knowing you don't want 4Gb of text, they summarize all the information
into what they believe you are actually asking for - based on earlier
requests you have made, their own understanding of what you are
looking for and whatever they learn during the data collection
process.  A good research assistant gets recruited for graduate work,
a bad research assistant probably gets a pat on the back at the end of
the semester.

My question was about the feasibility of a narrow-AI research agent as
a useful step towards AGI.  Even if it's not fully adaptable for
general tasks, moderate success would be commercially viable and
profitable.  Or is commercial viability too mundane a consideration
for ivory tower AGI research?



Re: [agi] NLP + reasoning + conversational state?

2007-11-02 Thread Mike Dougherty
On 11/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Well, one alternative is to deduce that aluminum is a mass noun by the low
> frequency of phrases like "an aluminum is" from a large corpus of text (or
> count Google hits).  You could also deduce that aluminum is an adjective from
> phrases like "an aluminum chair", etc.  More generally, you would cluster
> words in the high dimensional vector space of their immediate context, then
> derive rules for moving from cluster to cluster.
>
> However, the fact that this method is not used in the best language models
> suggests it may exceed the computational limits of your PC.  This might
> explain why we keep wading into the swamp.

It is doubtful this kind of examination of information can support
'conversational' language on PC-level computation for a while.  What do you
think about the feasibility of a research request using this method?
e.g.:  Find interesting information about: aluminum - to which the
program builds a structure of information that it can continue
refining and expanding until I return to check on it several hours
later.  If I think it's on the right track for my definition of
interesting, I could let it continue researching for days.  At the end
of several days work, it would have a body of 'knowledge' that
represents a cost to compile which makes it a local authority on this
subject.  Assuming someone else might request information about the
same topic, my local knowledge store could be included in preliminary
findings.

Clearly a distributed network of nodes is never going to be capable of
the brute-force speed of knowing all things in one place.  I don't
usually seek to know all things at once, just a useful number of
things about a limited topic.  That might be good enough to make the
effort worthwhile.
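
As a concrete reference for the corpus-frequency test Matt describes in the
quoted text, here is a minimal sketch on a tiny made-up corpus.  A real run
would use web-scale text or search-hit counts; the corpus, patterns, and
scoring are my own assumptions.

```python
import re
from collections import Counter

# Tiny made-up corpus; a real test would use web-scale text or search hits.
CORPUS = """
the aluminum chair was light . aluminum is a metal . an apple is a fruit .
she bought an apple . aluminum conducts electricity . an aluminum can .
"""

def count_bigrams(text):
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(zip(words, words[1:])), Counter(words)

def mass_noun_score(word, bigrams, unigrams):
    """Crude heuristic: mass nouns rarely follow an indefinite article.
    Returns the fraction of the word's uses preceded by 'a'/'an'."""
    preceded = bigrams[("a", word)] + bigrams[("an", word)]
    return preceded / max(unigrams[word], 1)

if __name__ == "__main__":
    bigrams, unigrams = count_bigrams(CORPUS)
    for w in ("aluminum", "apple"):
        print(w, round(mass_noun_score(w, bigrams, unigrams), 2))
```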



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Mike Dougherty
On 11/2/07, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> It might currently be hard to accept for association-based human
> minds, but things like "roses", "power-over-others", "being worshiped"
> or "loved" are just waste of time with indirect feeling triggers
> (assuming the nearly-unlimited ability to optimize).

I am also assuming you have something more to your bliss-engine than
mind-crushing pleasure.  I believe the strong revulsion expressed
against that is due to the appearance of futility of that state of
being.  If your plan is to consume endless pleasure sensation with no
return on the investment of resource you represent, then "unlimited"
optimization includes removing the power drain you represent to the
collective who actually want to DO something.

I admit to being fairly unmotivated at times, but the scenario you
describe is untenable to my current unaugmented state of being - I
think I would be less interested in the non-existence you propose when
faced with an increased ability to shape my shared reality.



[agi] The Prize Is Won; The Simplest Universal Turing Machine Is Proved

2007-10-26 Thread Mike Dougherty
http://blog.wolfram.com/2007/10/the_prize_is_won_the_simplest.html

Can someone tell me what this means in the context of this list?

Also, that "machine" appears to be fractal.  Is it truly fractal, or
am I incorrectly assuming that due to a grossly self-similar pattern?



Re: [agi] Human memory and number of synapses.. P.S.

2007-10-20 Thread Mike Dougherty
On 10/20/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> Images are *not* an efficient way to store data.  Unless they are
> three-dimensional images, they lack data.  Normally, they include a lot of
> unnecessary or redundant data.  It is very, very rare that a computer stores
> any but the smallest image without compressing it.  And remember, an image
> can be stored as symbols in a relational database very easily as a set of
> x-coords, y-coords, and colors.

Maps ARE symbols.  Whether it's a paper street map or Google Maps,
they're a collection of simple symbols that represent the objects
they're "mapping."  At the most ridiculous, each pixel on the screen
is a symbol that your optic nerve detects and passes to your brain to
find some meaningful correspondence to interpret.

I think the point that Mark is making is that the representation
(display) of data can resemble a map - but the map (or "image") is
only one possible interpretation of the data.  There are algorithms to
provide close-enough approximations of details where there is
insufficient data.  ex:  It is unlikely that an elevation map would
have a 1000 meter variance over a 2 meter gap in the data points if
both sides of the gap are at equal elevations.  That kind of 'smoothing'
cannot be done with images alone - there must be data. If you do have
only map images, you would have to extract data from the map before
you can use it effectively against other data.  So why store the data
in an image in the first place?  Arguably, the data storage mechanism
is irrelevant - there will be decisions made about performance
depending on the initial acquisition and later retrieval realities:
maybe a camera streams video directly to disk to achieve high
throughput, then later analysis compresses the scene into a symbolic
representation at less than a realtime rate.  You can't really argue
that the video stream is an ideal way to manage the details in a
knowledgebase. (eh Mike?)
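
For instance, a minimal sketch (with made-up elevation numbers, and
assuming the gap is interior to the survey) of the kind of gap-filling
that needs the underlying data rather than a rendered map image:

def fill_gaps(samples):
    # linearly interpolate missing elevation readings (None) between known points
    filled = samples[:]
    for i, v in enumerate(filled):
        if v is None:
            left = next(j for j in range(i - 1, -1, -1) if filled[j] is not None)
            right = next(j for j in range(i + 1, len(filled)) if filled[j] is not None)
            t = (i - left) / (right - left)
            filled[i] = filled[left] + t * (filled[right] - filled[left])
    return filled

# a 2-metre gap between near-equal readings gets a plausible value,
# not a 1000-metre spike
print(fill_gaps([120.0, 121.0, None, 120.5]))   # [120.0, 121.0, 120.75, 120.5]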



Re: [agi] Poll

2007-10-18 Thread Mike Dougherty
On 10/18/07, Derek Zahn <[EMAIL PROTECTED]> wrote:
>  Because neither of these things can be done at present, we can barely even
> talk to each other about things like goals, semantics, grounding,
> intelligence, and so forth... the process of taking these unknown and
> perhaps inherently complex things and compressing them into simple language
> symbols throws out too much information to even effectively communicate what
> little we do understand.

Are you suggesting that a narrow AI designed to improve communication
between researchers would be a worthwhile investment?  Imagine it as
the scaffolding required to support the building efforts.  "Natural"
language is enough of a problem in its own right that we have
difficulty talking to each other, to say nothing of building
algorithms that can do it even as poorly as we do.  At least if there
were a way to exchange the context along with an idea, there might be
less confusion between sender and receiver.  The danger of
contextually rich posts (the kind Richard Loosemore often authors) is
that there is too much information to consume.  That's where I think
narrow Assistive Intelligence could add the sender's assumed context
to a neutral exchange format that the receiver's agent could properly
display in an unencumbered way.  The only way I see for that to happen
is that the agents are trained on/around the unique core conceptual
mode of each researcher.

(I know... that's brainstorming with no idea how to begin any implementation)



Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-07 Thread Mike Dougherty
On 10/7/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
> ... logic is unsuited for conversation...

what a great quote



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-06 Thread Mike Dougherty
On 10/6/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> I am sorry, Mike, I have to give up.
>
> What you say is so far away from what I said in the paper that there is
> just no longer any point of contact.

oh.  So we weren't having a discussion.  You were having a lecture and
I was missing the point.  That's fine.  This is a form of
entertainment for me.  I don't have anything to prove.

thanks for your consideration.



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-06 Thread Mike Dougherty
On 10/6/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> In my use of GoL in the paper I did emphasize the prediction part at
> first, but I then went on (immediately) to talk about the problem of
> finding hypotheses to test.  Crucially, I ask if it is reasonable to
> suppose that Conway could have written down the patterns he *wanted* to
> see emerge, then found the rules that would generate his desired patterns.
>
> It is *that* question that is at the heart of the matter.  That is what
> the paper was all about, and that issue is the only one I want to
> defend.  It is so important that we do not lose sight of that context,
> because if we do ignore that (as many people have done), we just go
> around in circles.

Is it reasonable:  I doubt precisely stating your goal is enough to
reach it.  (that is, unless you're Oprah and believe very strongly in
The Secret)

I just realized your question is if Conway could have written two
frames of cells, then reverse-engineered the transformations that move
from A to B.  That transformation would be absolutely correct in
getting from A to B, however as a candidate for the Universal ruleset,
it would have to apply to every transformation from B to C or X to Y.
Probably this candidate would prove unusable outside the fragile case
for which it was written.  I can write a very simple loop to output
the records of a table with known fields; it takes much more
consideration to generalize the solution to any number of unknown
fields.

Consider states T1 and T5.  Use the same transformation hypothesis
generator employed in the paragraph above.  Given four steps from T1
to T5, there may have been one complete transform and three static
states or four 'normal' transformations.  How can a T1 to T5
transformation rule be written?  Consider a cyclic behavior with a
period of 4 - the transformation rule would have to observe a static
state because its observation moments are not granular enough to
detect the changes.  A glider with a period below the observation
interval would give rise to a transformation rule describing, "Given
this collection of cells in open space, at the next observation it will
appear to have moved one unit left."  Of course that rule requires open
space; the number of configurations of impact with other cells during
the observation interval gives rise to an explosion of possibilities.
The hypothesis generation algorithm will have a computational
complexity that is orders of magnitude larger than the classical GoL
rules making observations/computes at each 1 unit of time.
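
A minimal sketch of that sampling problem, using a standard Game of
Life step and the period-2 'blinker':

from itertools import product

def step(live):
    # one Game of Life generation on a set of live (x, y) cells
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                n = (x + dx, y + dy)
                counts[n] = counts.get(n, 0) + 1
    return {c for c, k in counts.items() if k == 3 or (k == 2 and c in live)}

blinker = {(0, 0), (1, 0), (2, 0)}      # horizontal bar, period 2
after_one = step(blinker)               # vertical bar
after_two = step(after_one)             # horizontal again
print(after_two == blinker)             # True: an observer sampling every
                                        # 2 steps would call this 'static'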

To pull back from the simplistic GoL example, consider the planetary
motion example.  I think I better understand the rules prediction you
were talking about - the true planetary motion rules are as
unavailable to Kepler as they are to an observer in the GoL world.  So by
observation, he detects a regularity to the moon's path around the
earth and works out a theory for why that happens.  Then he uses the
theory to predict the future state of the moon - and he's right.  Has
he found the absolute Truth in planetary motion?  No.  He has found
a good enough approximation for the purpose of predicting locally
observed phenomena.  Is there an extra term in the True formula, for
which our local observations conveniently set a value of 1 in a
multiplication process?  Then this predictive function has limitations
on its use.  It is still sufficiently useful while the hidden variable
maintains the value of 1 (for our locally observable universe).  Think
of a multidimensional motion function that has been curried down from
higher dimensions, leaving only those dimensions Kepler could observe.
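
To put that 'hidden variable' idea in symbols - the extra factor
\kappa below is purely hypothetical, only there to illustrate the
point.  Both laws agree on every local observation; only if \kappa
drifts away from 1 does the "good enough" version fail:

T^2 = \frac{4\pi^2}{G M}\,a^3
\qquad\text{vs.}\qquad
T^2 = \frac{4\pi^2}{G M}\,a^3\,\kappa(h), \quad \kappa(h)\approx 1 \text{ for everything observable locally}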

I initially thought we were discussing the patterns that can arise
from examining the actual rules, rather than trying to discover the
rules from observation of states.  In the context of AGI research, I
think the discovery of explanations is a much more interesting
problem.  I think resource limitations make brute force "compute every
possible permutation" approaches to hypothesis generation absolutely
unfeasible.  Even with only a few known parameters, the combinatorial
explosion will cripple the largest machine we have - but with an
unknown number of parameters, the task of finding every permutation is
impossible.  So the ability to reason about classes and test
hypotheses by proof (without requiring exhaustive search) is important
to working intelligence.  I feel there is a great deal of value in
reasoning about AGI as a class of computation rather than a single
solution or program.
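
To make that explosion concrete (simple counting, nothing more): a
deterministic binary rule on a 3x3 neighbourhood must assign alive or
dead to each of the 2^9 = 512 neighbourhood configurations, so the
space of candidate rulesets is already

2^{2^9} = 2^{512} \approx 1.3 \times 10^{154},

before we allow unknown neighbourhood sizes, non-binary states, or
rules that change over time.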



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Mike Dougherty
On 10/5/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> To be abstract, you could subsitute "semi-Thue system", "context-free
> grammar", "first-order logic", "Lindenmeyer system", "history monoid",
> etc. for GoL, and still get an equivalent argument about complexity
> and predicatability.  Singling out GoL as somehow "special" is a red
> herring; the complexity properties you describe are shared by a variety
> of systems and logics.

So you are agreeing with Richard, using confrontational language?

Richard's point to me earlier was exactly this issue about GoL.
Perhaps this was because I bit down hard on some "extremely simple"
case with which I have had some experience (unlike many of the lengthy
graduate papers discussed here).  You could equally substitute
gibberish words for GoL and 'get an equivalent argument' because the
discussion is about the properties of the entire class rather than a
specific instance.



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Mike Dougherty
On 10/5/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> My stock example:  planetary motion.  Newton (actually Tycho Brahe,
> Kepler, et al) observed some global behavior in this system:  the orbits
> are elliptical and motion follows Kepler's other laws.  This corresponds
> to someone seeing Game of Life for the first time, without knowing how
> it works, and observing that the motion is not purely random, but seems
> to have some regular patterns in it.
>
> Having noticed the global regularities, the next step, for Newton, was
> to try to find a compact explanation for them.  He was looking for the
> underlying rules, the low-level mechanisms.  He eventually realised (a
> long story of course!) that an inverse square law of gravitation would
> predict all of the behavior of these planets.  This corresponds to a
> hypothetical case in which a person seeing those Game of Life patterns
> would somehow deduce that the rules that must be giving rise to the
> patterns are the particular rules that appear in GoL.  And, to be
> convincing, they would have to prove that the rules gave rise to the
> behavior.

With GoL you start with the rules and try to predict the behavior;
with planetary motion you observe the behavior and try to discover the rules.

Consider the observation of an oscillating spring or a bouncing ball.
There is an exact function to determine the high-school physics
version of these events.  Of course they always account for "in a
frictionless vacuum" or some other means of eliminating the damping
effects of the environment.  Is the basic function to compute the
trajectory of a launch sufficient to know where the shell will land?
On a windless day, probably.  In a stiff breeze, there may be
otherwise inexplicable behaviors.  Explaining away retrograde orbits
required a fundamental shift in perspective (literally changing the
center of the universe).
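
To make the 'windless day' caveat concrete: the high-school range
formula and the messier equation it quietly idealizes (the linear drag
coefficient c is just a placeholder):

R = \frac{v_0^2 \sin(2\theta)}{g}
\qquad\text{vs.}\qquad
m\,\ddot{\vec r} = m\,\vec g - c\,\dot{\vec r}

The first gives a tidy landing point; the second has no such closed
form, and a stiff breeze adds yet another term.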

If there were a million-line CA world:  So it's a million lines, it'll
take more time but it's the same class of problem, no?  Or are we
talking about rules where one cell can modify its own rules?  Isn't
that the crux of the RSI argument?  Imagine a GoL cell that
spontaneously gains the power to not die of loneliness until the round
after it's isolated.  Suppose also that this cell is able to confer
this ability to any cells that it spawns.  The GoL universe is
fundamentally changed.  Does the single evolved cell have to know the
other rules to add this one?  Have you ever played the drinking game
'asshole' ?  If the game goes on long enough, I doubt anyone can track
all of the rules :)  I digress.

Like those classic physics problems, we don't really need to have the
ideally compact formula to have a usefully working rule.  I think the
real intelligence is getting work done without a complete formula.
Otherwise it would be equivalent to our current computation - nobody is
getting excited about the bubblesort algorithm today.  I guess another
level of intelligence would be the leap from bubblesort to a recursive
method because of its better O() efficiency.
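
For the record, the kind of leap I mean (a throwaway sketch; both
functions are the standard textbook versions):

def bubble_sort(xs):                     # O(n^2) comparisons
    xs = xs[:]
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def merge_sort(xs):                      # O(n log n), recursive divide and merge
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(bubble_sort([5, 2, 9, 1]), merge_sort([5, 2, 9, 1]))  # same output, very different scaling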

.. gotta stop here because there's too much distraction around me to
think clearly.



Re: [agi] Religion-free technical content

2007-10-05 Thread Mike Dougherty
On 10/5/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > Then I guess we are in perfect agreement.  Friendliness is what the
> > average
> > person would do.
>
> Which one of the words in "And not my proposal" wasn't clear?  As far as I
> am concerned, friendliness is emphatically not what the average person would
> do.

Yeah - Computers already do what the average person would:  wait
expectantly to be told exactly what to do and how to behave.  I guess
it's a question of how cynically we define the average person.



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Mike Dougherty
On 10/5/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> I hear you, but let me quickly summarize the reason why I introduced GoL
> as an example.

Thank you.  I appreciate the confirmation of understanding my point.
I have observed many cases where the back-and-forth bickering on
email lists has been based on an unwillingness to concede another's
point.  I am the first to admit that I have more questions than
answers.

> I wanted to use GoL as a nice-and-simple example of a system whose
> overall behavior (in this case, the existence of certain patterns that
> are "stable" or "interesting") seems impossible to predict from a
> knowledge of the rules.  I only wanted to use GoL to *illustrate* the
> general class, not because I was interested in GoL per se.

Gotcha - GoL is an example case of a class.  You threw it out there to
make a point.  Let's just say it is the only symbol on the table.  In
order to assimilate the idea you are proposing, the model needs to be
examined.  So if we discuss this one example it is not to the
exclusion of the concept you're trying to illustrate, but a precursor
to it.  In my own concept formation, this step is like including
libraries or compiling a function.  I think sometimes you get
frustrated that it takes so long for people to accomplish this step.
Part of the problem is that email is such a low bandwidth medium.
(another part is that the smarter we are, the quicker we "get" stuff
and we assume others should be as capable)

> The important thing is that this idea (that there are some systems that
> show interesting, but unexplainable, behavior at the global level) has
> much greater depth and impact than people have previously thought.

Can you give an example of a ruleset that CAN be used to predict
global behavior?

"interesting but unexplainable behavior" - would you define this class
to include chaos or chaotic systems?  I'm trying to reason to the
general case, but I don't have enough other properties of the class in
mind to usefully visualize. (conceptualize?)  I think those
researchers who have invested in studying chaos are people who have
given this idea a great deal of depth and impact.  It's a hard problem
because our normal 'scientific' method fails almost by definition.  I
believe the framework you have discussed is a proposal for a method of
investigating this behavior.  Am I far off, or am I in the general
vicinity?



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-04 Thread Mike Dougherty
On 10/4/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> All understood.  Remember, though, that the original reason for talking
> about GoL was the question:  Can there ever be a scientific theory that
> predicts all the "interesting creatures" given only the rules?
>
> The question of getting something to recognize the existence of the
> patterns is a good testbed, for sure.

Given finite rules about a finite world with effectively
unlimited resources, it seems that every "interesting creature" exists
as the subset of all permutations minus the noise that isn't
interesting.  The problem is in a provable definition of interesting
(which was earlier defined, for example, as 'cyclic').  Also, who is
willing to invest unlimited resource to exhaustively search a "toy"
domain?  Even if there were parallels that might lead to formalisms
applicable in a larger context, we would probably divert those
resources to other tasks.  I'm not sure this is a bad idea.  Perhaps
our human attention span is a defense measure against wasting life's
resources on searches that promise fitness without delivering useful
results.

In the case of RSI, the rules are not fixed.  I wouldn't dare call
them mathematically infinite, but an evolving ruleset probably should be
considered functionally unlimited.  I imagine Incompleteness applies
here, even if I don't know how to explicitly state it.  I believe
finding "all" of the interesting creatures is nearly impossible.
Finding "an" interesting creature should be possible given a
sufficiently exact definition of interesting.  After some amount of
search, the results probably have to be expressed as a confidence
metric like, "given an exhaustive search of only 10% of the known
region, we found N candidates that match the criteria
within X degrees of freedom.  By assessment of the distribution of
candidates in the searched space, extrapolation suggests there may be
{prediction formula result} 'interesting creatures' in this universe"

The Drake equation is an example of this kind of answer/function.
Ironic that its purpose is to determine the number of intelligences
in our own universe.  Of course Fermi paradox, testable hypothesis,
etc. etc. - the point is not about whether GoL searches or SETI
searches are any more or less productive than each other.  My interest
is in how intelligences of any origin (natural human brains,
human-designed CPU, however improbable aliens) manage to find common
symbols in order to create/exchange/consume ideas.  If we have this
difficulty communicating with each other given the shared KB of
classes (archetypes?) of human existence, how likely is it that we
will even recognize non-human intelligence if/when we encounter it?
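
For reference, the Drake product itself - every input below is an
illustrative guess, which is exactly why the output is a confidence
statement rather than a fact:

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    # N = R* . fp . ne . fl . fi . fc . L
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# made-up parameter values, for illustration only
print(drake(R_star=7, f_p=0.5, n_e=2, f_l=0.33, f_i=0.01, f_c=0.01, L=10000))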



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-04 Thread Mike Dougherty
On 10/4/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Do it then.  You can start with interesting=cyclic.

should GoL gliders be considered cyclic?

I personally think a candidate-AGI that finds a glider to be similar
to a local state of cells from N iterations earlier would be particularly
astute.  (assuming this observation is learned rather than hard-coded
by the developer)
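
A rough sketch of what 'noticing' that could look like: compare each
new state, translated back to the origin, against earlier states (a
standard Game of Life step function is included so the snippet runs on
its own):

from itertools import product

def step(live):
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                n = (x + dx, y + dy)
                counts[n] = counts.get(n, 0) + 1
    return {c for c, k in counts.items() if k == 3 or (k == 2 and c in live)}

def normalize(cells):
    # translate the pattern so its bounding box starts at the origin
    x0 = min(x for x, _ in cells); y0 = min(y for _, y in cells)
    return frozenset((x - x0, y - y0) for x, y in cells)

def find_translating_pattern(start, generations):
    seen = {}                                 # shape -> (step, absolute cells)
    cells = start
    for t in range(generations + 1):
        shape = normalize(cells)
        if shape in seen and cells != seen[shape][1]:
            return seen[shape][0], t          # same shape, new position: a 'glider'
        seen.setdefault(shape, (t, cells))
        cells = step(cells)
    return None

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(find_translating_pattern(glider, 8))    # (0, 4): the t=4 state is the t=0 shape, shifted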

Human 'players' of GoL will stop a run after reaching a stable cycle
because it is no longer interesting.  The collection of cells
comprising a glider stops being interesting when we predict that it
will never 'hit' anything.  Some of the seeds that ship with popular
GoL implementations are absolutely amazing.  I'm sure after I
understand their nature the novelty will wear off.  :)

I've been thinking about GoL, pattern recognition, vision, concept
representation.  I have some ideas that I'd like to experiment with,
but I'm not really sure yet how to express them let alone implement a
test.



Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> I think your notion that post-grads with powerful machines would only
> operate in the space of ideas that don't work is unfair.

Yeah, i can agree - it was harsh.  My real intention was to suggest
that NOT having a bigger computer is no excuse for not yet having a
design that works.  IF you find a design that works, the bigger
computer will be the inevitable result.

> Your last paragraph actually seems to make an argument for the value of
> clock cycles because it implies general intelligences will come through
> iterations.  More opps/sec enable iterations to be made faster.

I also believe that general intelligence will require a great deal of
cooperative effort.  The frameworks discussion (Richard, et al) could
provide positive pressure toward that end.  I feel we have a great
deal of communications development to do before we can even begin to express
the essential character of the disparate approaches to the problem,
let alone be able to collaborate on anything but the most basic ideas.
 I don't have a solution (obviously) but I have a vague idea of a type
of problem.



Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> In fact, if the average AI post-grad of today had such hardware to play
> with, things would really start jumping.  Within ten years the equivents
> of such machines could easily be sold for somewhere between $10k and
> $100k, and lots of post-grads will be playing with them.

The only value I see in giving post-grads the kind of computing
hardware you are proposing is that they can more quickly exhaust the
space of ideas that won't work.  More lines of code do not make a
program more elegant, and more clock cycles per unit time do not
make a computer any smarter.

Have you ever computed the first dozen iterations of a Sierpinski
gasket by hand?  There appears to be no order at all.  Eventually, over
enough iterations the pattern becomes clear.  I have little doubt that
general intelligence will develop in a similar way:  there will be
many apparently unrelated efforts that eventually flesh out in
function until they overlap.  It might not be seamless but there is
not enough evidence that human cognitive processing is a seamless
process either.
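
The 'chaos game' version of that exercise, for anyone who wants to
watch the order appear (a small sketch; the vertex coordinates are
arbitrary):

import random

def chaos_game(n_points, seed=0):
    # jump halfway toward a randomly chosen vertex of a triangle, repeatedly
    random.seed(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = 0.25, 0.25
    points = []
    for _ in range(n_points):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2
        points.append((round(x, 3), round(y, 3)))
    return points

print(chaos_game(12))     # the first dozen points look like noise;
                          # plot ~50,000 of them and the gasket is unmistakable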



Re: [agi] intelligent compression

2007-10-03 Thread Mike Dougherty
On 10/3/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> The higher levels detect complex objects like airplanes or printed words or
> faces.  We could (lossily) compress images much smaller if we knew how to
> recognize these features.  The idea would be to compress a movie to a written
> script, then have the decompressor reconstruct the movie.  The reconstructed
> movie would be different, but not in a way that anyone would notice, in the
> same way that pairs of images such as
> http://www.slylockfox.com/arcade/6diff/index.html would have the same
> compressed representations.

Is this because we use a knowledgebase of classes for things like
"airplane" that can be used to fill in the details that are lost
during compression?

Can that KB be seeded, or must it be experientially evolved from a
more primitive percept?  Consider how little useful skill a human baby
has compared to other animals.  Perhaps that's the trade-off for a high
potential general intelligence: there must be a lot of faltering and
(semi-) useless motion while learning the basics.



Re: [agi] intelligent compression

2007-10-02 Thread Mike Dougherty
On 10/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> It says a lot about the human visual perception system.  This is an extremely
> lossy function.  Video contains only a few bits per second of useful
> information.  The demo is able to remove a large amount of uncompressed image
> data without changing the compressed representation in our brains by
> exploiting only the lowest levels of the visual perception function.

re: exploiting "only" the lower levels

What are the higher levels of visual function?  How could they be exploited?



[agi] intelligent compression

2007-10-02 Thread Mike Dougherty
On 9/22/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> You understand that I am not proposing to solve AGI by using text compression.
>  I am proposing to test AI using compression, as opposed to something like the
> Turing test.  The reason I use compression is that the test is fast,
> objective, and repeatable.  It is less expensive to maintain a compression
> benchmark than a Loebner prize.

demo/discussion: http://www.seamcarving.com/
try it: http://rsizr.com/

What are the implications of this for robot vision and memory?  I
understand that this is not technically AGI.  It seems to me that this
is approximating the kind of selective importance that we use to
remember details about a scene.  This is narrow intelligence for
preserving the apparent visual object in a picture, but I imagine
there would be an analogous process for preserving other data implied
by a scene.
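
The core of the seam-carving trick is a small dynamic program over a
per-pixel 'energy' (importance) map - a sketch with a toy energy grid
standing in for a real image:

def min_vertical_seam(energy):
    # energy: rows of per-pixel importance; returns one column index per row
    h, w = len(energy), len(energy[0])
    cost = [row[:] for row in energy]
    for y in range(1, h):
        for x in range(w):
            parents = [cost[y - 1][c] for c in (x - 1, x, x + 1) if 0 <= c < w]
            cost[y][x] += min(parents)
    seam = [min(range(w), key=lambda x: cost[h - 1][x])]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        seam.append(min((c for c in (x - 1, x, x + 1) if 0 <= c < w),
                        key=lambda c: cost[y][c]))
    return list(reversed(seam))

toy_energy = [[9, 1, 9, 9],
              [9, 9, 1, 9],
              [9, 1, 9, 9]]
print(min_vertical_seam(toy_energy))   # [1, 2, 1]: the cheapest path to remove

Removing that seam narrows the image by one column while leaving the
high-energy (visually important) pixels alone.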



Re: [agi] A problem with computer science?

2007-09-28 Thread Mike Dougherty
On 9/28/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Not necessarily.  In my work I measure intelligence to 9 significant digits.

Ok sure, by what unit are you measuring?  :)



Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread Mike Dougherty

On 6/11/07, James Ratcliff <[EMAIL PROTECTED]> wrote:

Interesting points, but I believe you can get around alot of the problems
with two additional factors,
a. using either large quantities of quality text, (ie novels, newspapers) or
similar texts like newspapers.
b. using a interactive built in 'checker' system, assisted learning where
the AI could consult with humans in a simple way.


I would hope that a candidate AGI would have the capability of
emailing anyone who has ever talked with it.  ex:  After a few
minutes' chat, the AI asks the human for their email in case it
has any follow-up questions - the same way any human interviewer
might.  If 10 humans are asked the same question, the statistically
oddball response can probably be ignored (or reduced in weight) to
clarify the answer.
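
One crude way to do the down-weighting (a sketch only; real responses
would need to be normalized before they could count as 'the same
answer'):

from collections import Counter

def answer_weights(responses):
    counts = Counter(responses)
    return {answer: n / len(responses) for answer, n in counts.items()}

print(answer_weights(["yes"] * 9 + ["no"]))   # {'yes': 0.9, 'no': 0.1}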



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Dougherty

On 5/6/07, Mark Waser <[EMAIL PROTECTED]> wrote:

>> Yes, I'll match my understanding and knowledge of, and ideas on,  the
free will issue against anyone's.

Arrogant much?

>> I just introduced an entirely new dimension to the free will debate. You
literally won't find it anywhere. Including Dennett. Free thinking. If we
are free to decide,  then it follows we are also free to think

Oh, please . . . .


Seriously.  The only other identity I have ever encountered with such
zealous belief in their own accomplishments is A. T. Murray /
Mentifex.   I wonder what would happen if these two super-egos (pun
intended) were to collide?

Sorry to contribute so little to the actual discussion, but really...



Re: [agi] What would motivate you to put work into an AGI project?

2007-05-03 Thread Mike Dougherty

On 5/3/07, Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:

On 5/3/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> But how does Speagram resolve ambiguities like this one? ;-)
>
Generally, Speagram would live with both interpretations until one of
them fails or it gets a chance to ask the user.


How would that be possible?  I don't even know how to imagine such a thing.



Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Mike Dougherty

On 4/30/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

The linguistic sign bears NO RELATION WHATSOEVER to the signified.


true


The only signs that bear relation to, and to some extent reflect,  reality
and real things are graphics [maps/cartoons/geometry/ icons etc] and images
[photos, statues, detailed drawings, sound recordings etc.].


What does "warm" look like?  How about "angry" or "happy"?  Can you
draw a picture of "abstract" or "indeterminate"?  I understand (I
think) where you are coming from, and I agree wholeheartedly - up to
the point where you seem to imply that a picture of something is the
totality of its character.  I don't believe that's what you are
saying, but you did not specify how far your analogy should be taken.


A picture is not worth a thousand words, it is worth an INFINITY of words.


Careful throwing around INFINITY like that :)  Last time I looked, my
desktop resolution was only so high, and while there are a great
number of permutations of meaning that can be inferred from those
pictures, eventually the value curve of all that can be said probably
looks logarithmic.

Concepts (should?) grow like crystals, with new ideas along the
incomplete edges.  They're never really complete as long as new ideas
continue to be incorporated, but only those (ideas) that follow the
existing structure and pattern can fit.

Does anyone have a more formal definition of the concept tree
mentioned earlier in this thread? (A url to a whitepaper or something
would be great)



Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Mike Dougherty

On 4/30/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

it is in the human brain. Every concept must be a tree, which can
continually be added to and fundamentally altered.  Every symbolic concept
must be grounded in a set of graphics and images, which are provisional and
can continually be redrawn.



That plastic template, as with all concepts, is permanently open to
revision. Probably, all the visualisations of house that your brain produces



And that is how we learn language - and indeed all our knowledge about the
world - provisionally. Everyone's personal history of learning is a history
of continually having ascribed meanings corrected.


graphics, image, redrawn, visualizations - all indicative of a high
degree of visual-spatial thinking.  I'm curious, are your own AGI
efforts modelled on this mode of thought?  I ask because I wonder
if the machine intelligence we build will "envision" concepts in an
analogous way to our own processes.  If we (humans) currently
visualize because that part of our brain evolved the largest bandwidth
and working set out of necessity for survival, what pressure would
facilitate that evolution in the machine we build?  (or is it by
design that we model the machine after our own thought process)


Is the notion of a 'template' too fixed even in plastic?  Though it
requires a lot of computation, I imagine the probability would need to
be calculated in real-time at each point in context.  If the root node
of the 'house' tree were evaluated for a realtor it would weight the
leaves associated with structural information and property value more
highly than if the 'house' concept were evaluated as a sibling idea to
'home.'  Essentially every fact needs a confidence metric to determine
how well it relates to the current scope of investigation.  In the
case of double and triple entendre, we humans (sometimes) delight in
the unexpected relation across different contexts by way of a
particular word's multiple potential meanings.
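
A toy version of that context weighting, with invented attribute names
and weights:

def rank_facts(facts, context_weights):
    # score each (attribute, value) pair by how much the current context cares about it
    return sorted(facts, key=lambda f: context_weights.get(f[0], 0.1), reverse=True)

house = [("square_footage", 2400), ("assessed_value", 310000),
         ("feels_like", "home"), ("roof", "slate")]
realtor_context = {"assessed_value": 0.95, "square_footage": 0.9, "roof": 0.6}
sibling_of_home_context = {"feels_like": 0.9}

print(rank_facts(house, realtor_context)[0])          # ('assessed_value', 310000)
print(rank_facts(house, sibling_of_home_context)[0])  # ('feels_like', 'home')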

Everyone's personal history of relationships between ideas is what
makes each of us unique.  In the elephant/chair scenario, my own
childhood of watch cartoons prevailed in visualizing a context where
an elephant in a chair was not a physics problem.  If an AGI is
raised/trained on cartoons, it will probably develop a wildly
different perspective of subjective reality than if it trained in a
military application.



Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-30 Thread Mike Dougherty

On 4/29/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

There is something fascinating going on here - if you could suspend your
desire for precision, you might see that you are at least half-consciously
offering contributions as well as objections. (Tune in to your constructive
side).


suspended.  I do most of my best thinking half-consciously.

You made an interesting point about the response that AI should have
for the directive, "Move from A to B or D" - that is, to ask for
clarification.  I think that is an important point.  We seem to expect
"computers" to correctly do our bidding even when we aren't sure what
we actually want.  (ex: Google has to guess what I'm looking for if I
enter "AJAX", since I just looked up javascript I am probably not
interested in the cleaning product)  There are context clues, which
are important to grasp - which I think is what Richard was suggesting
(AGI had better be able to figure out context because people assume so
much)  It seems extra daunting to expect a machine to divine this
context when a human can simply ask for it.

fwiw - Mike, thanks for understanding my point over just the words in
my post.  I feel it is the sender/author's responsibility to write
clearly in order that the message content is easily consumed.  It's
the reader's task to overcome the transmission errors and to fill in
the gaps where the sender is unclear.  I believe this is a truism that
must be on the table when attempting to build machine intelligence
which interacts with humans.



Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Mike Dougherty

On 4/29/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:

The idea that human beings should constrain themselves to a simplified,
artificial kind of speech in order to make life easier for an AI, is one
of those Big Excuses that AI developers have made, over the years, to
cover up the fact that they don't really know how to build a true AI.

It is a temptation to be resisted.

No retreat to hard-coded blocks world programs.


You're right - we should continue to use language poorly as is our
right as humans to communicate past each other without identifying the
failure of either the sender or the recipient for message integrity.
I see now how that makes much more sense for email lists, so it should
apply well to "true AGI"

I'm not exactly clear on "true AGI" - do any humans possess this trait?

ok, I know there's a snarky tone here, but I thought I had a valid
point (I'm sure I'll be shown my error soon enough)



Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Mike Dougherty

On 4/29/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

He has a simple task: "Move from A to B or D". But the normal answer "Walk
it" is for whatever reason no good, blocked.


Disambiguate-
1. Move from starting point A to either B or D
2. Move from either A to B or take another option D

I feel we should practice unambiguous speech with each other so we can
have some hope of conversation with machine intelligence.  The less
guessing it has to do about what we actually meant, the more productive
the dialog can be.  It helps between people too.

humorous example:  My wife and I had finished a discussion about
various ways to contain the dog in our yard.  After a long pause, I
asked, "How would you feel about fencing in our yard?"  She threw me a
shocked expression and asked, "Are you challenging me to a duel?"  I
laughed - I meant putting a fence around the yard; she understood a
sword fight with rapiers.



Re: [agi] Uh oh ... someone has beaten Novamente to the end goal!

2007-04-29 Thread Mike Dougherty

On 4/29/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

Holding a new computer at your home such as myself will take very
little space( less than 2 square meters) and this will never waste
your time (you can use your new computer whenever you want) and you
will be of course able to continue your private life with your
friends/ boy friend without any change


Sounds like a new way to get distributed IP addresses for SPAM/ddos -
"Put my uploaded consciousness contained in this 2 square meter box on
your network and go about your normal life"

Yeah, right.



Re: [agi] rule-based NL system

2007-04-28 Thread Mike Dougherty

On 4/28/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

And what if I say to you: "sorry but the elephant did sit on the chair" -
how would you know that I could be right?


I could assign a probability of truthfulness to this statement that is
dependent on how many other assertions you have made and the frequency
with which those assertions have proven to be accurate models of the
eventual reality they predicted or described.  After a sufficient
number of occurrences of truthful assertions, there is a level of
trust associated with the believability of your future statements.
Suppose you intentionally lied to me.  Future probability assignments
would have to include the measurement of your proven inaccuracy.
Hopefully a system built on this principle has some failsafe for
statements like "I am lying."
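
One simple way to keep that running estimate is Laplace's rule of
succession - just one option among many, and it does nothing about the
"I am lying" failsafe:

def trust(accurate, total):
    # estimated probability that the next assertion will be accurate
    return (accurate + 1) / (total + 2)

print(trust(0, 0))     # 0.5  - no track record yet
print(trust(9, 10))    # ~0.92 after a mostly accurate history
print(trust(2, 10))    # ~0.33 after proven inaccuracy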


except in rare cases no such rules. You've actually made them up - and your
brain did that for you by using its imagination. It's only by imagination
that you can work out which of thousands of animals can or can't sit in a


Is imagination derived from earlier encounters with elephants and
chairs?  My original mental picture was a cartoonish elephant in an
equally cartoonish chair.  I had no details of weight or physics - I
assumed the elephant was the primary object of the sentence and
therefore the chair would need to accommodate the elephant.  If the
sentence were "the chair was sat on by an elephant" it would have
conjured a different meaning due to the primacy of the objects.  This
is where an unambiguous language would help prevent the parse errors
inherent in english (or possibly even human language in general)



Re: [agi] AGI and Web 2.0

2007-03-29 Thread Mike Dougherty

On 3/29/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:

How does the new phenomenon of web-based collaboration change the way we
build an AGI?  I feel that something is amiss in a business model if we
don't make use of some form of "Web 2.0 ".

A problem is that we cannot take universal ballots every time on every
trivial issue.  So probably we need a special adminstrative committee for
decision-making.


I think the primitive prototype should be about inter-node
communication methodology.  A basic API for "Tell me what you know
about X, Y, Z"  would allow nodes utilizing different storage or
processing methods to interact with each other.  ex:  I ask for
information about some process flow and I get back a chart.  I am not
particularly good at consuming a chart, so I store this content as
possibly relevant but currently less than ideally consumable.
Eventually I may develop a way to get the meaning out of that media
format.  Meanwhile if someone asks ME for that same process flow, I
can communicate in my more 'native' expression of
words/paragraphs/etc. and simply pass along the chart.  That consumer
might prefer the chart.  Assuming I pass along the chart with proper
source identification, I have communicated not only my knowledge of
the subject, but a potential forward reference for further query.
(conceivably the source of that chart might have gained new
information on the subject while I was storing it)

whether nodes represent people in a social network or neurons in a
brain, I believe the interconnect protocol is what makes the whole
greater than the mere sum of the parts.
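
A bare-bones sketch of that exchange (the class and field names are
invented for illustration): each node answers in its native format
first and passes along stored-but-unconsumed media with the source
attribution intact:

from dataclasses import dataclass, field

@dataclass
class Fragment:
    topic: str
    media: str            # e.g. "text" or "chart"
    content: object
    source: str           # where the fragment originally came from

@dataclass
class Node:
    name: str
    native_media: str
    store: list = field(default_factory=list)

    def learn(self, fragment):
        self.store.append(fragment)

    def query(self, topic):
        hits = [f for f in self.store if f.topic == topic]
        native = [f for f in hits if f.media == self.native_media]
        forwarded = [f for f in hits if f.media != self.native_media]
        return native + forwarded      # own words first, then pass-along media

a = Node("A", "text")
a.learn(Fragment("process flow", "chart", "<chart bytes>", source="node C"))
a.learn(Fragment("process flow", "text", "step 1 feeds step 2...", source="A"))
print([(f.media, f.source) for f in a.query("process flow")])
# [('text', 'A'), ('chart', 'node C')]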



Re: [agi] general weak ai

2007-03-09 Thread Mike Dougherty

On 3/9/07, Pei Wang <[EMAIL PROTECTED]> wrote:

This understanding assumes a "you" who does the "pointing", which is a
central controller not assumed in the Society of Mind. To see
intelligence as a toolbox, we would have to assume that somehow the
saw, hammer, etc. can figure out what they should do in building the
deck all by themselves.


with a million monkeys on a million typewriters in a million years...

Do you think those monkeys would be any more likely to produce "X" if
they had any idea what A, B, and C were?  Even if they fail at
producing work X, the efforts P, Q, R would likely be entertaining or
useful.



Re: [agi] general weak ai

2007-03-08 Thread Mike Dougherty

On 3/6/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:


> Well what is intelligence if not a collection of tools?  One of the hardest

Thinking of a mind as a toolkit is misleading.  A mind must contain a
collection of tools that
synergize together so as to give rise to the appropriate high-level
emergent structures and dynamics.
The tools are there, but focusing on their individual and isolated
functionality is not terribly
productive in an AGI context.


Yeah, if I leave a workbench worth of carpentry tools on a pile of
lumber, I don't expect to have an emergent deck arise...

(though that'd be cool)



Re: **SPAM** Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-21 Thread Mike Dougherty

On 2/21/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:

The language thread has been reasonably abstruse already, but proposing
doing AI by stored procedures in relational databases backed by
~10 ms access time devices... Hey, why not tapes? I think you could
implement a reasonably competent Turing machine with an Ultrium
tape library. Why not implementing it on that? You can't beat the
costs per bit. As to 10^23 words/s, well...


I'm working on an AI driven by a water wheel, with memory stored as rubber
ducks floating in a pond.  The current problems I'm faced with are
maintaining accurate tracking of regional rainfall (for both the wheel
and the pond) as well as minimizing wind effects on the ducks on the
pond.

I'm surprised anyone has such strong opinions about what absolutely
will not work considering that we do not have evidence of what
absolutely will work.  Clearly there are conceivable tasks that would
sub-optimal to do in a commercial database.  Isn't the whole idea of
modular development that the performance failure of a DB can be
managed after the consumer of that data shows significant promise?  Or
do we have to assume that AI developers should write every module from
scratch?  I wouldn't want to spend considerable effort building a
rigid or formal knowledge base because I'd feel that I should stick
with it in order to justify its cost even after it appeared that the
engine (brain?) might be better with a different KB.

Since there is still no agreed-upon right way to do A[G]I, doesn't it
make sense to be able to rapidly try as many different potential
solutions as possible in order to assess each method's promise?



Re: [agi] Re: Languages for AGI

2007-02-18 Thread Mike Dougherty

On 2/18/07, Mark Waser <[EMAIL PROTECTED]> wrote:

personal toolbox).  The programmers who are ending up out of work are the
ones who keep re-inventing the wheel over and over again.


Thinking about the amount of redundant (wasted) effort involved with
starting from scratch on an AI project, I considered an old adage and
modified it:

If you are not standing on the shoulders of giants, you are likely to
be trampled by them.

.. though I guess in the case of AGI, even giants have only taken a
few tentative steps



Re: [agi] Priors and indefinite probabilities

2007-02-12 Thread Mike Dougherty

On 2/11/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:

We don't use Bayes Nets in Novamente because Novamente's knowledge
network is loopy.  And the peculiarities that allow standard Bayes net
belief propagation to work in standard loopy Bayes nets, don't hold up


I know what you mean by the term "loopy" but you should be careful how
you use it in casual conversation, or else you risk painting a very
different picture of NM.  :)



Re: [agi] Probabilistic consistency

2007-02-07 Thread Mike Dougherty

On 2/7/07, Kevin Peterson <[EMAIL PROTECTED]> wrote:


My program crashes, prints something about 8192.
My program crashes, prints something about 10001.
My program crashes, prints something about 3721.



I'd wonder if you've seen the movie "Pi" and perhaps taken it too seriously :)



Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Mike Dougherty

On 1/19/07, Joel Pitt <[EMAIL PROTECTED]> wrote:

It's been a while since I looked at Lojban or your Lojban++, so was
wondering if english sentences translate well into Lojban without the
sentence ordering changing? I.e. given two english sentences, are
there any situations where in lojban the sentences would be more
correctly put in the reverse order? If there are, then manually
inserting placemarks in the original and translated version could be
used to delineate between regions of meaning and assist an AI in
reading the text while learning english.

I bet it'd be a great way of learning Lojban too! ;)


Lojban/Lojban++ is inherently an explicit language, right?  Then given
an environment of objects and actions, the AI's-avatar could be asked
to perform actions that we pick from an interface.  How many
person-hours of interaction have gone into telling a guy in a chicken
suit to flap his arms, jump, etc.?  Imagine how much more fun people
would have with a greater range of action/object potential.  If this
were a game in the same vein as the Google Image Labeler, where
another participant verified that the AI correctly completed the
requested action, the "language" could be more easily learned - English
expressed from person1 to person2, lojban++ expressed from person2 to
the AI, confirmation from person1 that the AI completed the request.  Win for the
AI to see english and lojban++ of the same action, Win for person2 to
have direct experiential learning by translating to lojban++ (I/we
need interactive learning mechanisms to be fluent enough in lojban++
to think clearly in it)  and person1 gets the same kicks as telling
the man in the chicken suit to hop on one foot.  (I never really
understood that, but people forwarded that URL a lot)

Ben, I used lojban++ in this example and was specifically thinking of
NM because you have expressed (near-)readiness for virtual embodiment.
I would love to be able to interact with your baby via an avatar of
my own, but I am currently less than baby-capable with respect to
lojban++.  (Although this semester I am taking Discrete math, so that
may help with the 'logical' thinking)  Frankly, I feel I need to
better understand how my own brain works before I can attempt to build
a copy.  Hopefully as my skills rise to meet this challenge, the
interface tools will mature to lower the prerequisites for
involvement.

humour:  I originally spelled "Labeller" and gmail's spellcheck
offered "libeller" - which would be a fun google product, wouldn't it?
"Image Libeller"



Re: [agi] SOTA

2007-01-06 Thread Mike Dougherty

On 1/6/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:


Needless to say, I don't consider cleaning up the house a particularly
interesting goal for AGI projects.  I can well imagine it being done
by a narrow AI system with no capability to do anything besides
manipulate simple objects, navigate, etc.

Being able to understand natural language commands pertaining to
cleaning up the house is a whole other kettle of fish, of course.
This, as opposed to the actual house-cleaning, appears to be an
"AGI-hard" problem...



But if the AGI were built, wouldn't it be the intelligence behind pretty much
the entire world of human-moron-level housecleaning robots?  All they'd need
is wifi to get instructions from the main brain.

But then a real AGI would likely become the main brain for just about every
process control program we use, so the term quickly changes from human-moron
level to human-level moron.  :)

I really want to see a central traffic computer take driving away from all
the unqualified (or disinterested) drivers on the roads.  I'd really like to
see companies get incentives to allow "knowledge workers" to work from home
offices to save commute time and fuel resources, but until that happens
(yeah, the employer wants to give up their sense of control?) it would be
nice to reclaim that time by allowing me to focus on what *I* want rather
than on driving.



Re: [agi] Sophisticated models of spiking neural networks.

2006-12-26 Thread Mike Dougherty

in general, how do we indicate the odd one out of that set?  Sure it's
"obvious" that the color is important in this case - but I see two circles,
and the square is more similar to the circle(s) because of its higher
number of sides.  Therefore the triangle is the "odd one."

What rules does an evolving neural net use for determining the pattern in
order to determine the exception to the pattern?

On 12/26/06, Nathan Cook <[EMAIL PROTECTED]> wrote:


The training set should have problems of (at least) two forms to test my
hypotheses:
(1) after 'hearing' a sequence of pulses, reproduce them, and (2) after
being presented with several images (e.g. red circle, red square, red
triangle, green circle), indicate the odd one out. Being able to do either
of them should show I'm on to something.
Is that any help? I was reluctant to give too much away because it's a
rather far fetched concept, but as you can see, the neurons have to be
capable of doing a lot. I think I can justify taking this one of many
options in neural networks, if only because no-one seems to have let the
neurons themselves compete before.
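
To make the ambiguity concrete, here is one hand-coded rule (the
attribute encoding is invented); the open question is how an evolving
network would arrive at - or reject - anything like it on its own:

def odd_one_out(items):
    # return (index, attribute) where one item disagrees and all the others agree
    for attr in items[0]:
        values = [item[attr] for item in items]
        for i, v in enumerate(values):
            rest = values[:i] + values[i + 1:]
            if len(set(rest)) == 1 and v not in rest:
                return i, attr
    return None

shapes = [{"colour": "red",   "sides": 0},    # red circle
          {"colour": "red",   "sides": 4},    # red square
          {"colour": "red",   "sides": 3},    # red triangle
          {"colour": "green", "sides": 0}]    # green circle
print(odd_one_out(shapes))   # (3, 'colour') - but only because this rule checks colour first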





Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mike Dougherty

On 12/5/06, BillK <[EMAIL PROTECTED]> wrote:


Your reasoning is getting surreal.

You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are behaving rationally within their limited scope, but what's
the point? Just admit their behaviour is not rational.

Human decisions and activities are mostly emotional and irrational.
That's the way life is. Because life is uncertain and unpredictable,
human decisions are based on best guesses, gambles and basic
subconscious desires.

"What's the point?" - I think that's an even better question than defining

degrees of local rationality (good) vs irrationality (bad)  The whole notion
of arbitrarily defining subjective terms as good or better or bad seems
foolish.

If we're going to talk about evolutionary psychology as a motivator for
actions and attribute reactions to stimuli or environmental pressures, then
it seems egocentric to apply labels like "rational" to any of the
observations.

Within the scope of these discussions, we put ourselves in a superior
non-human point of view where we can discuss the "human decisions" like
animals in a zoo.  For some threads it is useful to approach the subject
that way.  For most it illustrates a particular trait of the biased
selection of those humans who participate in this list.

hmm...  just an observation...



Re: [agi] RSI - What is it and how fast?

2006-12-04 Thread Mike Dougherty

On 12/4/06, Brian Atkins <[EMAIL PROTECTED]> wrote:


Can you cause your brain to temporarily shut down your visual cortex and
other
associated visual parts, reallocate them to expanding your working memory
by
four times its current size in order to help you juggle consciously the
bits you
need to solve a particularly tough problem? No.



I can close my eyes in order to visualize a geometric association or spatial
relationship...

When I fall asleep and dream about a solution to a problem that I am working
on, there are 'alternate' cognitive processes being performed.

I know... I'm just playing devil's advocate.  :)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread Mike Dougherty

On 11/27/06, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:


The problem is that this thing, "on", is not definable in n-space via
operations like AND, OR, NOT, etc.  It seems that "on" is not definable by
*any* hypersurface, so it cannot be learned by classifiers like feedforward
neural networks or SVMs.  You can define "apple on table" in n-space, which
is the set of all configurations of apples on tables; but there is no way to
define "X is on Y" as a hypervolume, and thus to make it learnable.



perhaps my view of a hypersurface is wrong, but wouldn't a subset of the
dimensions associated with an object be the physical dimensions?  (ok,
virtual physical dimensions)

Is "On" determined by a point of contact between two objects?  (A is on B
and B is on A)
Or is there a dependency on the direction of gravity? (A is on B, but B is
on the floor)

You say that "on" could not be learned - why not?  In this case it would
seem that the meaning would effectively be "cultural" and the meaning would
depend on the semantic usage/intent of the tutors..
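
To make the contact-versus-gravity question concrete, here is a toy framing of my own (not anything you proposed): if the inputs are relational, the gap between A's bottom and B's top plus their horizontal overlap, then "A on B" looks like a perfectly ordinary region of that little feature space, and in principle something a classifier could learn.

def on_features(a, b):
    """Boxes as (x, y, width, height), with y pointing up.
    Returns (vertical gap between a's bottom and b's top, horizontal overlap)."""
    gap = a[1] - (b[1] + b[3])
    overlap = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    return gap, overlap

def is_on(a, b, eps=0.05):
    # 'a on b' ~= resting contact: near-zero gap and some overlap
    gap, overlap = on_features(a, b)
    return abs(gap) <= eps and overlap > 0

apple = (1.0, 1.0, 0.2, 0.2)   # sits with its bottom at y = 1.0
table = (0.0, 0.0, 3.0, 1.0)   # tabletop at y = 1.0
print(is_on(apple, table))     # True
print(is_on(table, apple))     # False

If that's roughly right, then the n-space where "on" isn't a hypervolume may simply be the wrong coordinate system, which is what my question about contact and gravity was getting at.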

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Understanding Natural Language

2006-11-26 Thread Mike Dougherty

On 11/26/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:


But I really think that the metric properties of the spaces continue to
help
even at the very highest levels of abstraction. I'm willing to spend some
time giving it a shot, anyway. So we'll see!



I was thinking about the N-space representation of an idea...  Then I
thought about the tilting table analogy Richard posted elsewhere (sorry, I'm
terrible at citing sources).  Then I started wondering what would happen if
the N-space geometric object were not an idea, but the computing machine -
responding to the surface upon which it found itself.  So if the 'computer'
(brain, etc.) were a simple sphere like a marble affected by gravity on a
wobbly tabletop, the phase space would be straightforward.  It's difficult
to conceive of an N dimensional object in an N+m dimensional tabletop being
acted upon by some number of gravity analogues.

Is this at least in the right direction of what you are proposing?  Have you
projected the dimensionality of the human brain?  That would at least give a
baseline upon which to speculate - especially considering that we have
enough difficulty understanding "perspective" dimension on a 2D painting,
let alone conceive of (and articulate) dimensions higher than our own.
(assuming the incompleteness theorem isn't expressly prohibiting it)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Mike Dougherty

On 11/22/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:


Well, in the language I normally use to discuss AI planning, this
would mean that

1)keeping charged is a supergoal
2)The system knows (via hard-coding or learning) that

finding the recharging socket ==> keeping charged



If "charged" becomes momentarily plastic enough to include the analog to the
kind of feeling I have after a good discussion, then the supergoal of being
"charged" might include the subgoal of attempting conversation with others,
no?

Would you see that as an interesting development, or a potential for a
future mess of "inappropriate" associations?  Would you try to correct this
attachment?  Directly, or through reconditioning?  I'll stop here because I
see this easily sliding into a question of AI-parenting styles...
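
To pin down the bookkeeping I have in mind (a toy sketch of my own, not the planner you described), the supergoal would just accumulate weighted "X ==> keeping charged" links, and the conversation association would be one more entry in the table:

class Goal:
    """Toy supergoal: a table of learned 'subgoal ==> this goal' strengths."""
    def __init__(self, name):
        self.name = name
        self.subgoals = {}                  # subgoal name -> strength

    def learn(self, subgoal, strength):
        self.subgoals[subgoal] = strength

    def best_subgoal(self):
        return max(self.subgoals, key=self.subgoals.get)

charged = Goal("keep charged")
charged.learn("find the recharging socket", 0.9)   # hard-coded or learned
charged.learn("strike up a conversation", 0.3)     # the 'plastic' analog
print(charged.best_subgoal())   # -> find the recharging socket

Whether letting that second entry creep in counts as an interesting development or an inappropriate association is, I suppose, the parenting question.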

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Mike Dougherty

I'm not sure I follow every twist in this thread.  No... I'm sure I don't
follow every twist in this thread.

I have a question about this compression concept.  Compute the number of
pixels required to graph the Mandelbrot set at whatever detail you feel to
be sufficient for the sake of example.  Now describe how this 'pattern' is
compressed.  Of course the ideal compression is something like 6 bytes.
Show me a 6-byte JPG of a Mandelbrot set  :)
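
To be fair to the compression idea (and this is just me sketching, not anyone's definition), the "compression" would be the generator rather than a JPG: a program a few hundred bytes long whose output is as many Mandelbrot pixels as you care to render.

def mandelbrot(width=80, height=40, max_iter=50):
    """A few hundred bytes of rule that expand into arbitrarily many pixels."""
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            c = complex(-2.5 + 3.5 * i / width, -1.25 + 2.5 * j / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:      # escaped: outside the set
                    row += " "
                    break
            else:
                row += "*"          # never escaped: inside, to this depth
        rows.append(row)
    return "\n".join(rows)

print(mandelbrot())

So it isn't 6 bytes, but the description of the rule stays fixed while the pixel count is unbounded, which is where my next question about compressing an infinite series comes in.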

Is there a concept of compression of an infinite series?  Or was the term
"bounding" being used to describe the attractor around which the values
tend to fall?  Chaotic attractor, statistical median, etc.: they seem to be
describing the same tendency of human pattern recognition of different types
of data.

Is a 'symbol' an idea, or a handle on an idea?  Does this support the
mechanics of how concepts can be built from agreed-upon ideas to make a new
token we can exchange in communication that represents the sum of the
constituent ideas?   If this symbol-building process is used to communicate
ideas across a highly volatile link (from me to you) then how would these
symbols be used by a single computation machine?  (Is that a hard takeoff
situation, where the near zero latency turns into an exponential increase in
symbol complexity per unit time?)

If you could provide some feedback as a reality check on these thoughts, I'd
appreciate the clarification... thanks.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Information Learning Systems

2006-10-27 Thread Mike Dougherty
On 10/27/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
I am working on another piece now that will scan through news articles and pull small bits of information out of them, such as:

  Iran's nuclear program is only aimed at generating power.
  The process of uranium enrichment can be used to generate electricity.
  Iran's uranium enrichment program aims only to generate electricity.

What do you do when the intended meaning of "power" is "political power"?  English is pretty unintuitive, especially when it comes to the clever use of double entendre that many intellectuals enjoy.  If (from this example) electricity were confused with political power, it would make a huge mess of understanding.  I have no suggestion for a solution; I am just curious how disambiguation works in your system.
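
The crudest thing I can picture (purely a strawman of my own, not a guess at how your system works) is scoring each sense of "power" by indicator words found in the surrounding context:

SENSE_CUES = {
    "electric power":  {"electricity", "enrichment", "reactor", "grid", "generate"},
    "political power": {"regime", "election", "influence", "sanctions", "government"},
}

def guess_sense(context):
    """Toy word-sense scorer: count cue words for each sense in the context."""
    words = set(context.lower().split())
    scores = {sense: len(cues & words) for sense, cues in SENSE_CUES.items()}
    return max(scores, key=scores.get), scores

sentence = "Iran's uranium enrichment program aims only to generate electricity"
print(guess_sense(sentence))
# -> ('electric power', {'electric power': 3, 'political power': 0})

Which of course falls over the moment a writer is being deliberately clever with the double meaning, hence the question.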


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] [META] Is there anything we can do to keep junk out of the AGI Forum?

2006-07-26 Thread Mike Dougherty
On 7/26/06, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote:
The bane of mailing lists is well-intentioned but stupid people...

Not only mailing lists; I'd say they're a bane everywhere.

To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] procedural vs declarative knowledge

2006-06-02 Thread Mike Dougherty
On 6/2/06, Charles D Hixson <[EMAIL PROTECTED]> wrote:
Rule of thumb: First get it working, doing what you want.  Then optimize.
When optimizing, first check your algorithms, then check to see where time
is actually spent.  Apply extensive optimization only to the most used 10%
(or less) of the code.  If you need to optimize more than that, then you
need to either redesign from the base, or get a faster machine.  Expect
that you will need to redesign pieces so often while in development that
it's better to choose the form of code that's easiest to understand,
redesign, and fix than to optimize it.  Only when development is
essentially complete is it time to give optimization for speed or size
serious consideration.

That said, do you agree that some applications call for a 'ground up' build
mentality?  For example, adding "security" after an application is nearly
finished is usually a terrible approach (despite being incredibly common).
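
A side note on the quoted rule of thumb: the "see where time is actually spent" step is cheap to do concretely.  A minimal sketch with the standard-library profiler (illustrative function names only):

import cProfile
import pstats

def hot(n):
    return sum(i * i for i in range(n))     # the 10% worth optimizing

def cold(n):
    return n + 1                            # the 90% not worth touching

def main():
    for _ in range(200):
        hot(10_000)
        cold(10_000)

cProfile.run("main()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)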


To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] procedural vs declarative knowledge

2006-05-30 Thread Mike Dougherty
After reading Ben's response I had to ask: what possible value would
there be in NOT pre-compiling reusable procedures?  Advocating a
strict adherence to a single type of general purpose container when
there is a clear advantage to specialization sounds like idealistic
dogma.  When my existence is threatened by nanopathogens, I
definitely want protective AI making decisions that lead quickly to
effective action.  Those extra few hundred clock ticks may be the
difference between my continued consciousness and becoming raw material
for a nanoswarm.  (Hopefully by the time this scenario is an
actual threat I will have long since moved on to more durable/resilient
hardware.)

On 5/30/06, Yan King Yin <[EMAIL PROTECTED]> wrote:
Do you store procedural and declarative in 2 different places?
It sounds like cheating because you
may store the entire procedure for solving some fixed problems
instead of searching for solutions in the declarative knowledgebase.
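
For what it's worth, the trade-off I'm picturing above is plain old caching.  A toy sketch of my own (nothing to do with how any actual system stores its knowledge):

import time
from functools import lru_cache

def solve_from_declarative_kb(problem):
    time.sleep(0.01)                 # stand-in for an expensive inference search
    return "plan for " + problem

@lru_cache(maxsize=None)
def solve_precompiled(problem):      # the cached, reusable procedure
    return solve_from_declarative_kb(problem)

start = time.perf_counter()
for _ in range(100):
    solve_from_declarative_kb("evade nanoswarm")
print("re-derive every time:", time.perf_counter() - start)

start = time.perf_counter()
for _ in range(100):
    solve_precompiled("evade nanoswarm")
print("cached procedure:    ", time.perf_counter() - start)

Those saved clock ticks are the whole argument for keeping the compiled form around.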

To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Google wants AI for search... The first step..

2006-05-23 Thread Mike Dougherty
They have a long road ahead.  I recently sent an email via Gmail that contained the word "computronium."  The Google spellchecker (while slickly executed) was unable to identify this word.  I googled it, and the first link was a Wikipedia reference.  So if Google Spellcheck can't 'just Google it' when it doesn't know a word, then their integration efforts may already have major hurdles to overcome.  (my two cents)
On 5/22/06, Danny G. Goe <[EMAIL PROTECTED]> wrote:

Fellow AI ...

"Seems that Google wants a search engine that knows exactly what you want"...

http://news.google.com/news?ie=utf8&oe=utf8&persist=1&hl=en&client=google&ncl=http://news.independent.co.uk/business/news/article570273.ece

I doubt that once Google gets this far they will stop there.

They have the means and the structure to do AI totally.

The question remains: who is going to get AI developed first, and what will
they use it for?

We live in interesting times.  These future events will have a most profound
effect upon societies around the world.

Comments?

Dan Goe

To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]