Philip,
I have mixed feelings on this issue (filling an AI mind with knowledge from
DBs). I'd prefer to start with a tabula rasa AI and have it learn everything
via sensorimotor experience -- and only LATER experiment with feeding DB
knowledge directly into its knowledge-store.
Philip,
I think it's important for a mind to master SOME domain (preferably more
than one), because advanced and highly effective cognitive schemata are only
going to be learned in domains that have been mastered. These cognitive
schemata can then be applied in other domains as well, which are
Philip,
You and I have chatted a bit about the role of simulation in cognition, in
the past. I recently had a dialogue on this topic with a colleague (Debbie
Duong), which I think was somewhat clarifying. Attached is a message I
recently sent to her on the topic.
-- ben
Debbie,
Let's
So my guess is that the fastest (and still effective) path to learning
would be:
- *first* a partially grounded experience
- *then* a fully grounded mastery
- then a mixed learning strategy of grounded and non-grounded as need
and opportunity dictate
Cheers, Philip
Well, this
What you said to Debbie Duong sounds intuitively right to me. I think
that most human intuition would be inferential rather than a simulation.
But it seems that higher primates store a huge amount of data on the
members of their clan - so my guess is that we do a lot of simulating of
the
Well, this appears to be the order we're going to follow for the Novamente
project -- in spite of my feeling that this isn't ideal -- simply due
to the way the project is developing via commercial applications of the
half-completed system. And, it seems likely that the initial
partially
Hi,
WordNet is an interesting resource; we have fed it into Novamente and
reasoned on it using PTL. Actually we've combined WordNet with some
statistical word relationships derived from text-analysis. One runs into
some memory issues on a 32-bit machine, mostly due to the bulk of the
Peter,
Thanks for the reference to the site -- no, I don't know anything about
them, though.
It seems they're heavily focused on sensorimotor intelligence at this phase,
with a few additions like
-- route planning
-- similarity matching between perceptual situations
It's very cool stuff, but I
I'm reading the book Richard M. Golden (1996), Mathematical
Methods for Neural Network Analysis and Design. Basically:
(1) A dynamical ANN activates the next state according to
its current state, so there exists an objective function
for all states such that V(x) = V(y) if state x is at
least
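(For concreteness, here is a minimal sketch -- my own illustration, not code
from Golden's book -- of the kind of objective function meant here: the
Hopfield energy, which never increases under asynchronous threshold updates.)

import numpy as np

# Lyapunov-style objective function for a dynamical ANN: the Hopfield
# energy E(x) = -0.5 * x^T W x is non-increasing under asynchronous
# updates when W is symmetric with zero diagonal.
rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2          # symmetric weights
np.fill_diagonal(W, 0.0)   # no self-connections

def energy(x):
    return -0.5 * x @ W @ x

x = rng.choice([-1.0, 1.0], size=n)
for _ in range(100):
    i = rng.integers(n)
    e_before = energy(x)
    x[i] = 1.0 if W[i] @ x >= 0 else -1.0  # update one unit at a time
    assert energy(x) <= e_before + 1e-12   # energy never increases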
Hi,
1) it has to assume that its *past experience* is a decent predictor of
its *future experience*
No. An adaptive system behaves according to its experience,
because that is
the only guidance the system has --- I know my past experience is not a
decent predictor of my future, but I
By the way, an interesting example is the following:
1, 2, 4, 8, 16, 32, 64, 128, ___ ?
To which all of us would give the answer 256, but a simple
Bayesian generalization will give 99.
Hmmm... Bayesian inference with a Solomonoff-Levin universal prior would
probably give 256, as this
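(A concrete illustration of how much the answer depends on the inductive
bias: I don't know which prior produces Pei's figure of 99, but even the
simple "lowest-degree polynomial" bias already disagrees with 256 -- the
unique degree-7 polynomial through the eight given points predicts 255 as
the next term. A sketch:)

from fractions import Fraction

# The eight observed terms: f(n) = 2**(n-1) for n = 1..8.
xs = list(range(1, 9))
ys = [2 ** (n - 1) for n in xs]

def lagrange_eval(x, xs, ys):
    # Exact evaluation of the unique degree-(len(xs)-1) interpolating
    # polynomial at x, using rational arithmetic to avoid rounding.
    total = Fraction(0)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = Fraction(yi)
        for j, xj in enumerate(xs):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

print(lagrange_eval(9, xs, ys))  # -> 255, not 256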
Ben,
We seem to agree that probability theory can/should be applied in certain
situations, but not in certain others. Now the problem is the
condition for
the application.
Not exactly. I think that probability theory is nearly always useful, but
that in some situations it can be used
4:04 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Bayes rule in the brain
Ben Goertzel wrote:
BTW, to me, the psychological work on human bias, heuristics, and
fallacy (including the well known work by Tversky and Kahneman)
contains many wrong results --- the phenomena are correctly
According to my experience-grounded semantics, in NARS truth value (the
frequency-confidence pair) measures the compatibility between a statement
and available (past) experience, without assuming anything about the real
world or the future experience of the system.
I know you also accept a
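(For readers unfamiliar with the frequency-confidence pair: in Pei's
published formulation, given positive evidence w+ out of total evidence w,
frequency is w+/w and confidence is w/(w+k), where k is the system's
"evidential horizon" constant, typically 1. A minimal sketch:)

def nars_truth(w_plus, w, k=1.0):
    # NARS truth value from evidence counts: frequency is the proportion
    # of positive evidence so far; confidence measures how much total
    # evidence has accumulated, relative to the horizon constant k.
    frequency = w_plus / w
    confidence = w / (w + k)
    return frequency, confidence

print(nars_truth(3, 4))  # -> (0.75, 0.8)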
Here is an old paper of Pei's on the Wason card experiment:
http://www.cogsci.indiana.edu/farg/peiwang/PUBLICATION/wang.evidence.pdf
Attached is a Word document discussing the Wason card experiment from the
perspective of Probabilistic Term Logic.
Basically, I disagree with Pei that
, according to a reality.
An adaptive system behaves according to its past experience, but
it does not
have to treat its experience as an approximate description of the real
world.
Pei
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday
and for AI. A detailed criticism of the Bayesian approach in AI
can be found in my paper at
http://www.cis.temple.edu/~pwang/drafts/Bayes.pdf (a revision of
it has been
accepted by Artificial Intelligence).
Pei
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL
1. How many meta levels of thought is
the Novamente system going to be capable of? Is it a set number based on
its structure, or will it be able to create new levels on-the-fly as it
thinks it needs them? Will this require structural self-modification or
is it a built-in
Hi all,
My latest late-night speculative thoughts on Friendly AI, Cosmic AI and the
Singularity may be read at...
http://www.goertzel.org/dynapsyc/2004/AllSeeingAI.htm
Be warned: This is hi-fi, sci-fi stuff, not concerning technicalities of
AGI (I needed a brief break from all the highly
Hi all,
It's not entirely AGI-focused (though AGI is mentioned), but I started
rereading some of my old favorite philosophers of science a couple weeks
ago, and the result was that I couldn't restrain myself from writing an
essay on the philosophy of science (mostly while sitting in the Sao
will never occur again!
ben g
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Jeremy Smith
Sent: Friday, January 16, 2004 5:50 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] RE: Odd error in Mindplex paper
Ben Goertzel wrote:
p.s. I'm surprised no one
Hi all,
Someone just pointed out to me, offlist, an embarrassing typographical
oddity in my online paper on Mindplexes.
I'm generally a silent observer of the [EMAIL PROTECTED] However, I
wanted to point out an apparent typographical oversight or potentially
distasteful 'hack' in your
p.s. I'm surprised no one pointed out this aberration in the paper to me
before, but, I guess that's an indication of how few people have read that
paper in the months since I posted it ;-)
ben g
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Thursday, January
In principle -- of course -- once we have an AGI, the AGI will be able to build
narrow AI systems better than we can... for those cases where narrow AI systems
are still appropriate...
Lacking the AGI, however, one has to design these hacks based on one's
knowledge of the application
J. Maxwell Legg wrote:
Would you still consider as ungrounded the reading information that
passes through my mind? Common sense indicates that that textual
information is grounded to me just because of my choices.
Information acquired through language is never as fully grounded as
information
I think that creating AGIs is only half the job. The other half is
organising their successful introduction into society. I would strongly
recommend that once the coding side of AGI development is looking
good that *all* the parties engaged in creating AGIs ensure that
effective efforts
Philip,
I think that modeling of transition scenarios could be interesting, but I
also think we need to be clear about what its role will be: a stimulant to
thought about transition scenarios. I think it's extremely unlikely that
such models are going to be *accurate* in any significant sense.
Brad,
Regarding the Singularity, I personally view the sort of discussion we've
been having as a discussion about the late pre-Singularity period.
Regarding AGIs' gradual ascendance to superiority over humans: my guess
is that AGIs will first attain superiority over humans in specialized
Mike,
I agree that a baby AGI with clear dramatic promise will supercharge the AGI
funding scene. And as you know I'm mighty eager to get to that stage!!! ;-)
-- Ben G
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of deering
Sent: Saturday,
Brad wrote:
So that if/when Ben succeeds, how is anyone to know that they're looking
at a real baby AI, and not some slight enhancement of the AIBO? They
won't. Only you, I and maybe 998 other people would understand the
significance, and these 1000 only because we're well versed with
Owen,
I don't know if you meant that email to go to the whole list, but hey, it
was interesting reading ;)
Since Peter doesn't read every message on this list, you might want to mail
him directly at [EMAIL PROTECTED]
Peter, sounds like you've got an enthusiastic new recruit!
-- Ben G
Mike,
I want to comment on your "just around the corner" hypothesis, as it relates
to Novamente. What you said about Novamente isn't inaccurate, but your
phrasing might be misleading to some.
My "12-18 months" statement was a statement that, if all goes well, we'll be done
Hi Mike,
About Novamente project progress...
The reason I haven't given progress updates to this list lately is that I've
been even more insanely busy than usual, due to a combination of AI work and
(Novamente-related) business work and personal-life developments. So
recreational
Brad,
Hmmm... yeah, the problem you describe is actually an implementation issue,
which is irrelevant to whether one does synchronous or asynchronous
updating.
It's easy to use a software design where, when a neuron sends activation to
another neuron, a check is done as to whether the target
Yep, you're right of course. The trick I described is workable only for
simplified formal NN models, and for formal-NN-like systems such as Webmind.
It doesn't work for neural nets that more closely simulate physiology, and
it also isn't relevant to systems like Novamente that are less NN-like
Hi,
Actually, in attractor neural nets it's well-known that using random
asynchronous updating instead of deterministic synchronous updating does
NOT
change the dynamics of a neural network significantly. The
attractors are
the same and the path of approach to an attractor is about the
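(A small experiment along these lines -- my own sketch, not Webmind code:
store one pattern in a Hopfield net and check that a synchronous sweep and
randomly ordered asynchronous updates recover the same attractor from the
same noisy start.)

import numpy as np

rng = np.random.default_rng(1)
n = 50
pattern = rng.choice([-1.0, 1.0], size=n)
W = np.outer(pattern, pattern) / n     # Hebbian storage of one pattern
np.fill_diagonal(W, 0.0)

noisy = pattern.copy()
noisy[rng.choice(n, size=5, replace=False)] *= -1   # corrupt 5 of 50 bits

sync = noisy.copy()
for _ in range(10):
    sync = np.where(W @ sync >= 0, 1.0, -1.0)       # synchronous sweeps

asyn = noisy.copy()
for _ in range(10):
    for i in rng.permutation(n):                    # random update order
        asyn[i] = 1.0 if W[i] @ asyn >= 0 else -1.0

print(np.array_equal(sync, asyn), np.array_equal(sync, pattern))  # True True

(The usual caveat: with many stored patterns, fully synchronous updating can
in principle fall into two-cycles, which asynchronous updating avoids --
hence "about the same" rather than identical.)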
Pei,
Thanks for your thoughtful comments! Here are some responses...
-
*. S = space of formal synapses, each one of which is identified with a
pair (x,y), with x ∈ N and y ∈ N∪S.
Why not x ∈ N∪S?
-
No strong reason -- but, I couldn't see a need for that degree of generality
in
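(To make the generality question concrete, here is a hypothetical sketch of
the two type signatures being compared; the names are mine, not from the
draft.)

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Neuron:
    id: int

@dataclass(frozen=True)
class Synapse:
    # As defined in the draft: the source is a neuron (x in N), while the
    # target may be a neuron or another synapse (y in N union S).
    source: Neuron
    target: Union[Neuron, "Synapse"]

# Pei's question amounts to: why not also allow
#     source: Union[Neuron, Synapse]
# i.e. synapses originating from synapses, not just terminating on them.
n1, n2 = Neuron(1), Neuron(2)
s1 = Synapse(source=n1, target=n2)
s2 = Synapse(source=n2, target=s1)   # a higher-order synapse onto a synapse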
Hi,
For those with the combination of technical knowledge and patience required
to sift through some fairly mathematical and moderately speculative cog-sci
arguments... some recent thoughts of mine have been posted at
http://www.goertzel.org/dynapsyc/2003/HebbianLogic03.htm
The topic is:
**How
OK, it's not AGI, but it's damn interesting ;-)
-- Ben
In Pioneering Study, Monkey Think, Robot Do
By SANDRA BLAKESLEE
Published: October 13, 2003
Monkeys that can move a robot arm with thoughts alone have brought the merger
of mind and machine one step closer.
In experiments
Lots of big words in there, but unless you believe that there was a
creator, or that for some reason computers can't simulate physical laws
complex enough to evoke a nice fitness landscape (ie quantum randomness
is necessary for evolution), nothing that you've said
countermands my point
In the spirit of AIXItl but more practical, see Juergen's new work on the
Goedel Machine AGI architecture
http://www.idsia.ch/~juergen/goedelmachine.html
I don't think this is really a practical AGI architecture, but I think it's
really interesting ... I do like the direction this research
How complex, maximally, may the environment be for an ideal, but still
realistic, AGI agent (thus not a Solomonoff or AIXI agent) to still be
successful? Does somebody know how to calculate (and formalise) this?
Bye,
Arnoud
There are two different questions here, and I'm not sure which one
You're arguing that experiences are grounded in the social domain by the use of language. But in my view, they are merely projected into the *social* domain by the use of language. There are several different perspectives on language. The perspective that language is based on rules is one
I see physics as a collection of patterns in the experienced
world. It's a
very, very powerful and intense collection of patterns. But
nevertheless,
it's not totally comprehensive, in the sense that there are
some patterns
in the experienced world that are not part of physics, but
On Monday 08 September 2003 14:37, Ben Goertzel wrote:
The problem is to
fit qualia
into a pure physicalistic ontology. Physical theories, because of their
success, have become the measure of all things.
I understand your perspective, but mine is different. I'm not
so sure
I am not sure if I define qualia exactly the same way as Dennett or not;
that would take some thought to figure out...
However, I think it's clear that qualia -- examples thereof, though perhaps
not the abstract concept -- are found useful by people in conducting
conversations about their own
The problem is to
fit qualia
into a pure physicalistic ontology. Physical theories, because of their
success, have become the measure of all things.
I understand your perspective, but mine is different. I'm not so sure that
physical theories are the measure of all things. Physicalistic
Arnoud, it appears there is agreement that qualia exist -- as a very real
illusion. The problem of qualia, however, seems to be the
question of how
to represent/implement qualia in a thinking machine (assuming this must be
designed-in for true and full consciousness.)
...
From a wider
Qualia by their definition (ineffable, non-causal etc.) have no
function, can
have no function in the system, that I do agree with Dennett ('quining
qualia'). I also agree with Dennett that if the behaviour of a system is
completely explained nothing remains, all extra ontology is just
already been said.
Warmest regards,
Tim
On Sunday, September 7, 2003, at 08:38 PM, Ben Goertzel wrote:
Qualia by their definition (ineffable, non-causal etc.) have no
function, can
have no function in the system, that I do agree with Dennett ('quining
qualia'). I also agree
I would define consciousness more simply as being able to measure
the impact
of your existence on those things you observe.
...
I would say that
consciousness is at its
essence a purely inferred self-model, which naturally requires a fairly
large machine to support the model.
Cheers,
Now yes an AI can handle multiple streams, but you are going to
pay for it somehow, either with multiple independent memory systems for
each stream which must later be integrated, or by a hugely increased
processing cost for analyzing and consolidating a single memory system.
My advice is
Any others in the Washington DC area --
I'm posting this to announce the quarterly DC Transhumanists
meeting scheduled for Thursday of this first week of September.
TIME: Sept 4, 7:00PM.
Location: Hamburger Hamlet (restaurant) Crystal City Virginia. Convenient
to the metrorail stop of the same
(may have to extend some electrodes/contacts to the floor). If you wanted
to lay out some
dough, you
could have a room full of these pads so the bot could move freely while
constantly recharging, thus meeting your objective..
--Kevin
- Original Message -
From: Ben Goertzel [EMAIL
Subject: RE: [agi] Web Consciousness and self consciousness
Hi Shai,
I read your brief article on consciousness.
Much of what you say is agreeable to me -- I do think that human consciousness
has a lot to do with the kind of "attentional dynamics" of active "objects"
that you mention.
I
Subject: RE: [agi] Web Consciousness and self consciousness
The URL I was referring to is...
http://goertzel.org/dynapsyc/2000/ConsciousnessInWebmind.htm
-- ben
I have a slightly different phraseology for discussing these topics, but I
think my ideas on attentional dynamics are
hi,
One is that I wonder whether it's worth building into Novamente a pre-set
predisposition to distinguish between 'me' and 'not me'.
My guess is that this will emerge pretty simply and naturally.
Some external observations will correlate closely with internal sensations
(these are the
Hi,
It's possible that their hardware would help with Novamente. I emailed with them a couple years ago, though, and it was pretty clear that their systems were still in a research phase, not ready for use in heavyweight applications. A couple other drawbacks I discovered then
-- applications
Hi,
This email is to announce an interesting upcoming conference in California,
at which I (and many others more famous than I) will be speaking..
-- Ben G
Web: http://www.accelerating.org
The Accelerating Change Conference will be a forum to
Sorry about that, folks!
I filter out about 5 spam e-mails a day through listbox.com's interface, but
somehow I made a processing error and that one slipped past...
ben g
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Brad Wyble
Sent: Sunday, July 20,
Hi,
This kind of built-in capability certainly isn't *necessary* but it might be
useful. This kind of issue is definitely worth exploring...
More thoughts later ;-)
ben
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Philip Sutton
Sent: Saturday, July
Brad wrote:
It's an interesting idea, to raise Novababy knowing that it can adopt
different bodies at will. Clearly this will lead to a rather
different
psychology than we see among humans --- making the in-advance design of
educational environments particularly tricky!!
First of
Assuming low level feature extraction is hardcoded like edge detection,
motion,
and depth, then the first thing an intelligence would need to learn is
correlation between objects in different sensory streams.
The assumption of hard-coding is not one I would make.
For initial
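(A toy sketch of the kind of cross-stream correlation learning meant here;
the streams and the embedded "object" are made up for illustration.)

import numpy as np

# Two sensory streams observed over T time steps; visual feature 3 and
# auditory feature 1 are both driven by the same recurring object.
rng = np.random.default_rng(2)
T, n_vis, n_aud = 1000, 5, 4
vision = rng.random((T, n_vis)) < 0.1
audio = rng.random((T, n_aud)) < 0.1
present = rng.random(T) < 0.3          # the shared object's appearances
vision[:, 3] |= present
audio[:, 1] |= present

# Correlate every visual feature with every auditory feature; the pair
# driven by the common object stands out.
v = (vision - vision.mean(0)) / vision.std(0)
a = (audio - audio.mean(0)) / audio.std(0)
corr = v.T @ a / T
print(np.unravel_index(corr.argmax(), corr.shape))  # -> (3, 1)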
Hi Ben,
It's an interesting idea, to raise Novababy knowing that it can adopt
different bodies at will. Clearly this will lead to a rather
different psychology than we see among humans --- making the in-advance
design of educational environments particularly tricky!!
What do you see
Actually, isn't the concept of self developed as a baby matures?
For humans
at least, there is a very strong bond between mother and child which is
nurtured through nursing, playing, etc. When the mother leaves
the room, the
baby starts to cry because it thinks that part of itself is
It's an interesting idea, to raise Novababy knowing that it can adopt
different bodies at will. Clearly this will lead to a rather different
psychology than we see among humans --- making the in-advance design of
educational environments particularly tricky!!
On the other hand, creating a
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, July 11, 2003 11:53 AM
Subject: [agi] Educating an AI in a simulated world
Hi,
One of the things I've been thinking about lately is the
potential use of
our (in development) Novamente AI system to control the behavior
into this discussion ;)
--Kevin
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, July 04, 2003 12:36 PM
Subject: RE: [agi] Request for invention of new word
An existing term for this kind of system is distributed mind
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Deering
Sent: Friday, July 04, 2003 1:06 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Request for invention of new word
AND ?
AND a collective-level conscious
theater?
How the heck
Mindplex is good. It beats multi-mind, which was my default idea.
Thanks Mr. Yudkowsky!
ben g
I/we/Google suggest mindplex.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
Hmmm... this is not AGI but it's mighty interesting -- temporary human
intelligence enhancement via electrostimulation...
-- Ben Goertzel
From the new york times...
*
Savant for a Day
By LAWRENCE OSBORNE
In a concrete basement at the University of Sydney, I sat in a chair
My feeling on dog-level intelligence is that the *cognition* aspects of
dog-level intelligence are really easy, but the perception and action
components are significantly difficult and subtle.
In other words, once a dog's brain has produced abstract patterns not tied
to particular environmental
Hi,
The general BDI concept is hard to argue with -- minds need beliefs, desires
and intentions. Slide 6 of the www.cs.toronto.edu presentation you cite
below is certainly applicable to Novamente. The goals are Goalnodes and
goal maps, the precompiled plans are composite schemata, the
I agree with Shane ... this approach suffers from the same sort of problem
that AIXI suffers from, Friendliness-wise
When the system is smart enough, it will learn to outsmart the posited
Control Code, and the ethics-monitor AGI. You might want to avoid this
by making the ethics-monitor AGI
To me the distinction is between
A) "Explicit programming-in of ethical principles" (EPIP)
versus
B) "Explicit programming-in of methods specially made for the learning of
ethics through experience and teaching"
versus
C) "Acquisition of ethics through experience and teaching,
Hi,
I don't see that you've made a convincing argument that a society of AI's is safer
than an individual AI. Certainly among human societies, the only analogue
we have, society-level violence and madness seems even MORE common than
individual-level violence and madness. Often societies
Well, that's one hell of a good reason to slow down the whole AGI
project. Doesn't it strike you that it's kind of reckless to create
something that could change society/the world drastically and bring it
on before society has had the time to develop some safeguards or
safety net?
This
Ben,
In reply to my para saying :
if the one AGI goes feral the rest of us are going to need to access
the power of some pretty powerful AGIs to contain/manage the feral
one. Humans have the advantage of numbers but in the end we may not
have the intellectual power or speed to counter
for the month of March. You can start posting again in April if you wish.
I enjoy many of your posts and value your intellectual contributions, and I
hope you'll rejoin again with a renewed commitment to keep the attacks on
the level of ideas rather than people.
-- Ben Goertzel
Alan Grimes
Ben Goertzel wrote:
Yes, I see your point now.
If an AI has a percentage p chance of going feral, then in the case of
a society of AI's, only p percent of them will go feral, and the odds
are that other AI's will be able to stop it from doing anything bad.
But in the case of only
Eliezer is certainly correct here -- your analogy ignores probabilistic
dependency, which is crucial.
Ben
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Eliezer S. Yudkowsky
Sent: Tuesday, March 04, 2003 1:33 AM
To: [EMAIL PROTECTED]
Subject: Re:
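(On the dependency point above, the arithmetic is simple; the numbers below
are purely illustrative. If each of N AIs failed independently with
probability p, the chance that all fail would be p^N; but a shared design
flaw correlates the failures and erases most of the safety of numbers.)

# Model: each of N AIs "goes feral" either from a shared design flaw
# (probability q, which hits all of them at once) or independently
# (probability r).
N, q, r = 10, 0.05, 0.05

p_each = q + (1 - q) * r                  # marginal failure prob per AI
p_all_indep = p_each ** N                 # if failures were independent
p_all_correlated = q + (1 - q) * r ** N   # with the shared-flaw dependency

print(p_each)            # 0.0975
print(p_all_indep)       # ~7.8e-11: the "others will stop it" intuition
print(p_all_correlated)  # ~0.050: the shared flaw dominates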
*
But the idea of having just one Novamente seems somewhat unrealistic and
quite risky to me.
If the Novamente design is going to enable bootstrapping as you plan, then
your one Novamente is going to end up being very powerful. If you try to be
the gatekeeper to this one
Philip,
What would help me to understand this idea would be to understand in more
detail what kinds of rules you want to hardwire.
Do you want to hardwire, for instance, a rule like "Don't kill people."
And then give it rough rule-based definitions of "don't", "kill" and
"people", and
***
At the moment you have truth and attention
values attached to nodes and links. I'm wondering whether you need to have
a third numerical value type relating to 'importance'. Attention has a
temporal implication - it's intended to focus significant mental resources on a
key issue in the
Hi,
I disagree that we have a problem converting procedural to
declarative for all domains.
Sure, you're right. Here as in many other areas, the human brain's
performance is highly domain-variant.
That said, Novamente would be far better at it than we. With the
ability to understand its
Yes, getting this data is what the entire field of neurophys is
about. Being able to extract it without using surgery,
electrodes, amplifiers, and gajillions of manhours would be
outstanding. A lack of data is the primary thing holding
neuroscience back and to a large degree, the depth of
We need one of the technologies to evolve to the point where it delivers
decent spatial AND temporal resolution...
That's exactly what I meant actually: combined fMRI and MEG
within the same experiment. You get data from each
simultaneously and combine them afterwards, using the
A funny article...
http://www.salon.com/tech/feature/2003/02/26/loebner_part_one/index.html
Loebner is not himself an AI researcher, so far as I know.
-- Ben G
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of [EMAIL PROTECTED]
Sent: Wednesday, February 26, 2003 10:38 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Loebner prize
On the serious
But hopefully the bandwidth of communication is compensated for by
the power of
parallel processing. So long as communication between ants or
processing nodes
is not completely blocked, some sort of intelligence should
self-organize; then
it's just a matter of time. As programmers or engineers
Kevin's random babbling follows:
Is there a working definition of what complexity exactly is?
It seems to
be quite subjective to me. But setting that aside for the moment...
I view complexity as part of a web of concepts that also, centrally,
includes pattern
Roughly, an entity is complex
Kevin said:
I would say that complex information about anything can be
conveyed in ways
outside of your current thinking, but if you ask me to prove
it, I cannot.
There is evidence of it in things like the ERP experiment which show the
existence of a possible substrate that we have
In this sense, I wonder if the universe does not already know everything
that we(sentient beings) have ever known and will ever know. In
fact, this
is my current thinking, which doesn't have to be shared by others :-p
But do you mean the PHYSICAL universe, or some other interpretation of
Well...you should know by now that i always include both the noumenal and
the phenomenal as being identical..in this case as the universe.
As for why it matters? It only matters if we are able to realize
this truth
directly (i haven't). In other words, realize our own identity
and
Well, in principle, Novamente is intended to be able to learn from zippo --
i.e. NO explicitly encoded knowledge.
However, the architecture does support the loading-in of prefab knowledge.
Whether, and in what ways, it is possible to introduce prefab knowledge into
a learning AGI system, without
Hmmm...
So, I'm thinking: The human brain is wired to do a lot of abstract cognition
in terms of metaphorical maps of the environment, and these are tied in with
macro-world classical physics
This may be part of the reason we're so bad at thinking about the quantum
microworld
So: Maybe in
Yep, there are well-developed theories about how an autoassociative
network like CA3 could support multiple, uncorrelated attractor
maps and sustain activity once one of them was activated. The
big debate is about how they are formed.
The standard way attractors are formed in formal ANN
Hi,
Using artificial rules, such as hardball winner-take-all and
synaptic weight normalization, it's doable to get ANN's to do this.
But in an autoassociative network with realistic biophysical
properties, controlling activity to prevent runaway synaptic
modification is a very large
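(One standard "artificial rule" of the sort meant here is Oja's variant of
Hebbian learning, which builds weight normalization into the update itself
and so prevents runaway growth -- a textbook device, offered only as an
illustration.)

import numpy as np

# Plain Hebb (dw = eta*y*x) lets weights blow up under positive feedback;
# Oja's rule (dw = eta*y*(x - y*w)) self-normalizes, driving ||w|| to 1.
rng = np.random.default_rng(3)
w = rng.standard_normal(10) * 0.1
eta = 0.01
for _ in range(5000):
    x = rng.standard_normal(10)
    y = w @ x
    w += eta * y * (x - y * w)   # the -y*w decay term caps the norm
print(np.linalg.norm(w))         # settles near 1.0 instead of diverging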
I wrote, pertaining to problems of positive feedback causing erroneous or
uncontrollable dynamics:
The fact that similar problems occur in Novamente inference as well as in
the brain, suggests that they're general system-theoretic
problems in some
sense, perhaps occurring in any distributed
It's cool...
But I wonder how much we can really learn from studying neurons in vitro...
ben g
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kevin
Sent: Monday, February 24, 2003 8:52 PM
To: [EMAIL PROTECTED]
Subject: [agi] neuron chip