Re: [agi] a2i2 news update

2007-07-25 Thread justin corwin

On 7/25/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

Of course, numerical comparisons are petty, unfair and invidious. But being
that sort of person, I can't help noticing that Peter is promising to
increase his staff to 24 soon. Will that give him the biggest AGI army? How
do the contenders stack up here?


Trivially, no, as Cyc claims 40 employees. But it's not clear how many
are actually involved in AI development, and of course, many people
have doubts about Cyc's methodology.

Personally, though, I do think we have the most people directly
working on a project of this kind, which I find significant: there
are things in AI that require a great deal of work, and manpower
makes them more feasible.

--
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com



Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-03 Thread justin corwin

My reasons for joining a2i2 could only be expressed as subportions of
6 and 8 (possibly 9 and 4).

I joined largely on the strength of my impression of Peter. My
interest in employment was to work as directly as possible on general
artificial intelligence, and he wanted me to work for him on precisely
that. His opinions on the subject were extremely pragmatic and focused
on what worked. I appreciated that, thinking that so long as I could
support my opinions, they would be respected.

In retrospect, I doubt I would have joined if I had tried to evaluate
a2i2 theoretically, from my own design and organizational perspective.
Peter and I still do not have identical ideas about AGI (or the
business of developing AGI), but I have agreed on all the specific
issues we've dealt with thus far, and I have come to think that the
process and resources an organization can bring to bear on its
problems are much more important than the precise design, opinions,
or data it has at any given time.

If I had to find a new position tomorrow, I would try to find (or
found) a group I liked for what they were 'doing', rather than for
their opinions, organization, or plans.

That said, I wouldn't have joined if I hadn't been offered stock or
equivalent ownership of the work. Not because of the implied later
capital gains, but because I wouldn't want my work effectively
contributing to an organization in which I had no formal say or
control. I expect Peter will remain the overwhelming majority owner of
a2i2 for the foreseeable future, but the responsibility is important
to me.

--
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com



Re: [agi] SOTA

2007-01-12 Thread justin corwin

On 1/12/07, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote:

Philip Goetz wrote:
>
> Haven't been googling.  But the fact is that I've never actually
> /seen/ one in the wild.  My point is that the market demand for such
> simple and useful and cheap items is low enough that I've never
> actually seen one.


The term for this type of thermostat is a 'set-back' thermostat. They
were originally designed to save energy and cut heating/cooling bills
by offering programmable periods, and they have become increasingly
complex; all of my most recent houses have had what are essentially
little calendar computers in them.
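
To make the 'programmable periods' concrete, here is a minimal sketch
(in Python; the period times and temperatures are invented for the
example, not taken from any real product) of the schedule logic such
a thermostat implements: a list of period start times and setpoints,
and a lookup that returns the active setpoint, wrapping overnight.

  from datetime import time

  # Illustrative set-back schedule: (period start, target temp in F).
  SCHEDULE = [
      (time(6, 0), 68),   # wake: warm the house
      (time(8, 30), 60),  # away: set back to save energy
      (time(17, 0), 68),  # home: comfortable again
      (time(22, 0), 62),  # sleep: set back overnight
  ]

  def setpoint_at(now):
      # The active period is the last one whose start is <= now;
      # before the first start we are still in the overnight period.
      active = SCHEDULE[-1][1]
      for start, temp in SCHEDULE:
          if now >= start:
              active = temp
      return active

  print(setpoint_at(time(7, 15)))  # 68, wake period
  print(setpoint_at(time(3, 0)))   # 62, overnight set-back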

They are extremely common in new construction, as this link shows:

http://www.marketresearch.com/product/display.asp?productid=1354170&g=1

"STUDY HIGHLIGHTS
- Honeywell/Magicstat was the top brand of thermostat bought in 2003.
- Two-thirds of the thermostats purchased in 2003 were set-back models.
- The average price paid for the electronic set-back thermostats was $70.
- Thermostats were purchased mostly from builders/contractors and home
centers."


Check any hardware store, there's a whole shelf.  I bought one for my
last apartment.  I see them all over the place.  They're really not rare.

Moral: in AI, the state of the art is often advanced far beyond what
people think it is.


There are really two things being talked about here. One is SOTA,
which, almost by definition, is beyond what people think it is; the
other is market availability, or practical availability, which is
very different from SOTA technology.

SOTA AI technology is essentially that which you, knowing the latest
theories, build yourself. There is no such thing as a SOTA AI system
in the way there are SOTA stereo systems or SOTA crypto systems
available for purchase, because the market availability of the
technology does not have the same characteristics.


--
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com



Re: [agi] Language modeling

2006-10-23 Thread justin corwin

I don't exactly have the same reaction, but I have some things to add
to the following exchange.

On 10/23/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Matt Mahoney wrote:
> Children also learn language as a progression toward increasingly complex 
patterns.
> - phonemes beginning at 2-4 weeks
> - phonological rules for segmenting continuous speech at 7-10 months [1]
> - words (semantics) beginning at 12 months
> - simple sentences (syntax) at 2-3 years
> - compound sentences around 5-6 years

ARR!

Please don't do this.  My son (like many other kids) had finished about
fifty small books by the time he was 5, and at least one of the Harry
Potter books when he was 6.

You are talking about these issues at a pre-undergraduate level of
comprehension.


Anecdotal evidence is always bad, but I will note that I myself was
reading Tolkien (badly) by first grade, and when I was five I was
badly scared by the Cold War children's book "Nobody Wants a Nuclear
War".

There are also other problems with neat progressions like this. One
glaring one is that much younger children can learn sign language
(which is physically much easier) and communicate fairly complicated
concepts far in advance of speech; so much so that many parenting
courses now suggest and support learning and teaching baby sign
language, so that the child can communicate desires, needs, and
explanations much earlier.


--
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com



Re: [agi] SOTA

2006-10-20 Thread justin corwin

I want to strongly agree with Richard on several points here, and
expand on them a bit in light of later discussion.

On 10/20/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

It used to be a standing joke in AI that researchers would claim there
was nothing wrong with their basic approach, they just needed more
computing power to make it work.  That was two decades ago:  has this
lesson been forgotten already?


This was very true then, and continues to be true now. For those who
use the explanation of insufficient computing power, I would ask:
which approaches would you expect to become viable with more computing
power? How do they scale? Why would they work better with more
computation?

Relatedly, very, very few AI research programmes operate in strict
real time. Many use batch processes, virtual worlds, or automated
interaction scripts. It would be trivial to modify these systems to
behave as if they had ten times as much computational power, or a
thousand times as much. Even if it took 1,000,000 seconds (11 1/2
days) of computation for every second of intelligent behavior on
currently available hardware, the results, if real, would be worth
it, and unmistakable.
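
As a sketch of what I mean (Python, with the agent's step function
left as a hypothetical callback): run the system against a virtual
clock, and advance simulated time at 1/N of the rate real computation
is consumed, so the agent behaves as if it had N times today's
hardware.

  import time

  def run_scaled(agent_step, sim_seconds, scale):
      # Give the agent `scale` seconds of real computation per
      # simulated second. With scale = 1e6, each simulated second
      # costs about 11.5 real days, as in the figure above.
      sim_clock = 0.0
      while sim_clock < sim_seconds:
          real_start = time.monotonic()
          agent_step(sim_clock)  # one increment of thinking/acting
          real_elapsed = time.monotonic() - real_start
          # Advance the virtual clock slowly (floor avoids a stall
          # if a step finishes below timer resolution).
          sim_clock += max(real_elapsed, 1e-9) / scale
      return sim_clock

Any batch or virtual-world system that reads time only from such a
clock would exhibit its "future hardware" behavior today, just very
slowly.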

I suspect, though, that this would not work: simply increasing
computing power would not validate current AI systems.


A completely spurious argument.  You would not necessarily *need* to
"simulate or predict" the AI, because the kind of "simulation" and
"prediction" you are talking about is low-level, exact state prediction
(this is inherent in the nature of proofs about Kolmogorov complexity).


This is very important, and I strongly agree that "analysis" of this
kind is unhelpful. It's easy to show that heat engines and turbines
and all sorts of things are so insanely complex that they can't
possibly be modeled in the general case. But we needn't do so. We are
interested in the behavior of certain parameters of such systems, and
we can reduce the space of the systems we investigate (very few people
build turbines with disconnected parts, or with asymmetrical rotation,
for example).


It is entirely possible to build an AI in such a way that the general
course of its behavior is as reliable as the behavior of an Ideal Gas:
can't predict the position and momentum of all its particles, but you
sure can predict such overall characteristics as temperature, pressure
and volume.


This is the only claim in this message I have any disagreement with
(which must be some sort of record, given my poor history with
Richard). I agree that it's true in principle that AIs can be made
this way, but I'm not yet convinced that it's possible in practice.

It may be that the goals and motivations of such artificial systems
are not among the characteristics that lie on the surface of such
boiling complexity, but within it. I have the same disagreement with
Eliezer about the certainty he places on the future characteristics
of AIs: given that no one here is describing the behavior of a
specific AI system, such conclusions strike me as premature, though
perhaps not unwarranted.

--
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com



[agi] Discussion group meeting at Kifune Restaurant in LA on AI

2006-10-18 Thread justin corwin

As some of you may know, we have a recurring discussion group at
Kifune, a local Japanese restaurant in Marina del Rey. Every few
meetings I like to mention it on the wider discussion groups, but we
also have an announcement list on Yahoo! Groups that sends updates on
each meeting.

The attendees are primarily employees, associates, and friends of a2i2
in general and Peter Voss in particular.

Officially the group centers on general transhumanism, but we tend
toward AI and real current efforts and events as they happen. We
occasionally have set discussion topics and expert guests.

Here is the event announcement that went out today:
--
Hello Friends,

We're going to have another Kifune discussion group, at which we'll
discuss the recent happenings at the AGI Workshops Novamente put on,
the Alcor Conference, and a few new companies and initiatives in our
field of AI.

If you can make it, we'd enjoy your company. RSVP if you can.

PS. The usual details are at:
http://groups.yahoo.com/group/kifune/files/details.html

Justin Corwin


If you need help getting to Kifune, have questions, or want to ask
about future meetings, just contact me.

--
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com



Re: [agi] estimated cost of Seed AI

2005-06-13 Thread justin corwin
On 6/13/05, Eugen Leitl <[EMAIL PROTECTED]> wrote:
> It would take today's equivalent of Manhattan Project today (with a longer
> duration to boot); of course the hardware problem can only improve with time.

As with any complex project to create something, theory is everything
here. Projects that were impossible in 1901 with any size of staff or
pool of resources can now be done by tiny firms, using previous work
and self-developed or purchased software (see the design of large
office buildings, for example).

So if a large reserve of known science, existing software tools, and
good theory collide, it may in fact be done very cheaply.

Unfortunately, neither the first nor the second seems to exist, except
as generic software design and programming science. So the onus is
entirely on the theory. I think it's very implausible that the first
project to develop any functioning AI software will make few enough
mistakes to allow development as cheap as Richter and Yudkowsky seem
to posit. Refactoring a respectable code-base to accommodate one
invalidated assumption can easily wipe out man-years. A single person,
or even a tiny team, will quickly be swamped by having to innovate in
all directions while simultaneously troubleshooting, maintaining,
updating, and refactoring the software.

I also personally believe (based on admittedly minor experience and
extensive reading) that delays in development aren't linear. If you
wear four hats on the job, the work doesn't take four times as long;
it takes much longer, if it gets done at all. Getting swamped by your
own project probably won't just push back the completion date, it can
put completion itself in doubt, as you disappear into the piled-up
plumbing work on the first floor while the exposed superstructure of
your proposed shiny building rusts away (to be recklessly metaphoric).
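
As a toy illustration of that superlinearity (the base figures and the
20% per-role penalty below are invented for the example, not data):

  def completion_time(base_work, roles, switch_penalty=0.2):
      # Toy model: each additional role adds context-switching and
      # coordination overhead that compounds across all the other
      # roles one person must juggle.
      overhead = (1 + switch_penalty) ** (roles - 1)
      return base_work * roles * overhead

  for r in (1, 2, 4):
      print(r, "hats:", round(completion_time(10.0, r), 1))
  # 1 hat: 10.0, 2 hats: 24.0, 4 hats: 69.1
  # -- four hats cost roughly 7x, not 4x.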

I don't have hard predictions, personally. To take something of a
party-line approach, I can point out that a2i2 has a target of a
four-year development cycle from the point when we can scale up. I am
confident that we can hit our targets within this and show results. I
guess that puts me somewhere in the middle of this disagreement.

A Manhattan Project estimate sounds good, and big, but it seems to be
more of a 'this is a huge problem' comparison, and thus carries little
force with me. Why the Manhattan Project? Why not building the Panama
Canal, or the Empire State Building, or the Apollo program? All of
these projects were big, but they had very different profiles:
staffing, costs and sources of money, technology, resources. I don't
think AI will follow any of their profiles any more than they followed
the profiles of building the Spanish Armada, Stonehenge, or Notre Dame
Cathedral. It's a different kind of development.

More interesting than this kind of hand-wavy 'time, money, and people'
prediction would be a technical-distance prediction. This is a
technical, engineering problem. In those terms, what is the distance
that must be covered before it can be done? We have some of the
necessary components in the public marketplace already: fast and big
computers, modern operating systems, object-oriented languages, smart
compilers, a whole bunch of math and computer science theory. These
are things anyone can have, for (comparative) pennies. What's missing?
Given that list of what's missing, what has to be developed specially?
What will be developed anyway? What can you commission, or buy?

Then, with that list: how long will it take? How much will it cost?
That's a much better line of inquiry, because the bare-estimation
route simply leads to opaque comparisons of the implementation cost
of incomplete AI theories.
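
A minimal sketch of what such an accounting might look like (every
component name, status, and person-year figure below is an invented
placeholder, not an estimate I'm defending):

  # Hypothetical component inventory for a technical-distance estimate.
  components = [
      ("fast, big computers",      "buy",         0.0),
      ("operating systems",        "buy",         0.0),
      ("languages and compilers",  "buy",         0.0),
      ("knowledge representation", "develop",    12.0),
      ("learning architecture",    "develop",    25.0),
      ("grounded interface",       "commission",  5.0),
  ]

  missing = [(n, c) for n, status, c in components if status != "buy"]
  total = sum(c for _, c in missing)
  print(len(missing), "components to build or commission,",
        "roughly", total, "person-years")

The point isn't the numbers; it's that the disagreement then becomes
about specific rows in the list, rather than about which historical
megaproject AI most resembles.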

I know a lot of us are simply not willing, not ready, or not able to
do that kind of public comparison, but it would be more interesting. A
little closer to the problem.


-- 
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com



Re: [agi] Model simplification and the kitchen sink

2004-10-24 Thread justin corwin
> [... a]nd modeling algorithms and heuristics of the kind used in chemical
> engineering do not seem to be taught in computer science even though
> they have always been eminently relevant as far as I could tell.

This is very interesting, and I'm ashamed to say that I have to read a
lot more before I can comment on this discrepancy. Do you see any
other fields that take similar approaches to this kind of modeling
accuracy? The only other examples I can think of are things like
uncertain game theory (risk under uncertainty) and verifier theory
(the science of science, particularly of knowledge domains), neither
of which seems to have associated clean (or at least well-defined)
math for the integration.

I don't know that I could actually use the math, but it would be nice
to have, as it tends to shake out any serious inconsistencies and show
the ranges and bounds of the stated relationships.

thanks,

-- 
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com
