Joshua Fox wrote:
Josh,
Your point about layering makes perfect sense.
I just ordered your book, but, impatient as I am, could I ask a question
about this, though I've asked a similar question before:
Why have the elite of intelligent and open-minded leading
AI researchers not attempted
"J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: On Monday 11 June 2007 08:12:08
pm James Ratcliff wrote:
> 1. Is anyone taking an approach to AGI without the use of Symbol Grounding?
You'll have to go into that a bit more for me please.
Symbol grounding is something of a red herring. There's a
>> If you think my scheme "cannot be fair" then the alternative of traditional
>> management can only be worse (in terms of fairness, which in turn affects
>> the quality of work being done). The situation is quite analogous to that
>> between a state-command economy and a free market (or actua
On Monday 11 June 2007 08:12:08 pm James Ratcliff wrote:
> 1. Is anyone taking an approach to AGI without the use of Symbol Grounding?
Symbol grounding is something of a red herring. There's a whole raft of
philosophical conundrums (qualia among them) that simply evaporate if you
take the syste
Hi Jiri,
A VNA (von Neumann architecture), given sufficient time, can simulate *any* substrate. Therefore,
if *any* substrate is capable of simulating you (and thus pain), then a VNA
is capable of doing so (unless you believe that there is some other magic
involved).
Remember also, it is *not* the VNA that feel
Keep going ... won't be too long until you invent fungible tokens for your
people that act as a medium of exchange, a store of value, and a unit of
account.
On Monday 11 June 2007 07:22:46 pm YKY (Yan King Yin) wrote:
> An additional idea: each member's vote could be weighted by the
> member's
1. Is anyone taking an approach to AGI without the use of Symbol Grounding?
Or is that intrinsic in everyone's approaches at this stage?
(short of some Neural Network approaches)
2. How do you describe Symbol Grounding for an AGI?
What do you consider the best ways to have the system get Symbol Gro
Matt Mahoney <[EMAIL PROTECTED]> wrote:
--- James Ratcliff wrote:
> I believe that is just a simple rule that you can input in most systems, and
> it will match that point, but programming in each of those rules is a very
> costly affair.
Fortunately, natural language (unlike artificial lang
Even if they received credit for the 7,000 lines, it would be worth very little
in the overall scheme, and any code that was not good could be marked as "to
be fixed" or optimized fairly easily (similar again to the Wiki markups) to
where that credit could be diminished...
and any obvious spa
Sure, until we give an AGI rights :}
Quote: I stand here today and will not abide the abusing of AGI rights!
Derek Zahn <[EMAIL PROTECTED]> wrote:
Matt Mahoney writes:
> Below is a program that can feel pain. It is
Yeah, I looked a bit on the wiki about "qualia" but was unable to find
anything concrete enough to comment on; it seems to be some magical fluffery.
"bodily sensations" = input from touch stimuli
"perceptual experiences" = input information (data)
both of these we have and can process...
the las
--- James Ratcliff <[EMAIL PROTECTED]> wrote:
> I believe that is just a simple rule that you can input in most systems, and
> it will match that point, but programming in each of those rules is a very
> costly affair.
Fortunately, natural language (unlike artificial language) has a structure
t
And here's the human pseudocode:
1. Hold Knife above flame until red.
2. Place knife on arm.
3. a. Accept Pain sensation
b. Scream or respond as necessary
4. Press knife harder into skin.
5. Goto 3, until 6.
6. Pass out from pain
Matt Mahoney <[EMAIL PROTECTED]> wrote: Below is a program t
I believe that is just a simple rule that you can input in most systems, and it
will match that point, but programming in each of those rules is a very costly
affair.
James
Matt Mahoney <[EMAIL PROTECTED]> wrote:
--- James Ratcliff wrote:
> Interesting points, but I believe you can get aro
--- Derek Zahn <[EMAIL PROTECTED]> wrote:
> Matt Mahoney writes:
> > Below is a program that can feel pain. It is a simulation of a programmable
> > 2-input logic gate that you train using reinforcement conditioning.
> Is it ethical to compile and run this program?
Well, that is a good question.
An additional idea: each member's vote could be weighted by the
member's total amount of contributions. This way, we can establish a
network of genuine contributors via self-organization, and protect against
mischief-makers, "nonsense", or sabotage, etc.
YKY
On Monday 11 June 2007 03:22:04 pm Matt Mahoney wrote:
> /* pain.cpp - A program that can feel pleasure and pain.
> ...
Ouch! :-)
Josh
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=23
Here is a program that feels pain. It is a simulation of a 2-input logic gate
that you train by reinforcement learning. It "feels" in the sense that it
adjusts its behavior to avoid negative reinforcement from the user.
/* pain.cpp - A program that can feel pleasure and pain.
The program simul
--- James Ratcliff <[EMAIL PROTECTED]> wrote:
> Interesting points, but I believe you can get around a lot of the problems
> with two additional factors,
> a. using either large quantities of quality text (i.e. novels, newspapers) or
> similar texts like newspapers.
> b. using an interactive built
Matt Mahoney writes:
> Below is a program that can feel pain. It is a simulation of a programmable
> 2-input logic gate that you train using reinforcement conditioning.
Is it ethical to compile and run this program?
Below is a program that can feel pain. It is a simulation of a programmable
2-input logic gate that you train using reinforcement conditioning.
/* pain.cpp
This program simulates a programmable 2-input logic gate.
You train it by reinforcement conditioning. You provide a pair of
input bits (0
On Monday 11 June 2007 02:06:35 pm Joshua Fox wrote:
...
> Could I ask also that you take a stab at a psychological/sociological
> question: Why have not the leading minds of AI (considering for this
> purpose only the true creative thinkers with status in the community,
> however small a fraction
On 6/11/07, Mark Waser <[EMAIL PROTECTED]> wrote:
>> I'm sorry about the confusion. Let me correct by saying: it *is* to
your advantage to exaggerate your contributions, but your peers won't allow
it.
Cool.
I'll then move back to my other point that is probably better phrased as
"I don't b
James,
Frank Jackson (in "Epiphenomenal Qualia") defined qualia as
"...certain features of the bodily sensations especially, but also of
certain perceptual experiences, which no amount of purely physical
information includes..." :-)
If it walks like a human, talks like a human, then for all those
Josh,
Thanks for that answer on the layering of mind.
It's not that any existing level is wrong, but there aren't enough of
them, so
that the higher ones aren't being built on the right primitives in current
systems. Word-level concepts in the mind are much more elastic and plastic
than logi
Monday, June 11, 2007, Mark Waser wrote:
MW> The only scheme that I'd possibly accept based on lines of code
MW> would be one where if someone else wrote a tighter program, the original
MW> writer would get negative credit (i.e. something like
MW> if they wrote 7,000 lines and I re-did it with
On Monday 11 June 2007 12:12:26 pm Mark Waser wrote:
> ... The last thing that I want to do is *anything* that encourages people
to write more code ...
The classic apocryphal story is of the shop where they had this fellow who was
an unbelievably productive programmer -- up until the day he d
>> Has anyone tried a test of something as simple as "per line of code" /
>> function?
My first "official" programming course was a Master's level course at an
Ivy League college. The course project was a full-up LISP interpreter. My
program was ~800-900 lines and passed all testing with
Correct, but I don't believe that systems (like Cyc) are doing this type of
Active learning now, and it would help to gather quality information and
fact-check it.
Cyc does have some interesting projects where it takes a proposed statement and,
when an engineer is working with it, will go out an
Has anyone tried a test of something as simple as "per line of code" / function?
Meaning that each "function" or "module" could have a % value associated with
it (set by many users' average rating)
And then simply giving credit by line of code input.
Anyone writing cruddy long code would initiall
On 6/11/07, James Ratcliff <[EMAIL PROTECTED]> wrote:
Interesting points, but I believe you can get around a lot of the problems
with two additional factors,
a. using either large quantities of quality text (i.e. novels, newspapers) or
similar texts like newspapers.
b. using an interactive built in
Two different responses to this type of argument.
Once you "simulate" something to the point that we can't tell the difference
in any way, then it IS that something for all intents and purposes, as far as
the tests you have go.
If it walks like a human, talks like a human, then for
Interesting points, but I believe you can get around a lot of the problems with
two additional factors,
a. using either large quantities of quality text (i.e. novels, newspapers) or
similar texts like newspapers.
b. using an interactive built-in 'checker' system, assisted learning where the
AI cou
On 6/6/07, Peter Voss <[EMAIL PROTECTED]> wrote:
'fraid not. Have to look after our investors' interests… (and, like Ben, I'm
not keen for AGI technology to be generally available)
But at least Novamente makes a considerable amount of their ideas
available, IMHO.
P.S. "Probabilistic Logic Netw
I'll try to answer this and Mike Tintner's question at the same time. The
typical GOFAI engine over the past decades has had a layer structure
something like this:
Problem-specific assertions
Inference engine/database
Lisp
on top of the machine and OS. Now it turns out that this is plenty to bu
Josh,
Your point about layering makes perfect sense.
I just ordered your book, but, impatient as I am, could I ask a question
about this, though I've asked a similar question before: Why have the
elite of intelligent and open-minded leading AI researchers not attempted a
multi-layered approa