Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Josh Treadwell
reply. They'd love to know the best answer. No more need for all these different schools of investors to argue so furiously, and no more need for all these schools of investment AI/computation alone to keep arguing either. Pei's cracked it, guys. Over here.

You really would do well to think very long and hard about that simple
problem - it will change your life. I hope you will have the courage to
answer the problem.

(BTW, MOST of the problems humans face in everyday life can be represented as investment problems - it's a basic, not an eccentric, problem.)



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

--
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.





--
Josh Treadwell
  [EMAIL PROTECTED]
  480-206-3776

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] The role of incertainty

2007-05-01 Thread Josh Treadwell

On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:


 No, I keep saying - I'm not asking for the odd narrowly-defined task -
but rather defining CLASSES of specific problems that your/an AGI will be
able to tackle. Part of the definition task should be to explain how if you
can solve one kind of problem, then you will be able to solve other distinct
kinds.



Did nature have a specific task in mind when our brains evolved?  Much like
an AGI, we as humans are capable of doing MANY things.  To sum it up, an AGI
could be described as a machine that is capable of using pattern
recognition, classification, and analysis to produce better pattern
recognition, classification, and analysis systems for itself.  The results of
this apply to every problem it could ever be asked to solve.
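To make that "produce better systems for itself" loop concrete, here is a deliberately tiny sketch of my own (not anyone's proposed architecture): the "recognizer" is a single threshold, and "self-improvement" is a hill-climbing loop driven by the system's analysis of its own accuracy.

```python
# Toy caricature of a system improving its own recognizer. All names
# and data here are invented for illustration.

def recognizer(threshold):
    """A trivial 'pattern recognizer': classify x as positive if x >= threshold."""
    return lambda x: x >= threshold

def accuracy(rec, samples):
    """The system's analysis of its own performance on labeled samples."""
    return sum(rec(x) == label for x, label in samples) / len(samples)

def self_improve(threshold, samples, step=1, rounds=20):
    """Use performance analysis to keep whichever neighboring recognizer is better."""
    for _ in range(rounds):
        for cand in (threshold - step, threshold + step):
            if accuracy(recognizer(cand), samples) > accuracy(recognizer(threshold), samples):
                threshold = cand
    return threshold

# Labeled data: positives are values >= 5.
samples = [(x, x >= 5) for x in range(10)]
print(self_improve(0, samples))  # 5 -- it has rebuilt its recognizer
```

The real claim in the paragraph is that an AGI would do this over its own recognition, classification, and analysis machinery, not over one number; this only illustrates the shape of the loop.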

The traditional approach to AI is to do exactly what you're asking: solve
individual problems and build them up until we have something that, on every
observable level, is equivalent to a thinking person.  For the last 50
years, this hasn't produced any promising results in terms of cognition.

It's interesting - I'm not being in any way critical - that this isn't getting through.





--
Josh Treadwell
  [EMAIL PROTECTED]
  480-206-3776

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] SOTA

2006-10-20 Thread Josh Treadwell




Bill, Richard, etc.,
Children don't have a great grasp of language, but they have all the
sensory and contextual mechanisms to learn a language by causal
interaction with their environment. Semantics are a learned system,
just as words are. In current AI we're programming semantic rules into
a huge neural database and asking it to play a big matching game.
These two types of learning may give the same result, but it's not the
same process by a long shot. Every time we logically code an
algorithm, we're only mimicking the logic function of a learned neural
process, which doesn't allow the tiered complexity and concept grasping
that sensory learning does.
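To show what I mean by a "matching game", here is a deliberately crude sketch (my own construction, not a description of any real system): hand-coded semantic rules in a table, with "understanding" reduced to lookup. The rules and vocabulary are invented for illustration.

```python
# "Understanding" as lookup in hand-written semantic rules.
SEMANTIC_RULES = {
    ("bird", "can"): "fly",
    ("fish", "can"): "swim",
    ("penguin", "can"): "swim",  # every exception needs its own hand-coded rule
}

def answer(subject, relation):
    """Echo a stored association, or admit ignorance.

    Nothing is learned here: the system can only match against rules a
    programmer already wrote, which is exactly the point of the example.
    """
    return SEMANTIC_RULES.get((subject, relation), "unknown")

print(answer("bird", "can"))     # fly
print(answer("dolphin", "can"))  # unknown -- no rule, no generalization
```

However large the rule table gets, the process is still matching, not the causal, sensory learning described above.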

Because language uses discrete semantic rules, it's easy to fall into
the trap of thinking that computers, given enough horsepower, are capable of
human thought. Give a computer as many semantic algorithms, metaphor
databases, and reaction-grading mechanisms as you want, but it takes
much deeper and more differentiated networks to apply those words and derive
a physical meaning beyond grammatical or metaphorical boundaries. This
is the difference between a system that resembles intelligence and an
intelligent system. The resembling system is only capable of
processing information based on algorithms, not of reworking an
algorithm based on the reasoning for executing the function.

Whether our AGI is conscious or not, it could still be functionally
equivalent to a human mind in terms of output. The recursive,
bidirectional nature of neurons and their relation to forming a gestalt
is something we're barely able to grasp as a concept, let alone code
for. The nature of our hardware is going to have to change to
accommodate these multidimensional and recursive problems in
computing.





Josh Treadwell
 Systems Administrator
 [EMAIL PROTECTED]
 direct:480.206.3776

C.R.I.S. Camera Services
250 North 54th Street
Chandler, AZ 85226 USA
p 480.940.1103 / f 480.940.1329
http://www.criscam.com



BillK wrote:
On 10/20/06, Richard Loosemore [EMAIL PROTECTED] wrote:

For you to blithely say "Most normal speaking requires relatively little
'intelligence'" is just mind-boggling.

I am not trying to say that language skills don't require a human
level of intelligence. That's obvious. That is what makes humans human.

But day-to-day chat can be mastered by children, even in a foreign
language.

Watch that video I referenced in my previous post, of an American
chatting to a Chinese woman via a laptop running MASTOR software.
http://www.research.ibm.com/jam/speech_to_speech.mpg

Now tell me that that laptop is showing great intelligence to
translate at the basic level of normal conversation. Simple
subject-object-predicate stuff. Basic everyday vocabulary.
No complex similes, metaphors, etc.

There is a big difference between discussing philosophy and saying
"Where is the toilet?"
That is what I was trying to point out.

BillK





Re: [agi] SOTA

2006-10-20 Thread Josh Treadwell




Philip Goetz wrote:
On 10/20/06, Josh Treadwell [EMAIL PROTECTED] wrote:

"The resembling system is only capable of processing information based on
algorithms, and not reworking an algorithm based on the reasoning for
executing the function."

This appears to be the same argument Spock made in an old Star Trek
episode, that the computer chess-player could never beat the person
who programmed it. Note to the world: It is wrong. Please stop
using this argument.

It's not the same. A chess program is merely comparing outcomes and
percentages while adapting algorithmically to play styles. It's a
discrete system within which logically written functions are executed.
Yes, it adapts to moves and keeps track of which moves are being made,
but there is no higher-order AI thinking "out of the box" about
the problem. It simply approaches, computes based on a database of
moves, and weighs its advantages and disadvantages. A chess program
never reworks its strategy based on its own reasoning about why it's
playing. It just plays, and plays well. Yes, it could beat us, but that's
akin to saying a calculator is faster at math than we are.

--
Josh Treadwell





Re: A Mind Ontology Project? [Re: [agi] method for joining efforts]

2006-10-16 Thread Josh Treadwell






  
The second project that hasn't started yet is the Loglish language parser
project. The goal of this project would be to build a richly featured
parser library for Loglish, a composite language of Lojban and English
designed by Dr. Goertzel. (More information on Loglish here:
http://www.agiri.org/forum/index.php?showtopic=125)
This project requires a lot of specific domain knowledge: parser
generators, computational linguistics, formal logic, etc.

Is this referring to Ben's "lojban++" or "loglish"? I wasn't sure if
there was a difference. It seems lojban++ is an update to his loglish
proposal:

http://www.goertzel.org/papers/lojbanplusplus.pdf



-- 


Josh Treadwell




---BeginMessage---

Hi all,

I have spent some time recently mulling over the details of a
partially new language for communicating between humans and AIs.  The
language is (tentatively) called Lojban++ and is described here:

http://www.goertzel.org/papers/lojbanplusplus.pdf

Of course, I don't think that a language like this solves the
fundamental problems of AGI design/dynamics/teaching.  By no means.
However, I think it can be a valuable tool, in terms of making the
teaching process easier and smoother.  Humans come with a lot of
inbuilt inductive bias that helps us learn natural languages.  AGIs
don't have this particular inductive bias, unless one explicitly
builds it in, which is very hard.  Thus it makes sense, for AGI
teaching purposes, to use a language that can be mastered without any
particular inductive bias, because it's closer to the thought level
without so much arbitrariness.  Lojban++, a pidgin combining some
English vocabulary with the already existing logic-based language
Lojban, seems to me to fit the bill...

Building a Lojban++ parser and semantic-mapper will require a fair bit
of work, and if anyone on this list is interested in taking on this
project (on an open-source basis, most likely) I'd be eager to talk
about it...

-- Ben G

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


---End Message---


Re: [agi] Is a robot a Turing Machine?

2006-10-02 Thread Josh Treadwell

Sergio,
   I think brains are classical devices as well, although I also
believe there is a difference between simple classical systems and
systems exhibiting a complexity threshold.  When you introduce enough
autonomous agents into a system, the emergent behavior generates a new
threshold.  And this process is recursive, so that once a threshold is
met, another new system is coined.  There are no definite threshold
levels, but rather vague analog tipping points that depend on the
characteristics of the input system.  Thus, nth-level systems (cognition
- biological learning algorithms - neurons - molecules - atoms -
sub-atomic systems - universal constant limits, etc.) are exchanging
evolutionary information with each other and cross-influencing each
other's behavior.
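The tipping-point idea can be sketched with a throwaway model (my own construction, nothing from the thread): agents that adopt a behavior once overall activity crosses their threshold, so a small difference in initial conditions yields a qualitatively different global state.

```python
def cascade(n_agents=100, threshold=0.3, n_seeds=35, rounds=50):
    """Fraction of agents active after `rounds` of local threshold updates."""
    # The first n_seeds agents start active; the rest copy the crowd.
    active = [i < n_seeds for i in range(n_agents)]
    for _ in range(rounds):
        frac = sum(active) / n_agents
        # An inactive agent flips once global activity crosses its threshold.
        active = [a or frac >= threshold for a in active]
    return sum(active) / n_agents

print(cascade(n_seeds=35))  # 1.0  -- past the tipping point: a new global state
print(cascade(n_seeds=5))   # 0.05 -- below it: nothing emerges
```

The real point is that the "threshold" in interesting systems isn't a fixed number but an analog property of the interacting parts; this toy only shows how local rules can produce a discontinuous global outcome.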
   This brings us to orders of magnitude and the hierarchical
perspective. Successful evolutionary growth tends to produce higher-level
systems (a person) that see their lower systems (cells) as expendable, but
higher forms (society) as worth more than themselves.  From a certain
vantage point on this hierarchy, a low enough system is replaceable by
a process that can predict its outcome or output to within an
unimportant rounding (i.e., analog vs. digital).  My dilemma occurs when
consciousness comes into the picture.  There may be a certain point
where existing, in the way we understand it, parallels some universal
constant or Planck-time persistence.  This trait may not carry over into
our AGI systems.  We might be taking the same evolutionary road as our
AI, but they may lack the quantum parallel necessary to be conscious.
Perhaps systemic cognition and true consciousness are separable?  A
cognitive understanding of systems might itself be a higher, yet
unnecessary, step up from conscious cognition.  So the question becomes:
are cognitive systems or consciousness more important to reproduce first?


Sorry about the bad English/vagueness.  Lunch came and went a little too
fast today for any editing.


-Josh

