Re: [opencog-dev] Pros and cons

2017-05-02 Thread nageenn18
Another question: how is the AtomSpace different from the chunking technique? Can
we merge chunking with the atom concept? Is there any representation of features,
like a chunk with dimension/value pairs?

 Original message 
From: 'Nil Geisweiller' via opencog
Date: 5/2/17 10:37 AM (GMT+05:00)
To: opencog@googlegroups.com
Subject: Re: [opencog-dev] Pros and cons


On 04/28/2017 06:49 PM, Linas Vepstas wrote:
>
>
> On Fri, Apr 28, 2017 at 1:34 AM, Nageen Naeem wrote:
>
> Is the OpenCog knowledge representation language able to learn things?
>
>
> Yes, but that is a topic of current active research.  There are four
> ways to do this:
> 1) use moses
> 2) use the pattern miner
> 3) use the language-learning subsystem.
> 4) the neural net subsystem; Ralf is working on that. It's a kind of
> generalization of the earlier "destin", using tensorflow under the
> covers.  So far, it's been used to create facial expressions (for use in
> humanoid robots).

Reasoning can be used too; you could, for instance, query

Implication
   ...
   Variable "$X"

via the backward chainer, and it would fill in the blanks with the $X
that directly and indirectly match. That is an inefficient form of
learning, but still.
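
Concretely, such a query could be run from Scheme roughly as follows.
This is a minimal sketch, not code from the thread: the toy knowledge,
the rule-base name, and the exact cog-bc entry point are assumptions
(the backward-chainer interface and module names have varied across
AtomSpace versions).

(use-modules (opencog) (opencog exec) (opencog ure))

; Toy knowledge: "smokes" implies "cancer", with strength 0.9 and
; confidence 0.8 on the implication.
(Implication (stv 0.9 0.8)
   (Predicate "smokes")
   (Predicate "cancer"))

; Ask the backward chainer what "smokes" implies; (Variable "$X") is
; the blank to be filled in.  "my-rule-base" stands for a URE rule
; base that is assumed to be configured already.
(cog-bc
   (Concept "my-rule-base")
   (Implication
      (Predicate "smokes")
      (Variable "$X")))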

Nil

>
> I'm currently working on language learning and have vague plans to
> port it over to the pattern miner, someday.  I haven't looked at the
> pattern miner yet; I'm guessing that it remains at a rather primitive,
> low level, for now.
>
> Basically, moses is "mature"; the other three are not, as they're in
> very active development.
>
> --linas
>
> On Friday, April 28, 2017 at 9:47:45 AM UTC+5, Daniel Gross wrote:
>
> Hi Linas,
>
> I guess I should further ask:
>
> What determines the expressiveness of OpenCog's representation, the
> one that is built into its inference?
>
> thank you,
>
> Daniel
>
> On Thursday, 27 April 2017 05:27:45 UTC+3, linas wrote:
>
>
>
> On Wed, Apr 26, 2017 at 2:06 PM, Nageen Naeem wrote:
>
> How can I differentiate knowledge representation in OpenCog
> from traditional knowledge representation techniques?
>
>
> Opencog is really pretty traditional in its representation form.
> There are whizzy bits: the ability to assign arbitrary
> valuations to the KR (e.g. floating point probabilities). Maybe
> I should say that opencog allows you to "design your own KR",
> although it provides a reasonable one, based on the PLN books.
>
> There's a pile of tools not available in other KR systems,
> including a sophisticated pattern matcher, a prototype pattern
> miner, a learning subsystem, and an NLP subsystem.  It's an active
> project, and it's open source, with these last two distinguishing it
> from pretty much everything else.
>
> --linas
>
>
>
> On Thursday, April 27, 2017 at 12:02:16 AM UTC+5, Nageen
> Naeem wrote:
>
> Basically, I want to compare knowledge representation
> techniques; I want to compare knowledge representation in
> OpenCog and in CLARION. Any description, please?
>
> On Wednesday, April 26, 2017 at 11:54:11 PM UTC+5, linas
> wrote:
>
>
>
> On Wed, Apr 26, 2017 at 1:41 PM, Nageen Naeem wrote:
>
> OpenCog didn't shift to Java from C++?
>
>
> You are welcome to study https://github.com/opencog
> for the source languages used.
>
>
> Thanks for defining the pros and cons. If there
> is any paper comparing it with other
> architectures, kindly recommend it to me.
>
>
> Ben has written multiple books on the architecture
> in general.  The wiki describes particular choices.
>
> I am not aware of any other
> (knowledge-representation) architectures that can do
> what the atomspace can do.  So I'm not sure what you
> want to compare against. A triplestore? Various
> actionscripts? Prolog?
>
> --linas
>
>
> On Wednesday, April 26, 2017 at 9:36:04 PM
> UTC+5, Ben Goertzel wrote:
>
> OpenCog did not shift from Java to C++; it
> was always C++.
>
> The advantage of Atomspace is that it allows
> fine-grained semantic
> representations of all forms of knowledge in
> a common framework.  The
> disadvantage is that this makes 

Re: [opencog-dev] Pros and cons

2017-05-02 Thread nageenn18
Dear all, can anyone here explain in detail the concept of truth value
(strength, confidence, count)? What is the concept of attention value?
Please explain with an example.


 Original message 
From: 'Nil Geisweiller' via opencog
Date: 5/2/17 10:45 AM (GMT+05:00)
To: opencog@googlegroups.com
Cc: gross...@gmail.com, Linas Vepstas
Subject: Re: [opencog-dev] Pros and cons
On 04/28/2017 06:11 PM, Ben Goertzel wrote:
> to implement new inference rules, you code new ImplicationLinks,
> wrapped with LambdaLinks etc. ...

To be precise: you can encode rules as data using, for instance,
ImplicationLinks, and then use PLN or any custom deduction, modus-ponens,
etc. rules defined as BindLinks to reason over these. Or you can directly
encode your rules as BindLinks. The following example demonstrates the
two ways:

https://github.com/opencog/atomspace/tree/master/examples/rule-engine/frog
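
For reference, the second of those two ways (a rule written directly as
a BindLink) looks roughly like the sketch below, loosely following the
linked frog example. The predicate names are illustrative and the
truth-value bookkeeping is omitted.

(use-modules (opencog) (opencog exec))

; If $X croaks and $X eats flies, then $X is a frog.
(define frog-rule
   (Bind
      (Variable "$X")
      (And
         (Evaluation (Predicate "croaks") (Variable "$X"))
         (Evaluation (Predicate "eats flies") (Variable "$X")))
      (Inheritance (Variable "$X") (Concept "frog"))))

; Given, say, (Evaluation (Predicate "croaks") (Concept "Fritz")) and
; (Evaluation (Predicate "eats flies") (Concept "Fritz")) in the
; AtomSpace, executing the rule returns the inferred conclusions,
; wrapped in a SetLink.
(cog-execute! frog-rule)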

Nil


>
> new inference rules coded as such Atoms can be executed perfectly
> well by the URE rule engine...
>
> quantitative truth value formulas associated with new inference rules
> can be coded in Scheme or Python and wrapped in GroundedSchemaNodes
>
> easy peasy...
>
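
To make that last point concrete, a truth-value formula and its
GroundedSchemaNode wrapper might look like the sketch below. This is
illustrative only: the formula is a stand-in rather than the actual
PLN deduction formula, and the rule fragment assumes the usual URE
convention of passing the conclusion first, followed by the premises.

(use-modules (opencog))

; A stand-in deduction formula: combine the premise truth values in
; some chosen way, then stamp the result onto the conclusion atom.
(define (my-deduction-formula conclusion premise-ab premise-bc)
   (cog-set-tv! conclusion
      (stv (* (cog-mean premise-ab) (cog-mean premise-bc))
           (min (cog-confidence premise-ab) (cog-confidence premise-bc)))))

; Inside a URE rule, the rewrite term calls the formula like so:
(ExecutionOutput
   (GroundedSchema "scm: my-deduction-formula")
   (List
      (Inheritance (Variable "$A") (Variable "$C"))   ; conclusion
      (Inheritance (Variable "$A") (Variable "$B"))   ; premise 1
      (Inheritance (Variable "$B") (Variable "$C")))) ; premise 2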
>
> On Fri, Apr 28, 2017 at 11:09 PM, Daniel Gross wrote:
>> Hi Linas,
>>
>> Thank you.
>>
>> What is the mechanism to endow new language elements in Atomese with
>> (custom) inference semantics?
>>
>> thank you,
>>
>> Daniel
>>
>>
>>
>>
>> On Friday, 28 April 2017 17:47:16 UTC+3, linas wrote:
>>>
>>>
>>>
>>> On Wed, Apr 26, 2017 at 11:43 PM, Daniel Gross wrote:

 Hi Linas,

 Yes, your intuition is right.

 Thank you for your clarification.

 What is the core meta-language of OpenCog into which PLN can be
 loaded?
>>>
>>>
>>> It's the system of typed atoms and values.
>>> http://wiki.opencog.org/w/Atom    http://wiki.opencog.org/w/Value
>>>
>>> You can add new types if you wish (you can remove them too, but stuff will
>>> then likely break), with the new types defining the new kinds of knowledge
>>> you want to represent.
>>>
>>> There is a rich set of pre-defined types, which encode pretty much
>>> everything that is generically useful, across multiple projects that people
>>> have done.  We call this "language" "atomese"
>>> http://wiki.opencog.org/w/Atomese
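
(For a flavor of what atomese looks like, here is a tiny illustrative
sketch, not taken from the thread: a link of a predefined type, with a
truth value attached.)

(use-modules (opencog))

; "Cats are animals", asserted with strength 0.9 and confidence 0.8.
; Inheritance and Concept are predefined atom types; the stv is a
; truth value attached to the link.
(Inheritance (stv 0.9 0.8)
   (Concept "cat")
   (Concept "animal"))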
>>>
>>> We've gone through a lot of different atom types, by trial and error; the
>>> current ones are the ones that seem to work OK.  There are over a hundred of
>>> them.
>>>
>>> PLN uses only about a dozen of them, such as ImplicationLink,
>>> InheritanceLink, and most importantly, EvaluationLink.
>>>
>>> Using EvaluationLink is kind-of-like inventing a new type. So most users
>>> are told to use that, and nothing else.  Some types seem to deserve a
>>> short-hand notation, and so these get hard-coded (usually for
>>> performance reasons).
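
(Purely as an illustration of that EvaluationLink idiom, with made-up
names: an ad-hoc "likes" relation, with no new atom type defined.)

(Evaluation
   (Predicate "likes")
   (List (Concept "Alice") (Concept "chocolate")))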
>>>
>>> --linas


 Daniel



 On Thursday, 27 April 2017 05:42:02 UTC+3, linas wrote:
>
>
>
> On Wed, Apr 26, 2017 at 9:13 PM, Daniel Gross wrote:
>>
>> Hi Linas,
>>
>> I guess it would be good to differentiate between the KR architecture
>> and the language. It would be great if there exists some kind of
>> comparison of the OpenCog language to other comparable KR languages.
>
>
> I don't quite understand.  However, if I were to take a guess at the
> intent:
>
> opencog allows you to design your own KR language; it doesn't much care;
> it provides a set of tools. These include a data store, a rule engine with
> backward and forward chainers, a pattern matcher, and a pattern miner.
>
> Opencog does come with a default "KR language", PLN -- it's described in
> multiple PLN books.  But if you don't like PLN, you can create your own KR
> language. All the parts are there.
>
> The "cognitive architecture" is something you'd layer on top of the KR
> language (and/or on top of various neural nets, and/or on top of various
> learning algorithms, etc).
>
> opencog does not have a particularly firm "architecture" per se; we
> experiment and try to make things work, and learn from that. Ben would say
> that there is an architecture; it just hasn't been implemented yet.
> There's a lot to do; we're only getting started.
>
> --linas
>>
>>
>> Then there are cognitive architectures, which can be compared. I think
>> Ben compares a number of architectures in his book.
>>
>> I guess one then needs a kind of "composite" -- what an
>> architecture+language can do, since an architecture likely takes
>> advantage of the language features.
>>
>> Daniel
>>
>> On Wednesday, 26 April 2017 21:54:11 UTC+3, linas wrote: