Re: [opencog-dev] Pros and cons

2017-05-02 Thread Ben Goertzel
When an Atom is used by a cognitive process its STI gets boosted
("stimulated"), along with (to a much lesser degree) its LTI value

The ECAN module has an importance-spreading agent that spreads STI and
LTI values around along the links in the Atomspace

That's what's happening now; fancier methods of adjusting STI and LTI
using predictive modeling have been thought through but not
implemented/tested...

Low LTI can cause something to get saved to disk rather than deleted
forever ... knowledge can be retrieved from disk without being
relearned...
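To make the mechanism concrete, here is a rough Python sketch of stimulation plus importance diffusion. The names (`Atom`, `stimulate`, `spread_importance`) and all the numbers are illustrative assumptions, not the actual OpenCog ECAN API:

```python
# Hypothetical sketch of ECAN-style stimulation and importance spreading.
# Not the real OpenCog API -- just the shape of the idea Ben describes.

class Atom:
    def __init__(self, name, sti=0.0, lti=0.0):
        self.name = name
        self.sti = sti          # short-term importance
        self.lti = lti          # long-term importance
        self.neighbors = []     # atoms connected via links in the Atomspace

def stimulate(atom, amount, lti_fraction=0.1):
    """A cognitive process using the atom boosts its STI, and to a much
    lesser degree its LTI."""
    atom.sti += amount
    atom.lti += amount * lti_fraction

def spread_importance(atom, rate=0.2):
    """Diffuse a fraction of the atom's STI evenly along its links."""
    if not atom.neighbors:
        return
    share = atom.sti * rate / len(atom.neighbors)
    for n in atom.neighbors:
        n.sti += share
    atom.sti *= (1 - rate)

a, b = Atom("cat", sti=100.0), Atom("animal")
a.neighbors.append(b)
stimulate(a, 10.0)       # cat used by a process: STI 110.0, LTI 1.0
spread_importance(a)     # 20% of cat's STI flows to animal
```

An importance-spreading agent would simply run something like `spread_importance` repeatedly over high-STI atoms.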




On Wed, May 3, 2017 at 12:21 PM, Daniel Gross  wrote:
> Hi,
>
> Thank you for the example.
>
> Perhaps i can ask a follow up question:
>
> How are the STI values set (how do we know what is relevant now)? At
> what time, and which processes in OpenCog are responsible for them? I assume
> that STI values are set for whole groups of atoms.
>
> When and for what purpose are STI values changed?
>
> Also, how are LTI values arrived at? It seems to me that low LTI is like
> forgetting -- once it's gone, it's gone, unless re-learned.
>
> thank you,
>
> Daniel
>
> On Tuesday, 2 May 2017 18:17:47 UTC+3, Vishnu Priya wrote:
>>
>> Hi,
>>
>>
>> InheritanceLink Nageen human 
>>
>> strength - represents the degree of truth (true/false)
>> confidence - expresses how certain or
>> uncertain the strength value is.
>>
>> InheritanceLink Nageen human <.9, .9>
>>
>> InheritanceLink Nageen monster <.9, .1>
>> this indicates that there exists very little evidence that Nageen is a
>> monster.
>>
>> Atoms are usually annotated with attention values, of the
>> following types.
>>
>> STI: This value indicates how relevant this atom is to the currently
>> running process/context
>> LTI: This value indicates how relevant this atom might be to future
>> processes/contexts (atoms with low LTI have no future use and get deleted if
>> the AtomSpace gets too big)
>> VLTI: This is a simple boolean indicating that this atom should never
>> be deleted. (Useful for system components that are written in Atomese)
>>
>> -Cheers,
>> Vishnu
>>
>>
>> On Tuesday, 2 May 2017 16:41:19 UTC+2, Nageen Naeem wrote:
>>>
>>> Dear all,
>>> Can anyone here explain in detail the concept of truth value
>>> -strength
>>> -confidence
>>> -count
>>> What is the concept of attention value.
>>> Explain with example please
>>>
>>>
>>>
>>> Sent from my Samsung Galaxy smartphone.
>>>
>>>  Original message 
>>> From: 'Nil Geisweiller' via opencog 
>>> Date: 5/2/17 10:45 AM (GMT+05:00)
>>> To: ope...@googlegroups.com
>>> Cc: gros...@gmail.com, Linas Vepstas 
>>> Subject: Re: [opencog-dev] Pros and cons
>>>
>>> On 04/28/2017 06:11 PM, Ben Goertzel wrote:
>>> > to implement new inference rules, you code new ImplicationLinks,
>>> > wrapped with LambdaLinks etc. ...
>>>
>>> One point of precision: you can encode rules as data using, for instance,
>>> ImplicationLinks, then use PLN or any custom deduction, modus-ponens,
>>> etc. rules defined as BindLinks to reason on these. Or directly encode
>>> your rules as BindLinks. The following example demonstrates the two ways:
>>>
>>>
>>> https://github.com/opencog/atomspace/tree/master/examples/rule-engine/frog
>>>
>>> Nil
>>>
>>>
>>> >
>>> > new inference rules coded as such Atoms, can be executed perfectly
>>> > well by the URE rule engine...
>>> >
>>> > quantitative truth value formulas associated with new inference rules
>>> > can be coded in Scheme or python and wrapped in GroundedSchemaNodes
>>> >
>>> > easy peasy...
>>> >
>>> >
>>> > On Fri, Apr 28, 2017 at 11:09 PM, Daniel Gross 
>>> > wrote:
>>> >> Hi Linas,
>>> >>
>>> >> Thank you.
>>> >>
>>> >> What is the mechanism to endow new language elements in atomese with
>>> >> an
>>> >> (custom) inference semantics.
>>> >>
>>> >> thank you,
>>> >>
>>> >> Daniel
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> On Friday, 28 April 2017 17:47:16 UTC+3, linas wrote:
>>> >>>
>>> >>>
>>> >>>
>>> >>> On Wed, Apr 26, 2017 at 11:43 PM, Daniel Gross 
>>> >>> wrote:
>>> 
>>>  Hi Linas,
>>> 
>>>  Yes your intuition is right.
>>> 
>>>  Thank you for your clarification.
>>> 
>>>  What is the core meta-language that is OpenCog into which PLN can be
>>>  loaded.
>>> >>>
>>> >>>
>>> >>> It's the system of typed atoms and values.
>>> >>> http://wiki.opencog.org/w/Atom http://wiki.opencog.org/w/Value
>>> >>>
>>> >>> You can add new types if you wish (you can remove them too, but stuff
>>> >>> will then likely break), with the new types defining the new kinds of
>>> >>> knowledge you want to represent.
>>> >>>
>>> >>> There is a rich set of pre-defined types, which encode pretty much
>>> >>> everything that is generically useful, across multiple projects that
>>> >>> people
>>> >>> have done.  We call this "language" "atomese"
>>> >>> http://wiki.opencog.org/w/Atomese
>>> 

Re: [opencog-dev] Pros and cons

2017-05-02 Thread Daniel Gross
Hi,

Thank you for the example. 

Perhaps i can ask a follow up question:

How are the STI values set (how do we know what is relevant now)? At
what time, and which processes in OpenCog are responsible for them? I
assume that STI values are set for whole groups of atoms.

When and for what purpose are STI values changed?

Also, how are LTI values arrived at? It seems to me that low LTI is
like forgetting -- once it's gone, it's gone, unless re-learned.

thank you,

Daniel

On Tuesday, 2 May 2017 18:17:47 UTC+3, Vishnu Priya wrote:
>
> Hi,
>
>
> InheritanceLink Nageen human 
>
> strength - represents the degree of truth (true/false)
> confidence - expresses how certain or
> uncertain the strength value is.
>
> InheritanceLink Nageen human <.9, .9>
>
> InheritanceLink Nageen monster <.9, .1>
> this indicates that there exists very little evidence that Nageen is a
> monster.
>
> Atoms are usually annotated with attention values, of the
> following types.
>
> STI: This value indicates how relevant this atom is to the currently
> running process/context
> LTI: This value indicates how relevant this atom might be to future
> processes/contexts (atoms with low LTI have no future use and get deleted if
> the AtomSpace gets too big)
> VLTI: This is a simple boolean indicating that this atom should never
> be deleted. (Useful for system components that are written in Atomese)
>
> -Cheers,
> Vishnu
>
>
> On Tuesday, 2 May 2017 16:41:19 UTC+2, Nageen Naeem wrote:
>>
>> Dear all, 
>> Can anyone here explain in detail the concept of truth value
>> -strength
>> -confidence
>> -count
>> What is the concept of attention value.
>> Explain with example please
>>
>>
>>
>> Sent from my Samsung Galaxy smartphone.
>>
>>  Original message 
>> From: 'Nil Geisweiller' via opencog  
>> Date: 5/2/17 10:45 AM (GMT+05:00) 
>> To: ope...@googlegroups.com 
>> Cc: gros...@gmail.com, Linas Vepstas  
>> Subject: Re: [opencog-dev] Pros and cons 
>>
>> On 04/28/2017 06:11 PM, Ben Goertzel wrote:
>> > to implement new inference rules, you code new ImplicationLinks,
>> > wrapped with LambdaLinks etc. ...
>>
>> One point of precision: you can encode rules as data using, for instance,
>> ImplicationLinks, then use PLN or any custom deduction, modus-ponens,
>> etc. rules defined as BindLinks to reason on these. Or directly encode
>> your rules as BindLinks. The following example demonstrates the two ways:
>>
>> https://github.com/opencog/atomspace/tree/master/examples/rule-engine/frog
>>
>> Nil
>>
>>
>> >
>> > new inference rules coded as such Atoms, can be executed perfectly
>> > well by the URE rule engine...
>> >
>> > quantitative truth value formulas associated with new inference rules
>> > can be coded in Scheme or python and wrapped in GroundedSchemaNodes
>> >
>> > easy peasy...
>> >
>> >
>> > On Fri, Apr 28, 2017 at 11:09 PM, Daniel Gross  
>> wrote:
>> >> Hi Linas,
>> >>
>> >> Thank you.
>> >>
>> >> What is the mechanism to endow new language elements in atomese with an
>> >> (custom) inference semantics.
>> >>
>> >> thank you,
>> >>
>> >> Daniel
>> >>
>> >>
>> >>
>> >>
>> >> On Friday, 28 April 2017 17:47:16 UTC+3, linas wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Wed, Apr 26, 2017 at 11:43 PM, Daniel Gross  
>> wrote:
>> 
>>  Hi Linas,
>> 
>>  Yes your intuition is right.
>> 
>>  Thank you for your clarification.
>> 
>>  What is the core meta-language that is OpenCog into which PLN can be
>>  loaded.
>> >>>
>> >>>
>> >>> It's the system of typed atoms and values.
>> >>> http://wiki.opencog.org/w/Atom http://wiki.opencog.org/w/Value
>> >>>
>> >>> You can add new types if you wish (you can remove them too, but stuff
>> >>> will then likely break), with the new types defining the new kinds of
>> >>> knowledge you want to represent.
>> >>>
>> >>> There is a rich set of pre-defined types, which encode pretty much
>> >>> everything that is generically useful, across multiple projects that 
>> people
>> >>> have done.  We call this "language" "atomese"
>> >>> http://wiki.opencog.org/w/Atomese
>> >>>
>> >>> We've gone through a lot of different atom types, by trial and error; 
>> the
>> >>> current ones are the ones that seem to work OK.  There are over a 
>> hundred of
>> >>> them.
>> >>>
>> >>> PLN uses only about a dozen of them, such as ImplicationLink,
>> >>> InheritanceLink, and most importantly, EvaluationLink.
>> >>>
>> >>> Using EvaluationLink is kind-of-like inventing a new type. So most 
>> users
>> >>> are told to use that, and nothing else.  Some types seem to deserve a
>> >>> short-hand notation, and so these get hard-coded for various reasons
>> >>> (usually for performance reasons).
>> >>>
>> >>> --linas
>> 
>> 
>>  Daniel
>> 
>> 
>> 
>>  On Thursday, 27 April 2017 05:42:02 UTC+3, linas wrote:
>> >
>> >
>> >
>> 

Re: [opencog-dev] Pros and cons

2017-05-02 Thread Vishnu Priya
Hi,


InheritanceLink Nageen human 

strength - represents the degree of truth (true/false)
confidence - expresses how certain or
uncertain the strength value is.

InheritanceLink Nageen human <.9, .9>

InheritanceLink Nageen monster <.9, .1>
this indicates that there exists very little evidence that Nageen is a monster.

Atoms are usually annotated with attention values, of the
following types.

STI: This value indicates how relevant this atom is to the currently
running process/context
LTI: This value indicates how relevant this atom might be to future
processes/contexts (atoms with low LTI have no future use and get deleted
if the AtomSpace gets too big)
VLTI: This is a simple boolean indicating that this atom should never
be deleted. (Useful for system components that are written in Atomese)
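The three attention values and the forgetting policy they drive can be sketched in a few lines of Python. This is purely illustrative (the class name, fields, and `forget` policy are my assumptions, not the real AtomSpace API):

```python
# Illustrative sketch of attention values and a naive forgetting policy,
# matching the STI/LTI/VLTI description above. Not the real AtomSpace API.

from dataclasses import dataclass

@dataclass
class AttentionValue:
    sti: float          # relevance to the currently running process/context
    lti: float          # expected relevance to future processes/contexts
    vlti: bool = False  # "very long term": never delete this atom

def forget(atoms, lti_threshold):
    """When the AtomSpace gets too big, drop low-LTI atoms but keep
    anything flagged VLTI."""
    return [av for av in atoms if av.vlti or av.lti >= lti_threshold]

atoms = [AttentionValue(5.0, 0.1),              # relevant now, no future use
         AttentionValue(1.0, 9.0),              # useful later
         AttentionValue(0.0, 0.0, vlti=True)]   # system component in Atomese
survivors = forget(atoms, lti_threshold=1.0)    # keeps the last two
```

As Ben notes elsewhere in the thread, "forgetting" may mean saving to disk rather than deleting outright.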

-Cheers,
Vishnu


On Tuesday, 2 May 2017 16:41:19 UTC+2, Nageen Naeem wrote:
>
> Dear all, 
> Can anyone here explain in detail the concept of truth value
> -strength
> -confidence
> -count
> What is the concept of attention value.
> Explain with example please
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>  Original message 
> From: 'Nil Geisweiller' via opencog  
>
> Date: 5/2/17 10:45 AM (GMT+05:00) 
> To: ope...@googlegroups.com  
> Cc: gros...@gmail.com , Linas Vepstas  > 
> Subject: Re: [opencog-dev] Pros and cons 
>
> On 04/28/2017 06:11 PM, Ben Goertzel wrote:
> > to implement new inference rules, you code new ImplicationLinks,
> > wrapped with LambdaLinks etc. ...
>
> One point of precision: you can encode rules as data using, for instance,
> ImplicationLinks, then use PLN or any custom deduction, modus-ponens,
> etc. rules defined as BindLinks to reason on these. Or directly encode
> your rules as BindLinks. The following example demonstrates the two ways:
>
> https://github.com/opencog/atomspace/tree/master/examples/rule-engine/frog
>
> Nil
>
>
> >
> > new inference rules coded as such Atoms, can be executed perfectly
> > well by the URE rule engine...
> >
> > quantitative truth value formulas associated with new inference rules
> > can be coded in Scheme or python and wrapped in GroundedSchemaNodes
> >
> > easy peasy...
> >
> >
> > On Fri, Apr 28, 2017 at 11:09 PM, Daniel Gross  > wrote:
> >> Hi Linas,
> >>
> >> Thank you.
> >>
> >> What is the mechanism to endow new language elements in atomese with an
> >> (custom) inference semantics.
> >>
> >> thank you,
> >>
> >> Daniel
> >>
> >>
> >>
> >>
> >> On Friday, 28 April 2017 17:47:16 UTC+3, linas wrote:
> >>>
> >>>
> >>>
> >>> On Wed, Apr 26, 2017 at 11:43 PM, Daniel Gross  
> wrote:
> 
>  Hi Linas,
> 
>  Yes your intuition is right.
> 
>  Thank you for your clarification.
> 
>  What is the core meta-language that is OpenCog into which PLN can be
>  loaded.
> >>>
> >>>
> >>> It's the system of typed atoms and values.
> >>> http://wiki.opencog.org/w/Atom http://wiki.opencog.org/w/Value
> >>>
> >>> You can add new types if you wish (you can remove them too, but stuff
> >>> will then likely break), with the new types defining the new kinds of
> >>> knowledge you want to represent.
> >>>
> >>> There is a rich set of pre-defined types, which encode pretty much
> >>> everything that is generically useful, across multiple projects that 
> people
> >>> have done.  We call this "language" "atomese"
> >>> http://wiki.opencog.org/w/Atomese
> >>>
> >>> We've gone through a lot of different atom types, by trial and error; 
> the
> >>> current ones are the ones that seem to work OK.  There are over a 
> hundred of
> >>> them.
> >>>
> >>> PLN uses only about a dozen of them, such as ImplicationLink,
> >>> InheritanceLink, and most importantly, EvaluationLink.
> >>>
> >>> Using EvaluationLink is kind-of-like inventing a new type. So most 
> users
> >>> are told to use that, and nothing else.  Some types seem to deserve a
> >>> short-hand notation, and so these get hard-coded for various reasons
> >>> (usually for performance reasons).
> >>>
> >>> --linas
> 
> 
>  Daniel
> 
> 
> 
>  On Thursday, 27 April 2017 05:42:02 UTC+3, linas wrote:
> >
> >
> >
> > On Wed, Apr 26, 2017 at 9:13 PM, Daniel Gross  
> wrote:
> >>
> >> Hi Linas,
> >>
> >> I guess it would be good to differentiate between the KR 
> architecture
> >> and the language. Would be great if there exists some kind of 
> comparison of
> >> the open cog language to other comparable KR languages.
> >
> >
> > I don't quite understand.  However, if I were to take a guess at the
> > intent.
> >
> > opencog allows you to design your own KR language; it doesn't much 
> care,
> > it provides a set of tools. These include a data store, a rule 
> engine with
> > backward and forward chainers, a pattern matcher, a pattern miner.

Re: [opencog-dev] Pros and cons

2017-05-02 Thread Matthew Ikle
This is straightforward: Strength is a measure of likelihood — it can be 
thought of as a probability, while confidence is a measure of how confident one 
is in the strength value. Confidence is related to the value of count. The more 
pieces of evidence upon which the strength is determined, the higher the 
confidence in the strength value. 
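This count-to-confidence relationship can be made concrete. In PLN-style simple truth values, confidence is commonly derived from count as c = n / (n + k), where k is a "lookahead" constant; treat the exact formula and the k value used below as assumptions for illustration:

```python
# Sketch of the count -> confidence relation used by PLN-style simple
# truth values: confidence = n / (n + k). The constant k (800 here) is
# an assumed illustrative value, not an authoritative default.

def confidence_from_count(n, k=800.0):
    """More evidence (larger count n) -> confidence closer to 1."""
    return n / (n + k)

def strength_from_evidence(positive, total):
    """Strength as the observed frequency of positive evidence."""
    return positive / total if total else 0.0

# 90 of 100 observations support "Nageen is human":
s = strength_from_evidence(90, 100)    # 0.9
c = confidence_from_count(100)         # ~0.111: modest evidence
c_more = confidence_from_count(8000)   # ~0.909: far more evidence
```

The strength stays 0.9 either way; only the confidence grows with the amount of evidence behind it.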

The attention value is determined by what the system is working upon at the 
moment. It is a measure of the importance of an Atom to the system at a point 
in time. As I write this, for example, “Atoms” in my mind related to the 
attention allocation system (Economic Attention Networks) would have a high 
attention (or importance) value.

—matt

> On May 2, 2017, at 8:41 AM, nageenn18  wrote:
> 
> Dear all, 
> Can anyone here explain in detail the concept of truth value
> -strength
> -confidence
> -count
> What is the concept of attention value.
> Explain with example please
> 
> 
> 
> Sent from my Samsung Galaxy smartphone.
> 
>  Original message 
> From: 'Nil Geisweiller' via opencog 
> Date: 5/2/17 10:45 AM (GMT+05:00)
> To: opencog@googlegroups.com
> Cc: gross...@gmail.com, Linas Vepstas 
> Subject: Re: [opencog-dev] Pros and cons
> 
> On 04/28/2017 06:11 PM, Ben Goertzel wrote:
> > to implement new inference rules, you code new ImplicationLinks,
> > wrapped with LambdaLinks etc. ...
> 
> One point of precision: you can encode rules as data using, for instance,
> ImplicationLinks, then use PLN or any custom deduction, modus-ponens,
> etc. rules defined as BindLinks to reason on these. Or directly encode
> your rules as BindLinks. The following example demonstrates the two ways:
> 
> https://github.com/opencog/atomspace/tree/master/examples/rule-engine/frog
> 
> Nil
> 
> 
> >
> > new inference rules coded as such Atoms, can be executed perfectly
> > well by the URE rule engine...
> >
> > quantitative truth value formulas associated with new inference rules
> > can be coded in Scheme or python and wrapped in GroundedSchemaNodes
> >
> > easy peasy...
> >
> >
> > On Fri, Apr 28, 2017 at 11:09 PM, Daniel Gross  wrote:
> >> Hi Linas,
> >>
> >> Thank you.
> >>
> >> What is the mechanism to endow new language elements in atomese with an
> >> (custom) inference semantics.
> >>
> >> thank you,
> >>
> >> Daniel
> >>
> >>
> >>
> >>
> >> On Friday, 28 April 2017 17:47:16 UTC+3, linas wrote:
> >>>
> >>>
> >>>
> >>> On Wed, Apr 26, 2017 at 11:43 PM, Daniel Gross  wrote:
> 
>  Hi Linas,
> 
>  Yes your intuition is right.
> 
>  Thank you for your clarification.
> 
>  What is the core meta-language that is OpenCog into which PLN can be
>  loaded.
> >>>
> >>>
> >>> It's the system of typed atoms and values.
> >>> http://wiki.opencog.org/w/Atom http://wiki.opencog.org/w/Value
> >>>
> >>> You can add new types if you wish (you can remove them too, but stuff will
> >>> then likely break), with the new types defining the new kinds of knowledge
> >>> you want to represent.
> >>>
> >>> There is a rich set of pre-defined types, which encode pretty much
> >>> everything that is generically useful, across multiple projects that 
> >>> people
> >>> have done.  We call this "language" "atomese"
> >>> http://wiki.opencog.org/w/Atomese
> >>>
> >>> We've gone through a lot of different atom types, by trial and error; the
> >>> current ones are the ones that seem to work OK.  There are over a hundred 
> >>> of
> >>> them.
> >>>
> >>> PLN uses only about a dozen of them, such as ImplicationLink,
> >>> InheritanceLink, and most importantly, EvaluationLink.
> >>>
> >>> Using EvaluationLink is kind-of-like inventing a new type. So most users
> >>> are told to use that, and nothing else.  Some types seem to deserve a
> >>> short-hand notation, and so these get hard-coded for various reasons
> >>> (usually for performance reasons).
> >>>
> >>> --linas
> 
> 
>  Daniel
> 
> 
> 
>  On Thursday, 27 April 2017 05:42:02 UTC+3, linas wrote:
> >
> >
> >
> > On Wed, Apr 26, 2017 at 9:13 PM, Daniel Gross  wrote:
> >>
> >> Hi Linas,
> >>
> >> I guess it would be good to differentiate between the KR architecture
> >> and the language. Would be great if there exists some kind of 
> >> comparison of
> >> the open cog language to other comparable KR languages.
> >
> >
> > I don't quite understand.  However, if I were to take a guess at the
> > intent.
> >
> > opencog allows you to design your own KR language; it doesn't much care,
> > it provides a set of tools. These include a data store, a rule engine 
> > with
> > backward and forward chainers, a pattern matcher, a pattern miner.
> >
> > Opencog does come with a default "KR language", PLN -- its described in
> > multiple PLN books.  But if you don't like PLN, you can 

Re: [opencog-dev] Program learning vs Machine Learning

2017-05-02 Thread Ben Goertzel
When you use MOSES for supervised classification, it's doing something
similar to a supervised learning algorithm... the advantage of MOSES
in that case is mostly that it tends to come up with classification
rules that are both accurate and concise (if the parameters are tuned
well), whereas e.g. it's hard to get neural nets or SVMs to produce
models that aren't bloated...
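A toy sketch of the accuracy-vs-conciseness trade-off Ben describes: score candidate boolean programs by accuracy minus a complexity penalty (parsimony pressure), which is the flavor of what MOSES optimizes. Everything here (the scoring function, penalty weight, candidates) is illustrative, not the MOSES API:

```python
# Toy parsimony-pressure scoring: among equally accurate programs,
# prefer the concise one. Illustrative only -- not the MOSES API.

import itertools

def score(program, size, data, complexity_penalty=0.05):
    """Fitness = accuracy on data minus a penalty per unit of size."""
    acc = sum(program(x) == y for x, y in data) / len(data)
    return acc - complexity_penalty * size

# Target concept: label = a AND b, over all four boolean inputs.
data = [((a, b), a and b) for a, b in itertools.product([0, 1], repeat=2)]

candidates = [
    (lambda x: x[0] and x[1], 2),                                # concise, exact
    (lambda x: (x[0] and x[1]) or (x[0] and x[1] and x[0]), 5),  # bloated, exact
]
best = max(candidates, key=lambda c: score(c[0], c[1], data))
# Both candidates are 100% accurate; the concise one wins on score.
```

A neural net or SVM would also fit this data, but would not hand back anything as readable as `x[0] and x[1]`.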

However, MOSES can also be used to learn other programs besides
classification functions

ben

On Tue, May 2, 2017 at 10:56 PM, Vishnu Priya  wrote:
>
>
> Hello All,
>
> Why should I use MOSES instead of any ML algorithm? What distinguishes
> program learning from ML?  Somehow I am not clear :-(
>
>
> Thanks,
> Vishnu
>
> --
> You received this message because you are subscribed to the Google Groups
> "opencog" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to opencog+unsubscr...@googlegroups.com.
> To post to this group, send email to opencog@googlegroups.com.
> Visit this group at https://groups.google.com/group/opencog.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/opencog/7e86f0bf-5835-4b1a-88a2-bc4da80dc305%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



-- 
Ben Goertzel, PhD
http://goertzel.org

"I am God! I am nothing, I'm play, I am freedom, I am life. I am the
boundary, I am the peak." -- Alexander Scriabin

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to opencog+unsubscr...@googlegroups.com.
To post to this group, send email to opencog@googlegroups.com.
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/CACYTDBfy%2BCQp96ODrCpukKPmSuOhhYKE-QrYniJo94mF2X2axQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [opencog-dev] Pros and cons

2017-05-02 Thread nageenn18
Another question: how is the AtomSpace different from the chunking technique? Can
we merge chunking with the atoms concept? Is there any representation of features,
like a chunk with dimension/value pairs?

Sent from my Samsung Galaxy smartphone.
 Original message 
From: 'Nil Geisweiller' via opencog
Date: 5/2/17 10:37 AM (GMT+05:00)
To: opencog@googlegroups.com
Subject: Re: [opencog-dev] Pros and cons


On 04/28/2017 06:49 PM, Linas Vepstas wrote:
>
>
> On Fri, Apr 28, 2017 at 1:34 AM, Nageen Naeem  > wrote:
>
> Is the OpenCog knowledge representation language able to learn things?
>
>
> Yes, but that is a topic of current active research.  There are four
> ways to do this:
> 1) use moses
> 2) use the pattern miner
> 3) use the language-learning subsystem.
> 4) the neural net subsystem, Ralf is working on that, its a kind-of
> generalization of the earlier "destin", and using tensorflow under the
> covers.  So far, it's been used to create facial expressions (for use in
> humanoid robots)

Reasoning can be used too; you could, for instance, query

Implication
   
   Variable "$X"

via the backward chainer and it would fill the blanks with $X that 
directly and indirectly match. That is an inefficient form of learning, 
but still.
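Conceptually, the backward chainer fills in $X by chasing implication links, directly and transitively. A toy Python sketch of that idea (the `facts` pairs below are made-up placeholders, not the contents of the frog example):

```python
# Toy sketch of "learning by query": find every $X reachable from a
# starting term by chaining Implication links. Illustrative only; the
# facts are invented, and the real backward chainer is far more general.

facts = {("frog", "croaks"), ("croaks", "amphibian")}  # (premise, conclusion)

def chase(start, facts):
    """All conclusions reachable from `start`, directly or indirectly."""
    found, frontier = set(), {start}
    while frontier:
        nxt = {b for (a, b) in facts if a in frontier} - found
        found |= nxt
        frontier = nxt
    return found

chase("frog", facts)   # both the direct and the chained conclusion
```

As Nil says, this is an inefficient form of learning, but it does surface knowledge that was only implicit in the Atomspace.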

Nil

>
> I'm currently working on language learning and have vague plans to
> port it over to the pattern miner, someday.  I haven't looked at the
> pattern miner yet, I'm guessing that it remains at a rather primitive,
> low level, for now.
>
> Basically, moses is "mature" the other three are not, they're in very
> active development.
>
> --linas
>
> On Friday, April 28, 2017 at 9:47:45 AM UTC+5, Daniel Gross wrote:
>
> Hi Linas,
>
> I guess i should further ask:
>
> What determines the expressiveness of OpenCog's representation, the
> one that is built into its inference?
>
> thank you,
>
> Daniel
>
> On Thursday, 27 April 2017 05:27:45 UTC+3, linas wrote:
>
>
>
> On Wed, Apr 26, 2017 at 2:06 PM, Nageen Naeem
>  wrote:
>
> how I can differentiate knowledge representation in OpenCog
> and traditional knowledge representation techniques.
>
>
> Opencog is really pretty traditional in its representation form.
> There are whizzy bits: the ability to assign arbitrary
> valuations to the KR (e.g. floating point probabilities). Maybe
> I should say that opencog allows you to "design your own KR",
> although it provides a reasonable one, based on the PLN books.
>
> There's a pile of tools not available in other KR systems,
> including a sophisticated pattern matcher, a prototype pattern
> miner, a learning subsystem, and an NLP subsystem.  It's an active
> project, and it's open source, with these last two distinguishing it
> from pretty much everything else.
>
> --linas
>
>
>
> On Thursday, April 27, 2017 at 12:02:16 AM UTC+5, Nageen
> Naeem wrote:
>
> basically, i want to compare knowledge representation
> techniques, want to compare knowledge representation in
> OpenCog and in clarion? any description, please.
>
> On Wednesday, April 26, 2017 at 11:54:11 PM UTC+5, linas
> wrote:
>
>
>
> On Wed, Apr 26, 2017 at 1:41 PM, Nageen Naeem
>  wrote:
>
> OpenCog didn't shift to java from c++?
>
>
> You are welcome to study https://github.com/opencog
> for the source languages used.
>
>
> Thanks for defining pros and cons if there is
> any paper on comparison with other architecture
> kindly recommend me.
>
>
> Ben has written multiple books on the architecture
> in general.  The wiki describes particular choices.
>
> I am not aware of any other
> (knowledge-representation) architectures that can do
> what the atomspace can do.  So I'm not sure what you
> want to compare against. Triplestore? various
> actionscripts? Prolog?
>
> --linas
>
>
> On Wednesday, April 26, 2017 at 9:36:04 PM
> UTC+5, Ben Goertzel wrote:
>
> OpenCog did not shift from Java to C++, it
> was always C++
>
> The advantage of Atomspace is that it allows
> fine-grained semantic
> representations of all forms of knowledge in
> a common framework.  The
> disadvantage is, this makes 

Re: [opencog-dev] Pros and cons

2017-05-02 Thread nageenn18
Dear all,
Can anyone here explain in detail the concept of truth value:
-strength
-confidence
-count
What is the concept of attention value? Explain with an example, please.


Sent from my Samsung Galaxy smartphone.
 Original message 
From: 'Nil Geisweiller' via opencog
Date: 5/2/17 10:45 AM (GMT+05:00)
To: opencog@googlegroups.com
Cc: gross...@gmail.com, Linas Vepstas
Subject: Re: [opencog-dev] Pros and cons
On 04/28/2017 06:11 PM, Ben Goertzel wrote:
> to implement new inference rules, you code new ImplicationLinks,
> wrapped with LambdaLinks etc. ...

One point of precision: you can encode rules as data using, for instance,
ImplicationLinks, then use PLN or any custom deduction, modus-ponens,
etc. rules defined as BindLinks to reason on these. Or directly encode
your rules as BindLinks. The following example demonstrates the two ways:

https://github.com/opencog/atomspace/tree/master/examples/rule-engine/frog

Nil


>
> new inference rules coded as such Atoms, can be executed perfectly
> well by the URE rule engine...
>
> quantitative truth value formulas associated with new inference rules
> can be coded in Scheme or python and wrapped in GroundedSchemaNodes
>
> easy peasy...
>
>
> On Fri, Apr 28, 2017 at 11:09 PM, Daniel Gross  wrote:
>> Hi Linas,
>>
>> Thank you.
>>
>> What is the mechanism to endow new language elements in atomese with an
>> (custom) inference semantics.
>>
>> thank you,
>>
>> Daniel
>>
>>
>>
>>
>> On Friday, 28 April 2017 17:47:16 UTC+3, linas wrote:
>>>
>>>
>>>
>>> On Wed, Apr 26, 2017 at 11:43 PM, Daniel Gross  wrote:

 Hi Linas,

 Yes your intuition is right.

 Thank you for your clarification.

 What is the core meta-language that is OpenCog into which PLN can be
 loaded.
>>>
>>>
>>> It's the system of typed atoms and values.
>>> http://wiki.opencog.org/w/Atom    http://wiki.opencog.org/w/Value
>>>
>>> You can add new types if you wish (you can remove them too, but stuff will
>>> then likely break), with the new types defining the new kinds of knowledge
>>> you want to represent.
>>>
>>> There is a rich set of pre-defined types, which encode pretty much
>>> everything that is generically useful, across multiple projects that people
>>> have done.  We call this "language" "atomese"
>>> http://wiki.opencog.org/w/Atomese
>>>
>>> We've gone through a lot of different atom types, by trial and error; the
>>> current ones are the ones that seem to work OK.  There are over a hundred of
>>> them.
>>>
>>> PLN uses only about a dozen of them, such as ImplicationLink,
>>> InheritanceLink, and most importantly, EvaluationLink.
>>>
>>> Using EvaluationLink is kind-of-like inventing a new type. So most users
>>> are told to use that, and nothing else.  Some types seem to deserve a
>>> short-hand notation, and so these get hard-coded for various reasons
>>> (usually for performance reasons).
>>>
>>> --linas


 Daniel



 On Thursday, 27 April 2017 05:42:02 UTC+3, linas wrote:
>
>
>
> On Wed, Apr 26, 2017 at 9:13 PM, Daniel Gross  wrote:
>>
>> Hi Linas,
>>
>> I guess it would be good to differentiate between the KR architecture
>> and the language. Would be great if there exists some kind of comparison 
>> of
>> the open cog language to other comparable KR languages.
>
>
> I don't quite understand.  However, if I were to take a guess at the
> intent.
>
> opencog allows you to design your own KR language; it doesn't much care,
> it provides a set of tools. These include a data store, a rule engine with
> backward and forward chainers, a pattern matcher, a pattern miner.
>
> Opencog does come with a default "KR language", PLN -- its described in
> multiple PLN books.  But if you don't like PLN, you can create your own KR
> language. All the parts are there.
>
> The "cognitive architecture" is something you'd layer on top of the KR
> language (and/or on top of various neural nets, and/or on top of various
> learning algorithms, etc).
>
> opencog does not have a particularly firm "architecture" per se; we
> experiment and try to make things work, and learn from that. Ben would say
> that there is an architecture, it just hasn't been implemented yet.  
> There's
> a lot to do, we're only getting started.
>
> --linas
>>
>>
>> Then there are cognitive architectures, which can be compared. I think
>> Ben has a number of architectures compared in his book.
>>
>> i guess one then needs a kind of "composite" -- what an
>> architecture+language can do, since an architecture likely takes 
>> advantage
>> of the language features.
>>
>> Daniel
>>
>> On Wednesday, 26 April 2017 21:54:11 UTC+3, linas wrote:
>>>
>>>
>>>
>>> On 

Re: [opencog-dev] PLN rules selection

2017-05-02 Thread Vishnu Priya


> Yeah Nil.  I think the output is not in a suitable format to run the FC/BC.
> I extracted the R2L parses, and they look like the following:

 ((ImplicationLink (stv 1 1)
   (PredicateNode "for@bd1dfde9-6b3c-4b90-912e-c0f9815cf2b1" (stv 9.7569708e-13 0.0012484395))
   (PredicateNode "for" (stv 9.7569708e-13 0.0012484395))
 )
 (InheritanceLink (stv 1 1)
   (ConceptNode "hostile@37b430a1-64d8-43b9-be1d-62110b45b079")
   (ConceptNode "hostile")
 )
 (EvaluationLink (stv 1 1)
   (PredicateNode "is@63bffae7-17ca-48c9-9c4a-3c9cb1e6a1ab" (stv 9.7569708e-13 0.0012484395))
   (ListLink
     (ConceptNode "it@8f61d36e-dfa0-41ae-810d-019df9b3aa30")
     (ConceptNode "crime@ff7fce34-1e10-45e7-89ae-0e39f2a13543")
   )
 )
 (EvaluationLink (stv 1 1)
   (PredicateNode "says@330e5cc3-3d84-4ea0-9b51-5f3ea5cfc113" (stv 9.7569708e-13 0.0012484395))
   (ListLink
     (ConceptNode "law@efcdf101-0d88-4f38-af37-f6fcd65fa596")
   )
 )
 (EvaluationLink (stv 1 1)
   (DefinedLinguisticPredicateNode "definite" (stv 9.7569708e-13 0.0012484395))
   (ListLink
     (ConceptNode "American@20f9c975-f87b-4ad8-a7da-b414f5f793c0" (stv 9.7569708e-13 0.0012484395))
   )
 )
 (InheritanceLink (stv 1 1)
   (ConceptNode "it@8f61d36e-dfa0-41ae-810d-019df9b3aa30")
   (ConceptNode "crime@ff7fce34-1e10-45e7-89ae-0e39f2a13543")
 )
 (ImplicationLink (stv 1 1)
   (PredicateNode "sell@7adea775-4dd6-4b21-86df-08ea67c56b20" (stv 9.7569708e-13 0.0012484395))
   (PredicateNode "sell" (stv 9.7569708e-13 0.0012484395))
 )
 (InheritanceLink (stv 1 1)
   (ConceptNode "nations@100dab18-b3b4-4365-82ae-bcb37a5a5de7")
   (ConceptNode "hostile@37b430a1-64d8-43b9-be1d-62110b45b079")
 )
 (InheritanceLink (stv 1 1)
   (ConceptNode "nations@100dab18-b3b4-4365-82ae-bcb37a5a5de7")
   (ConceptNode "nation")
 )
 (EvaluationLink (stv 1 1)
   (PredicateNode "for@bd1dfde9-6b3c-4b90-912e-c0f9815cf2b1" (stv 9.7569708e-13 0.0012484395))
   (ListLink
     (ConceptNode "American@20f9c975-f87b-4ad8-a7da-b414f5f793c0" (stv 9.7569708e-13 0.0012484395))
   )
 )
 (InheritanceLink (stv 1 1)
   (ConceptNode "crime@ff7fce34-1e10-45e7-89ae-0e39f2a13543")
   (ConceptNode "crime")
 )
 (EvaluationLink (stv 1 1)
   (PredicateNode "is@63bffae7-17ca-48c9-9c4a-3c9cb1e6a1ab" (stv 9.7569708e-13 0.0012484395))
   (ListLink
     (ConceptNode "it@8f61d36e-dfa0-41ae-810d-019df9b3aa30")
   )
 )
 (InheritanceLink (stv 1 1)
   (ConceptNode "American@20f9c975-f87b-4ad8-a7da-b414f5f793c0" (stv 9.7569708e-13 0.0012484395))
   (ConceptNode "American" (stv 9.7569708e-13 0.0012484395))
 )
 (ImplicationLink (stv 1 1)
   (PredicateNode "to@0610bade-fc40-415c-9bcc-821c2f5d01b2" (stv 9.7569708e-13 0.0012484395))
   (PredicateNode "to" (stv 9.7569708e-13 0.0012484395))
 )
 (InheritanceLink (stv 1 1)
   (PredicateNode "is@63bffae7-17ca-48c9-9c4a-3c9cb1e6a1ab" (stv 9.7569708e-13 0.0012484395))
   (DefinedLinguisticConceptNode "present" (stv 9.7569708e-13 0.0012484395))
 )
 (EvaluationLink (stv 1 1)
   (DefinedLinguisticPredicateNode "definite" (stv 9.7569708e-13 0.0012484395))
   (ListLink
     (ConceptNode "it@8f61d36e-dfa0-41ae-810d-019df9b3aa30")
   )
 )
 (ImplicationLink (stv 1 1)
   (PredicateNode "is@63bffae7-17ca-48c9-9c4a-3c9cb1e6a1ab" (stv 9.7569708e-13 0.0012484395))
   (PredicateNode "be" (stv 9.7569708e-13 0.0012484395))
 )
 (ImplicationLink (stv 1 1)
   (PredicateNode "that@3fb84d09-683f-4756-9881-4b116a625cbd" (stv 9.7569708e-13 0.0012484395))
   (PredicateNode "that" (stv 9.7569708e-13 0.0012484395))
 )
 (EvaluationLink (stv 1 1)
   (PredicateNode "says@330e5cc3-3d84-4ea0-9b51-5f3ea5cfc113" (stv 9.7569708e-13 0.0012484395))
   (ListLink
     (ConceptNode "that@3fb84d09-683f-4756-9881-4b116a625cbd" (stv 9.7569708e-13 0.0012484395))
   )
 )
 (InheritanceLink (stv 1 1)
   (ConceptNode "it@8f61d36e-dfa0-41ae-810d-019df9b3aa30")
   (ConceptNode "it")
 )
 (ImplicationLink (stv 1 1)
   (PredicateNode "says@330e5cc3-3d84-4ea0-9b51-5f3ea5cfc113" (stv 9.7569708e-13 0.0012484395))
   (PredicateNode "say" (stv 9.7569708e-13 0.0012484395))
 )
 (EvaluationLink (stv 1 1)
   (PredicateNode "that@3fb84d09-683f-4756-9881-4b116a625cbd" (stv 9.7569708e-13 0.0012484395))
   (ListLink
     (PredicateNode "is@63bffae7-17ca-48c9-9c4a-3c9cb1e6a1ab" (stv 9.7569708e-13 0.0012484395))
   )
 )
 (InheritanceLink (stv 1 1)
   (ConceptNode "law@efcdf101-0d88-4f38-af37-f6fcd65fa596")
   (ConceptNode "law")
 )
 (InheritanceLink (stv 1 1)
   (PredicateNode "says@330e5cc3-3d84-4ea0-9b51-5f3ea5cfc113" (stv 9.7569708e-13 0.0012484395))
   (DefinedLinguisticConceptNode "present" (stv 9.7569708e-13 0.0012484395))
 )
 (InheritanceLink (stv 1 1)
   (PredicateNode "sell@7adea775-4dd6-4b21-86df-08ea67c56b20" (stv 9.7569708e-13 0.0012484395))
   (DefinedLinguisticConceptNode "infinitive" (stv 9.7569708e-13 0.0012484395))
 )
 (EvaluationLink (stv 1 1)
   (DefinedLinguisticPredicateNode "definite" (stv 9.7569708e-13 0.0012484395))
   (ListLink
     (ConceptNode "law@efcdf101-0d88-4f38-af37-f6fcd65fa596")
   )
 )
 (InheritanceLink (stv 1 1)


Re: [opencog-dev] PLN rules selection

2017-05-02 Thread 'Nil Geisweiller' via opencog

Vishnu,

I don't know if the NLP pipeline is mature enough to process that... 
After you've parsed the sentence, you may check whether it has produced 
knowledge similar to the criminal example:


https://github.com/opencog/atomspace/blob/master/tests/rule-engine/criminal.scm

I don't think it would, but I don't follow NLP development very closely.
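
For orientation, the knowledge in that criminal.scm test is hand-authored
logical structure, quite different from the word-level R2L relations shown
earlier in this thread. A much-abbreviated sketch (an editor's
reconstruction from memory, not copied from the file; variable names are
illustrative, and older Atomese may use a plain ImplicationLink with free
variables instead of ImplicationScopeLink) of how the "selling weapons to
hostile nations is a crime" rule is typically written:

```scheme
;; Sketch only -- see tests/rule-engine/criminal.scm for the
;; authoritative encoding. Variables and link types are assumptions.
(ImplicationScopeLink (stv 1 1)
  (VariableList
    (VariableNode "$who")
    (VariableNode "$what")
    (VariableNode "$whom"))
  (AndLink
    (InheritanceLink (VariableNode "$who") (ConceptNode "American"))
    (InheritanceLink (VariableNode "$what") (ConceptNode "weapon"))
    (InheritanceLink (VariableNode "$whom") (ConceptNode "hostile"))
    (EvaluationLink
      (PredicateNode "sell")
      (ListLink
        (VariableNode "$who")
        (VariableNode "$what")
        (VariableNode "$whom"))))
  (InheritanceLink (VariableNode "$who") (ConceptNode "criminal")))
```

The contrast with the R2L dump above is the point: the chainers want
explicit quantified logical structure like this, not per-word
linguistic relations.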

Nil

On 04/27/2017 01:40 PM, Vishnu Priya wrote:


Hi Linas,

Well, we do have some code in the opencog.nlp/relex2logic directory
(aka R2L) that will convert the English-language sentence "Frogs eat
flies" into a format that PLN can operate on.

But if you just want to do some basic reasoning with simple English
sentences, then R2L+PLN should be a fair way to do it.


So, following
https://github.com/opencog/opencog/tree/master/opencog/nlp/relex2logic ,
I started the RelEx server, started the Scheme shell, then did (nlp-parse
"This is a test sentence.") and got a SentenceNode as output.

   What should I do next, so that FC/BC can operate on it?
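
(Editor's sketch: the R2L relations attached to a parse can usually be
pulled out with the nlp scheme utilities. The names `sentence-get-parses`
and `parse-get-r2l-outputs` are assumptions based on the opencog/nlp
scheme modules of that era and may differ between versions:)

```scheme
;; Sketch, assuming the (opencog nlp) scheme modules are loaded.
;; Utility names are assumptions and may vary across versions.
(define sent (nlp-parse "This is a test sentence."))  ; a SentenceNode
(define parse (car (sentence-get-parses sent)))       ; first ParseNode
(parse-get-r2l-outputs parse)                         ; list of R2L atoms
```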

  My input is:
"The law says that it is a crime for an American to sell weapons to hostile
 nations. The country Nono, an enemy of America, has some missiles, and all
 of its missiles were sold to it by ColonelWest, who is American."

I am trying to convert it to a form on which I can apply BC.
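
(Editor's sketch: once the knowledge is in logical form, the backward
chainer is invoked through the URE scheme bindings. Here `pln-rbs` is a
hypothetical, already-configured rule-base node, and the exact `cog-bc`
signature varies between versions:)

```scheme
;; Sketch: ask the backward chainer whether ColonelWest is a criminal.
;; Assumes a loaded and configured rule base bound to `pln-rbs`;
;; all names here are illustrative, not taken from the thread.
(define target
  (InheritanceLink
    (ConceptNode "ColonelWest")
    (ConceptNode "criminal")))
(cog-bc pln-rbs target)
```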

Thanks,
 Vishnu

--
You received this message because you are subscribed to the Google
Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to opencog+unsubscr...@googlegroups.com
.
To post to this group, send email to opencog@googlegroups.com
.
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit
https://groups.google.com/d/msgid/opencog/0f357d9a-ef1d-4ac7-a7e3-d5dcdefa1f10%40googlegroups.com
.
For more options, visit https://groups.google.com/d/optout.

