Hmm! Yes, I am getting it now.
Thanks and regards,
Vishnu.
On Saturday, 19 November 2016 13:32:40 UTC+1, linas wrote:
That makes the problem harder. You still have to somehow deal with
different word-senses for "apple", and in addition, you also need to create
a model of the mental state of id1. So, if id1 is a child, the
word-sense for "apple" and "sweet" is probably different than if id1 is an
iphone fanboi.
I also had another idea of coupling the sentences along with their id.
E.g., why can't I give sentences like "Apples are sweet, said by id1" and
"Farmers are starving, said by id2"? So that I would know which sentence
has which id. What do you say?
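The pairing idea above could be sketched in plain Python (this is only an illustration, not an OpenCog API; the function and field names are made up):

```python
# Toy sketch: pair each raw sentence with the id of its speaker before
# sending it to the NLP pipeline, so the resulting atoms can later be
# traced back to who said what.

def tag_sentences(utterances):
    """utterances: list of (speaker_id, sentence) pairs."""
    tagged = []
    for speaker_id, sentence in utterances:
        # One option is to append the attribution as plain text, as
        # suggested above; another is to keep it as structured metadata.
        tagged.append({"text": sentence, "speaker": speaker_id})
    return tagged

corpus = tag_sentences([
    ("id1", "Apples are sweet."),
    ("id2", "Farmers are starving."),
])
print(corpus[0]["speaker"])  # id1
```

Keeping the id as structured metadata (rather than splicing "said by id1" into the sentence text) avoids confusing the parser with non-grammatical suffixes.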
Thanks,
Vishnu
On Monday, 14 November
A better design would be to explicitly acknowledge that words have
meanings. The way that this is currently done looks roughly like this:
(EvaluationLink
    (PredicateNode "is")
    (ListLink
        (ConceptNode "apple@meaning-42")
        (ConceptNode "fruit@meaning-66")
    )
)
I hope the above
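A rough Python analogue of the sense-tagged representation above (illustrative only, not the OpenCog API; the `apple@meaning-7` / `company@meaning-12` ids are hypothetical):

```python
# Each word sense gets its own node name, so "apple@meaning-42" (the
# fruit) and "apple@meaning-7" (the company) are distinct concepts even
# though the surface word "apple" is the same.

facts = set()

def add_is_link(subject_sense, object_sense):
    # Mirrors (EvaluationLink (PredicateNode "is") (ListLink ...))
    facts.add(("is", subject_sense, object_sense))

add_is_link("apple@meaning-42", "fruit@meaning-66")
add_is_link("apple@meaning-7", "company@meaning-12")  # hypothetical ids

print(("is", "apple@meaning-42", "fruit@meaning-66") in facts)  # True
```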
On Mon, Oct 17, 2016 at 5:46 AM, Vishnu Priya wrote:
>
> Thanks Linas for the reply.
>
>
>> I would like to know some more info about Truth values.
>>
>
> How is an atom's truth value updated based on new observations?
>
They are not. Only PLN updates TVs, and some
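To make the idea of merging observations concrete, here is a sketch of a count-weighted revision rule over simple (strength, count) truth values. This is only an illustration of the general idea; the exact PLN revision formula and the constant `K` below are not taken from the OpenCog code:

```python
# Sketch of evidence-weighted revision for simple truth values.
# A truth value is (strength, count): strength is the observed
# frequency, count is the amount of evidence behind it.

K = 800.0  # hypothetical "personality" constant mapping count -> confidence

def revise(tv1, tv2):
    s1, n1 = tv1
    s2, n2 = tv2
    n = n1 + n2
    s = (s1 * n1 + s2 * n2) / n  # evidence-weighted mean strength
    return (s, n)

def confidence(n):
    # More evidence -> confidence approaches 1.
    return n / (n + K)

# 10 observations at strength 0.9 merged with 30 at strength 0.5:
merged = revise((0.9, 10.0), (0.5, 30.0))
print(round(merged[0], 3))  # 0.6
```

The point is that a new observation does not overwrite the old strength; it shifts it in proportion to how much evidence each side carries.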
Thanks Linas for the reply.
> I would like to know some more info about Truth values.
>
How is an atom's truth value updated based on new observations?
How can truth values of certain atoms in a particular context change a lot?
( i came across this line in the book, "*if truth values of
On Wed, Oct 12, 2016 at 9:48 AM, vishnu wrote:
>
>
> With attention values, I thought I could do the following:
> I have 24x7 tweets coming. So I thought I can send them to the NLP pipeline
> and get Atoms. Let's say most of the people tweet about Presidential
> Election.
Hey Roman,
Thanks, that helped a lot to get more insight. :-) I shall ask Misgana
about stimulating atoms.
Cheers,
Vishnu
Hey Vishnu,
what you are suggesting does sound doable.
In your case, you would just want to stimulate atoms every time they have
been parsed by the NLP pipeline. Something like this might already exist;
I'm not sure, ask Misgana.
More generally there would be many Mind-Agents that are running in
>
> Hey Roman,
>
Thanks for the reply :-)
> I am not sure what exactly you want to use the AttentionValues for
With attention values, I thought I could do the following:
I have 24x7 tweets coming. So I thought I can send them to the NLP pipeline
and get Atoms. Let's say most of the people tweet
Hey,
Short explanation first:
STI: This value indicates how relevant this atom is to the currently
running process/context.
LTI: This value indicates how relevant this atom might be in future
processes/contexts (atoms with low LTI have no future use and get deleted if
the AtomSpace gets too big).
VLTI:
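The stimulate/decay/forget cycle described above could be sketched like this (a toy illustration only; the real ECAN mind-agents and the AttentionBank API are considerably more involved, and the numbers here are arbitrary):

```python
# Toy attention-allocation loop: stimulate atoms when the NLP pipeline
# touches them, periodically decay STI, and forget atoms whose LTI
# suggests they have no future use.

sti = {}   # short-term importance per atom
lti = {}   # long-term importance per atom

def stimulate(atom, amount=10):
    # Called each time the pipeline produces or touches this atom.
    sti[atom] = sti.get(atom, 0) + amount
    lti[atom] = lti.get(atom, 0) + 1

def decay_and_forget(min_lti=1):
    # Periodically lower STI; drop atoms with no STI left and low LTI,
    # to keep the AtomSpace from growing too big.
    for atom in list(sti):
        sti[atom] -= 1
        if sti[atom] <= 0 and lti.get(atom, 0) < min_lti:
            del sti[atom]
            lti.pop(atom, None)

# Heavily tweeted topics get stimulated often and stay important:
for _ in range(3):
    stimulate("ConceptNode:election")
stimulate("ConceptNode:apple")

print(sti["ConceptNode:election"])  # 30
```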
Hello all,
Say, I have the following example sentences
- apple is rich in vitamins.
- apple keeps the doctor away.
- apple is healthy.
- apple is red in color.
- eva eats apple.
- Steve Jobs invented apple.
- apple iphone is usually costly.
- headquarters of apple
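A toy keyword heuristic (not an OpenCog component, just an illustration) shows why the sentences above need different senses of "apple":

```python
# Crude cue-word lookup: "apple" near fruit vocabulary is the fruit
# sense, near company vocabulary it is the company sense.

FRUIT_CUES = {"vitamins", "sweet", "eats", "red", "healthy", "doctor"}
COMPANY_CUES = {"iphone", "steve", "jobs", "headquarters", "costly"}

def guess_sense(sentence):
    words = set(sentence.lower().replace(".", "").split())
    if words & COMPANY_CUES:
        return "apple@company"
    if words & FRUIT_CUES:
        return "apple@fruit"
    return "apple@unknown"

print(guess_sense("apple is rich in vitamins."))       # apple@fruit
print(guess_sense("apple iphone is usually costly."))  # apple@company
```

A real disambiguator would of course use the parse context rather than a hand-written cue list, but the output in both cases would be sense-tagged nodes like `apple@meaning-42` rather than the bare word.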