Hello All,
I have tried running some FC examples by loading some rules as
in http://wiki.opencog.org/w/PLN_by_Hand and hard-coding/defining rules as
in
https://github.com/opencog/atomspace/blob/master/examples/rule-engine/chaining/frog.scm
Now I have some random facts and I want some unknown
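[For readers following along, here is a minimal sketch of running the forward chainer over some facts, in the spirit of the linked frog.scm. The rule-base name `frog-rb` is assumed to have been set up as in that example; module names and the exact `cog-fc` signature may vary by URE version.]

```scheme
; A sketch, assuming the URE is installed and a rule base named
; `frog-rb` has been configured as in the linked frog.scm example.
(use-modules (opencog) (opencog exec) (opencog ure))

; Some facts about Fritz, in the style of the frog example.
(Evaluation (Predicate "croaks") (Concept "Fritz"))
(Evaluation (Predicate "eats_flies") (Concept "Fritz"))

; Run the forward chainer from a source atom; it keeps applying the
; loaded rules to derive new knowledge until nothing new is inferred.
(cog-fc frog-rb (Concept "Fritz"))
```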
Hi,
On 03/21/2017 05:48 PM, Vishnu Priya wrote:
Hello All,
I have tried running some FC examples by loading some rules as
in http://wiki.opencog.org/w/PLN_by_Hand and hard coding/defining rules
as in
https://github.com/opencog/atomspace/blob/master/examples/rule-engine/chaining/frog.scm
Now I
Thanks, Nil. But I have a few questions.
1. In real-time reasoning, will you look at the input data and decide
the rules manually each and every time?
2. Do we obtain inferences only when the input is in a certain form of
the rule (or matches the rule) being chosen? (thi
On 03/23/2017 11:54 PM, Vishnu Priya wrote:
Thanks Nil. But i have few questions.
1. In real time reasoning, each and every time will you look at the
input data and decide the rules manually?
No. Inference control (IC) would decide which rule(s) to apply. At the
moment IC is rather stupid and sho
Thanks, Nil.
I have read the URE documentation. It is clear enough for me to understand.
> But under "Forward Chainer", you have written,
" It currently uses a rather brute force algorithms, select sources and
rules somewhat randomly,..". But I think, currently we select the source
and
Hi,
On 03/28/2017 07:13 PM, Vishnu Priya wrote:
Thanks NIL.
I have read the URE documentation. It is clear enough for me to
understand. But under "Forward Chainer", you have written,
" It currently uses a rather brute force algorithms, select sources
and rules somewhat randomly,.
> Sorry I don't understand. What do you mean by "input". Could you give me
> an example?
I meant, instead of giving "input" like the following, which involves the
variable "$X":
(ImplicationScope (stv 1.0 1.0)
  (TypedVariable (Variable "$X") (Type
    "ConceptNode")) (And (Evaluation (Pr
On 03/28/2017 07:13 PM, Vishnu Priya wrote:
1. If the input which is given is in R2L form of the sentences but does
not contain any variable,
can I still apply the rules on them to get inferences?
Because here the conditional instantiation meta-rule is in the following
form and involves substituti
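[Roughly, conditional instantiation takes a scoped implication plus a grounding of its variable and produces the instantiated conclusion. A sketch with made-up names (not taken from this thread):]

```scheme
; Premise 1: a scoped implication over $X (illustrative names).
(ImplicationScope
  (TypedVariable (Variable "$X") (Type "ConceptNode"))
  (Evaluation (Predicate "croaks") (Variable "$X"))
  (Evaluation (Predicate "is-frog") (Variable "$X")))

; Premise 2: a fact grounding the antecedent.
(Evaluation (Predicate "croaks") (Concept "Fritz"))

; Conclusion the meta-rule produces, substituting $X -> (Concept "Fritz"):
(Evaluation (Predicate "is-frog") (Concept "Fritz"))
```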
On Mon, Apr 3, 2017 at 9:10 AM, 'Nil Geisweiller' via opencog <
opencog@googlegroups.com> wrote:
> I'm not extremely familiar with the NLP code, but I think it can already
> produce such knowledge (probably as implication links without variables,
> but as explained here http://wiki.opencog.org/w/I
> The R2L code does have some fundamental design flaws: the rules are
> hand-coded, there are about 60 or so of them, and we really need twice as
> many, but even if we had more, the most they can deal with is relatively
> unambiguous factual English sentences. What we really need is a way to
> a
Hi Linas,
Well, we do have some code in the opencog.nlp/relex2logic directory (aka
> R2L) that will convert the English-language sentence "Frogs eat flies" into
> a format that PLN can operate on.
>
> But if you just want to do some basic reasoning with simple English
> sentences, then R2L+P
Vishnu,
I don't know if the NLP pipeline is mature enough to process that...
After you've parsed the sentence you may check whether it has produced
knowledge that is similar to the criminal example
https://github.com/opencog/atomspace/blob/master/tests/rule-engine/criminal.scm
I don't think
> Yeah Nil. I think, the output is not in the suitable format to run FC/BC.
> I extracted R2L parses and it is like the following:
((ImplicationLink (stv 1 1)
   (PredicateNode "for@bd1dfde9-6b3c-4b90-912e-c0f9815cf2b1" (stv
     9.7569708e-13 0.0012484395))
   (PredicateNode "for" (stv 9.756970