> The R2L code does have some fundamental design flaws: the rules are
> hand-coded, there are about 60 or so of them, and we really need twice as
> many, but even if we had more, the most they can deal with is relatively
> unambiguous factual English sentences. What we really need is a way to
>
Step 2 is used by step 4 (it is a bit obscured by the helper
ure-add-rules, which creates a MemberLink between the rule name and the
rule base).
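To make that concrete, here is a rough sketch of the kind of MemberLink that ure-add-rules ends up creating; the rule and rule-base names below are illustrative placeholders, not taken from the thread:

```scheme
; Hedged sketch, not the actual helper's output verbatim:
; ure-add-rules records each rule's membership in a rule base
; by linking the rule's alias to the rule-base node.
(MemberLink (stv 1 1)
   (DefinedSchemaNode "my-deduction-rule")  ; illustrative rule alias
   (ConceptNode "my-rule-base"))            ; illustrative rule base
```

This is what makes the rule name visible to the URE at query time: the backward/forward chainer finds the rules of a rule base by looking up exactly these MemberLinks.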
In practice, the only reason we need it is to store the rule name in the
AtomSpace (the Scheme rule name alone isn't loaded in the AtomSpace).
And
On Mon, Apr 3, 2017 at 9:10 AM, 'Nil Geisweiller' via opencog <
opencog@googlegroups.com> wrote:
> I'm not extremely familiar with the NLP code, but I think it can already
> produce such knowledge (probably as implication links without variables,
> but as explained here
On 04/02/2017 09:54 AM, Jérémy Morceaux wrote:
Thanks Nil and Linas, I disabled the cpprest download and opencog is now
built. I'm getting 86% on the unit tests, so I think it's working ^^ Thank
you both
If the 86% is on opencog, then it's probably normal; if it's on the
atomspace, then it's probably
On 03/28/2017 07:13 PM, Vishnu Priya wrote:
1. If the input given is the R2L form of the sentences but does not
contain any variables,
can I still apply the rules on them to get inferences?
Because here the conditional instantiation meta-rule is in the following
form and involves