Hi,

Let me jump right to the question below:

On Thu, Jan 13, 2022 at 9:01 PM Reach Me <reach...@gmail.com> wrote:

> Hi Opencog community!
>
> So I recently discovered an interest in AGI and have started to research
> various things. In my wanderings I came across OpenCog, and it's amazing.
> The conceptual idea of the atomspace is very powerful in its fundamental,
> flexible nature.
>
> While I'm interested in AGI/ML, I have more of a systems background with
> some programming experience. My current understanding of ML is about on
> the level of "I know flour and water go into the process of making bread,
> and I'm sure baking is part of the process, but I have no idea of all the
> ingredients and processes involved". I do understand the difference between
> symbolic AI and neural nets. I've been reading the wiki and starting to soak
> up all the concepts I can. While the wiki is great, there are some topics
> that seem like they may no longer apply.
>
> I usually like to expand into new areas by picking some pet project and
> working on it in milestones to grow my understanding. I thought a good
> project might be a question-and-answer application over a knowledge base. In
> trying to consider how it might be done in the atomspace, I came across Lojban
> topics and then later the paper on symbolic natural-language grammar
> induction (https://arxiv.org/abs/2005.12533). I didn't find more
> information in the wiki, and I wondered if there had been further
> developments in the area that might be applicable.
>
> My pet project would look something like this:
> 1) feed in a text corpus
> 2) some process to parse the text into the atomspace
> 3) ability to query the atomspace in natural language
> 4) receive a natural-language text answer.
>
> Is this a doable task with OpenCog in its current state, or herculean for
> a newcomer?
>

It's doable. In fact, it's been done at least four times with OpenCog in
the last 10-15 years. Each distinct effort was a success ... or a failure,
depending on how you define success and failure. I certainly got a lot out
of it.

Whether it's herculean or not depends on your abilities. It's not hard to
whip up something minimal in a month or so. What you'll have at the end of
that is a toy, a curiosity, and the unanswered question of "what does it
take to do better than this?"


> What are some milestones and topics/resources I may need to investigate to
> work further towards this pet project, if it's doable?
>

For language, you would need to understand link-grammar; reading the
original papers on it is a good start. Code that dumps link-grammar output
into the atomspace is here: https://github.com/opencog/lg-atomese
It works, it's maintained, and I think it's "bug-free"; if not, I'll fix
the bugs. What you do with things after that ... well, you're on your own.
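
To make that concrete: link-grammar ships with Python bindings, so parsing
a raw text file takes only a few lines. A sketch, assuming the upstream
`linkgrammar` Python module, and a plain-text file named corpus.txt (that
filename is my invention, for illustration):

    # Sketch: parse raw text with the link-grammar Python bindings.
    from linkgrammar import Dictionary, ParseOptions, Sentence

    po = ParseOptions()
    en = Dictionary("en")                  # the English dictionary

    for line in open("corpus.txt"):        # feed in a text corpus
        sent = Sentence(line.strip(), en, po)
        for linkage in sent.parse():       # one Linkage per candidate parse
            print(linkage.diagram())       # ASCII diagram of the links
            break                          # keep only the top-ranked parse

lg-atomese does the analogous thing, except that the parses land in the
atomspace as atoms, instead of on stdout.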

One of the more recent attempts to build a talking robot is the R2L system
(relex2logic), but I don't really recommend it. You can study it to get a
glimmer of how it worked, and learn from that, but I would discourage
further development on it. If you are into robots, then some early
versions of the Hanson Robotics robot (named "Eva") can be found in
assorted git repos, but it would take a lot of git archeology to recreate a
working version. That could be fun, though. (The robot itself is a Blender
model. It can "see" via your webcam, and actually track you!)

There are assorted proofs-of-concept of things working at higher levels of
abstraction, but converting those into something more than a
proof-of-concept is ... well, the question would be: why? There are a lot
of things one could do; there are a lot of things that have been tried.
It's like climbing a mountain in the mist: a lot of possibilities, and a
lot of confusion.


> I'm currently working through the opencog hands on sections to get a feel
> for things.
>

OpenCog is a bag of parts. Some parts work well. Some are obsolete. Some
code is good, some code is bad. Some ideas are good, some ideas didn't work
very well. At any rate, it's not one coherent whole. Caveat emptor.

--linas

-- 
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
