Hi Desmond, Jamie,
If you guys are no longer working on the ROS
sensor-fusion/object-identification & tracking code, let me know. If
someone else is, please put me in touch with them?  We still have ongoing
discussions on how to handle 3D position data in opencog, see below. I
would like to get everyone "on the same page", as it were.

Hi Vitaly,

On Thu, May 16, 2019 at 8:17 AM Vitaly Bogdanov <vsb...@gmail.com> wrote:

> They implemented something completely different and totally incompatible.
>> Actually, I'm not even sure any more about what it is they built.  It seems
>> like a tragic mistake, because the net result is a system that is
>> incompatible with .. everything else.  I really really want to get back to
>> the core idea of using PredicateNodes for everything, and hiding all
>> neural-net magic under the PredicateNodes, and not somewhere else
>> (certainly not in python/C++/scheme code API's)  How to rescue that effort,
>> and get it to work with ghost, well, that is a different conversation. If
>> we could just have the basic PredicateNode api working -- this would be
>> future-proof and extendable, and I think it's just not that hard to do.  So
>> yes, please please do it!
>>
>
> Some explanation to clarify the difference.
>

Ah, well, there is less of a difference now, it seems, although you still
do not use the Value API that I keep urging you to use ...


> In the first case, the system constantly updates the coordinates of the
> objects. It also constantly analyses the scene, and keeps in mind that
> some object, like a "cube", is here, and that it has a "red" color.
>

Yes, the above is exactly what the spaceserver was designed to do -- it can
keep track of objects, constantly.  Of course, you do not have to use the
space server -- its use is optional, and not using it might have been the
right design decision.  I don't know how or why you made the decision to
use it or not.


> When the system needs to answer the question "What is on the left of the
> red cube?", it queries the atomspace and evaluates a predicate which finds
> the "red cube" and computes the "left-of" predicate using the coordinates
> of the cube and the other objects in the scene.
>

This is where I think you got it wrong. You must NOT query the atomspace!
That is the wrong way to use the atomspace!  The atomspace was never meant
for this kind of constant update!

The intended design is that there is a well-known Value that is attached to
red-cube.  That Value knows how to obtain the correct 3D location from the
SpaceServer (or some other server, e.g. your server). That Value ONLY
fetches the location when it is asked for it, and ONLY THEN.   This is the
code that Misgana wrote. Twice. You should NEVER directly shove 3D
positions into the atomspace!!
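
To make the access pattern concrete, here is a minimal sketch in scheme.
The key name is made up, and the plain FloatValue is just a stand-in -- the
real thing would be a custom Value class (like the one Misgana wrote) that
re-fetches the location from the space server every time it is sampled:

(use-modules (opencog))

; A well-known key, agreed on by all subsystems. (Name invented here.)
(define position-key (Predicate "*-position-*"))

; Attach the value ONCE, when the object first becomes visible.
(cog-set-value! (Concept "red-cube") position-key
    (FloatValue 1.0 2.5 0.3))

; Any reasoning subsystem then reads the location the same way, on demand:
(cog-value (Concept "red-cube") position-key)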

The idea is that the atomspace stays static, slowly changing, with
relatively few, low-frequency changes.  i.e. when a new object becomes
visible, only then is the atomspace updated. When an object is no longer
visible/forgotten-about/untracked, only then is the atomspace updated.  All
of the high-frequency, jittery, fast-update object tracking happens in the
space server (or in your server... or some other server) -- Again, I don't
care about which server is tracking object locations; I don't care very
much about those implementation details. The only thing I care about is
that the 3D locations are accessed with a Value.  That way, we can have a
common, uniform API for all visual-processing and 3D spatial reasoning
systems.  (For example, some 3D spatial reasoning might be done on
imaginary, pretend objects. The sizes, colors, locations of those objects
will not come from your vision server; they will come in through some other
subsystem.  However, the API will be the same.)

I should be able to tell the robot "Imagine that there is a six-foot red
cube directly in front of Joel.  Would Joel still be visible?" and get the
right answer to that. When the robot imagines this, it would not use the
space-server, or your server, or the ROS-SLAM code for that imagination.
However, since the access style/access API is effectively the same, it can
still perform that spatial reasoning and get correct answers.
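
In the sketch above, imagination is then just another writer to the same
well-known key; the object name below is hypothetical:

; The imagination subsystem writes a made-up position under the SAME
; position-key, so left-of, visible-from, etc. work unchanged.
(cog-set-value! (Concept "imagined-red-cube") position-key
    (FloatValue 0.0 1.8 0.9))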


> In the second case, the system doesn't need to constantly add all the
> attributes of all the objects in the scene to the atomspace, in order to
> be ready to process queries.
>

Yes. Exactly!  That is the whole point of the space-server + value design!
That is why I keep telling you to use it!


> Instead, it uses a GroundedSchemaNode or a GroundedPredicateNode to
> calculate the predicates "is-red" and "left-of" on the fly, using the
> visual features of the objects it sees. And the error is backpropagated
> through the GroundedSchemaNode and GroundedPredicateNode to improve the
> predicate accuracy.
>

Oh. OK. Yes, that is a consistent design.  But I am guessing that you are
not using generic left-of code to access values; I'm guessing you wrote
some kind of custom code for this.  I would prefer to have generic left-of
code that works for arbitrary positional data-sources (including imaginary
ones), and this email is about convincing Joel (or Misgana) to write that
generic, works-for-any-vision-system code.
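
Roughly, the generic code would be a sketch like the following, built on
the position-key from the earlier sketch. The helper name and the
which-axis-means-"left" convention are assumptions, not a spec:

; Positions are read through the Value API, so this works for ANY
; data source that honors the position key -- your vision server,
; the ROS/SLAM pipeline, or the imagination subsystem.
(define (left-of? obj-a obj-b)
    (let ((pos-a (cog-value obj-a position-key))
          (pos-b (cog-value obj-b position-key)))
        ; Axis 0 is taken as the left-right axis, purely by convention.
        (< (cog-value-ref pos-a 0) (cog-value-ref pos-b 0))))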

Sensor fusion is another reason to have generic code -- besides your vision
system, there is a distinct vision system, based on ROS (including SLAM,
etc.), that Jamie Diprose has created, and that is ALSO doing vision
processing, obtaining 3D coordinates and sizes for objects.  We want to
unify that data with your data.  We can perform that sensor fusion in the
space server, or in some other server; I don't particularly care. What I do
care about is that the locations and sizes of objects are available through
Values, so that ALL reasoning subsystems have access to them.

In the meantime, it is certainly possible to do this hack:

DefineLink
     DefinedPredicateNode   "is-left-of"
     GroundedPredicateNode "Vitalys-left-of-code"

and then, on an as-needed basis, swap in other API's:

DefineLink
     DefinedPredicateNode   "is-left-of"
     PredicateNode "some-other-value-based left-of"

We could even use StateLink instead of DefineLink to switch between
different sensory subsystems.  In other words,  ghost should use
(DefinedPredicateNode   "is-left-of") instead of (GroundedPredicateNode
"Vitalys-left-of-code") when it does its language processing.  (I assume
that Amen and/or Man Hin are involved in the language spatial-reasoning
side of things... right?).
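
For completeness, the StateLink version of that hack might look something
like this (the AnchorNode name is invented here; since a StateLink holds
only one value at a time, re-asserting it atomically swaps the subsystem):

StateLink
     AnchorNode "left-of-provider"
     GroundedPredicateNode "Vitalys-left-of-code"

and later, to switch over:

StateLink
     AnchorNode "left-of-provider"
     PredicateNode "some-other-value-based left-of"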


>
> The two ways of processing requests do not exclude each other. The main
> difference is that, in the second case, you don't need to pre-calculate
> all the attributes you may need to answer the question.
>

Yes, the reason that the Value system was created was to avoid having to
precalculate anything.  That is why it exists. That is why I want you to
use it.  It's generic.

-- Linas

-- 
cassette tapes - analog TV - film cameras - you
