Hi Rick,

yes, the RDF structure has subject, predicate and object. The object data
type is not only text; it can be integer, double or other data types as
well. The structure of our Solr document doesn't contain only these three
fields: we compose one document per subject and use all found objects as
fields. Currently, in the schema we define two static fields, uri (the
subject) and geo, which contains the geographic point. When we find a
message in the Kafka queue, which means something changed in the DB, we
query the DB to get all subject, predicate, object triples of the found
subjects, and based on that we create the document. For example, for
subjects s1 and s2, we might get the following from the DB:

s1,geo,(latitude, longitude)
s1,area,200.0
s1,type,office
s2,geo,(latitude, longitude)

For s1, there is more information available and we would like to include it
in the Solr doc, therefore we used the dynamic fields feature_double_* and
feature_text_*. Based on the object data type we add the value to the
appropriate dynamic field:

<doc>
<uri>s1</uri>
<geo>(latitude,longitude)</geo>
<feature_double_area>200.0</feature_double_area>
<feature_text_type>office</feature_text_type>
</doc>
We appended the predicate name to the dynamic field prefix, and we used the
RDF data type to decide which dynamic field to use.
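
In case a concrete sketch helps, below is roughly what our consumer does
per subject, written with SolrJ. The Triple record, the core URL
(http://localhost:8983/solr/mycore) and the feature_int_* branch are only
illustrative assumptions; in our schema we currently have the static uri
and geo fields plus the feature_double_* and feature_text_* dynamic fields
described above.

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

import java.util.List;

public class TripleIndexer {

    // Hypothetical triple holder; in reality the values come from the DB
    // query triggered by the Kafka change message.
    record Triple(String subject, String predicate, Object value) {}

    private final SolrClient solr =
            new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

    // One Solr document per subject; each object is routed to a dynamic
    // field whose prefix is chosen from the object's data type.
    SolrInputDocument toDocument(String subject, List<Triple> triples) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("uri", subject);                       // static field
        for (Triple t : triples) {
            if ("geo".equals(t.predicate())) {
                doc.addField("geo", t.value());             // static geo field, e.g. "lat,lon"
            } else if (t.value() instanceof Double) {
                doc.addField("feature_double_" + t.predicate(), t.value());
            } else if (t.value() instanceof Integer) {
                doc.addField("feature_int_" + t.predicate(), t.value());  // assumed prefix
            } else {
                doc.addField("feature_text_" + t.predicate(), String.valueOf(t.value()));
            }
        }
        return doc;
    }

    public void index(String subject, List<Triple> triples) throws Exception {
        solr.add(toDocument(subject, triples));
        solr.commit();
    }
}

For the s1 triples above, this produces exactly the document shown (uri,
geo, feature_double_area, feature_text_type).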

regards,
Thaer

On 8 July 2017 at 02:36, Rick Leir <rl...@leirtech.com> wrote:

> Thaer
> Whoa, hold everything! You said RDF, meaning resource description
> framework? If so, you have exactly three fields: subject, predicate, and
> object. Maybe they are text type, or for exact matches you might want
> string fields. Add an ID field, which could be automatically generated by
> Solr, so now you have four fields. Or am I on a tangent again? Cheers --
> Rick
>
> On July 7, 2017 6:01:00 AM EDT, Thaer Sammar <t.sam...@geophy.com> wrote:
> >Hi Jan,
> >
> >Thanks! I am exploring the schemaless option based on Furkan's
> >suggestion. I need the flexibility because not all fields are known. We
> >get the data from an RDF database (which changes continuously). To be
> >more specific, we have a database and all changes to it are sent to a
> >Kafka queue, and we have a consumer which listens to the queue and
> >updates the Solr index.
> >
> >regards,
> >Thaer
> >
> >On 7 July 2017 at 10:53, Jan Høydahl <jan....@cominvent.com> wrote:
> >
> >> If you do not need the flexibility of dynamic fields, don’t use them.
> >> Sounds to me that you really want a field “price” to be float and a
> >> field “birthdate” to be of type date etc.
> >> If so, simply create your schema (either manually, through Schema API
> >> or using schemaless) up front and index each field as correct type
> >> without messing with field name prefixes.
> >>
> >> --
> >> Jan Høydahl, search solution architect
> >> Cominvent AS - www.cominvent.com
> >>
> >> > 5. jul. 2017 kl. 15.23 skrev Thaer Sammar <t.sam...@geophy.com>:
> >> >
> >> > Hi,
> >> > We are trying to index documents of different types. Documents have
> >> > different fields, and the fields are only known at indexing time. We
> >> > run a query on a database and we index what comes back, using query
> >> > variables as field names in Solr. Our current solution: we use
> >> > dynamic fields with a prefix, for example feature_i_*. The issues
> >> > with that:
> >> > 1) we need to define the type of the dynamic field, and to be able
> >> > to cover the types of discovered fields we define the following:
> >> > feature_i_* for integers, feature_t_* for strings, feature_d_* for
> >> > doubles, ....
> >> > 1.a) this means we need to check the type of the discovered field
> >> > and then put it in the corresponding dynamic field
> >> > 2) at search time, we need to know the right prefix
> >> > We are looking for help to find a way to ignore the prefix and check
> >> > the type
> >> >
> >> > regards,
> >> > Thaer
> >>
> >>
>
> --
> Sorry for being brief. Alternate email is rickleir at yahoo dot com
