Dear Team, 

We're using Apache Stanbol for semantic search in one of our Knowledge
Management projects.

We need to use a domain-specific ontology that was provided to us.

Below are the steps we followed:

1. Created a new SolrYard.
2. Created a new Managed Site and linked the yard to it.
3. Loaded the ontology into the site using curl:
        curl -i -X POST -H "Content-Type: text/turtle" -T ATRA.ttl \
             "http://localhost:8080/entityhub/site/customSite/entity"
4. Created an EntityHub Linking engine that references the site from step 2.
5. Created a custom enhancement chain and added the EntityHub Linking
engine from step 4.
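For completeness, the upload in step 3 can be sketched in Python as well. This is only a sketch of the request we send, assuming the same site URL as in our curl command; the Turtle payload below is a stand-in literal (in practice it would be read from ATRA.ttl), and the actual send is commented out so it only runs against a live Stanbol instance:

```python
import urllib.request

# Same endpoint as our curl command in step 3.
ENTITYHUB_URL = "http://localhost:8080/entityhub/site/customSite/entity"

# Stand-in payload; in practice: open("ATRA.ttl", "rb").read()
turtle_payload = (
    b"<http://www.sample-xo.com/KM/ATRb#1PDP> "
    b"a <http://www.sample-xo.com/KM/ATRb#ATR> ."
)

# Build the POST request with the text/turtle content type.
req = urllib.request.Request(
    ENTITYHUB_URL,
    data=turtle_payload,
    headers={"Content-Type": "text/turtle"},
    method="POST",
)

# urllib.request.urlopen(req)  # uncomment when Stanbol is running
print(req.method, req.get_header("Content-type"))
```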

When we run a search (with sentences taken from the ontology), the
entities defined in it are not identified. However, the results differ
from those produced by the default chain, so something seems to be
happening; we are just not sure what.

For example, the below is part of the ontology:
##################################
<http://www.sample-xo.com/KM/ATRb#1PDP>
  rdf:type ATRb:ATR ;
  ATRb:acronym "PDP" ;
  ATRb:comment "Digital Product Development Process and fully integrated 
Digitized Production and Supply Chain (> 2x Acceleration)" ;
  ATRb:level "1" ;
  ATRb:refinedBy <http://www.sample-xo.com/KM/ATRb#2FAL> ;
  ATRb:refinedBy <http://www.sample-xo.com/KM/ATRb#2LTA> ;
  ATRb:refinedBy <http://www.sample-xo.com/KM/ATRb#2STA> ;
  ATRb:title "Digital Product Design and Factory" ;
#############################################
When we search for 'Digital Product design and factory', we do not see
anything in the list of entities identified; the enhancement chain only
identifies the language.

Our enhancement chain contains the following engines:

tika;optional
langdetect
opennlp-sentence
opennlp-token
opennlp-pos
opennlp-chunker
sampleEntityLinking
dbpedia-disamb-linking
disambiguation-mlt
dbpedia-dereference
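For reference, this is roughly how that engine list looks in our Weighted Chain configuration (the property names follow the Stanbol Enhancer weighted-chain configuration; the chain name "sampleChain" here is just a placeholder):

```
stanbol.enhancer.chain.name=sampleChain
stanbol.enhancer.chain.weighted.chain=["tika;optional","langdetect",
  "opennlp-sentence","opennlp-token","opennlp-pos","opennlp-chunker",
  "sampleEntityLinking","dbpedia-disamb-linking","disambiguation-mlt",
  "dbpedia-dereference"]
```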

 
We would like to know whether what we are doing is correct, and why we
are not seeing the entities from the loaded ontology.

We were also told that we need to build a pipeline to actually
implement semantic search.

The pipeline steps being:
1. Load the input documents into an external Solr.
2. Configure the managed site, the enhancement engines, and the chain.
3. Send the search text to the enhancement chain.
4. Parse the output and extract the identified entities.
5. Query Solr (which has the input documents indexed) with the
identified entities.
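Steps 4 and 5 of the pipeline could be sketched as below. This is only an illustration under assumptions: the response is a hand-made stand-in for a real Stanbol enhancement result (the real JSON-LD key names depend on the context, so the walker matches any key ending in "entity-reference", per the fise ontology), and the Solr field "entity_uris" is hypothetical:

```python
import json

# Stand-in for the enhancement chain's JSON-LD output (step 3's result).
sample_response = json.loads("""
{
  "@graph": [
    {
      "@id": "urn:enhancement-1",
      "@type": ["fise:EntityAnnotation"],
      "fise:entity-reference": {"@id": "http://www.sample-xo.com/KM/ATRb#1PDP"},
      "fise:confidence": 0.92
    },
    {
      "@id": "urn:enhancement-2",
      "@type": ["fise:TextAnnotation"]
    }
  ]
}
""")

def extract_entity_refs(node, found=None):
    """Step 4: recursively collect every entity-reference URI in the graph."""
    if found is None:
        found = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key.endswith("entity-reference"):
                # The value may be {"@id": ...}, a plain URI, or a list.
                items = value if isinstance(value, list) else [value]
                for item in items:
                    found.append(item["@id"] if isinstance(item, dict) else item)
            else:
                extract_entity_refs(value, found)
    elif isinstance(node, list):
        for item in node:
            extract_entity_refs(item, found)
    return found

entities = extract_entity_refs(sample_response)

# Step 5: build a filter query against the external Solr core that holds
# the input documents ("entity_uris" is a hypothetical field name).
solr_fq = "entity_uris:(" + " OR ".join('"%s"' % uri for uri in entities) + ")"
print(entities)
print(solr_fq)
```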

Is this a correct understanding?

We're very new to Stanbol and search technologies as such.

Your help is much appreciated. 

Thanks & Regards,
Habib Rahman
Manufacturing TEG - Java CoE
Tata Consultancy Services
Ph:- +91446616 9247
Cell:- 9094765645
Mailto: habibrahma...@tcs.com
Website: http://www.tcs.com