[ https://issues.apache.org/jira/browse/TIKA-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16044904#comment-16044904 ]

ASF GitHub Bot commented on TIKA-2262:
--------------------------------------

thammegowda commented on a change in pull request #180: Fix for TIKA-2262: 
Supporting Image-to-Text (Image Captioning) in Tika
URL: https://github.com/apache/tika/pull/180#discussion_r121207360
 
 

 ##########
 File path: tika-parsers/src/main/java/org/apache/tika/parser/recognition/ObjectRecognitionParser.java
 ##########
 @@ -117,55 +130,68 @@ public synchronized void parse(InputStream stream, ContentHandler handler, Metad
         }
         metadata.set(MD_REC_IMPL_KEY, recogniser.getClass().getName());
         long start = System.currentTimeMillis();
-        List<RecognisedObject> objects = recogniser.recognise(stream, handler, metadata, context);
+        List<? extends RecognisedObject> objects = recogniser.recognise(stream, handler, metadata, context);
+
         LOG.debug("Found {} objects", objects != null ? objects.size() : 0);
         LOG.debug("Time taken {}ms", System.currentTimeMillis() - start);
+
         if (objects != null && !objects.isEmpty()) {
+            int count;
+            List<RecognisedObject> acceptedObjects = new ArrayList<RecognisedObject>();
+            List<String> xhtmlIds = new ArrayList<String>();
+            String xhtmlStartVal = null;
+
+            if (recogniser instanceof TensorflowRESTRecogniser || recogniser instanceof TensorflowImageRecParser) {
 
 Review comment:
   :-1:  There is a better way to handle this.
   
   In Model and Services terminology, we have
   `TensorflowRESTRecogniser`, `TensorflowImageRecParser`, and `TensorflowRESTCaptioner` as services, and
   `RecognisedObject` and `CaptionObject` as models.
   
   The problem:
   the condition is on the service, i.e. `recogniser instanceof TensorflowRESTRecogniser`.
   What if we add a new awesome service tomorrow? We would need to change this code, right?
   
   The solution:
     Make your decision based on the model object
   
   i.e. check:
   ```java
   for (RecognisedObject object : objects) {
       if (object instanceof CaptionObject) {
           // this result is a caption
       } else {
           // this is from something else, the default case
       }
   }
   ```
   As long as the new services return data in the same model, the code will work.
   Let me know if this needs more explanation!
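   
   For reference, a slightly fuller, self-contained sketch of that model-based dispatch is below. The `RecognitionModelSplitter` helper is a hypothetical illustration; only `RecognisedObject` and `CaptionObject` come from this PR, and their import paths are assumed from it:
   ```java
   import java.util.ArrayList;
   import java.util.List;
   
   // Package locations assumed from this PR; adjust if CaptionObject lives elsewhere.
   import org.apache.tika.parser.captioning.CaptionObject;
   import org.apache.tika.parser.recognition.RecognisedObject;
   
   class RecognitionModelSplitter {
   
       /** Separates captions from other recognised objects by model type, not by service type. */
       static List<CaptionObject> splitByModel(List<? extends RecognisedObject> objects,
                                               List<RecognisedObject> others) {
           List<CaptionObject> captions = new ArrayList<CaptionObject>();
           for (RecognisedObject object : objects) {
               if (object instanceof CaptionObject) {
                   // Caption produced by a captioning service such as TensorflowRESTCaptioner.
                   captions.add((CaptionObject) object);
               } else {
                   // Plain recognised object from any other recogniser.
                   others.add(object);
               }
           }
           return captions;
       }
   }
   ```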
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Supporting Image-to-Text (Image Captioning) in Tika for Image MIME Types
> ------------------------------------------------------------------------
>
>                 Key: TIKA-2262
>                 URL: https://issues.apache.org/jira/browse/TIKA-2262
>             Project: Tika
>          Issue Type: Improvement
>          Components: parser
>            Reporter: Thamme Gowda
>              Labels: deeplearning, gsoc2017, machine_learning
>
> h2. Background:
> An image caption is a short piece of text, usually a single line, added to the
> metadata of an image to provide a brief summary of the scene it shows.
> Generating such captions automatically is a challenging and interesting problem in computer vision.
> Tika already has support for image recognition via the [Object Recognition
> Parser, TIKA-1993| https://issues.apache.org/jira/browse/TIKA-1993], which
> uses an InceptionV3 model pre-trained on the ImageNet dataset using TensorFlow.
> Captioning images is a very useful feature since it helps text-based
> Information Retrieval (IR) systems "understand" the scene in images.
> h2. Technical details and references:
> * Google open sourced its 'Show and Tell' neural network and a pre-trained
> model for auto-generating captions a while back. [Source Code|
> https://github.com/tensorflow/models/tree/master/im2txt], [Research blog|
> https://research.googleblog.com/2016/09/show-and-tell-image-captioning-open.html]
> * Integrate it the same way as the ObjectRecognitionParser
> ** Create a RESTful API Service [similar to this|
> https://wiki.apache.org/tika/TikaAndVision#A2._Tensorflow_Using_REST_Server] (a minimal client sketch follows this list)
> ** Extend or enhance ObjectRecognitionParser or one of its implementations
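>
> For the REST route, a minimal, hypothetical client sketch is shown below. The endpoint URL, port, and response layout are assumptions (not an existing Tika or wiki API); only the Apache HttpClient calls are real:
> ```java
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Paths;
>
> import org.apache.http.client.methods.CloseableHttpResponse;
> import org.apache.http.client.methods.HttpPost;
> import org.apache.http.entity.ByteArrayEntity;
> import org.apache.http.impl.client.CloseableHttpClient;
> import org.apache.http.impl.client.HttpClients;
> import org.apache.http.util.EntityUtils;
>
> public class CaptionRestClientSketch {
>     public static void main(String[] args) throws IOException {
>         // Hypothetical endpoint exposed by a local captioning REST server.
>         String endpoint = "http://localhost:8764/inception/v3/caption/image";
>         // Hypothetical input image.
>         byte[] image = Files.readAllBytes(Paths.get("example.jpg"));
>
>         try (CloseableHttpClient client = HttpClients.createDefault()) {
>             HttpPost post = new HttpPost(endpoint);
>             post.setEntity(new ByteArrayEntity(image));
>             try (CloseableHttpResponse response = client.execute(post)) {
>                 // The server is assumed to reply with JSON captions, e.g. {"captions": [...]}.
>                 System.out.println(EntityUtils.toString(response.getEntity()));
>             }
>         }
>     }
> }
> ```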
> h2. {skills, learning, homework} for GSoC students
> * Knowledge of languages: Java AND Python, and the Maven build system
> * RESTful APIs
> * TensorFlow/Keras
> * deep learning
> ----
> Alternatively, a somewhat harder path for the experienced:
> [Import the Keras/TensorFlow model into
> deeplearning4j|https://deeplearning4j.org/model-import-keras] and run it
> natively inside the JVM (see the sketch after the lists below).
> h4. Benefits
> * No RESTful integration required, thus no external dependencies
> * Easy to distribute on Hadoop/Spark clusters
> h4. Hurdles:
> * Model import is a work-in-progress feature in deeplearning4j, so expect
> plenty of trouble along the way!
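>
> A minimal sketch of what the deeplearning4j path could look like (the model file path is a placeholder, and the exact `KerasModelImport` signatures should be verified against the deeplearning4j version and the model-import docs linked above):
> ```java
> import org.deeplearning4j.nn.graph.ComputationGraph;
> import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
>
> public class KerasImportSketch {
>     public static void main(String[] args) throws Exception {
>         // Placeholder path to a Keras model exported as HDF5 (architecture + weights).
>         String modelHdf5 = "/path/to/captioning-model.h5";
>
>         // Import the model into DL4J so inference can run natively inside the JVM.
>         ComputationGraph model = KerasModelImport.importKerasModelAndWeights(modelHdf5);
>         System.out.println(model.summary());
>
>         // From here, preprocessed image tensors would be fed to model.output(...)
>         // to produce caption tokens.
>     }
> }
> ```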



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
