[ https://issues.apache.org/jira/browse/CONNECTORS-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636003#comment-14636003 ]

Tugba Dogan commented on CONNECTORS-1162:
-----------------------------------------

Hi Karl,

I fixed my code according to your feedback. I tried to use the 
"when().thenReturn()" pattern. However, it still gives an error. I'm not sure, 
but I think that because the "KafkaConfig.TOPIC" parameter is not specified in 
the test code, the record cannot be created in this line:
"ProducerRecord record = new ProducerRecord(params.getParameter(KafkaConfig.TOPIC), finalString);"
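
If I understand it correctly, the test would need to stub that parameter before 
the record is built, something along these lines (only a sketch from inside the 
test method; "test-topic" is a placeholder value, and it assumes the ConfigParams 
object can be mocked with Mockito):

import static org.mockito.Mockito.*;
import org.apache.manifoldcf.core.interfaces.ConfigParams;

// Stub the output connection parameters so KafkaConfig.TOPIC resolves in the test.
ConfigParams params = mock(ConfigParams.class);
when(params.getParameter(KafkaConfig.TOPIC)).thenReturn("test-topic");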

When I use "topic" string instead of "params.getParameter(KafkaConfig.TOPIC)", 
it gives error because of the line:
"producer.send(record).get();"

This error may be caused by the asynchronous behavior of the send() method. 
However, I couldn't fix it.
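
What I suspect is needed is for the mocked producer to return an already 
available Future, so that get() has something to call into. Roughly like the 
sketch below (the test class and method names, and the assumption that the 
connector can be handed a Producer instance rather than constructing its own 
KafkaProducer, are mine; the connector discards the returned RecordMetadata, 
so returning null from get() should be enough):

import static org.mockito.Mockito.*;

import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.junit.Test;

public class KafkaOutputConnectorTest {

  @Test
  @SuppressWarnings("unchecked")
  public void sendDocumentTest() throws Exception {
    // A Future stub so that producer.send(record).get() has a non-null Future;
    // the connector ignores the RecordMetadata, so get() can return null.
    Future<RecordMetadata> sendResult = mock(Future.class);
    when(sendResult.get()).thenReturn(null);

    Producer<String, String> producer = mock(Producer.class);
    when(producer.send(any(ProducerRecord.class))).thenReturn(sendResult);

    // ... inject the mocked producer into the connector under test and call
    // the method that executes producer.send(record).get() ...
  }
}

I haven't managed to wire this up successfully yet, so I may be missing something.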

You can look at the screenshots that show the error at this link:
https://app.box.com/s/ypie8nf10jytt9y2626ekr35pv0gvzri

Here is the commit link:
https://github.com/tugbadogan/manifoldcf/commit/d376545053b3acf462976e315d4103fb76dbb027



> Apache Kafka Output Connector
> -----------------------------
>
>                 Key: CONNECTORS-1162
>                 URL: https://issues.apache.org/jira/browse/CONNECTORS-1162
>             Project: ManifoldCF
>          Issue Type: Wish
>    Affects Versions: ManifoldCF 1.8.1, ManifoldCF 2.0.1
>            Reporter: Rafa Haro
>            Assignee: Karl Wright
>              Labels: gsoc, gsoc2015
>             Fix For: ManifoldCF 1.10, ManifoldCF 2.2
>
>         Attachments: 1.JPG, 2.JPG
>
>
> Kafka is a distributed, partitioned, replicated commit log service. It 
> provides the functionality of a messaging system, but with a unique design. A 
> single Kafka broker can handle hundreds of megabytes of reads and writes per 
> second from thousands of clients.
> Apache Kafka is being used for a number of use cases. One of them is to use 
> Kafka as a feeding system for streaming BigData processes, in both Apache 
> Spark and Hadoop environments. A Kafka output connector could be used for 
> streaming or dispatching crawled documents or metadata and putting them into a 
> BigData processing pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
