Re: Writing a Producer from Scratch

2016-03-03 Thread Péricé Robin
Oops, I didn't see your response.

Sorry

2016-03-03 17:23 GMT+01:00 Péricé Robin <perice.ro...@gmail.com>:

> Hi,
>
> Maybe you should look at this : https://github.com/edenhill/librdkafka
>
> Regards,
>
> Robin
>
> 2016-03-03 11:47 GMT+01:00 Hopson, Stephen <stephen.hop...@gb.unisys.com>:
>
>> Hi,
>>
>> Not sure if this is the right forum for this question, but if it is not, I'm
>> sure someone will direct me to the proper one.
>>
>> Also, I am new to Kafka (but not new to computers).
>>
>>
>>
>> I want to write a kafka producer client for a Unisys OS 2200 mainframe. I
>> need to write it in C, and since I have no access to Windows / Unix / Linux
>> libraries, I have to develop the interface at the lowest level.
>>
>>
>>
>> So far, I have downloaded a kafka server with associated zookeeper (kafka
>> _2.10-0.8.2.2). Note I have downloaded the Windows version and have it
>> running on my laptop, successfully tested on the same laptop with the
>> provided producer and consumer clients.
>>
>>
>>
>> I have developed code to open a TCP session to the kafka server which
>> appears to work and I have attempted to send a metadata request which does
>> not appear to work. When I say it does not appear to work, I mean that I
>> send the message and then I sit on a retrieve, which eventually times out (
>> I do seem to get one character in the receive buffer of 0235 octal). The
>> message format I am using is the one described by the excellent document by
>> Jay Kreps / Gwen Shapira at
>> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>> However, it is not clear which Kafka versions these message formats are
>> applicable to.
>>
>>
>>
>> Can anybody offer me any advice or suggestions as to how to progress?
>>
>>
>>
>> PS: is the CRC mandatory in the Producer messages?
>>
>> Many thanks in advance.
>>
>>
>>
>> *Stephen Hopson* | Infrastructure Architect | Enterprise Solutions
>>
>> Unisys | +44 (0)1908 805010 | +44 (0)7557 303321 |
>> stephen.hop...@gb.unisys.com
>>
>>
>>
>>
>>
>>
>
>
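For reference, a minimal sketch of the v0 MetadataRequest framing described in the protocol guide linked above. It is written in Java only to make the byte layout concrete; the same big-endian fields would have to be produced from C on the OS 2200 side. The broker address, client id and topic name are placeholders, not values from this thread:

import java.io.DataOutputStream;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class MetadataRequestSketch {
    public static void main(String[] args) throws Exception {
        byte[] clientId = "os2200-client".getBytes(StandardCharsets.UTF_8); // placeholder
        byte[] topic = "test".getBytes(StandardCharsets.UTF_8);             // placeholder

        // Header + body, all fields big-endian (ByteBuffer's default byte order):
        // ApiKey(int16)=3, ApiVersion(int16)=0, CorrelationId(int32),
        // ClientId(int16 length + bytes), then the Metadata body, which is just
        // an array of topic names (int32 count, then one length-prefixed string each).
        ByteBuffer req = ByteBuffer.allocate(2 + 2 + 4 + 2 + clientId.length + 4 + 2 + topic.length);
        req.putShort((short) 3);                              // ApiKey: Metadata
        req.putShort((short) 0);                              // ApiVersion
        req.putInt(1);                                        // CorrelationId, echoed back in the response
        req.putShort((short) clientId.length).put(clientId);  // ClientId
        req.putInt(1);                                        // number of topics (0 = all topics)
        req.putShort((short) topic.length).put(topic);

        try (Socket sock = new Socket("localhost", 9092)) {   // placeholder broker address
            DataOutputStream out = new DataOutputStream(sock.getOutputStream());
            out.writeInt(req.position());                     // Size prefix: the bytes that follow it
            out.write(req.array(), 0, req.position());
            out.flush();
        }
    }
}

Every request follows this shape: a 4-byte size prefix, the common header, then the API-specific body. ApiVersion 0 is what the 0.8.2.x broker mentioned above expects.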


Re: Writing a Producer from Scratch

2016-03-03 Thread Péricé Robin
Hi,

Maybe you should look at this : https://github.com/edenhill/librdkafka

Regards,

Robin

2016-03-03 11:47 GMT+01:00 Hopson, Stephen :

> Hi,
>
> Not sure if this is the right forum for this question, but if it is not, I'm
> sure someone will direct me to the proper one.
>
> Also, I am new to Kafka (but not new to computers).
>
>
>
> I want to write a kafka producer client for a Unisys OS 2200 mainframe. I
> need to write it in C, and since I have no access to Windows / Unix / Linux
> libraries, I have to develop the interface at the lowest level.
>
>
>
> So far, I have downloaded a kafka server with associated zookeeper (kafka
> _2.10-0.8.2.2). Note I have downloaded the Windows version and have it
> running on my laptop, successfully tested on the same laptop with the
> provided producer and consumer clients.
>
>
>
> I have developed code to open a TCP session to the kafka server which
> appears to work and I have attempted to send a metadata request which does
> not appear to work. When I say it does not appear to work, I mean that I
> send the message and then I sit on a retrieve, which eventually times out (
> I do seem to get one character in the receive buffer of 0235 octal). The
> message format I am using is the one described by the excellent document by
> Jay Kreps / Gwen Shapira at
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
> However, it is not clear which Kafka versions these message formats are
> applicable to.
>
>
>
> Can anybody offer me any advice or suggestions as to how to progress?
>
>
>
> PS: is the CRC mandatory in the Producer messages?
>
> Many thanks in advance.
>
>
>
> *Stephen Hopson* | Infrastructure Architect | Enterprise Solutions
>
> Unisys | +44 (0)1908 805010 | +44 (0)7557 303321 |
> stephen.hop...@gb.unisys.com
>
>
>
>
>
>
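On the read side, every response from the broker is framed the same way: a 4-byte big-endian size, then exactly that many bytes, beginning with the CorrelationId that was sent in the request. A single receive call may return only part of the frame, so bytes have to be accumulated until the whole size has arrived. A minimal Java sketch of that read, assuming the socket used for the request is still open (DataInputStream.readFully does the accumulation that a raw C recv() loop would have to do by hand):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public final class ResponseFrames {
    // Reads one length-prefixed response frame from an already connected socket stream.
    public static byte[] readResponseFrame(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        int size = data.readInt();      // big-endian int32 size prefix
        byte[] frame = new byte[size];
        data.readFully(frame);          // keeps reading until all 'size' bytes have arrived
        // frame[0..3] holds the CorrelationId from the request; the response body follows it.
        return frame;
    }
}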


Consumer doesn't always poll the first messages

2016-03-02 Thread Péricé Robin
Hello everybody,

I'm testing the new 0.9.0.1 API and I'm trying to get a basic example working.

*Java code*:

*Consumer*: http://pastebin.com/YtvW0sz5
*Producer*: http://pastebin.com/anQay9YE
*Test*: http://pastebin.com/nniYLsHL


*Kafka configuration*:

*Zookeeper properties*: http://pastebin.com/KC5yZdNx
*Kafka properties*: http://pastebin.com/Psy4uAYL

But when I run my test and restart Kafka to see what happens, the Consumer
doesn't always consume the first messages. Sometimes it consumes messages
starting at offset 0, or 574, or 1292... The behavior of the test seems to
be very random.

Does anybody have an idea about this issue?
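The pastebin snippets are not reproduced here, so the following is only a sketch under assumptions, but the behaviour described (first record sometimes at offset 0, sometimes at 574 or 1292) matches what committed offsets produce: if the group's offsets survive the restart, poll() resumes from the last commit rather than from the beginning. Reading from the start on every run needs either a group.id with no committed offsets plus auto.offset.reset=earliest, or an explicit seekToBeginning() once partitions have been assigned. The topic name, key/value types and group.id scheme below are placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class FromBeginningSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");                      // placeholder
        props.put("group.id", "basic-example-" + System.currentTimeMillis()); // fresh group: no committed offsets
        props.put("auto.offset.reset", "earliest");                            // only used when no committed offset exists
        props.put("key.deserializer", "org.apache.kafka.common.serialization.LongDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<Long, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("topic1"));               // placeholder topic
        consumer.poll(0);                                                      // join the group and get assignments
        // Explicit alternative: rewind whatever was assigned, regardless of committed offsets.
        consumer.seekToBeginning(consumer.assignment().toArray(new TopicPartition[0]));
        ConsumerRecords<Long, String> records = consumer.poll(1000);
        System.out.println("first poll returned " + records.count() + " records");
        consumer.close();
    }
}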

Best Regards,

Robin


Re: KafkaConsumer poll() problems

2016-03-01 Thread Péricé Robin
Producer: https://gist.github.com/r0perice/9ce2bece76dd4113a44a
Consumer: https://gist.github.com/r0perice/8dcee160017ccd779d59
Console: https://gist.github.com/r0perice/5a8e2b2939651b1ac893

2016-03-01 14:50 GMT+01:00 craig w <codecr...@gmail.com>:

> Can you try posting your code into a Gist (gist.github.com) or Pastebin,
> so
> it's formatted and easier to read?
>
> On Tue, Mar 1, 2016 at 8:49 AM, Péricé Robin <perice.ro...@gmail.com>
> wrote:
>
> > Hello everybody,
> >
> > I'm having trouble using the KafkaConsumer 0.9.0.0 API. My Consumer class
> > doesn't consume messages properly.
> >
> > | Consumer.java |
> >
> > public final void run() {
> >     try {
> >         consumer.subscribe(Collections.singletonList(topicName));
> >         boolean end = false;
> >         while (!closed.get()) {
> >             while (!end) {
> >                 final ConsumerRecords<Long, ITupleApi> records = consumer.poll(1000);
> >                 if (records == null || records.count() == 0) {
> >                     System.err.println("In ConsumerThread consumer.poll received nothing on " + topicName);
> >                 } else {
> >                     /* some processing on records here */
> >                     end = true;
> >                 }
> >             }
> >         }
> >     } catch (final WakeupException e) {
> >         if (!closed.get()) {
> >             throw e;
> >         }
> >     } finally {
> >         consumer.close();
> >     }
> > }
> >
> > | Producer.java |
> >
> > public final void run() {
> >     while (true) {
> >         producer.send(new ProducerRecord<Long, ITupleApi>(topicName, timems, tuple), callback);
> >     }
> > }
> >
> > --
> >
> > I run this exactly the same way as in the GitHub example:
> >
> >
> https://github.com/apache/kafka/blob/trunk/examples/src/main/java/kafka/examples/KafkaConsumerProducerDemo.java
> >
> > But I only get "In ConsumerThread consumer.poll received nothing". poll()
> > never returns any messages... But when I use the command line tools I can
> > see my messages on the topic.
> >
> > When I run the basic example from GitHub, everything works fine... so it
> > seems like I'm missing something.
> >
> >
> >
> >
> >
> >
> > *CONSOLE*
> >
> > [2016-03-01 14:42:14,280] INFO [GroupCoordinator 0]: Preparing to restabilize group KafkaChannelBasicTestConsumer with old generation 1 (kafka.coordinator.GroupCoordinator)
> > [2016-03-01 14:42:14,281] INFO [GroupCoordinator 0]: Group KafkaChannelBasicTestConsumer generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
> > [2016-03-01 14:42:22,788] INFO [GroupCoordinator 0]: Preparing to restabilize group KafkaChannelBasicTestConsumer with old generation 0 (kafka.coordinator.GroupCoordinator)
> > [2016-03-01 14:42:22,788] INFO [GroupCoordinator 0]: Stabilized group KafkaChannelBasicTestConsumer generation 1 (kafka.coordinator.GroupCoordinator)
> > [2016-03-01 14:42:22,797] INFO [GroupCoordinator 0]: Assignment received from leader for group KafkaChannelBasicTestConsumer for generation 1 (kafka.coordinator.GroupCoordinator)
> > [2016-03-01 14:42:55,808] INFO [GroupCoordinator 0]: Preparing to restabilize group KafkaChannelBasicTestConsumer with old generation 1 (kafka.coordinator.GroupCoordinator)
> > [2016-03-01 14:42:55,809] INFO [GroupCoordinator 0]: Group KafkaChannelBasicTestConsumer generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
> >
> >
> > I really need help on this!
> >
> > Regards,
> >
> > Robin
> >
>
>
>
> --
>
> https://github.com/mindscratch
> https://www.google.com/+CraigWickesser
> https://twitter.com/mind_scratch
> https://twitter.com/craig_links
>


KafkaConsumer poll() problems

2016-03-01 Thread Péricé Robin
Hello everybody,

I'm having trouble using the KafkaConsumer 0.9.0.0 API. My Consumer class
doesn't consume messages properly.

| Consumer.java |

public final void run() {
    try {
        consumer.subscribe(Collections.singletonList(topicName));
        boolean end = false;
        while (!closed.get()) {
            while (!end) {
                final ConsumerRecords<Long, ITupleApi> records = consumer.poll(1000);
                if (records == null || records.count() == 0) {
                    System.err.println("In ConsumerThread consumer.poll received nothing on " + topicName);
                } else {
                    /* some processing on records here */
                    end = true;
                }
            }
        }
    } catch (final WakeupException e) {
        if (!closed.get()) {
            throw e;
        }
    } finally {
        consumer.close();
    }
}

| Producer.java |

public final void run() {
    while (true) {
        producer.send(new ProducerRecord<Long, ITupleApi>(topicName, timems, tuple), callback);
    }
}

--

I run this exactly the same way as in the GitHub example:
https://github.com/apache/kafka/blob/trunk/examples/src/main/java/kafka/examples/KafkaConsumerProducerDemo.java

But I only get "In ConsumerThread consumer.poll received nothing". poll()
never returns any messages... But when I use the command line tools I can see
my messages on the topic.

When I run the basic example from GitHub, everything works fine... so it
seems like I'm missing something.
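The consumer configuration is not shown above, so this is only a guess, but one common cause of exactly this symptom with the 0.9 consumer is the default auto.offset.reset=latest: a group with no committed offsets starts reading at the end of the log, so records produced before the consumer joined are never returned, even though the console consumer (which uses its own group) still sees them. A small sketch of a helper that could be called right after poll() in the run() method above to check where the consumer actually starts (PollDebug and logPositions are made-up names):

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

final class PollDebug {
    // Prints which partitions the group coordinator assigned and the next offset
    // the consumer will read from each of them.
    static void logPositions(Consumer<?, ?> consumer) {
        for (TopicPartition tp : consumer.assignment()) {
            System.err.println(tp + " -> next offset to read: " + consumer.position(tp));
        }
    }
}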






*CONSOLE*

[2016-03-01 14:42:14,280] INFO [GroupCoordinator 0]: Preparing to restabilize group KafkaChannelBasicTestConsumer with old generation 1 (kafka.coordinator.GroupCoordinator)
[2016-03-01 14:42:14,281] INFO [GroupCoordinator 0]: Group KafkaChannelBasicTestConsumer generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
[2016-03-01 14:42:22,788] INFO [GroupCoordinator 0]: Preparing to restabilize group KafkaChannelBasicTestConsumer with old generation 0 (kafka.coordinator.GroupCoordinator)
[2016-03-01 14:42:22,788] INFO [GroupCoordinator 0]: Stabilized group KafkaChannelBasicTestConsumer generation 1 (kafka.coordinator.GroupCoordinator)
[2016-03-01 14:42:22,797] INFO [GroupCoordinator 0]: Assignment received from leader for group KafkaChannelBasicTestConsumer for generation 1 (kafka.coordinator.GroupCoordinator)
[2016-03-01 14:42:55,808] INFO [GroupCoordinator 0]: Preparing to restabilize group KafkaChannelBasicTestConsumer with old generation 1 (kafka.coordinator.GroupCoordinator)
[2016-03-01 14:42:55,809] INFO [GroupCoordinator 0]: Group KafkaChannelBasicTestConsumer generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)


I really need help on this!

Regards,

Robin


Re: Consumer seek on 0.9.0 API

2016-02-22 Thread Péricé Robin
OK, I understand the explanation. Thank you for sharing your knowledge!

Regards,

Robin

2016-02-18 18:56 GMT+01:00 Jason Gustafson <ja...@confluent.io>:

> Woops. Looks like Alex got there first. Glad you were able to figure it
> out.
>
> -Jason
>
> On Thu, Feb 18, 2016 at 9:55 AM, Jason Gustafson <ja...@confluent.io>
> wrote:
>
> > Hi Robin,
> >
> > It would be helpful if you posted the full code you were trying to use.
> > How to seek largely depends on whether you are using the new consumer in
> > "simple" or "group" mode. In simple mode, when you know the partitions
> you
> > want to consume, you should just be able to do something like the
> following:
> >
> > consumer.assign(Arrays.asList(partition));
> > consumer.seek(partition, 500);
> >
> > Then you can call poll() in a loop until you hit offset 1000 and stop.
> > Does that make sense?
> >
> > -Jason
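Spelling that out, one possible shape of the "simple" mode loop for the 500-1000 range from the original question (the consumer is assumed to already be configured with matching deserializers; SeekRangeSketch is a made-up name):

import java.util.Arrays;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

final class SeekRangeSketch {
    // Reads offsets 500..999 of topic1 / partition 0 and then stops.
    static void readRange(KafkaConsumer<Integer, String> consumer) {
        TopicPartition partition = new TopicPartition("topic1", 0);
        consumer.assign(Arrays.asList(partition));  // "simple" mode: no group assignment needed
        consumer.seek(partition, 500);              // valid immediately, since we assigned ourselves

        long next = 500;
        while (next < 1000) {
            ConsumerRecords<Integer, String> records = consumer.poll(1000);
            for (ConsumerRecord<Integer, String> record : records) {
                if (record.offset() >= 1000) {
                    return;                         // past the requested range: stop
                }
                System.out.println(record.offset() + ": " + record.value());
                next = record.offset() + 1;
            }
        }
    }
}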
> >
> >
> > On Wed, Feb 17, 2016 at 11:39 AM, Alex Loddengaard <a...@confluent.io>
> > wrote:
> >
> >> Hi Robin,
> >>
> >> I believe seek() needs to be called after the consumer gets its
> partition
> >> assignments. Try calling poll() before you call seek(), then poll()
> again
> >> and process the records from the latter poll().
> >>
> >> There may be a better way to do this -- let's see if anyone else has a
> >> suggestion.
> >>
> >> Alex
> >>
> >> On Wed, Feb 17, 2016 at 9:13 AM, Péricé Robin <perice.ro...@gmail.com>
> >> wrote:
> >>
> >> > Hi,
> >> >
> >> > I'm trying to use the new Consumer API with this example :
> >> >
> >> >
> >>
> https://github.com/apache/kafka/tree/trunk/examples/src/main/java/kafka/examples
> >> >
> >> > With a Producer I sent 1000 messages to my Kafka broker. I need to
> know
> >> if
> >> > it's possible, for example, to read messages from offset 500 to 1000.
> >> >
> >> > What I did :
> >> >
> >> >
> >> >- consumer.seek(new TopicPartition("topic1", 0), 500);
> >> >
> >> >
> >> >- final ConsumerRecords<Integer, String> records =
> >> >consumer.poll(1000);
> >> >
> >> >
> >> > But this did nothing (when I don't use the seek() method, I consume all
> >> > the messages without any problems).
> >> >
> >> > Any help on this will be greatly appreciated !
> >> >
> >> > Regards,
> >> >
> >> > Robin
> >> >
> >>
> >>
> >>
> >> --
> >> *Alex Loddengaard | **Solutions Architect | Confluent*
> >> *Download Apache Kafka and Confluent Platform:
> www.confluent.io/download
> >> <http://www.confluent.io/download>*
> >>
> >
> >
>


Re: Consumer seek on 0.9.0 API

2016-02-18 Thread Péricé Robin
Hi,

OK, I did a poll() before my seek() and then poll() again, and now my consumer
starts at the offset I seek to.

Thank you a lot! But I don't really understand why I have to do that; if
anyone can explain it to me, I'd appreciate it.
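For what it's worth, a sketch of why the extra poll() matters (the consumer construction and the Integer/String types are assumptions, as in the linked example): with subscribe(), the group coordinator only hands out partitions during a poll(), so a seek() issued before the first poll() has no assigned partition to act on; with assign(), the application picks the partition itself and can seek() straight away.

import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

final class SeekAfterAssignmentSketch {
    // "Group" mode: subscribe, poll once to obtain the assignment, then seek.
    static ConsumerRecords<Integer, String> readFrom(KafkaConsumer<Integer, String> consumer, long offset) {
        consumer.subscribe(Collections.singletonList("topic1"));  // placeholder topic
        consumer.poll(0);                                         // triggers the rebalance and the assignment
        consumer.seek(new TopicPartition("topic1", 0), offset);   // now there is a partition to seek on
        return consumer.poll(1000);                               // records start at 'offset'
    }
}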

Regards,

Robin

2016-02-17 20:39 GMT+01:00 Alex Loddengaard <a...@confluent.io>:

> Hi Robin,
>
> I believe seek() needs to be called after the consumer gets its partition
> assignments. Try calling poll() before you call seek(), then poll() again
> and process the records from the latter poll().
>
> There may be a better way to do this -- let's see if anyone else has a
> suggestion.
>
> Alex
>
> On Wed, Feb 17, 2016 at 9:13 AM, Péricé Robin <perice.ro...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I'm trying to use the new Consumer API with this example :
> >
> >
> https://github.com/apache/kafka/tree/trunk/examples/src/main/java/kafka/examples
> >
> > With a Producer I sent 1000 messages to my Kafka broker. I need to know
> if
> > it's possible, for example, to read messages from offset 500 to 1000.
> >
> > What I did :
> >
> >
> >- consumer.seek(new TopicPartition("topic1", 0), 500);
> >
> >
> >- final ConsumerRecords<Integer, String> records =
> >consumer.poll(1000);
> >
> >
> > But this did nothing (when I don't use the seek() method, I consume all the
> > messages without any problems).
> >
> > Any help on this will be greatly appreciated !
> >
> > Regards,
> >
> > Robin
> >
>
>
>
> --
> *Alex Loddengaard | **Solutions Architect | Confluent*
> *Download Apache Kafka and Confluent Platform: www.confluent.io/download
> <http://www.confluent.io/download>*
>


Consumer seek on 0.9.0 API

2016-02-17 Thread Péricé Robin
Hi,

I'm trying to use the new Consumer API with this example :
https://github.com/apache/kafka/tree/trunk/examples/src/main/java/kafka/examples

With a Producer I sent 1000 messages to my Kafka broker. I need to know if
it's possible, for example, to read messages from offset 500 to 1000.

What I did :


   - consumer.seek(new TopicPartition("topic1", 0), 500);


   - final ConsumerRecords<Integer, String> records = consumer.poll(1000);


But this did nothing (when I don't use the seek() method, I consume all the
messages without any problems).

Any help on this will be greatly appreciated !

Regards,

Robin