Re: kafka producer failed

2015-07-26 Thread Yi Pan
Great to hear that! Cheers!

-Yi


Re: kafka producer failed

2015-07-26 Thread Job-Selina Wu
Hi, Yi:

You are right. After I added producer.close(), the bug is fixed now.

Thanks a lot!

Selina


Re: kafka producer failed

2015-07-26 Thread Yi Pan
Hi, Selina,

Did you forget to close your Kafka producer in your Http servlet? If you
create a new Kafka producer in each Http request and do not close the
producer, that might cause this problem.

-Yi
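
The lifecycle fix described above can be sketched as follows. This is an illustrative sketch, not code from the thread: the Kafka client is replaced by a hypothetical `StubProducer` that only counts open handles, so the difference between the per-request and shared lifecycles is visible without a broker. In a real servlet, the shared producer would be created in `init()` and closed in `destroy()`.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for a Kafka producer: each instance holds network
// sockets (file descriptors) until close() is called.
class StubProducer implements AutoCloseable {
    static final AtomicInteger OPEN = new AtomicInteger();
    StubProducer() { OPEN.incrementAndGet(); }
    void send(String message) { /* would go to the broker */ }
    @Override public void close() { OPEN.decrementAndGet(); }
}

public class ProducerLifecycle {
    // Anti-pattern from the thread: a new producer per HTTP request,
    // never closed, so descriptors accumulate until the OS limit is hit.
    static void handleRequestLeaky(String message) {
        StubProducer producer = new StubProducer();
        producer.send(message); // producer.close() is never called
    }

    // Fix: one producer for many requests, closed exactly once.
    static void handleRequestsShared(String[] messages) {
        try (StubProducer producer = new StubProducer()) {
            for (String message : messages) {
                producer.send(message);
            }
        } // closed here, regardless of how many messages were sent
    }

    public static void main(String[] args) {
        int base = StubProducer.OPEN.get();
        handleRequestsShared(new String[] {"a", "b", "c"});
        System.out.println("shared pattern leaks: "
            + (StubProducer.OPEN.get() - base));       // prints 0
        for (int i = 0; i < 100; i++) handleRequestLeaky("event-" + i);
        System.out.println("per-request pattern leaks: "
            + (StubProducer.OPEN.get() - base));       // prints 100
    }
}
```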


Re: kafka producer failed

2015-07-26 Thread Job-Selina Wu
Hi, Yan:

the kafka.log shows *java.io.IOException: Too many open files in system*

Some details are listed below.

Sincerely,
Selina

[2015-07-26 21:07:01,500] INFO Verifying properties
(kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,545] INFO Property broker.id is overridden to 0
(kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,545] INFO Property log.cleaner.enable is
overridden to false (kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,545] INFO Property log.dirs is overridden to
/tmp/kafka-logs (kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,545] INFO Property
log.retention.check.interval.ms is overridden to 30
(kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,546] INFO Property log.retention.hours is
overridden to 168 (kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,546] INFO Property log.segment.bytes is
overridden to 1073741824 (kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,546] INFO Property num.io.threads is overridden
to 8 (kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,547] INFO Property num.network.threads is
overridden to 3 (kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,547] INFO Property num.partitions is overridden
to 1 (kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,547] INFO Property
num.recovery.threads.per.data.dir is overridden to 1
(kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,547] INFO Property port is overridden to 9092
(kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,548] INFO Property socket.receive.buffer.bytes is
overridden to 102400 (kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,548] INFO Property socket.request.max.bytes is
overridden to 104857600 (kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,548] INFO Property socket.send.buffer.bytes is
overridden to 102400 (kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,548] INFO Property zookeeper.connect is
overridden to localhost:2181 (kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,548] INFO Property
zookeeper.connection.timeout.ms is overridden to 6000
(kafka.utils.VerifiableProperties)
[2015-07-26 21:07:01,607] INFO [Kafka Server 0], starting
(kafka.server.KafkaServer)
[2015-07-26 21:07:01,608] INFO [Kafka Server 0], Connecting to
zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2015-07-26 21:07:01,619] INFO Starting ZkClient event thread.
(org.I0Itec.zkclient.ZkEventThread)
[2015-07-26 21:07:01,631] INFO Client
environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09
GMT (org.apache.zookeeper.ZooKeeper)
[2015-07-26 21:07:01,631] INFO Client
environment:host.name=selinas-mbp.attlocal.net
(org.apache.zookeeper.ZooKeeper)
[2015-07-26 21:07:01,631] INFO Client
environment:java.version=1.8.0_45 (org.apache.zookeeper.ZooKeeper)
[2015-07-26 21:07:01,631] INFO Client environment:java.vendor=Oracle
Corporation (org.apache.zookeeper.ZooKeeper)
[2015-07-26 21:07:01,631] INFO Client
environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/jre
(org.apache.zookeeper.ZooKeeper)
[2015-07-26 21:07:01,631] INFO Client
environment:java.class.path=:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../core/build/dependant-libs-2.10.4*/*.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../examples/build/libs//kafka-examples*.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../clients/build/libs/kafka-clients*.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/jopt-simple-3.2.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/kafka-clients-0.8.2.1.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/kafka_2.10-0.8.2.1-javadoc.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/kafka_2.10-0.8.2.1-scaladoc.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/kafka_2.10-0.8.2.1-sources.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/kafka_2.10-0.8.2.1-test.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/kafka_2.10-0.8.2.1.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/log4j-1.2.16.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/lz4-1.2.0.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/metrics-core-2.2.0.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/scala-library-2.10.4.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/slf4j-api-1.7.6.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/slf4j-log4j12-1.6.1.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/snappy-java-1.1.1.6.jar:/Users/selina/IdeaProjects/samza-Demo/deploy/kafka/bin/../libs/zkclient-0.3.jar:/Users/selina/IdeaProjects/samza-De
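
The `Too many open files` failure above is file-descriptor exhaustion, which can also be watched from inside the JVM before the limit is hit. A minimal sketch, assuming a HotSpot JDK on a Unix-like OS, where the platform MXBean implements `com.sun.management.UnixOperatingSystemMXBean`:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class FdCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // The Unix-specific subinterface exposes descriptor counts.
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unix =
                (com.sun.management.UnixOperatingSystemMXBean) os;
            System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                + " / max: " + unix.getMaxFileDescriptorCount());
        } else {
            System.out.println("fd counts not available on this platform");
        }
    }
}
```

Logging this counter per request makes a leaking-producer pattern show up as a steadily climbing number long before the broker starts failing.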

Re: kafka producer failed

2015-07-26 Thread Yan Fang
You may check the kafka.log to see what's inside.

Yan Fang


Re: kafka producer failed

2015-07-26 Thread Job-Selina Wu
The exception is below:

kafka.common.FailedToSendMessageException: Failed to send messages after 3
tries.
kafka.common.FailedToSendMessageException: Failed to send messages after 3
tries.
at
kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at http.server.HttpDemoHandler.doDemo(HttpDemoHandler.java:71)
at http.server.HttpDemoHandler.handle(HttpDemoHandler.java:32)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:498)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:265)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:243)
at
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539)
at java.lang.Thread.run(Thread.java:745)
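
"Failed to send messages after 3 tries" reflects the old (0.8.x) producer's retry setting: `message.send.max.retries` defaults to 3, with `retry.backoff.ms` (default 100) between attempts. Raising them only masks an underlying problem such as the descriptor leak, but the knobs look like this (a sketch; the values are illustrative):

```java
import java.util.Properties;

public class RetryProps {
    // Retry knobs of the old Scala producer used in the thread.
    static Properties withRetries(Properties base, int retries, int backoffMs) {
        Properties props = new Properties();
        props.putAll(base);
        props.put("message.send.max.retries", Integer.toString(retries)); // default 3
        props.put("retry.backoff.ms", Integer.toString(backoffMs));       // default 100
        return props;
    }

    public static void main(String[] args) {
        Properties base = new Properties();
        base.put("metadata.broker.list", "localhost:9092");
        Properties props = withRetries(base, 5, 200);
        System.out.println(props.getProperty("message.send.max.retries")); // prints 5
    }
}
```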


Re: kafka producer failed

2015-07-26 Thread Job-Selina Wu
Hi, Yan:

  My Http Server sends messages to Kafka.

The server.log at deploy/kafka/logs/server.log shows:

[2015-07-26 00:33:51,910] INFO Closing socket connection to
/127.0.0.1. (kafka.network.Processor)
[2015-07-26 00:33:51,984] INFO Closing socket connection to
/127.0.0.1. (kafka.network.Processor)
[2015-07-26 00:33:52,011] INFO Closing socket connection to
/127.0.0.1. (kafka.network.Processor)

.


Your help is highly appreciated.

Sincerely,

Selina


On Sun, Jul 26, 2015 at 12:01 AM, Yan Fang  wrote:

> You are giving the Kafka code and the Samza log, which does not make sense
> actually...
>
> Fang, Yan
> yanfang...@gmail.com
>
> On Sat, Jul 25, 2015 at 10:31 PM, Job-Selina Wu 
> wrote:
>
> > Hi, Yi, Navina and Benjamin:
> >
> > Thanks a lot for spending your time helping me with this issue.
> >
> > The configuration is below. Do you think it could be a configuration
> > problem?
> > I tried both props.put("request.required.acks", "0") and
> > props.put("request.required.acks", "1"); neither worked.
> >
> > 
> > Properties props = new Properties();
> >
> > private final Producer<String, String> producer;
> >
> > public KafkaProducer() {
> > //BOOTSTRAP.SERVERS
> > props.put("metadata.broker.list", "localhost:9092");
> > props.put("bootstrap.servers", "localhost:9092 ");
> > props.put("serializer.class", "kafka.serializer.StringEncoder");
> > props.put("partitioner.class", "com.kafka.SimplePartitioner");
> > props.put("request.required.acks", "0");
> >
> > ProducerConfig config = new ProducerConfig(props);
> >
> > producer = new Producer<String, String>(config);
> > }
> >
> > --
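
The quoted configuration above mixes properties from two different clients: `metadata.broker.list`, `serializer.class`, and `partitioner.class` belong to the old 0.8.x producer, while `bootstrap.servers` is read only by the new Java producer (and its value here carries a stray trailing space). A sketch of a config for the old producer alone, which is what `kafka.javaapi.producer.Producer` actually reads; `com.kafka.SimplePartitioner` is the thread author's own class:

```java
import java.util.Properties;

public class ProducerProps {
    // Minimal old-producer (0.8.x) configuration; values assume the
    // same single localhost broker used in the thread.
    static Properties oldProducerProps() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // no trailing space
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("partitioner.class", "com.kafka.SimplePartitioner");
        // "1" waits for the leader's ack; "0" is fire-and-forget and
        // silently hides send failures.
        props.put("request.required.acks", "1");
        return props;
    }

    public static void main(String[] args) {
        Properties props = oldProducerProps();
        System.out.println(props.getProperty("metadata.broker.list")); // prints localhost:9092
    }
}
```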
> >
> > Exceptions in the log are listed below.
> >
> > Your help is highly appreciated.
> >
> > Sincerely,
> > Selina Wu
> >
> >
> > Exceptions in the log
> >
> >
> deploy/yarn/logs/userlogs/application_1437886672214_0001/container_1437886672214_0001_01_01/samza-application-master.log
> >
> > 2015-07-25 22:03:52 Shell [DEBUG] Failed to detect a valid hadoop home
> > directory
> > *java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set*.
> >at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:265)
> >at org.apache.hadoop.util.Shell.<clinit>(Shell.java:290)
> >at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
> >at
> > org.apache.hadoop.yarn.conf.YarnConfiguration.<clinit>(YarnConfiguration.java:517)
> >at
> > org.apache.samza.job.yarn.SamzaAppMaster$.main(SamzaAppMaster.scala:77)
> >at org.apache.samza.job.yarn.SamzaAppMaster.main(SamzaAppMaster.scala)
> > 2015-07-25 22:03:52 Shell [DEBUG] setsid is not available on this
> > machine. So not using it.
> > 2015-07-25 22:03:52 Shell [DEBUG] setsid exited with exit code 0
> > 2015-07-25 22:03:52 ClientHelper [INFO] trying to connect to RM
> > 127.0.0.1:8032
> > 2015-07-25 22:03:52 AbstractService [DEBUG] Service:
> > org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state
> > INITED
> > 2015-07-25 22:03:52 RMProxy [INFO] Connecting to ResourceManager at
> > /127.0.0.1:8032
> > 2015-07-25 22:03:52 MutableMetricsFactory [DEBUG] field
> > org.apache.hadoop.metrics2.lib.MutableRate
> > org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess
> > with annotation
> > @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=,
> > always=false, type=DEFAULT, value=[Rate of successful kerberos logins
> > and latency (milliseconds)], valueName=Time)
> > 2015-07-25 22:03:52 MutableMetricsFactory [DEBUG] field
> > org.apache.hadoop.metrics2.lib.MutableRate
> > org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure
> > with annotation
> > @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=,
> > always=false, type=DEFAULT, value=[Rate of failed kerberos logins and
> > latency (milliseconds)], valueName=Time)
> > 2015-07-25 22:03:52 MutableMetricsFactory [DEBUG] field
> > org.apache.hadoop.metrics2.lib.MutableRate
> > org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups
> > with annotation
> > @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=,
> > always=false, type=DEFAULT, value=[GetGroups], valueName=Time)
> > 2015-07-25 22:03:52 MetricsSystemImpl [DEBUG] UgiMetrics, User and
> > group related metrics
> > 2015-07-25 22:03:52 KerberosName [DEBUG] Kerberos krb5 configuration
> > not found, setting default realm to empty
> > 2015-07-25 22:03:52 Groups [DEBUG]  Creating new Groups object
> > 2015-07-25 22:03:52 NativeCodeLoader [DEBUG] Trying to load the
> > custom-built native-hadoop library...
> > 2015-07-25 22:03:52 NativeCodeLoader [DEBUG] Failed to load
> > native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in
> > java.library.path
> > 2015-07-25 22:03:52 NativeCodeLoader [DEBUG]
> >
> >
> java.library.path=/home//Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
> > 2015-

Re: kafka producer failed

2015-07-26 Thread Yan Fang
You are giving the Kafka code and the Samza log, which does not really
make sense...

Fang, Yan
yanfang...@gmail.com

On Sat, Jul 25, 2015 at 10:31 PM, Job-Selina Wu 
wrote:

> Hi, Yi, Navina and Benjamin:
>
> Thanks a lot for spending your time helping me with this issue.
>
> The configuration is below. Do you think it could be a configuration
> problem?
> I tried props.put("request.required.acks", "0") and
> props.put("request.required.acks", "1"); neither worked.
>
> 
> Properties props = new Properties();
>
> private final Producer<String, String> producer;
>
> public KafkaProducer() {
> //BOOTSTRAP.SERVERS
> props.put("metadata.broker.list", "localhost:9092");
> props.put("bootstrap.servers", "localhost:9092 ");
> props.put("serializer.class", "kafka.serializer.StringEncoder");
> props.put("partitioner.class", "com.kafka.SimplePartitioner");
> props.put("request.required.acks", "0");
>
> ProducerConfig config = new ProducerConfig(props);
>
> producer = new Producer<String, String>(config);
> }
>
> --
>
> Exceptions in the log are listed below.
>
> Your help is highly appreciated.
>
> Sincerely,
> Selina Wu
>
>
> Exceptions in the log
>
> deploy/yarn/logs/userlogs/application_1437886672214_0001/container_1437886672214_0001_01_01/samza-application-master.log
>
> 2015-07-25 22:03:52 Shell [DEBUG] Failed to detect a valid hadoop home
> directory
> *java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set*.
>at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:265)
>at org.apache.hadoop.util.Shell.<clinit>(Shell.java:290)
>at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
>at
> org.apache.hadoop.yarn.conf.YarnConfiguration.<clinit>(YarnConfiguration.java:517)
>at
> org.apache.samza.job.yarn.SamzaAppMaster$.main(SamzaAppMaster.scala:77)
>at org.apache.samza.job.yarn.SamzaAppMaster.main(SamzaAppMaster.scala)
> 2015-07-25 22:03:52 Shell [DEBUG] setsid is not available on this
> machine. So not using it.
> 2015-07-25 22:03:52 Shell [DEBUG] setsid exited with exit code 0
> 2015-07-25 22:03:52 ClientHelper [INFO] trying to connect to RM
> 127.0.0.1:8032
> 2015-07-25 22:03:52 AbstractService [DEBUG] Service:
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state
> INITED
> 2015-07-25 22:03:52 RMProxy [INFO] Connecting to ResourceManager at
> /127.0.0.1:8032
> 2015-07-25 22:03:52 MutableMetricsFactory [DEBUG] field
> org.apache.hadoop.metrics2.lib.MutableRate
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess
> with annotation
> @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=,
> always=false, type=DEFAULT, value=[Rate of successful kerberos logins
> and latency (milliseconds)], valueName=Time)
> 2015-07-25 22:03:52 MutableMetricsFactory [DEBUG] field
> org.apache.hadoop.metrics2.lib.MutableRate
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure
> with annotation
> @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=,
> always=false, type=DEFAULT, value=[Rate of failed kerberos logins and
> latency (milliseconds)], valueName=Time)
> 2015-07-25 22:03:52 MutableMetricsFactory [DEBUG] field
> org.apache.hadoop.metrics2.lib.MutableRate
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups
> with annotation
> @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=,
> always=false, type=DEFAULT, value=[GetGroups], valueName=Time)
> 2015-07-25 22:03:52 MetricsSystemImpl [DEBUG] UgiMetrics, User and
> group related metrics
> 2015-07-25 22:03:52 KerberosName [DEBUG] Kerberos krb5 configuration
> not found, setting default realm to empty
> 2015-07-25 22:03:52 Groups [DEBUG]  Creating new Groups object
> 2015-07-25 22:03:52 NativeCodeLoader [DEBUG] Trying to load the
> custom-built native-hadoop library...
> 2015-07-25 22:03:52 NativeCodeLoader [DEBUG] Failed to load
> native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in
> java.library.path
> 2015-07-25 22:03:52 NativeCodeLoader [DEBUG]
>
> java.library.path=/home//Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
> 2015-07-25 22:03:52 NativeCodeLoader [WARN] Unable to load
> native-hadoop library for your platform... using builtin-java classes
> where applicable
>
>
> 2015-07-25 22:03:53 KafkaCheckpointManager [WARN] While trying to
> validate topic __samza_checkpoint_ver_1_for_demo-parser7_1:
> *kafka.common.LeaderNotAvailableException. Retrying.*
> 2015-07-25 22:03:53 KafkaCheckpointManager [DEBUG] Exception detail:
> kafka.common.LeaderNotAvailableException
>at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.

Re: kafka producer failed

2015-07-25 Thread Job-Selina Wu
Hi, Yi, Navina and Benjamin:

Thanks a lot for spending your time helping me with this issue.

The configuration is below. Do you think it could be a configuration
problem?
I tried props.put("request.required.acks", "0") and
props.put("request.required.acks", "1"); neither worked.


Properties props = new Properties();

private final Producer<String, String> producer;

public KafkaProducer() {
//BOOTSTRAP.SERVERS
props.put("metadata.broker.list", "localhost:9092");
props.put("bootstrap.servers", "localhost:9092 ");
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("partitioner.class", "com.kafka.SimplePartitioner");
props.put("request.required.acks", "0");

ProducerConfig config = new ProducerConfig(props);

producer = new Producer<String, String>(config);
}
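
[Editor's note] The fix reported later in this thread (closing the producer, rather than creating one per HTTP request and leaking it) can be sketched without any Kafka dependency. `FakeProducer` below is a hypothetical stand-in for the real producer class, which holds sockets and file descriptors from construction until `close()` is called:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ProducerLeakDemo {
    // Counts "open connections" held by live, unclosed producers.
    static final AtomicInteger OPEN = new AtomicInteger();

    // Hypothetical stand-in for a Kafka producer: acquires a connection on
    // construction and releases it only in close().
    static final class FakeProducer implements AutoCloseable {
        FakeProducer() { OPEN.incrementAndGet(); }
        void send(String msg) { /* a real producer would write to the broker */ }
        @Override public void close() { OPEN.decrementAndGet(); }
    }

    public static void main(String[] args) {
        // Leaky pattern: a new producer per request, never closed.
        for (int i = 0; i < 1000; i++) {
            new FakeProducer().send("msg-" + i);
        }
        System.out.println("leaked=" + OPEN.get()); // leaked=1000

        OPEN.set(0);
        // Fixed pattern: one long-lived producer, closed when done.
        try (FakeProducer producer = new FakeProducer()) {
            for (int i = 0; i < 1000; i++) {
                producer.send("msg-" + i);
            }
        }
        System.out.println("leaked=" + OPEN.get()); // leaked=0
    }
}
```

Each unclosed producer keeps its connections open, which is how a per-request producer eventually exhausts the OS file-descriptor limit ("Too many open files in system" in the broker log).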

--

Exceptions in the log are listed below.

Your help is highly appreciated.

Sincerely,
Selina Wu


Exceptions in the log
deploy/yarn/logs/userlogs/application_1437886672214_0001/container_1437886672214_0001_01_01/samza-application-master.log

2015-07-25 22:03:52 Shell [DEBUG] Failed to detect a valid hadoop home directory
*java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set*.
   at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:265)
   at org.apache.hadoop.util.Shell.<clinit>(Shell.java:290)
   at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
   at 
org.apache.hadoop.yarn.conf.YarnConfiguration.<clinit>(YarnConfiguration.java:517)
   at org.apache.samza.job.yarn.SamzaAppMaster$.main(SamzaAppMaster.scala:77)
   at org.apache.samza.job.yarn.SamzaAppMaster.main(SamzaAppMaster.scala)
2015-07-25 22:03:52 Shell [DEBUG] setsid is not available on this
machine. So not using it.
2015-07-25 22:03:52 Shell [DEBUG] setsid exited with exit code 0
2015-07-25 22:03:52 ClientHelper [INFO] trying to connect to RM 127.0.0.1:8032
2015-07-25 22:03:52 AbstractService [DEBUG] Service:
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state
INITED
2015-07-25 22:03:52 RMProxy [INFO] Connecting to ResourceManager at
/127.0.0.1:8032
2015-07-25 22:03:52 MutableMetricsFactory [DEBUG] field
org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess
with annotation
@org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=,
always=false, type=DEFAULT, value=[Rate of successful kerberos logins
and latency (milliseconds)], valueName=Time)
2015-07-25 22:03:52 MutableMetricsFactory [DEBUG] field
org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure
with annotation
@org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=,
always=false, type=DEFAULT, value=[Rate of failed kerberos logins and
latency (milliseconds)], valueName=Time)
2015-07-25 22:03:52 MutableMetricsFactory [DEBUG] field
org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups
with annotation
@org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, about=,
always=false, type=DEFAULT, value=[GetGroups], valueName=Time)
2015-07-25 22:03:52 MetricsSystemImpl [DEBUG] UgiMetrics, User and
group related metrics
2015-07-25 22:03:52 KerberosName [DEBUG] Kerberos krb5 configuration
not found, setting default realm to empty
2015-07-25 22:03:52 Groups [DEBUG]  Creating new Groups object
2015-07-25 22:03:52 NativeCodeLoader [DEBUG] Trying to load the
custom-built native-hadoop library...
2015-07-25 22:03:52 NativeCodeLoader [DEBUG] Failed to load
native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in
java.library.path
2015-07-25 22:03:52 NativeCodeLoader [DEBUG]
java.library.path=/home//Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
2015-07-25 22:03:52 NativeCodeLoader [WARN] Unable to load
native-hadoop library for your platform... using builtin-java classes
where applicable


2015-07-25 22:03:53 KafkaCheckpointManager [WARN] While trying to
validate topic __samza_checkpoint_ver_1_for_demo-parser7_1:
*kafka.common.LeaderNotAvailableException. Retrying.*
2015-07-25 22:03:53 KafkaCheckpointManager [DEBUG] Exception detail:
kafka.common.LeaderNotAvailableException
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
   at java.lang.Class.newInstance(Class.java:442)
   at kafka.common.ErrorMapping$.maybeThrowException(ErrorMapping.scala:84)
   at org.apache.samza.util.KafkaUtil$.maybeThrowException(KafkaUtil.scala:63)
   at 
org.apache.samza.checkpoint.kafka.KafkaCheckpointManager$$anonfun$validateTopic$2.apply

Re: kafka producer failed

2015-07-24 Thread Benjamin Black
What are the log messages from the Kafka brokers? These look like client
messages indicating a broker problem.

On Fri, Jul 24, 2015 at 1:18 PM, Job-Selina Wu 
wrote:

> Hi, Yi:
>
>   I am wondering if the problem can be fixed by the parameter
> "max.message.size" in kafka.producer.ProducerConfig for the topic size?
>
>   My HTTP server sends messages to Kafka. The last message shown on
> the console is
> "message=timestamp=06-20-2015 id=678 ip=22.231.113.68 browser=Safari
> postalCode=95066 url=http://sample2.com language=ENG mobileBrand=Apple
> count=4269"
>
> However, Kafka got an exception at the 4244th message.
> The error is below, and Kafka does not accept any new messages after this.
>
> "[2015-07-24 12:46:11,078] WARN
> [console-consumer-61156_Selinas-MacBook-Pro.local-1437766693294-a68fc532-leader-finder-thread],
> Failed to find leader for Set([http-demo,0])
> (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
> kafka.common.KafkaException: fetching topic metadata for topics
> [Set(http-demo)] from broker [ArrayBuffer(id:0,host:10.1.10.173,port:9092)]
> failed
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
> at
> kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> Caused by: java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
> at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
> at
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
> at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
> ... 3 more
> [2015-07-24 12:46:11,287] WARN Fetching topic metadata with correlation id
> 21 for topics [Set(http-demo)] from broker
> [id:0,host:10.1.10.173,port:9092] failed (kafka.client.ClientUtils$)
> java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
> at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
> at
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
> at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
> at
> kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)"
>
>
> After the error:
> I can list the topic, which looks right, but cannot show its content from the command line.
>
> Selinas-MacBook-Pro:samza-Demo selina$ deploy/kafka/bin/kafka-topics.sh
> --list --zookeeper localhost:2181
> http-demo
> Selinas-MacBook-Pro:samza-Demo selina$  
> deploy/kafka/bin/kafka-console-consumer.sh
> --zookeeper localhost:2181 --from-beginning --topic http-demo
> [2015-07-24 12:47:38,730] WARN
> [console-consumer-10297_Selinas-MacBook-Pro.local-1437767258570-1a809d87],
> no brokers found when trying to rebalance.
> (kafka.consumer.ZookeeperConsumerConnector)
>
> Attached are my Kafka properties for the server and producer.
>
> Your help is highly appreciated
>
>
> Sincerely,
> Selina
>
>
>
> On Thu, Jul 23, 2015 at 11:16 PM, Yi Pan  wrote:
>
>> Hi, Selina,
>>
>> Your question is not clear.
>> {quote}
>> When the messages were sent to Kafka by KafkaProducer, it always failed
>> when there were more than 3000 - 4000 messages.
>> {quote}
>>
>> What's failing? The error stack shows errors on the consumer side and you
>> were referring to failures to produce to Kafka. Could you be more specific
>> regarding what your failure scenario is?
>>
>> -Yi
>>
>> On Thu, Jul 23, 2015 at 5:46 PM, Job-Selina Wu 
>> wrote:
>>
>> > Hi,
>> >
>> > When the messages were sent to Kafka by KafkaProducer, it always
>> > failed when there were more than 3000 - 4000 messages. The error is
>> > shown below.
>> > I am wondering if there is any topic size I need to set in the Samza
>> > configuration?
>> >
>> >
>> > [2015-07-23 17:30:03,792] WARN
>> >
>> >
>> [console-consumer-84579_Selinas-MacBook-Pro.local-1437697324624-eecb4f40-leader-finder-thread],
>> > Failed to find leader for Set([http-demo,0])
>> > (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
>> > kafka.common.KafkaException: fetching topic metadata for topics
>> > [Set(http-demo)] from broker [ArrayBuffer()] failed
>> > at
>> > kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
>> > at
>> > kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
>> > at
>> >
>> >
>> kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
>> > at
>> kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
>> > ^CConsumed 4327 messages
>> >

Re: kafka producer failed

2015-07-24 Thread Yi Pan
Hi, Selina,

I assume that you were referring to "max.message.bytes" in Kafka producer
config? There is no "max.message.size" config. If you were referring to the
"max.message.bytes", it has nothing to do with the number of messages in a
topic, it is the limit on a single message size in bytes in Kafka.

And have you attached VisualVM or JConsole to the Samza container to see
whether you can find the metrics reporting "send_calls" etc.? That should
tell you whether your StreamTask class actually executed the
collector.send(). There are also many other metrics, such as
"process-calls", "process-envelopes" etc. You can also turn on the debug
log in your container by setting the log4j.xml log levels.
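
[Editor's note] JConsole and VisualVM read such metrics over JMX; the JVM's own platform MBeans can also be queried in-process. A minimal sketch using only standard JDK classes (the `UnixOperatingSystemMXBean` cast works on HotSpot/OpenJDK Unix builds; nothing here comes from the Samza code in this thread):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class FdCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // On Unix HotSpot/OpenJDK builds the OS bean also implements
        // com.sun.management.UnixOperatingSystemMXBean, which exposes the
        // file-descriptor counts JConsole shows.
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unix =
                    (com.sun.management.UnixOperatingSystemMXBean) os;
            System.out.println("open fds=" + unix.getOpenFileDescriptorCount()
                    + " max fds=" + unix.getMaxFileDescriptorCount());
        } else {
            System.out.println("fd counts unavailable on this platform");
        }
    }
}
```

Watching the open-fd count climb toward the max while the producer runs is a quick way to confirm a descriptor leak like the one in this thread.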

On Fri, Jul 24, 2015 at 1:18 PM, Job-Selina Wu 
wrote:

> Hi, Yi:
>
>   I am wondering if the problem can be fixed by the parameter
> "max.message.size" in kafka.producer.ProducerConfig for the topic size?
>
>   My HTTP server sends messages to Kafka. The last message shown on
> the console is
> "message=timestamp=06-20-2015 id=678 ip=22.231.113.68 browser=Safari
> postalCode=95066 url=http://sample2.com language=ENG mobileBrand=Apple
> count=4269"
>
> However, Kafka got an exception at the 4244th message.
> The error is below, and Kafka does not accept any new messages after this.
>
> "[2015-07-24 12:46:11,078] WARN
> [console-consumer-61156_Selinas-MacBook-Pro.local-1437766693294-a68fc532-leader-finder-thread],
> Failed to find leader for Set([http-demo,0])
> (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
> kafka.common.KafkaException: fetching topic metadata for topics
> [Set(http-demo)] from broker [ArrayBuffer(id:0,host:10.1.10.173,port:9092)]
> failed
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
> at
> kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> Caused by: java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
> at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
> at
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
> at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
> ... 3 more
> [2015-07-24 12:46:11,287] WARN Fetching topic metadata with correlation id
> 21 for topics [Set(http-demo)] from broker
> [id:0,host:10.1.10.173,port:9092] failed (kafka.client.ClientUtils$)
> java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
> at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
> at
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
> at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
> at
> kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)"
>
>
> After the error:
> I can list the topic, which looks right, but cannot show its content from the command line.
>
> Selinas-MacBook-Pro:samza-Demo selina$ deploy/kafka/bin/kafka-topics.sh
> --list --zookeeper localhost:2181
> http-demo
> Selinas-MacBook-Pro:samza-Demo selina$  
> deploy/kafka/bin/kafka-console-consumer.sh
> --zookeeper localhost:2181 --from-beginning --topic http-demo
> [2015-07-24 12:47:38,730] WARN
> [console-consumer-10297_Selinas-MacBook-Pro.local-1437767258570-1a809d87],
> no brokers found when trying to rebalance.
> (kafka.consumer.ZookeeperConsumerConnector)
>
> Attached are my Kafka properties for the server and producer.
>
> Your help is highly appreciated
>
>
> Sincerely,
> Selina
>
>
>
> On Thu, Jul 23, 2015 at 11:16 PM, Yi Pan  wrote:
>
>> Hi, Selina,
>>
>> Your question is not clear.
>> {quote}
>> When the messages were sent to Kafka by KafkaProducer, it always failed
>> when there were more than 3000 - 4000 messages.
>> {quote}
>>
>> What's failing? The error stack shows errors on the consumer side and you
>> were referring to failures to produce to Kafka. Could you be more specific
>> regarding what your failure scenario is?
>>
>> -Yi
>>
>> On Thu, Jul 23, 2015 at 5:46 PM, Job-Selina Wu 
>> wrote:
>>
>> > Hi,
>> >
>> > When the messages were sent to Kafka by KafkaProducer, it always
>> > failed when there were more than 3000 - 4000 messages. The error is
>> > shown below.
>> > I am wondering if there is any topic size I need to set in the Samza
>> > configuration?
>> >
>> >
>> > [2015-07-23 17:30:03,792] WARN
>> >
>> >
>> [console-consumer-84579_Selinas-MacBook-Pro.local-1437697324624-eecb4f40-leader-finder-thread],
>> > Failed to find leader for Set([http-demo,0])
>> > (kafka.cons

Re: kafka producer failed

2015-07-24 Thread Job-Selina Wu
Hi, All:

 Do you think it could be caused by memory or virtual memory size?

Sincerely,
Selina
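
[Editor's note] A quick, generic way to check whether the JVM heap itself is under pressure (standard JDK only; a diagnostic sketch, not code from this thread):

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Heap currently in use vs. the -Xmx ceiling, in megabytes.
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.println("heap used=" + usedMb + "MB max=" + maxMb + "MB");
    }
}
```

If used heap stays well below the maximum while the failure reproduces, memory is unlikely to be the cause; a descriptor or socket leak (as diagnosed later in this thread) would not show up here.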



On Fri, Jul 24, 2015 at 1:54 PM, Job-Selina Wu 
wrote:

> Hi, Navina:
>
> Thanks for your reply: the files are listed below:
>
> Your help is highly appreciated.
>
> Sincerely,
> Selina
>
> The producer.properties for
> Kafka:
>
>
> # Producer Basics #
>
> # list of brokers used for bootstrapping knowledge about the rest of the
> cluster
> # format: host1:port1,host2:port2 ...
> metadata.broker.list=localhost:9092
>
> # name of the partitioner class for partitioning events; default partition
> spreads data randomly
> #partitioner.class=
>
> # specifies whether the messages are sent asynchronously (async) or
> synchronously (sync)
> producer.type=sync
>
> # specify the compression codec for all data generated: none, gzip,
> snappy, lz4.
> # the old config values work as well: 0, 1, 2, 3 for none, gzip, snappy,
> lz4, respectively
> compression.codec=none
>
> # message encoder
> serializer.class=kafka.serializer.DefaultEncoder
>
> # allow topic level compression
> #compressed.topics=
>
> # Async Producer #
> # maximum time, in milliseconds, for buffering data on the producer queue
> #queue.buffering.max.ms=
>
> # the maximum size of the blocking queue for buffering on the producer
> #queue.buffering.max.messages=
>
> # Timeout for event enqueue:
> # 0: events will be enqueued immediately or dropped if the queue is full
> # -ve: enqueue will block indefinitely if the queue is full
> # +ve: enqueue will block up to this many milliseconds if the queue is full
> #queue.enqueue.timeout.ms=
>
> # the number of messages batched at the producer
> #batch.num.messages=
>
>
> -the server.properties for
> Kafka--
> # Server Basics #
>
> # The id of the broker. This must be set to a unique integer for each
> broker.
> broker.id=0
>
> # Socket Server Settings
> #
>
> # The port the socket server listens on
> port=9092
>
> # Hostname the broker will bind to. If not set, the server will bind to
> all interfaces
> #host.name=localhost
>
> # Hostname the broker will advertise to producers and consumers. If not
> set, it uses the
> # value for "host.name" if configured.  Otherwise, it will use the value
> returned from
> # java.net.InetAddress.getCanonicalHostName().
> #advertised.host.name=
>
> # The port to publish to ZooKeeper for clients to use. If this is not set,
> # it will publish the same port that the broker binds to.
> #advertised.port=
>
> # The number of threads handling network requests
> num.network.threads=3
>
> # The number of threads doing disk I/O
> num.io.threads=8
>
> # The send buffer (SO_SNDBUF) used by the socket server
> socket.send.buffer.bytes=102400
>
> # The receive buffer (SO_RCVBUF) used by the socket server
> socket.receive.buffer.bytes=102400
>
> # The maximum size of a request that the socket server will accept
> (protection against OOM)
> socket.request.max.bytes=104857600
>
>
> # Log Basics #
>
> # A comma separated list of directories under which to store log files
> log.dirs=/tmp/kafka-logs
>
> # The default number of log partitions per topic. More partitions allow
> greater
> # parallelism for consumption, but this will also result in more files
> across
> # the brokers.
> num.partitions=1
>
> # The number of threads per data directory to be used for log recovery at
> startup and flushing at shutdown.
> # This value is recommended to be increased for installations with data
> dirs located in RAID array.
> num.recovery.threads.per.data.dir=1
>
> # Log Flush Policy
> #
>
> # Messages are immediately written to the filesystem but by default we
> only fsync() to sync
> # the OS cache lazily. The following configurations control the flush of
> data to disk.
> # There are a few important trade-offs here:
> #1. Durability: Unflushed data may be lost if you are not using
> replication.
> #2. Latency: Very large flush intervals may lead to latency spikes
> when the flush does occur as there will be a lot of data to flush.
> #3. Throughput: The flush is generally the most expensive operation,
> and a small flush interval may lead to excessive seeks.
> # The settings below allow one to configure the flush policy to flush data
> after a period of time or
> # every N messages (or both). This can be done globally and overridden on
> a per-topic basis.
>
> # The number of messages to accept before forcing a flush of data to disk
> #log.flush.interval.messages=1
>
> # The maximum amount of time a message can sit in a log before we force 

Re: kafka producer failed

2015-07-24 Thread Job-Selina Wu
Hi, Navina:

Thanks for your reply: the files are listed below:

Your help is highly appreciated.

Sincerely,
Selina

The producer.properties for
Kafka:


# Producer Basics #

# list of brokers used for bootstrapping knowledge about the rest of the
cluster
# format: host1:port1,host2:port2 ...
metadata.broker.list=localhost:9092

# name of the partitioner class for partitioning events; default partition
spreads data randomly
#partitioner.class=

# specifies whether the messages are sent asynchronously (async) or
synchronously (sync)
producer.type=sync

# specify the compression codec for all data generated: none, gzip, snappy,
lz4.
# the old config values work as well: 0, 1, 2, 3 for none, gzip, snappy,
lz4, respectively
compression.codec=none

# message encoder
serializer.class=kafka.serializer.DefaultEncoder

# allow topic level compression
#compressed.topics=

# Async Producer #
# maximum time, in milliseconds, for buffering data on the producer queue
#queue.buffering.max.ms=

# the maximum size of the blocking queue for buffering on the producer
#queue.buffering.max.messages=

# Timeout for event enqueue:
# 0: events will be enqueued immediately or dropped if the queue is full
# -ve: enqueue will block indefinitely if the queue is full
# +ve: enqueue will block up to this many milliseconds if the queue is full
#queue.enqueue.timeout.ms=

# the number of messages batched at the producer
#batch.num.messages=


-the server.properties for
Kafka--
# Server Basics #

# The id of the broker. This must be set to a unique integer for each
broker.
broker.id=0

# Socket Server Settings
#

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all
interfaces
#host.name=localhost

# Hostname the broker will advertise to producers and consumers. If not
set, it uses the
# value for "host.name" if configured.  Otherwise, it will use the value
returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept
(protection against OOM)
socket.request.max.bytes=104857600


# Log Basics #

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow
greater
# parallelism for consumption, but this will also result in more files
across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at
startup and flushing at shutdown.
# This value is recommended to be increased for installations with data
dirs located in RAID array.
num.recovery.threads.per.data.dir=1

# Log Flush Policy #

# Messages are immediately written to the filesystem but by default we only
fsync() to sync
# the OS cache lazily. The following configurations control the flush of
data to disk.
# There are a few important trade-offs here:
#1. Durability: Unflushed data may be lost if you are not using
replication.
#2. Latency: Very large flush intervals may lead to latency spikes when
the flush does occur as there will be a lot of data to flush.
#3. Throughput: The flush is generally the most expensive operation,
and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data
after a period of time or
# every N messages (or both). This can be done globally and overridden on a
per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=1

# The maximum amount of time a message can sit in a log before we force a
flush
#log.flush.interval.ms=1000

# Log Retention Policy
#

# The following configurations control the disposal of log segments. The
policy can
# be set to delete segments after a period of time, or after a given size
has accumulated.
# A segment will be deleted whenever *either* of these criteria are met.
Deletion always happens
# from the end o

Re: kafka producer failed

2015-07-24 Thread Navina Ramesh
Hi Selina,
Looks like the attachment got filtered. Can you paste the config in the
email or at pastebin?

Thanks!
Navina

On Fri, Jul 24, 2015 at 1:18 PM, Job-Selina Wu 
wrote:

> Hi, Yi:
>
>   I am wondering if the problem can be fixed by the parameter
> "max.message.size" in kafka.producer.ProducerConfig for the topic size?
>
>   My HTTP server sends messages to Kafka. The last message shown on the
> console is
> "message=timestamp=06-20-2015 id=678 ip=22.231.113.68 browser=Safari
> postalCode=95066 url=http://sample2.com language=ENG mobileBrand=Apple
> count=4269"
>
> However, Kafka got an exception starting from the 4244th message.
> The error is below, and Kafka does not accept any new messages after this.
>
> "[2015-07-24 12:46:11,078] WARN
> [console-consumer-61156_Selinas-MacBook-Pro.local-1437766693294-a68fc532-leader-finder-thread],
> Failed to find leader for Set([http-demo,0])
> (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
> kafka.common.KafkaException: fetching topic metadata for topics
> [Set(http-demo)] from broker [ArrayBuffer(id:0,host:10.1.10.173,port:9092)]
> failed
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
> at
> kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> Caused by: java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
> at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
> at
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
> at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
> ... 3 more
> [2015-07-24 12:46:11,287] WARN Fetching topic metadata with correlation id
> 21 for topics [Set(http-demo)] from broker
> [id:0,host:10.1.10.173,port:9092] failed (kafka.client.ClientUtils$)
> java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
> at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
> at
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
> at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
> at
> kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)"
>
>
> After the error:
> Listing the topic works, but I cannot show its contents from the command line:
>
> Selinas-MacBook-Pro:samza-Demo selina$ deploy/kafka/bin/kafka-topics.sh
> --list --zookeeper localhost:2181
> http-demo
> Selinas-MacBook-Pro:samza-Demo selina$  
> deploy/kafka/bin/kafka-console-consumer.sh
> --zookeeper localhost:2181 --from-beginning --topic http-demo
> [2015-07-24 12:47:38,730] WARN
> [console-consumer-10297_Selinas-MacBook-Pro.local-1437767258570-1a809d87],
> no brokers found when trying to rebalance.
> (kafka.consumer.ZookeeperConsumerConnector)
>
> Attached are my Kafka properties for the server and the producer.
>
> Your help is highly appreciated
>
>
> Sincerely,
> Selina
>
>
>
> On Thu, Jul 23, 2015 at 11:16 PM, Yi Pan  wrote:
>
>> Hi, Selina,
>>
>> Your question is not clear.
>> {quote}
>> When messages were sent to Kafka by KafkaProducer, it always failed
>> after more than 3000-4000 messages.
>> {quote}
>>
>> What's failing? The error stack shows errors on the consumer side, while
>> you were referring to failures producing to Kafka. Could you be more
>> specific about your failure scenario?
>>
>> -Yi
>>
>> On Thu, Jul 23, 2015 at 5:46 PM, Job-Selina Wu 
>> wrote:
>>
>> > Hi,
>> >
>> > When messages were sent to Kafka by KafkaProducer, it always failed
>> > after more than 3000-4000 messages. The error is shown below.
>> > I am wondering if there is any topic size I need to set in the Samza
>> > configuration?
>> >
>> >
>> > [2015-07-23 17:30:03,792] WARN
>> >
>> >
>> [console-consumer-84579_Selinas-MacBook-Pro.local-1437697324624-eecb4f40-leader-finder-thread],
>> > Failed to find leader for Set([http-demo,0])
>> > (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
>> > kafka.common.KafkaException: fetching topic metadata for topics
>> > [Set(http-demo)] from broker [ArrayBuffer()] failed
>> > at
>> > kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
>> > at
>> > kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
>> > at
>> >
>> >
>> kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
>> > at
>> kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
>> > ^CConsumed 4327 messa

Re: kafka producer failed

2015-07-24 Thread Job-Selina Wu
Hi, Yi:

  I am wondering if the problem can be fixed by the "max.message.size"
parameter in kafka.producer.ProducerConfig for the topic size?

  My HTTP server sends messages to Kafka. The last message shown on the
console is
"message=timestamp=06-20-2015 id=678 ip=22.231.113.68 browser=Safari
postalCode=95066 url=http://sample2.com language=ENG mobileBrand=Apple
count=4269"

However, Kafka got an exception starting from the 4244th message.
The error is below, and Kafka does not accept any new messages after this.

"[2015-07-24 12:46:11,078] WARN
[console-consumer-61156_Selinas-MacBook-Pro.local-1437766693294-a68fc532-leader-finder-thread],
Failed to find leader for Set([http-demo,0])
(kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
kafka.common.KafkaException: fetching topic metadata for topics
[Set(http-demo)] from broker [ArrayBuffer(id:0,host:10.1.10.173,port:9092)]
failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at
kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at
kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
... 3 more
[2015-07-24 12:46:11,287] WARN Fetching topic metadata with correlation id
21 for topics [Set(http-demo)] from broker
[id:0,host:10.1.10.173,port:9092] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at
kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at
kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)"


After the error:
Listing the topic works, but I cannot show its contents from the command line:

Selinas-MacBook-Pro:samza-Demo selina$ deploy/kafka/bin/kafka-topics.sh
--list --zookeeper localhost:2181
http-demo
Selinas-MacBook-Pro:samza-Demo selina$
deploy/kafka/bin/kafka-console-consumer.sh
--zookeeper localhost:2181 --from-beginning --topic http-demo
[2015-07-24 12:47:38,730] WARN
[console-consumer-10297_Selinas-MacBook-Pro.local-1437767258570-1a809d87],
no brokers found when trying to rebalance.
(kafka.consumer.ZookeeperConsumerConnector)

Attached are my Kafka properties for the server and the producer.

Your help is highly appreciated


Sincerely,
Selina



On Thu, Jul 23, 2015 at 11:16 PM, Yi Pan  wrote:

> Hi, Selina,
>
> Your question is not clear.
> {quote}
> When messages were sent to Kafka by KafkaProducer, it always failed
> after more than 3000-4000 messages.
> {quote}
>
> What's failing? The error stack shows errors on the consumer side, while
> you were referring to failures producing to Kafka. Could you be more
> specific about your failure scenario?
>
> -Yi
>
> On Thu, Jul 23, 2015 at 5:46 PM, Job-Selina Wu 
> wrote:
>
> > Hi,
> >
> > When messages were sent to Kafka by KafkaProducer, it always failed
> > after more than 3000-4000 messages. The error is shown below.
> > I am wondering if there is any topic size I need to set in the Samza
> > configuration?
> >
> >
> > [2015-07-23 17:30:03,792] WARN
> >
> >
> [console-consumer-84579_Selinas-MacBook-Pro.local-1437697324624-eecb4f40-leader-finder-thread],
> > Failed to find leader for Set([http-demo,0])
> > (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
> > kafka.common.KafkaException: fetching topic metadata for topics
> > [Set(http-demo)] from broker [ArrayBuffer()] failed
> > at
> > kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
> > at
> > kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
> > at
> >
> >
> kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
> > at
> kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> > ^CConsumed 4327 messages
> >
> > Your reply and comment will be highly appreciated.
> >
> >
> > Sincerely,
> > Selina
> >
>

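The eventual fix in this thread was adding producer.close(): the HTTP servlet
was creating a new Kafka producer per request and never closing it, leaking
sockets and file descriptors until the broker failed. Below is a minimal
sketch of the leaky versus the correct lifecycle, using a stand-in
AutoCloseable rather than the real KafkaProducer (which needs a running
broker); all class and method names here are illustrative.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ProducerLifecycle {
    // Counts "open" producers so the leak is visible without a broker.
    static final AtomicInteger OPEN = new AtomicInteger();

    // Stand-in for a Kafka producer, which holds sockets and file
    // descriptors until close() is called.
    static class FakeProducer implements AutoCloseable {
        FakeProducer() { OPEN.incrementAndGet(); }
        void send(String msg) { /* would hand the record to the broker */ }
        @Override public void close() { OPEN.decrementAndGet(); }
    }

    public static void main(String[] args) {
        // Leaky pattern: a new, never-closed producer per HTTP request.
        // Each one leaks its descriptors; eventually the process hits
        // "Too many open files" and stops accepting messages.
        for (int i = 0; i < 5; i++) {
            new FakeProducer().send("request-" + i);
        }
        System.out.println("leaked=" + OPEN.get()); // leaked=5

        OPEN.set(0);
        // Fix: one shared producer, reused for all requests and closed
        // exactly once at shutdown (e.g., in the servlet's destroy()).
        try (FakeProducer producer = new FakeProducer()) {
            for (int i = 0; i < 5; i++) {
                producer.send("request-" + i);
            }
        }
        System.out.println("leaked=" + OPEN.get()); // leaked=0
    }
}
```

With the real producer the same shape applies: construct it once, share it,
and close it in a shutdown hook or the servlet's destroy() method.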

Re: kafka producer failed

2015-07-23 Thread Yi Pan
Hi, Selina,

Your question is not clear.
{quote}
When messages were sent to Kafka by KafkaProducer, it always failed
after more than 3000-4000 messages.
{quote}

What's failing? The error stack shows errors on the consumer side, while
you were referring to failures producing to Kafka. Could you be more
specific about your failure scenario?

-Yi

On Thu, Jul 23, 2015 at 5:46 PM, Job-Selina Wu 
wrote:

> Hi,
>
> When messages were sent to Kafka by KafkaProducer, it always failed
> after more than 3000-4000 messages. The error is shown below.
> I am wondering if there is any topic size I need to set in the Samza
> configuration?
>
>
> [2015-07-23 17:30:03,792] WARN
>
> [console-consumer-84579_Selinas-MacBook-Pro.local-1437697324624-eecb4f40-leader-finder-thread],
> Failed to find leader for Set([http-demo,0])
> (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
> kafka.common.KafkaException: fetching topic metadata for topics
> [Set(http-demo)] from broker [ArrayBuffer()] failed
> at
> kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
> at
> kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
> at
>
> kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> ^CConsumed 4327 messages
>
> Your reply and comment will be highly appreciated.
>
>
> Sincerely,
> Selina
>


Re: kafka producer failed

2015-07-23 Thread Job-Selina Wu
Hi,

After I got the error in the previous email, I tried to check the content of
the topic, and it showed the error below. However, when I stop Samza and
re-run it, it works fine. Does anyone know what the problem was?

[2015-07-23 17:50:08,391] WARN
[console-consumer-83311_Selinas-MacBook-Pro.local-1437699008253-76961253],
no brokers found when trying to rebalance.
(kafka.consumer.ZookeeperConsumerConnector)


Sincerely,
Selina



On Thu, Jul 23, 2015 at 5:46 PM, Job-Selina Wu 
wrote:

> Hi,
>
> When messages were sent to Kafka by KafkaProducer, it always failed
> after more than 3000-4000 messages. The error is shown below.
> I am wondering if there is any topic size I need to set in the Samza
> configuration?
>
>
> [2015-07-23 17:30:03,792] WARN
> [console-consumer-84579_Selinas-MacBook-Pro.local-1437697324624-eecb4f40-leader-finder-thread],
> Failed to find leader for Set([http-demo,0])
> (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
> kafka.common.KafkaException: fetching topic metadata for topics
> [Set(http-demo)] from broker [ArrayBuffer()] failed
> at
> kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
> at
> kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
> at
> kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> ^CConsumed 4327 messages
>
> Your reply and comment will be highly appreciated.
>
>
> Sincerely,
> Selina
>
>


kafka producer failed

2015-07-23 Thread Job-Selina Wu
Hi,

When messages were sent to Kafka by KafkaProducer, it always failed
after more than 3000-4000 messages. The error is shown below.
I am wondering if there is any topic size I need to set in the Samza
configuration?


[2015-07-23 17:30:03,792] WARN
[console-consumer-84579_Selinas-MacBook-Pro.local-1437697324624-eecb4f40-leader-finder-thread],
Failed to find leader for Set([http-demo,0])
(kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
kafka.common.KafkaException: fetching topic metadata for topics
[Set(http-demo)] from broker [ArrayBuffer()] failed
at
kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at
kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at
kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
^CConsumed 4327 messages

Your reply and comment will be highly appreciated.


Sincerely,
Selina