Re: kafka is not accepting number of partitions from configuration

2017-03-26 Thread Hans Jespersen
The num.partitions parameter is a server/broker config, but you are passing it
as a client/producer property, so it has no effect there and will simply be
ignored.

http://stackoverflow.com/questions/22152269/how-to-specify-number-of-partitions-on-kafka-2-8

I assume the CLI command you are using is the administrative kafka-topics.sh
tool, which talks directly to ZooKeeper and the Kafka brokers to create or
modify topics in the cluster. That is the right way to create a topic, with the
partition count you want, before you start your producer app.
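
(For completeness, and not something discussed in this thread: if you would
rather create the topic from code than from the CLI, newer Kafka clients
(0.11.0 and later) include an AdminClient that can do it. A minimal sketch,
assuming a broker at localhost:9092; the topic name, partition count and
replication factor below are illustrative values only:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed broker address; replace with your bootstrap servers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical topic: 6 partitions, replication factor 1.
            NewTopic topic = new NewTopic("my-topic", 6, (short) 1);
            // Blocks until the brokers have acted on the request.
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}

On 0.10.x itself, kafka-topics.sh, or num.partitions in the broker's
server.properties (which only sets the default partition count for
auto-created topics), remains the way to control this.)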

-hans

> On Mar 26, 2017, at 2:00 AM, Laxmi Narayan  wrote:
> 
> Hi,
> Kafka is not accepting the number of partitions from my config, while from
> the CLI it works.
> 
> What am I missing here ?
> 
> 
> props.put("bootstrap.servers",  kafkaConstants.Bootstrap_Servers);
> props.put("enable.auto.commit", kafkaConstants.Enable_Auto_Commit);
> props.put("auto.commit.interval.ms",
> kafkaConstants.Auto_Commit_Interval_Ms);
> props.put("session.timeout.ms", kafkaConstants.Session_Timeout_Ms);
> props.put("linger.ms",  "1");
> 
> props.put("key.deserializer",   kafkaConstants.Key_Deserializer);
> props.put("value.deserializer", kafkaConstants.Value_Deserializer);
> 
> props.put("key.serializer", kafkaConstants.Key_Serializer);
> props.put("value.serializer",   kafkaConstants.Value_Serializer);
> 
> props.put("partitioner.class",  kafkaConstants.Partitioner_Class);
> props.put("num.partitions", 999);
> props.put("group.id",   kafkaConstants.KafkaGroupId);
> 
> 
> 
> 
> 
> Keep learning keep moving .


Re: Streams RocksDBException with no message?

2017-03-26 Thread Sachin Mittal
Hi,
Could you please tell us what you changed for ulimit, and how?

We also seem to be facing the same issue.

Thanks
Sachin
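
(A side note, not from the thread: before changing anything, it can help to
confirm that the Streams process really is close to its open-file-descriptor
limit. A minimal Java sketch for that check, assuming a HotSpot/OpenJDK JVM on
Linux or macOS where the com.sun.management MXBean below is exposed:

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdUsage {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            // Open vs. maximum file descriptors for this JVM process.
            System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                    + " / max fds: " + unix.getMaxFileDescriptorCount());
        } else {
            System.out.println("File-descriptor counts not available here.");
        }
    }
}

Raising the limit itself, presumably what Mathieu did below, is an OS-level
change, e.g. ulimit -n in the service's startup environment or an entry in
/etc/security/limits.conf.)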


On Tue, Mar 21, 2017 at 9:22 PM, Mathieu Fenniak <
mathieu.fenn...@replicon.com> wrote:

> Thanks Guozhang.
>
> For my part, turns out I was hitting ulimit on my open file descriptors.
> Phew, easy to fix... once I figured it out. :-)
>
> Mathieu
>
>
> On Fri, Mar 17, 2017 at 4:14 PM, Guozhang Wang  wrote:
>
> > Hi Mathieu,
> >
> > We have been aware of that for a long time and I have been looking into
> > this issue; it turns out to be a known issue in RocksDB:
> >
> > https://github.com/facebook/rocksdb/issues/1688
> >
> > And the corresponding fix (https://github.com/facebook/rocksdb/pull/1714)
> > has been merged to master but is marked for v5.1.4 only, while the latest
> > release is 5.1.2.
> >
> >
> > Guozhang
> >
> >
> > On Fri, Mar 17, 2017 at 10:27 AM, Mathieu Fenniak <
> > mathieu.fenn...@replicon.com> wrote:
> >
> > > Hey all,
> > >
> > > So... what does it mean to have a RocksDBException with a message that
> > just
> > > has a single character?  "e", "q", "]"... I've seen a few.  Has anyone
> > seen
> > > this before?
> > >
> > > Two example exceptions:
> > > https://gist.github.com/mfenniak/c56beb6d5058e2b21df0309aea224f12
> > >
> > > Kafka Streams 0.10.2.0.  Both of these errors occurred during state
> store
> > > initialization.  I'm running a single Kafka Streams thread per server,
> > this
> > > occurred on two servers about a half-hour apart.
> > >
> > > Mathieu
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: any production deployment of kafka 0.10.2.0

2017-03-26 Thread Jeff Widman
+ Users list


On Mar 26, 2017 8:17 AM, "Jianbin Wei"  wrote:

We are thinking about upgrading our system to 0.10.2.0. Has anybody upgraded
to 0.10.2.0, and did you run into any issues?

Regards,

-- Jianbin


Re: YASSQ (yet another state store question)

2017-03-26 Thread Jon Yeargers
Also - if I run this on two hosts - what does it imply if the response to
'streams.allMetadata()' from one host includes both instances but the other
host only knows about itself?
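
(A note on the "may have migrated to another instance" exception discussed
below: the pattern usually recommended for interactive queries is to retry
KafkaStreams#store() until the store becomes queryable, since it can be
temporarily unavailable while a rebalance or state-store migration is in
progress. A minimal sketch of that retry, not taken from the thread; the
method name, timeout and sleep interval are arbitrary:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreType;

public final class StoreUtil {
    // Retry KafkaStreams#store() until the named store is queryable, or give
    // up once the timeout has elapsed and rethrow the last exception.
    public static <T> T waitForStore(KafkaStreams streams, String name,
                                     QueryableStoreType<T> type, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            try {
                return streams.store(name, type);
            } catch (InvalidStateStoreException e) {
                if (System.currentTimeMillis() > deadline) {
                    throw e; // still not queryable after the timeout
                }
                Thread.sleep(100); // back off briefly and try again
            }
        }
    }
}

Also worth checking, although the thread does not confirm it either way: the
store created by a windowed reduce is a window store, so it may need to be
fetched with QueryableStoreTypes.windowStore() rather than keyValueStore().)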

On Sun, Mar 26, 2017 at 5:58 AM, Jon Yeargers 
wrote:

> What if the '.state()' function returns "RUNNING" and I still get this
> exception?
>
> On Fri, Mar 24, 2017 at 1:56 PM, Eno Thereska 
> wrote:
>
>> Hi Jon,
>>
>> This is expected, see this: https://groups.google.com/forum/?pli=1#!searchin/confluent-platform/migrated$20to$20another$20instance%7Csort:relevance/confluent-platform/LglWC_dZDKw/qsPuCRT_DQAJ
>>
>> Thanks
>> Eno
>> > On 24 Mar 2017, at 20:51, Jon Yeargers 
>> wrote:
>> >
>> > I've setup a KTable as follows:
>> >
>> > KTable, String> outTable = sourceStream.groupByKey().reduce(rowReducer,
>> >     TimeWindows.of(5 * 60 * 1000L).advanceBy(1 * 60 * 1000).until(10 * 60 * 1000L),
>> >     "AggStore");
>> >
>> > I can confirm its presence via 'streams.allMetadata()' (accessible
>> through
>> > a simple httpserver).
>> >
>> > When I call 'ReadOnlyKeyValueStore store =
>> > kafkaStreams.store("AggStore", QueryableStoreTypes.keyValueStore());'
>> >
>> > I get this exception:
>> >
>> > org.apache.kafka.streams.errors.InvalidStateStoreException: the state
>> > store, AggStore, may have migrated to another instance.
>> >     at org.apache.kafka.streams.state.internals.QueryableStoreProvider.getStore(QueryableStoreProvider.java:49)
>> >     at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:378)
>> >     at com.cedexis.videokafka.videohouraggregator.RequestHandler.handle(RequestHandler.java:97)
>> >     at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
>> >     at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:83)
>> >     at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:82)
>> >     at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:675)
>> >     at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
>> >     at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:647)
>> >     at sun.net.httpserver.ServerImpl$DefaultExecutor.execute(ServerImpl.java:158)
>> >     at sun.net.httpserver.ServerImpl$Dispatcher.handle(ServerImpl.java:431)
>> >     at sun.net.httpserver.ServerImpl$Dispatcher.run(ServerImpl.java:396)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > ... except.. there is only one instance.. running locally.
>>
>>
>


Re: YASSQ (yet another state store question)

2017-03-26 Thread Jon Yeargers
What if the '.state()' function returns "RUNNING" and I still get this exception?

On Fri, Mar 24, 2017 at 1:56 PM, Eno Thereska 
wrote:

> Hi Jon,
>
> This is expected, see this: https://groups.google.com/forum/?pli=1#!searchin/confluent-platform/migrated$20to$20another$20instance%7Csort:relevance/confluent-platform/LglWC_dZDKw/qsPuCRT_DQAJ
>
> Thanks
> Eno
> > On 24 Mar 2017, at 20:51, Jon Yeargers  wrote:
> >
> > I've setup a KTable as follows:
> >
> > KTable, String> outTable = sourceStream.groupByKey().reduce(rowReducer,
> >     TimeWindows.of(5 * 60 * 1000L).advanceBy(1 * 60 * 1000).until(10 * 60 * 1000L),
> >     "AggStore");
> >
> > I can confirm its presence via 'streams.allMetadata()' (accessible
> through
> > a simple httpserver).
> >
> > When I call 'ReadOnlyKeyValueStore store =
> > kafkaStreams.store("AggStore", QueryableStoreTypes.keyValueStore());'
> >
> > I get this exception:
> >
> > org.apache.kafka.streams.errors.InvalidStateStoreException: the state
> > store, AggStore, may have migrated to another instance.
> >     at org.apache.kafka.streams.state.internals.QueryableStoreProvider.getStore(QueryableStoreProvider.java:49)
> >     at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:378)
> >     at com.cedexis.videokafka.videohouraggregator.RequestHandler.handle(RequestHandler.java:97)
> >     at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
> >     at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:83)
> >     at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:82)
> >     at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:675)
> >     at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
> >     at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:647)
> >     at sun.net.httpserver.ServerImpl$DefaultExecutor.execute(ServerImpl.java:158)
> >     at sun.net.httpserver.ServerImpl$Dispatcher.handle(ServerImpl.java:431)
> >     at sun.net.httpserver.ServerImpl$Dispatcher.run(ServerImpl.java:396)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > ... except.. there is only one instance.. running locally.
>
>


kafka is not accepting number of partitions from configuration

2017-03-26 Thread Laxmi Narayan
Hi,
Kafka is not accepting the number of partitions from my config, while from the
CLI it works.

What am I missing here ?


props.put("bootstrap.servers",  kafkaConstants.Bootstrap_Servers);
props.put("enable.auto.commit", kafkaConstants.Enable_Auto_Commit);
props.put("auto.commit.interval.ms",kafkaConstants.Auto_Commit_Interval_Ms);
props.put("session.timeout.ms", kafkaConstants.Session_Timeout_Ms);
props.put("linger.ms",  "1");

props.put("key.deserializer",   kafkaConstants.Key_Deserializer);
props.put("value.deserializer", kafkaConstants.Value_Deserializer);

props.put("key.serializer", kafkaConstants.Key_Serializer);
props.put("value.serializer",   kafkaConstants.Value_Serializer);

props.put("partitioner.class",  kafkaConstants.Partitioner_Class);
props.put("num.partitions", 999);
props.put("group.id",   kafkaConstants.KafkaGroupId);





Keep learning keep moving .