rocksdb error(s)

2016-12-11 Thread Jon Yeargers
Seeing this appearing somewhat frequently -

org.apache.kafka.streams.errors.ProcessorStateException: Error opening store minute_agg_stream-201612100812 at location /tmp/kafka-streams/MinuteAgg/1_9/minute_agg_stream/minute_agg_stream-201612100812
   at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:196)
   at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:158)
   at org.apache.kafka.streams.state.internals.RocksDBWindowStore$Segment.openDB(RocksDBWindowStore.java:72)
   at org.apache.kafka.streams.state.internals.RocksDBWindowStore.getOrCreateSegment(RocksDBWindowStore.java:402)
   at org.apache.kafka.streams.state.internals.RocksDBWindowStore.putInternal(RocksDBWindowStore.java:333)
   at org.apache.kafka.streams.state.internals.RocksDBWindowStore.access$100(RocksDBWindowStore.java:51)
   at org.apache.kafka.streams.state.internals.RocksDBWindowStore$2.restore(RocksDBWindowStore.java:212)
   at org.apache.kafka.streams.processor.internals.ProcessorStateManager.restoreActiveState(ProcessorStateManager.java:235)
   at org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:198)
   at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.register(ProcessorContextImpl.java:123)
   at org.apache.kafka.streams.state.internals.RocksDBWindowStore.init(RocksDBWindowStore.java:206)
   at org.apache.kafka.streams.state.internals.MeteredWindowStore.init(MeteredWindowStore.java:66)
   at org.apache.kafka.streams.state.internals.CachingWindowStore.init(CachingWindowStore.java:64)
   at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:81)
   at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:120)
   at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:633)
   at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:660)
   at org.apache.kafka.streams.processor.internals.StreamThread.access$100(StreamThread.java:69)
   at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:124)
   at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:228)
   at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:313)
   at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
   at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
   at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
   at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
   at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
   at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
Caused by: org.rocksdb.RocksDBException: IO error: lock /tmp/kafka-streams/MinuteAgg/1_9/minute_agg_stream/minute_agg_stream-201612100812/LOCK: No locks available
   at org.rocksdb.RocksDB.open(Native Method)
   at org.rocksdb.RocksDB.open(RocksDB.java:184)
   at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:189)
   ... 26 common frames omitted


rebalancing - how to speed it up?

2016-12-11 Thread Jon Yeargers
Is there some way to 'help it along'? It's taking an hour or more from when
I start my app to actually seeing anything consumed.

Plenty of CPU (and IOWait) during this time so I know it's doing
_something_...


Re: rebalancing - how to speed it up?

2016-12-11 Thread Gerrit Jansen van Vuuren
I don't know about speeding up rebalancing, and an hour seems to suggest
something is wrong with zookeeper or your whole setup. If it
becomes an unsolvable issue for you, you could try
https://github.com/gerritjvv/kafka-fast, which uses a different model and
doesn't need balancing or rebalancing.

Disclosure: I'm the library author.



On 11 Dec 2016 11:56 a.m., "Jon Yeargers"  wrote:

Is there some way to 'help it along'? It's taking an hour or more from when
I start my app to actually seeing anything consumed.

Plenty of CPU (and IOWait) during this time so I know it's doing
_something_...


Struggling with Kafka Streams rebalances under load / in production

2016-12-11 Thread Robert Conrad
Hi All,

I have a relatively complex streaming application that seems to struggle
terribly with rebalance issues while under load. Does anyone have any tips
for investigating what is triggering these frequent rebalances or
particular settings I could experiment with to try to eliminate them?

Originally I thought it had to do with exceeding the heartbeat timeout with
heavy work threads, but the 0.10.1 release solved that by adding the
background heartbeat thread (KIP-62:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread).
Now rebalances just seem to strike randomly and provide no insight into
what triggered them (all nodes are happy, everything seems to be running
smoothly).

Any help or insight is greatly appreciated!

Rob


Re: Struggling with Kafka Streams rebalances under load / in production

2016-12-11 Thread Damian Guy
Hi Rob,

Do you have any further information you can provide? Logs etc?
Have you configured max.poll.interval.ms?
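
For reference, max.poll.interval.ms is a consumer-level setting, so it can
go into the same properties handed to KafkaStreams -- a minimal sketch,
where the application id, bootstrap servers, and the 5-minute value are
placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    // forwarded to the internal consumers; gives heavy processing loops
    // more time between poll() calls before the member is kicked out
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);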

Thanks,
Damian

On Sun, 11 Dec 2016 at 20:30 Robert Conrad  wrote:

> Hi All,
>
> I have a relatively complex streaming application that seems to struggle
> terribly with rebalance issues while under load. Does anyone have any tips
> for investigating what is triggering these frequent rebalances or
> particular settings I could experiment with to try to eliminate them?
>
> Originally I thought it had to do with exceeding the heartbeat timeout with
> heavy work threads, but the 0.10.1 release solved that by adding the
> background heartbeat thread (KIP-62:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread).
> Now rebalances just seem to strike randomly and provide no insight into
> what triggered them (all nodes are happy, everything seems to be running
> smoothly).
>
> Any help or insight is greatly appreciated!
>
> Rob
>


Re: odd error message

2016-12-11 Thread Matthias J. Sax
Hi,

this might be a recently discovered bug. Does it happen when you
stop/restart your application?


-Matthias

On 12/10/16 1:42 PM, Jon Yeargers wrote:
> This came up a few times today:
> 
> 2016-12-10 18:45:52,637 [StreamThread-1] ERROR o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Failed to create an active task %s:
> 
> org.apache.kafka.streams.errors.ProcessorStateException: task [0_0] Error while creating the state manager
>    at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:72)
>    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:90)
>    at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:633)
>    at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:660)
>    at org.apache.kafka.streams.processor.internals.StreamThread.access$100(StreamThread.java:69)
>    at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:124)
>    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:228)
>    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:313)
>    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
>    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
>    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
>    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
>    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
>    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
> 
> Caused by: java.io.IOException: task [0_0] Failed to lock the state directory: /mnt/extra/space/MinuteAgg/0_0
>    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:101)
>    at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:69)
>    ... 13 common frames omitted





Re: Another odd error

2016-12-11 Thread Matthias J. Sax
Not sure about this one.

Can you describe what you do exactly? Can you reproduce the issue? We
definitely want to investigate this.

-Matthias

On 12/10/16 4:17 PM, Jon Yeargers wrote:
> (Am reporting these as have moved to 0.10.1.0-cp2)
> 
> ERROR  o.a.k.c.c.i.ConsumerCoordinator - User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group MinuteAgg failed on partition assignment
> 
> java.lang.IllegalStateException: task [1_9] Log end offset should not change while restoring
>    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.restoreActiveState(ProcessorStateManager.java:245)
>    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:198)
>    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.register(ProcessorContextImpl.java:123)
>    at org.apache.kafka.streams.state.internals.RocksDBWindowStore.init(RocksDBWindowStore.java:206)
>    at org.apache.kafka.streams.state.internals.MeteredWindowStore.init(MeteredWindowStore.java:66)
>    at org.apache.kafka.streams.state.internals.CachingWindowStore.init(CachingWindowStore.java:64)
>    at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:81)
>    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:120)
>    at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:633)
>    at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:660)
>    at org.apache.kafka.streams.processor.internals.StreamThread.access$100(StreamThread.java:69)
>    at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:124)
>    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:228)
>    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:313)
>    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
>    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
>    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
>    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
>    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
>    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)





Re: rocksdb error(s)

2016-12-11 Thread Matthias J. Sax
I am not sure, but this might be related to your state directory.

You use the default directory located in /tmp -- could it be that
/tmp gets cleaned up and thus you lose files/directories?

Try to reconfigure your state directory via StreamsConfig:
http://docs.confluent.io/current/streams/developer-guide.html#optional-configuration-parameters
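
For example -- a minimal sketch, where the bootstrap servers and the target
path are placeholders (any directory outside /tmp avoids the cleanup
problem):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "MinuteAgg");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    // keep the RocksDB state stores out of /tmp so a tmp cleaner
    // cannot remove lock files and SSTs underneath a running app
    props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");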


-Matthias

On 12/11/16 1:28 AM, Jon Yeargers wrote:
> Seeing this appearing somewhat frequently -
> 
> org.apache.kafka.streams.errors.ProcessorStateException: Error opening
> store minute_agg_stream-201612100812 at location
> /tmp/kafka-streams/MinuteAgg/1_9/minute_agg_stream/minute_agg_stream-201612100812
> 
> Caused by: org.rocksdb.RocksDBException: IO error: lock
> /tmp/kafka-streams/MinuteAgg/1_9/minute_agg_stream/minute_agg_stream-201612100812/LOCK:
> No locks available
> 
> [full stack trace snipped -- see the start of this thread]





Re: How does 'TimeWindows.of().until()' work?

2016-12-11 Thread Matthias J. Sax
Please have a look here:

http://docs.confluent.io/current/streams/developer-guide.html#windowing-a-stream

If you have further question, just follow up :)


-Matthias


On 12/10/16 6:11 PM, Jon Yeargers wrote:
> I've added the 'until()' clause to some aggregation steps and it's working
> wonders for keeping the size of the state store in useful boundaries... But
> I'm not 100% clear on how it works.
> 
> What is implied by the '.until()' clause? What determines when to stop
> receiving further data - is it clock time (since the window was created)?
> It seems problematic for it to refer to EventTime as this may bounce all
> over the place. For non-overlapping windows a given record can only fall
> into a single aggregation period - so when would a value get discarded?
> 
> I'm using 'groupByKey().aggregate(..., TimeWindows.of(60 * 1000L).until(10 *
> 1000L))' - but what is this accomplishing?
> 





Re: rebalancing - how to speed it up?

2016-12-11 Thread Matthias J. Sax
Not sure.

How big is your state? On rebalance, state stores might move from one
machine to another. To recreate the store on the new machine the
underlying changelog topic must be read. This can take some time -- an
hour seems quite long, though...

To avoid long state recreation periods Kafka Streams supports standby
tasks. Try to enable them via StreamsConfig: "num.standby.replicas"

http://docs.confluent.io/current/streams/developer-guide.html#optional-configuration-parameters
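
For example -- a sketch; the value is illustrative:

    // keep one warm replica of each state store on another instance,
    // so a rebalance can promote it instead of replaying the changelog
    props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);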

Also check out this section of the docs:

http://docs.confluent.io/3.1.1/streams/architecture.html#fault-tolerance


-Matthias


On 12/11/16 3:14 AM, Gerrit Jansen van Vuuren wrote:
> I don't know about speeding up rebalancing, and an hour seems to suggest
> something is wrong with zookeeper or your whole setup. If it
> becomes an unsolvable issue for you, you could try
> https://github.com/gerritjvv/kafka-fast, which uses a different model and
> doesn't need balancing or rebalancing.
> 
> Disclosure: I'm the library author.
> 
> 
> 
> On 11 Dec 2016 11:56 a.m., "Jon Yeargers"  wrote:
> 
> Is there some way to 'help it along'? It's taking an hour or more from when
> I start my app to actually seeing anything consumed.
> 
> Plenty of CPU (and IOWait) during this time so I know it's doing
> _something_...
> 





Re: odd error message

2016-12-11 Thread Jon Yeargers
Yes - but it's not 100% reproducible. I seem to have several issues with
start / rebalance.

On Sun, Dec 11, 2016 at 2:16 PM, Matthias J. Sax 
wrote:

> Hi,
>
> this might be a recently discovered bug. Does it happen when you
> stop/restart your application?
>
>
> -Matthias
>
> On 12/10/16 1:42 PM, Jon Yeargers wrote:
> > This came up a few times today:
> >
> > 2016-12-10 18:45:52,637 [StreamThread-1] ERROR
> > o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1]
> > Failed to create an active task %s:
> >
> > org.apache.kafka.streams.errors.ProcessorStateException: task [0_0]
> > Error while creating the state manager
> >
> > Caused by: java.io.IOException: task [0_0] Failed to lock the state
> > directory: /mnt/extra/space/MinuteAgg/0_0
> >
> > [full stack trace snipped -- see the original message above]
>
>


Re: Another odd error

2016-12-11 Thread Jon Yeargers
I get this one quite a bit. It kills my app after a short time of running.
Driving me nuts.

On Sun, Dec 11, 2016 at 2:17 PM, Matthias J. Sax 
wrote:

> Not sure about this one.
>
> Can you describe what you do exactly? Can you reproduce the issue? We
> definitely want to investigate this.
>
> -Matthias
>
> On 12/10/16 4:17 PM, Jon Yeargers wrote:
> > (Am reporting these as have moved to 0.10.1.0-cp2)
> >
> > ERROR  o.a.k.c.c.i.ConsumerCoordinator - User provided listener
> > org.apache.kafka.streams.processor.internals.StreamThread$1 for group
> > MinuteAgg failed on partition assignment
> >
> > java.lang.IllegalStateException: task [1_9] Log end offset should not
> > change while restoring
> >
> > [full stack trace snipped -- see the original message above]
>
>


Re: rocksdb error(s)

2016-12-11 Thread Jon Yeargers
I moved the state folder to a separate drive and linked out to it.

I'll try your suggestion and point to it directly.

On Sun, Dec 11, 2016 at 2:20 PM, Matthias J. Sax 
wrote:

> I am not sure, but this might be related to your state directory.
>
> You use the default directory located in /tmp -- could it be that
> /tmp gets cleaned up and thus you lose files/directories?
>
> Try to reconfigure your state directory via StreamsConfig:
> http://docs.confluent.io/current/streams/developer-guide.html#optional-configuration-parameters
>
>
> -Matthias
>
> On 12/11/16 1:28 AM, Jon Yeargers wrote:
> > Seeing this appearing somewhat frequently -
> >
> > org.apache.kafka.streams.errors.ProcessorStateException: Error opening
> > store minute_agg_stream-201612100812 at location
> > /tmp/kafka-streams/MinuteAgg/1_9/minute_agg_stream/minute_agg_stream-201612100812
> >
> > Caused by: org.rocksdb.RocksDBException: IO error: lock
> > /tmp/kafka-streams/MinuteAgg/1_9/minute_agg_stream/minute_agg_stream-201612100812/LOCK:
> > No locks available
> >
> > [full stack trace snipped -- see the start of this thread]
> >
>
>


Re: How does 'TimeWindows.of().until()' work?

2016-12-11 Thread Jon Yeargers
I've read this and still have more questions than answers. If my data skips
about (timewise), what determines when a given window will start / stop
accepting new data? What if I'm reading data from some time ago?

On Sun, Dec 11, 2016 at 2:22 PM, Matthias J. Sax 
wrote:

> Please have a look here:
>
> http://docs.confluent.io/current/streams/developer-guide.html#windowing-a-stream
>
> If you have further question, just follow up :)
>
>
> -Matthias
>
>
> On 12/10/16 6:11 PM, Jon Yeargers wrote:
> > I've added the 'until()' clause to some aggregation steps and it's working
> > wonders for keeping the size of the state store in useful boundaries... But
> > I'm not 100% clear on how it works.
> >
> > What is implied by the '.until()' clause? What determines when to stop
> > receiving further data - is it clock time (since the window was created)?
> > It seems problematic for it to refer to EventTime as this may bounce all
> > over the place. For non-overlapping windows a given record can only fall
> > into a single aggregation period - so when would a value get discarded?
> >
> > I'm using 'groupByKey().aggregate(..., TimeWindows.of(60 * 1000L).until(10 * 1000L))' - but what is this accomplishing?
> >
>
>


Re: rebalancing - how to speed it up?

2016-12-11 Thread Jon Yeargers
I changed 'num.standby.replicas' to '2'.

I started one instance and it immediately showed up in the
'kafka-consumer-groups .. --describe' listing.

So I started a second... and it quickly displaced the first... which never
came back.

Started a third... same effect. The second goes away, never to return... but
now it tries to rebalance for a while before I see the third by itself.

Fourth and fifth - now it's gone off to rebalance (and is seemingly stuck
there) and hasn't pulled any data for more than an hour.
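
(For reference, the listing referred to above comes from something like the
following -- a sketch assuming a local broker; with the 0.10.1-era tools the
--new-consumer flag is needed to describe a streams app's group:)

    bin/kafka-consumer-groups.sh --new-consumer \
        --bootstrap-server localhost:9092 \
        --describe --group MinuteAgg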



On Sun, Dec 11, 2016 at 2:27 PM, Matthias J. Sax 
wrote:

> Not sure.
>
> How big is your state? On rebalance, state stores might move from one
> machine to another. To recreate the store on the new machine the
> underlying changelog topic must be read. This can take some time -- an
> hour seems quite long, though...
>
> To avoid long state recreation periods Kafka Streams supports standby
> tasks. Try to enable them via StreamsConfig: "num.standby.replicas"
>
> http://docs.confluent.io/current/streams/developer-guide.html#optional-configuration-parameters
>
> Also check out this section of the docs:
>
> http://docs.confluent.io/3.1.1/streams/architecture.html#fault-tolerance
>
>
> -Matthias
>
>
> On 12/11/16 3:14 AM, Gerrit Jansen van Vuuren wrote:
> > I don't know about speeding up rebalancing, and an hour seems to suggest
> > something is wrong with zookeeper or your whole setup. If it
> > becomes an unsolvable issue for you, you could try
> > https://github.com/gerritjvv/kafka-fast, which uses a different model and
> > doesn't need balancing or rebalancing.
> >
> > Disclosure: I'm the library author.
> >
> >
> >
> > On 11 Dec 2016 11:56 a.m., "Jon Yeargers" 
> wrote:
> >
> > Is there some way to 'help it along'? It's taking an hour or more from
> when
> > I start my app to actually seeing anything consumed.
> >
> > Plenty of CPU (and IOWait) during this time so I know it's doing
> > _something_...
> >
>
>


Re: rebalancing - how to speed it up?

2016-12-11 Thread Jon Yeargers
After an hour: it briefly popped up with 1 instance 'applied' to all 10
partitions... then it went back to rebalance for 10-15 minutes.. followed
by a different instance on all partitions.. and then more rebalancing..

At no point (yet) have I seen the work get truly 'balanced' between all 5
instances.

On Sun, Dec 11, 2016 at 6:04 PM, Jon Yeargers 
wrote:

> I changed 'num.standby.replicas' to '2'.
>
> I started one instance and it immediately showed up in the
> 'kafka-consumer-groups .. --describe' listing.
>
> So I started a second... and it quickly displaced the first... which never
> came back.
>
> Started a third... same effect. The second goes away, never to return... but
> now it tries to rebalance for a while before I see the third by itself.
>
> Fourth and fifth - now it's gone off to rebalance (and is seemingly stuck
> there) and hasn't pulled any data for more than an hour.
>
>
>
> On Sun, Dec 11, 2016 at 2:27 PM, Matthias J. Sax 
> wrote:
>
>> Not sure.
>>
>> How big is your state? On rebalance, state stores might move from one
>> machine to another. To recreate the store on the new machine the
>> underlying changelog topic must be read. This can take some time -- an
>> hour seems quite long, though...
>>
>> To avoid long state recreation periods Kafka Streams supports standby
>> tasks. Try to enable them via StreamsConfig: "num.standby.replicas"
>>
>> http://docs.confluent.io/current/streams/developer-guide.html#optional-configuration-parameters
>>
>> Also check out this section of the docs:
>>
>> http://docs.confluent.io/3.1.1/streams/architecture.html#fault-tolerance
>>
>>
>> -Matthias
>>
>>
>> On 12/11/16 3:14 AM, Gerrit Jansen van Vuuren wrote:
>> > I don't know about speeding up rebalancing, and an hour seems to suggest
>> > something is wrong with zookeeper or your whole setup. If it
>> > becomes an unsolvable issue for you, you could try
>> > https://github.com/gerritjvv/kafka-fast, which uses a different model and
>> > doesn't need balancing or rebalancing.
>> >
>> > Disclosure: I'm the library author.
>> >
>> >
>> >
>> > On 11 Dec 2016 11:56 a.m., "Jon Yeargers" 
>> wrote:
>> >
>> > Is there some way to 'help it along'? It's taking an hour or more from
>> when
>> > I start my app to actually seeing anything consumed.
>> >
>> > Plenty of CPU (and IOWait) during this time so I know it's doing
>> > _something_...
>> >
>>
>>
>


Re: How does 'TimeWindows.of().until()' work?

2016-12-11 Thread Matthias J. Sax
Windows are created on demand, i.e., each time a new record arrives and
there is no window yet for it, a new window will get created.

Windows accept data until their retention time (which you can
configure via .until()) has passed. Thus, you will have many windows
open in parallel.

If you read older data, it will just be put into the corresponding
windows (as long as the window retention time has not passed). If a window
was discarded already, a new window with this single (late-arriving) record
will get created, the computation will be triggered, you get a result,
and afterwards the window is deleted again (as its retention time has
passed already).

The retention time is driven by "stream-time", an internally tracked time
that only progresses in the forward direction. It gets its value from the
timestamps provided by the TimestampExtractor -- thus, by default it will
be event-time.
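
In code that looks something like the following -- a sketch where the
window size, retention, the input 'stream', and the store name are
illustrative (the retention handed to until() should normally be at least
the window size):

    // 1-minute windows, each retained for 1 hour of stream-time
    TimeWindows windows = TimeWindows.of(60 * 1000L).until(60 * 60 * 1000L);

    KTable<Windowed<String>, Long> counts = stream
        .groupByKey()
        .aggregate(() -> 0L,                       // initializer
                   (key, value, agg) -> agg + 1L,  // aggregator
                   windows,
                   Serdes.Long(),                  // serde for the aggregate
                   "minute_agg_store");            // state store name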

-Matthias

On 12/11/16 3:47 PM, Jon Yeargers wrote:
> I've read this and still have more questions than answers. If my data skips
> about (timewise), what determines when a given window will start / stop
> accepting new data? What if I'm reading data from some time ago?
> 
> On Sun, Dec 11, 2016 at 2:22 PM, Matthias J. Sax 
> wrote:
> 
>> Please have a look here:
>>
>> http://docs.confluent.io/current/streams/developer-guide.html#windowing-a-stream
>>
>> If you have further question, just follow up :)
>>
>>
>> -Matthias
>>
>>
>> On 12/10/16 6:11 PM, Jon Yeargers wrote:
>>> I've added the 'until()' clause to some aggregation steps and it's working
>>> wonders for keeping the size of the state store in useful boundaries... But
>>> I'm not 100% clear on how it works.
>>>
>>> What is implied by the '.until()' clause? What determines when to stop
>>> receiving further data - is it clock time (since the window was created)?
>>> It seems problematic for it to refer to EventTime as this may bounce all
>>> over the place. For non-overlapping windows a given record can only fall
>>> into a single aggregation period - so when would a value get discarded?
>>>
>>> I'm using 'groupByKey().aggregate(..., TimeWindows.of(60 * 1000L).until(10 * 1000L))' - but what is this accomplishing?
>>>
>>
>>
> 





Re: Deleting a topic without delete.topic.enable=true?

2016-12-11 Thread Surendra Manchikanti
If "auto.create.topics.enable" is set to true in your configurations , any
producer/consumer or fetch request will create the topic again. Set it to
false and delete the topic.
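
For example -- a sketch; broker host, zookeeper chroot, and topic name are
placeholders:

    # server.properties on each broker (restart brokers after changing)
    auto.create.topics.enable=false
    delete.topic.enable=true

    # then mark the topic for deletion
    bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic my-topic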

-- Surendra Manchikanti

On Sat, Dec 10, 2016 at 10:59 AM, Todd Palino  wrote:

> Are you running something else besides the consumers that would maintain a
> memory of the topics and potentially recreate them by issuing a metadata
> request? For example, Burrow (the consumer monitoring app I wrote) does
> this because it maintains a list of all topics in memory, and will end up
> recreating a topic that has been deleted as it issues a metadata request to
> try and find out what happened after an offset request for the topic fails.
>
> -Todd
>
>
> On Fri, Dec 9, 2016 at 8:37 AM, Tim Visher  wrote:
>
> > On Fri, Dec 9, 2016 at 11:34 AM, Todd Palino  wrote:
> >
> > > Given that you said you removed the log directories, and provided that
> > when
> > > you did the rmr on Zookeeper it was to the “/brokers/topics/(topic
> name)”
> > > path, you did the right things for a manual deletion. It sounds like
> you
> > > may have a consumer (or other client) that is recreating the topic. Do
> > you
> > > have auto topic creation enabled?
> > >
> >
> > That was the last epiphany we had. We had shut down the producer but not
> > all the consumers and we do allow auto-topic creation.
> >
> > That said, we then proceeded to shut all of them down (the consumers) and
> > the topic came back. I'm glad that we were doing the right steps though.
> >
> > >
> > > -Todd
> > >
> > >
> > > On Fri, Dec 9, 2016 at 8:25 AM, Tim Visher 
> wrote:
> > >
> > > > I did all of that because setting delete.topic.enable=true wasn't
> > > > effective. We set that across every broker, restarted them, and then
> > > > deleted the topic, and it was still stuck in existence.
> > > >
> > > > On Fri, Dec 9, 2016 at 11:11 AM, Ali Akhtar 
> > > wrote:
> > > >
> > > > > You need to also delete / restart zookeeper, its probably storing
> the
> > > > > topics there. (Or yeah, just enable it and then delete the topic)
> > > > >
> > > > > On Fri, Dec 9, 2016 at 9:09 PM, Rodrigo Sandoval <
> > > > > rodrigo.madfe...@gmail.com
> > > > > > wrote:
> > > > >
> > > > > > Why did you do all those things instead of just setting
> > > > > > delete.topic.enable=true?
> > > > > >
> > > > > > On Dec 9, 2016 13:40, "Tim Visher"  wrote:
> > > > > >
> > > > > > > Hi Everyone,
> > > > > > >
> > > > > > > I'm really confused at the moment. We created a topic with
> > brokers
> > > > set
> > > > > to
> > > > > > > delete.topic.enable=false.
> > > > > > >
> > > > > > > We now need to delete that topic. To do that we shut down all
> the
> > > > > > brokers,
> > > > > > > deleted everything under log.dirs and logs.dir on all the kafka
> > > > > brokers,
> > > > > > > `rmr`ed the entire chroot that kafka was storing things under
> in
> > > > > > zookeeper,
> > > > > > > and then brought kafka back up.
> > > > > > >
> > > > > > > After doing all that, the topic comes back, every time.
> > > > > > >
> > > > > > > What can we do to delete that topic?
> > > > > > >
> > > > > > > --
> > > > > > >
> > > > > > > In Christ,
> > > > > > >
> > > > > > > Timmy V.
> > > > > > >
> > > > > > > http://blog.twonegatives.com/
> > > > > > > http://five.sentenc.es/ -- Spend less time on mail
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > *Todd Palino*
> > > Staff Site Reliability Engineer
> > > Data Infrastructure Streaming
> > >
> > >
> > >
> > > linkedin.com/in/toddpalino
> > >
> >
>
>
>
> --
> *Todd Palino*
> Staff Site Reliability Engineer
> Data Infrastructure Streaming
>
>
>
> linkedin.com/in/toddpalino
>