Hi Guozhang,

I believe we are using RocksDB. We are not using the Processor API, just the simple
map and countByKey functions, so it is using the default KeyValue store.
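
For what it's worth, here is roughly what the topology looks like. This is a sketch rather than our exact code: the topic names, application id, broker address, and the mapping step are illustrative placeholders, written against the 0.10.0-era DSL where countByKey backs the KTable with the default RocksDB store.

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class OutcomeCountSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "outcome-count");   // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");  // placeholder broker

        KStreamBuilder builder = new KStreamBuilder();

        // Re-key each record to its "SUCCESS"/"FAILURE" string so countByKey
        // aggregates per outcome, backed by the default (RocksDB) KeyValue store.
        KStream<String, String> events =
                builder.stream(Serdes.String(), Serdes.String(), "events-topic"); // placeholder topic
        KTable<String, Long> counts = events
                .map((key, value) -> new KeyValue<String, String>(value, value))
                .countByKey(Serdes.String(), "Counts");

        // The KTable's changelog of running counts is written to the output topic.
        counts.to(Serdes.String(), Serdes.Long(), "counts-output");        // placeholder topic

        new KafkaStreams(builder, props).start();
    }
}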

Thanks!

Srinidhi

Hello Srinidhi,

Are you using RocksDB for your aggregation operator as well, like in the
WordCountDemo?

Guozhang


On Tue, Aug 2, 2016 at 5:20 PM, Srinidhi Muppalla <srinid...@trulia.com>
wrote:

> Hey All,
>
> We are having issues successfully storing and accessing a KTable on our
> cluster, which happens to be on AWS. We are trying to store a KTable of
> counts of 'success' and 'failure' strings, similar to the WordCountDemo in
> the documentation. The Kafka Streams application that creates the KTable
> works locally, but doesn't appear to be storing the state on our cluster. Does
> anyone have experience working with KTables and AWS, or know what
> configs related to our Kafka brokers or Streams setup could be causing this
> failure on our server but not on my local machine? Any insight into what
> could be causing this issue would be helpful.
>
> Here is what the output topic we are writing the KTable to looks like
> locally:
>
> SUCCESS 1
> SUCCESS 2
> FAILURE 1
> SUCCESS 3
> FAILURE 2
> FAILURE 3
> FAILURE 4
>
> Here is what it looks like on our cluster:
>
> SUCCESS 1
> SUCCESS 1
> FAILURE 1
> SUCCESS 1
> FAILURE 1
> FAILURE 1
> FAILURE 1
>
> Thanks,
> Srinidhi
>
>


--
-- Guozhang
