RE: How do I remove myself from mailing list?

2016-08-30 Thread Heath Ivie
I have done the same several times, but I still get these emails.

-Original Message-
From: Spencer Owen [mailto:so...@netdocuments.com] 
Sent: Tuesday, August 30, 2016 8:33 AM
To: users@kafka.apache.org
Subject: How do I remove myself from mailing list?

I've gone through the unsubscribe process more times than I can count, but I'm 
still getting emails.

How can I unsubscribe? Who is an admin?

This message may contain information that is privileged or confidential. If you 
received this transmission in error, please notify the sender by reply e-mail 
and delete the message and any attachments. Any duplication, dissemination or 
distribution of this message or any attachments by unintended recipients is 
prohibited. Please note: All email sent to this address will be received by the 
NetDocuments corporate e-mail system and is subject to archival and review by 
someone other than the recipient.



RE: FetchRequest Question

2016-05-27 Thread Heath Ivie
PING

-Original Message-
From: Heath Ivie [mailto:hi...@autoanything.com] 
Sent: Wednesday, May 25, 2016 9:25 AM
To: users@kafka.apache.org
Subject: FetchRequest Question

Can someone please explain why, if I write 1 message to the queue, it takes N 
FetchRequests (N > 1) to get the data out?

Heath

Warning: This e-mail may contain information proprietary to AutoAnything Inc. 
and is intended only for the use of the intended recipient(s). If the reader of 
this message is not the intended recipient(s), you have received this message 
in error and any review, dissemination, distribution or copying of this message 
is strictly prohibited. If you have received this message in error, please 
notify the sender immediately and delete all copies.


RE: FetchRequest Question

2016-05-26 Thread Heath Ivie
Ping

-Original Message-
From: Heath Ivie [mailto:hi...@autoanything.com] 
Sent: Wednesday, May 25, 2016 9:25 AM
To: users@kafka.apache.org
Subject: FetchRequest Question

Can someone please explain why, if I write 1 message to the queue, it takes N 
FetchRequests (N > 1) to get the data out?

Heath



FetchRequest Question

2016-05-25 Thread Heath Ivie
Can someone please explain why, if I write 1 message to the queue, it takes N 
FetchRequests (N > 1) to get the data out?

Heath



FetchRequest API Always Returns 10 Rows

2016-05-03 Thread Heath Ivie
Hi,

I am in the process of writing a consumer using the binary protocol and I am 
having issues where the fetch request will only return 10 messages.

Here is the code that I am using, so you can see the values (there are no 
settings hidden underneath):
var frb = new FetchRequestBuilder()
.SetReplicaId(-1)
.SetMaxWaitTime(100)
.SetMinBytes(0)
.BuildTopic(topic)
.AddPartition(partitionId, earliestOffset, Int32.MaxValue)
.Continue()
.GetMessage();

Any ideas why this would be?

We have the default Confluent Platform version 2 installed.
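One pattern worth checking, sketched here in Python purely for illustration (the function names are not from any real client library): a single FetchResponse only carries whatever the broker returns in one message set, so the consumer must advance the offset past the last message received and fetch again until the response comes back empty.

```python
def drain_partition(fetch, start_offset):
    """Keep issuing fetches, advancing the offset past the last message
    returned, until a fetch comes back empty.

    `fetch(offset)` stands in for one FetchRequest/FetchResponse round
    trip and returns a list of (offset, message) pairs.
    """
    offset, out = start_offset, []
    while True:
        batch = fetch(offset)
        if not batch:
            break  # nothing more buffered at this offset
        out.extend(batch)
        offset = batch[-1][0] + 1  # resume after the last offset seen
    return out
```

If each response happens to carry 10 messages, draining 25 messages takes 3 non-empty fetches plus a final empty one, which would match the "always returns 10 rows" observation.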

Thanks

Heath Ivie




Empty Fetch Response

2016-04-28 Thread Heath Ivie
I am having an issue where the FetchResponse returns a large MessageSetSize 
value, but all of the bytes that follow are null.

Any ideas what could be causing that?

I am getting a response without an error code, but like I said the remaining 
bytes (600 or more) are set to zero.

Heath Ivie
Solutions Architect




RE: .Net Kafka Clients

2016-04-18 Thread Heath Ivie
Harsh,

We looked at some of the other client libraries for .NET and opted to use the 
Kafka-Rest proxy.

There are challenges with using it, since it is stateful.

You can only run and connect to a single instance at a time.

By that I mean, if you have 3 consumers in a group you still must connect to 
the same instance of the proxy.

We have made our systems idempotent to account for a failure and re-parsing of 
the messages.

Besides that, it works fine.

Heath

-Original Message-
From: Fastmail [mailto:harsh...@fastmail.fm] 
Sent: Saturday, April 09, 2016 7:55 PM
To: users@kafka.apache.org; users@kafka.apache.org
Subject: Re: .Net Kafka Clients

Hi Dave,

We have our users running this client: https://github.com/Microsoft/Kafkanet.

Thanks,
Harsha
_
From: Tauzell, Dave 
Sent: Friday, April 8, 2016 11:39 AM
Subject: .Net Kafka Clients
To:  


We are about to embark on using Kafka and will be starting out with a .Net 
based (C#) application.   I see a couple of active .Net Kafka libraries as well 
as the Confluent Kafka Rest Proxy.   We can write the readers in Java or some 
other language, so our .Net application only needs to write to Kafka.

Does anybody have any real-world experience with using Kafka from .Net?

-Dave

Dave Tauzell | Senior Software Engineer | Surescripts
O: 651.855.3042 | www.surescripts.com |   
dave.tauz...@surescripts.com


This e-mail and any files transmitted with it are confidential, may contain 
sensitive information, and are intended solely for the use of the individual 
or entity to whom they are addressed. If you have received this e-mail in 
error, please notify the sender by reply e-mail immediately and destroy all 
copies of the e-mail and any attachments.



  


RE: Log Retention: What gets deleted

2016-04-08 Thread Heath Ivie
Gwen,

Thanks for the detailed reply.

That makes it more clear for me.

Heath

-Original Message-
From: Gwen Shapira [mailto:g...@confluent.io] 
Sent: Tuesday, April 05, 2016 6:13 PM
To: users@kafka.apache.org
Subject: Re: Log Retention: What gets deleted

I think you got it almost right. The missing part is that we only delete whole 
partition segments, not individual messages.

As you are writing messages, every X bytes or Y milliseconds, a new file gets 
created for the partition to store new messages in. Those files are called 
segments.
The segment you are currently writing to is an active segment.

We will never delete an active segment, so in order to delete old messages we 
will look for an inactive segment where the newest message is older than our 
retention and delete the entire segment.

So there are several parameters controlling when data gets deleted (I'm 
looking at just the time-based, not the size-based):
1. log.retention.ms - how old messages should be before we consider them for 
deletion
2. log.roll.ms - how frequently we roll new segments. Messages will not get 
deleted before a new segment is rolled
3. log.retention.check.interval.ms - how frequently we check for segments 
that we can delete.

A message will be deleted if all 3 are true:
1. It is older than log.retention.ms
2. It is in an inactive segment, meaning enough time has passed since the 
message was written to roll a new segment
3. Kafka checked for segments that can be deleted, meaning that more than 
check.interval.ms has passed since the segment was rolled.
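The segment-level eligibility rule above can be sketched as a toy model (a hypothetical representation of segments as lists of record timestamps, not Kafka's actual internals):

```python
def expired_segments(segments, active_index, retention_ms, now_ms):
    """Return the indices of segments eligible for deletion.

    Each segment is a list of record timestamps (ms). A segment is only
    deletable if it is inactive and its *newest* record is older than
    retention_ms -- which is why individual old messages can linger until
    their whole segment ages out.
    """
    deletable = []
    for i, timestamps in enumerate(segments):
        if i == active_index or not timestamps:
            continue  # never delete the active segment
        if now_ms - max(timestamps) > retention_ms:
            deletable.append(i)
    return deletable
```

This also explains the consumer-lag behavior from the original question: lag grows while old messages sit in segments that are not yet deletable, then drops to 0 when a whole segment is removed at once.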

Hope this helps,

Gwen



On Fri, Apr 1, 2016 at 12:21 PM, Heath Ivie <hi...@autoanything.com> wrote:

> Hi,
>
> I have some questions about the log retention and specifically what 
> gets deleted.
>
> I have a test app where I am writing 10 logs to the topic every second.
>
> What I would expect is a lag in a group would be somewhere around 10 
> if I have retention.ms at 1000.
>
> What I am seeing is that the lag continues to grow, but then at some 
> point all messages are gone and the lag is at 0.
>
> I thought that the messages that are old would be deleted first.
>
> Am I misinterpreting how the log retention works?
>
> Heath Ivie
> Solutions Architect
>
>


Log Retention: What gets deleted

2016-04-01 Thread Heath Ivie
Hi,

I have some questions about the log retention and specifically what gets 
deleted.

I have a test app where I am writing 10 logs to the topic every second.

What I would expect is a lag in a group would be somewhere around 10 if I have 
retention.ms at 1000.

What I am seeing is that the lag continues to grow, but then at some point all 
messages are gone and the lag is at 0.

I thought that the messages that are old would be deleted first.

Am I misinterpreting how the log retention works?

Heath Ivie
Solutions Architect




RE: Rest Proxy Question

2016-03-31 Thread Heath Ivie
Hi Martin,

I am not using the browser.

I am making the call from code, much like curl.

The proxy is returning HTTP 200, but only with an empty JSON array.

I am not having a delay when calling the proxy; the delay is in the proxy 
returning results.

Even though there is data in the queue that is ready to be consumed, the proxy 
returns without results for some non-deterministic period of time.

-Original Message-
From: Martin Gainty [mailto:mgai...@hotmail.com] 
Sent: Thursday, March 31, 2016 4:59 AM
To: users@kafka.apache.org
Subject: RE: Rest Proxy Question

The issue is when the consumer (using the REST proxy) is trying to consume.

MG> Can we assume you have disabled all cache between server and client?
MG> In fact, it is probably best to use a third-party tool like curl to pull 
MG> the data, to make sure the browser isn't causing the delay:
MG> https://curl.haxx.se/download.html
MG> If you see the same delay, then this is a network issue. Fortunately, 
MG> with an HTTP 1.1 server you can implement NIO connections.
MG> When the server publishes, can we see what the Kafka publisher is 
MG> publishing? It is also important to understand the format of the 
MG> published results: HTML, text, JSON or XML? If for no other reason than 
MG> to trim down the results (return JSON instead of XML).



> From: hi...@autoanything.com
> To: users@kafka.apache.org
> Subject: RE: Rest Proxy Question
> Date: Thu, 31 Mar 2016 00:26:27 +
> 
> I really need some help on this.
> 
> I am able to publish new messages to the topics using the rest proxy.
> 
> The issue is that when I query the rest proxy for that topic, even though 
> there is data present, I get "{}" (empty results).
> 
> I will get these empty results for some non-deterministic period of time and 
> then out of the blue it will return the results.
> 
> I am at a total loss at this point.
> 
> If someone has any ideas, I sure would welcome them.
> 
> Regards,
> Heath
> 
> -Original Message-
> From: Heath Ivie [mailto:hi...@autoanything.com]
> Sent: Tuesday, February 16, 2016 10:44 AM
> To: users@kafka.apache.org
> Subject: RE: Rest Proxy Question
> 
> Hi Martin,
> 
> I can see the items in the queue, that is not the problem.
> 
> The issue is when the consumer (using the REST proxy) is trying to consume.
> 
> -Original Message-
> From: Martin Gainty [mailto:mgai...@hotmail.com]
> Sent: Tuesday, February 16, 2016 10:32 AM
> To: users@kafka.apache.org
> Subject: RE: Rest Proxy Question
> 
>  pathping on the contacted proxy during slack periods | tee slack.lst
>  pathping on the contacted proxy during busy periods | tee busy.lst
> Please post the results here.
> 
> What I am suggesting is that once a packet is transmitted outside your 
> firewall, it is beyond the reach of locally hosted software such as Kafka. 
> If that is the case, you may want to pull in network engineers to 
> configure ARP tables to choose the fastest route. I can help you out in 
> case you don't have those resources available.
> 
> Martin
> 
> > From: hi...@autoanything.com
> > To: users@kafka.apache.org
> > Subject: Rest Proxy Question
> > Date: Tue, 16 Feb 2016 17:28:34 +
> > 
> > I am having a hard time with some bug in my code related to the REST proxy.
> > 
> > The issue that I am seeing is when the GET 
> > /consumers/HIVIEPurchaseOrderCreatedEvent/instances/b0671fa1-d000-4f9b-b795-aff3003e500a/topics/PurchaseOrderCreatedEvent/.
> > 
> > Sometimes this will return immediately with results, and sometimes it takes 
> > a very long time, with many requests in between, before results are received.
> > 
> > I have a job that runs in the background checking for new messages every 5 
> > seconds, so I would expect within a minute or two the messages would be 
> > visible.
> > 
> > Have any of you seen this behavior before?
> > 
> > Heath Ivie
> > Solutions Architect
> > 
> > 

RE: Rest Proxy Question

2016-03-30 Thread Heath Ivie
I really need some help on this.

I am able to publish new messages to the topics using the rest proxy.

The issue is that when I query the rest proxy for that topic, even though there 
is data present, I get "{}" (empty results).

I will get these empty results for some non-deterministic period of time, and 
then out of the blue it will return the results.

I am at a total loss at this point.

If someone has any ideas, I sure would welcome them.

Regards,
Heath

-Original Message-----
From: Heath Ivie [mailto:hi...@autoanything.com] 
Sent: Tuesday, February 16, 2016 10:44 AM
To: users@kafka.apache.org
Subject: RE: Rest Proxy Question

Hi Martin,

I can see the items in the queue, that is not the problem.

The issue is when the consumer (using the REST proxy) is trying to consume.

-Original Message-
From: Martin Gainty [mailto:mgai...@hotmail.com] 
Sent: Tuesday, February 16, 2016 10:32 AM
To: users@kafka.apache.org
Subject: RE: Rest Proxy Question

pathping on the contacted proxy during slack periods | tee slack.lst
pathping on the contacted proxy during busy periods | tee busy.lst
Please post the results here.

What I am suggesting is that once a packet is transmitted outside your 
firewall, it is beyond the reach of locally hosted software such as Kafka. If 
that is the case, you may want to pull in network engineers to configure ARP 
tables to choose the fastest route. I can help you out in case you don't have 
those resources available.

Martin

> From: hi...@autoanything.com
> To: users@kafka.apache.org
> Subject: Rest Proxy Question
> Date: Tue, 16 Feb 2016 17:28:34 +
> 
> I am having a hard time with some bug in my code related to the REST proxy.
> 
> The issue that I am seeing is when the GET 
> /consumers/HIVIEPurchaseOrderCreatedEvent/instances/b0671fa1-d000-4f9b-b795-aff3003e500a/topics/PurchaseOrderCreatedEvent/.
> 
> Sometimes this will return immediately with results, and sometimes it takes a 
> very long time, with many requests in between, before results are received.
> 
> I have a job that runs in the background checking for new messages every 5 
> seconds, so I would expect within a minute or two the messages would be 
> visible.
> 
> Have any of you seen this behavior before?
> 
> Heath Ivie
> Solutions Architect
> 
> 


RE: Documentation

2016-03-29 Thread Heath Ivie
Dana,

That is just what I was looking for, awesome.

That makes more sense now.

As for the hunting and pecking, I have been searching the code to try to make 
some sense of how the client is intended to work.

Again, thanks a lot.

Heath

-Original Message-
From: Dana Powers [mailto:dana.pow...@gmail.com] 
Sent: Tuesday, March 29, 2016 9:06 AM
To: users@kafka.apache.org
Subject: Re: Documentation

I also found the documentation difficult to parse when it came time to 
implement group APIs. I ended up just reading the client source code and trying 
api calls until it made sense.

My general description from off the top of my head:

(1) have all consumers submit a shared protocol_name string* and 
bytes-serialized protocol_metadata, which includes the topic(s) for 
subscription.
(2) All consumers should be prepared to parse a JoinGroupResponse and identify 
whether they have been selected as the group leader.
(3) The non-leaders go straight to SyncGroupRequest, with group_assignment 
empty, and will wait to get their partition assignments via the SyncGroupResponse.
(4) Unlike the others, the consumer-leader will get the agreed protocol_name 
and all members' metadata in its JoinGroupResponse.
(5) The consumer-leader must apply the protocol_name assignment strategy to the 
member list + metadata (plus the cluster metadata), generating a set of member 
assignments (member_id -> topic partitions).
(6) The consumer-leader then submits that member_assignment data in its 
SyncGroupRequest.
(7) The broker will take the group member_assignment data and split it out into 
each member's SyncGroupResponse (including the consumer-leader).

At this point all members have their assignments and can start chugging along.
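The leader/follower split in those steps can be condensed into a small sketch. The dicts here are stand-ins for the real wire messages, and round_robin is an illustrative assignor, not the Java client's 'range' or 'roundrobin' implementation:

```python
def handle_join_response(member_id, join_resp, assignor, cluster_partitions):
    """Decide what to put in the SyncGroupRequest after a JoinGroupResponse.

    join_resp stands in for the broker's response: followers see an empty
    'members' list, while the elected leader sees every member's metadata.
    """
    if join_resp["leader_id"] == member_id:
        # Leader: run the agreed assignment strategy over all members
        assignment = assignor(join_resp["members"], cluster_partitions)
    else:
        # Follower: send an empty assignment and wait for the broker to
        # hand back this member's share in the SyncGroupResponse
        assignment = {}
    return {"group_assignment": assignment}

def round_robin(members, partitions):
    """Toy assignor: deal partitions out to members one at a time."""
    out = {m: [] for m in members}
    for i, p in enumerate(partitions):
        out[members[i % len(members)]].append(p)
    return out
```

The key point the sketch makes explicit: only the leader ever computes assignments; everyone else sends an empty group_assignment and learns its partitions from the SyncGroupResponse.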

I've tried to encapsulate the group request / response protocol info cleanly 
into kafka-python, if that helps you:

https://github.com/dpkp/kafka-python/blob/1.0.2/kafka/protocol/group.py

I'd be willing to help improve the docs if someone points me at the "source of 
truth". Or feel free to ping on irc #apache-kafka

-Dana

*For the Java client these are 'range' or 'roundrobin', but they are opaque and 
can be anything if you aren't interested in joining groups that have 
heterogeneous clients. Relatedly, the protocol_metadata and member_metadata are 
also opaque and technically can be anything you like. But sticking with the 
documented metadata specs should in theory allow heterogeneous clients to 
cooperate.

On Tue, Mar 29, 2016 at 8:21 AM, Heath Ivie <hi...@autoanything.com> wrote:

> Does anyone have better documentation around the group membership APIs?
>
> The information about the APIs is great at the beginning but gets 
> progressively sparser toward the end.
>
> I am not finding enough information about the values of the request 
> fields to join / sync the group.
>
> Can anyone help me or send me some additional documentation?
>
> Heath Ivie
> Solutions Architect
>
>


Documentation

2016-03-29 Thread Heath Ivie
Does anyone have better documentation around the group membership APIs?

The information about the APIs is great at the beginning but gets progressively 
sparser toward the end.

I am not finding enough information about the values of the request fields to 
join / sync the group.

Can anyone help me or send me some additional documentation?

Heath Ivie
Solutions Architect




RE: Rest Proxy Question

2016-02-16 Thread Heath Ivie
Hi Martin,

I can see the items in the queue, that is not the problem.

The issue is when the consumer (using the REST proxy) is trying to consume.

-Original Message-
From: Martin Gainty [mailto:mgai...@hotmail.com] 
Sent: Tuesday, February 16, 2016 10:32 AM
To: users@kafka.apache.org
Subject: RE: Rest Proxy Question

pathping on the contacted proxy during slack periods | tee slack.lst
pathping on the contacted proxy during busy periods | tee busy.lst
Please post the results here.

What I am suggesting is that once a packet is transmitted outside your 
firewall, it is beyond the reach of locally hosted software such as Kafka. If 
that is the case, you may want to pull in network engineers to configure ARP 
tables to choose the fastest route. I can help you out in case you don't have 
those resources available.

Martin

> From: hi...@autoanything.com
> To: users@kafka.apache.org
> Subject: Rest Proxy Question
> Date: Tue, 16 Feb 2016 17:28:34 +
> 
> I am having a hard time with some bug in my code related to the REST proxy.
> 
> The issue that I am seeing is when the GET 
> /consumers/HIVIEPurchaseOrderCreatedEvent/instances/b0671fa1-d000-4f9b-b795-aff3003e500a/topics/PurchaseOrderCreatedEvent/.
> 
> Sometimes this will return immediately with results, and sometimes it takes a 
> very long time, with many requests in between, before results are received.
> 
> I have a job that runs in the background checking for new messages every 5 
> seconds, so I would expect within a minute or two the messages would be 
> visible.
> 
> Have any of you seen this behavior before?
> 
> Heath Ivie
> Solutions Architect
> 
> 


Rest Proxy Question

2016-02-16 Thread Heath Ivie
I am having a hard time with some bug in my code related to the REST proxy.

The issue that I am seeing is when the GET 
/consumers/HIVIEPurchaseOrderCreatedEvent/instances/b0671fa1-d000-4f9b-b795-aff3003e500a/topics/PurchaseOrderCreatedEvent/.

Sometimes this will return immediately with results, and sometimes it takes a 
very long time, with many requests in between, before results are received.

I have a job that runs in the background checking for new messages every 5 
seconds, so I would expect within a minute or two the messages would be visible.

Have any of you seen this behavior before?
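For reference, the consume call in question can be driven from code like this minimal sketch (Python stdlib only; the base URL, group, instance and topic names are placeholders, and the v1 binary Accept header is an assumption about the proxy version in use). An empty array is a normal response here, and because the proxy buffers records in the consumer instance it created, every poll must go to the same proxy host:

```python
import json
from urllib import request

BASE = "http://localhost:8082"  # placeholder REST proxy address

def consume_url(group, instance, topic):
    # Same GET path shape as in the question above
    return f"{BASE}/consumers/{group}/instances/{instance}/topics/{topic}"

def consume_once(group, instance, topic):
    """One poll; an empty list just means 'nothing buffered yet' -- poll again."""
    req = request.Request(
        consume_url(group, instance, topic),
        headers={"Accept": "application/vnd.kafka.binary.v1+json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A background job calling consume_once every few seconds should eventually see the records, provided it always targets the proxy instance that created the consumer.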

Heath Ivie
Solutions Architect




RE: Protocol Question

2016-02-04 Thread Heath Ivie
Dana,

I was able to get it to work, thanks
Heath

-Original Message-
From: Heath Ivie [mailto:hi...@autoanything.com] 
Sent: Thursday, February 04, 2016 8:19 AM
To: users@kafka.apache.org
Subject: RE: Protocol Question

Dana, Again thanks for the feedback.

I will try that today and see what happens.

Is there anything special to start the loop?

What I mean is: is there any buffer byte to signify the start of the arrays?

-Original Message-
From: Dana Powers [mailto:dana.pow...@gmail.com]
Sent: Wednesday, February 03, 2016 8:33 PM
To: users@kafka.apache.org
Subject: Re: Protocol Question

Hi Heath, a few comments:

(1) you should be looping on brokerCount
(2) you are missing the topic array count (4 bytes), and loop
(3) topic error code is int16, so only 2 bytes not 4
(4) you are missing the partition metadata array count (4 bytes), and loop
(5) you are missing the replicas and isr parsing.

-Dana
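Applied together, those five fixes give a parser shaped like this sketch (Python for brevity rather than the C# of the original; offsets and field widths follow the v0 MetadataResponse layout quoted in the message below):

```python
import struct

def parse_metadata_response(buf):
    """Parse a v0 MetadataResponse: loop over the broker array, read the
    topic array count, treat error codes as int16, and parse the
    partition/replica/isr arrays."""
    off = 0
    def i32():
        nonlocal off
        (v,) = struct.unpack_from(">i", buf, off); off += 4; return v
    def i16():
        nonlocal off
        (v,) = struct.unpack_from(">h", buf, off); off += 2; return v
    def s():
        nonlocal off
        n = i16(); v = buf[off:off + n].decode("ascii"); off += n; return v

    correlation_id = i32()
    brokers = [(i32(), s(), i32()) for _ in range(i32())]  # fix (1): loop
    topics = []
    for _ in range(i32()):                  # fix (2): topic array count
        err = i16()                         # fix (3): int16, not int32
        name = s()
        parts = []
        for _ in range(i32()):              # fix (4): partition array count
            perr, pid, leader = i16(), i32(), i32()
            replicas = [i32() for _ in range(i32())]  # fix (5): replicas
            isr = [i32() for _ in range(i32())]       # fix (5): isr
            parts.append((perr, pid, leader, replicas, isr))
        topics.append((err, name, parts))
    return correlation_id, brokers, topics
```

The "extra bytes after the topic error" in the original question are exactly the unread topic array count, so once the counts and loops are in place the apparent padding disappears.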

On Wed, Feb 3, 2016 at 6:01 PM, Heath Ivie <hi...@autoanything.com> wrote:

> Hi,
>
> I am trying to navigate through the protocol and I am seeing some 
> inconsistencies with the data.
>
> I am trying to parse out the MetadataResponse and I am seeing bytes in 
> between where they shouldn't be.
>
> I know they are extra, because if I up the offset the data after is 
> correct.
>
> Here is the part of the response that I am trying to parse:
> MetadataResponse => [Broker][TopicMetadata]
>   Broker => NodeId Host Port  (any number of brokers may be returned)
> NodeId => int32
> Host => string
> Port => int32
>   TopicMetadata => TopicErrorCode TopicName [PartitionMetadata]
> TopicErrorCode => int16
>   PartitionMetadata => PartitionErrorCode PartitionId Leader Replicas Isr
> PartitionErrorCode => int16
> PartitionId => int32
> Leader => int32
> Replicas => [int32]
> Isr => [int32]
>
>
> I am seeing extra bytes after the topic error.
>
> I am not sure if this padding is expected or not?
>
> It may not be very readable, but here is my code (be nice :)).
>
> int offset =0;
> int correlationId = 
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
> int brokerCount =
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
> int nodeId =
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
> int hostNameLength = 
> BitConverter.ToInt16(ReverseBytes(bytes.Skip(offset).Take(2).ToArray()), 0);
> offset += 2;
>
> string hostName = Encoding.ASCII.GetString(bytes, 
> offset, hostNameLength);
> offset += hostNameLength;
>
> int port =
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
>
> int topicErrorCode = 
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
>
> int topicNameLength = 
> BitConverter.ToInt16(ReverseBytes(bytes.Skip(offset).Take(2).ToArray()), 0);
> offset += 2;
>
> string topicName = Encoding.ASCII.GetString(bytes, 
> offset, topicNameLength);
> offset += topicNameLength;
>
> int partitionErrorCode = 
> BitConverter.ToInt16(ReverseBytes(bytes.Skip(offset).Take(2).ToArray()), 0);
> offset += 2;
>
> int partitionId =
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
>
> int leaderId =
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
>
>
> Any help would be cool, thanks
>
> Heath Ivie
> Solutions Architect
>
>


RE: Protocol Question

2016-02-04 Thread Heath Ivie
Dana, Again thanks for the feedback.

I will try that today and see what happens.

Is there anything special to start the loop?

What I mean is: is there any buffer byte to signify the start of the arrays?

-Original Message-
From: Dana Powers [mailto:dana.pow...@gmail.com] 
Sent: Wednesday, February 03, 2016 8:33 PM
To: users@kafka.apache.org
Subject: Re: Protocol Question

Hi Heath, a few comments:

(1) you should be looping on brokerCount
(2) you are missing the topic array count (4 bytes), and loop
(3) topic error code is int16, so only 2 bytes not 4
(4) you are missing the partition metadata array count (4 bytes), and loop
(5) you are missing the replicas and isr parsing.

-Dana

On Wed, Feb 3, 2016 at 6:01 PM, Heath Ivie <hi...@autoanything.com> wrote:

> Hi,
>
> I am trying to navigate through the protocol and I am seeing some 
> inconsistencies with the data.
>
> I am trying to parse out the MetadataResponse and I am seeing bytes in 
> between where they shouldn't be.
>
> I know they are extra, because if I up the offset the data after is 
> correct.
>
> Here is the part of the response that I am trying to parse:
> MetadataResponse => [Broker][TopicMetadata]
>   Broker => NodeId Host Port  (any number of brokers may be returned)
> NodeId => int32
> Host => string
> Port => int32
>   TopicMetadata => TopicErrorCode TopicName [PartitionMetadata]
> TopicErrorCode => int16
>   PartitionMetadata => PartitionErrorCode PartitionId Leader Replicas Isr
> PartitionErrorCode => int16
> PartitionId => int32
> Leader => int32
> Replicas => [int32]
> Isr => [int32]
>
>
> I am seeing extra bytes after the topic error.
>
> I am not sure whether this padding is expected or not.
>
> It may not be very readable, but here is my code (be nice :)).
>
> int offset =0;
> int correlationId = 
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
> int brokerCount =
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
> int nodeId =
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
> int hostNameLength = 
> BitConverter.ToInt16(ReverseBytes(bytes.Skip(offset).Take(2).ToArray()), 0);
> offset += 2;
>
> string hostName = Encoding.ASCII.GetString(bytes, 
> offset, hostNameLength);
> offset += hostNameLength;
>
> int port =
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
>
> int topicErrorCode = 
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
>
> int topicNameLength = 
> BitConverter.ToInt16(ReverseBytes(bytes.Skip(offset).Take(2).ToArray()), 0);
> offset += 2;
>
> string topicName = Encoding.ASCII.GetString(bytes, 
> offset, topicNameLength);
> offset += topicNameLength;
>
> int partitionErrorCode = 
> BitConverter.ToInt16(ReverseBytes(bytes.Skip(offset).Take(2).ToArray()), 0);
> offset += 2;
>
> int partitionId =
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
>
> int leaderId =
> BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
> offset += 4;
>
>
> Any help would be cool, thanks
>
> Heath Ivie
> Solutions Architect
>
>
>


Protocol Question

2016-02-03 Thread Heath Ivie
Hi,

I am trying to navigate through the protocol and I am seeing some 
inconsistencies with the data.

I am trying to parse out the MetadataResponse and I am seeing bytes in between 
where they shouldn't be.

I know they are extra, because if I increase the offset, the data after it is correct.

Here is the part of the response that I am trying to parse:
MetadataResponse => [Broker][TopicMetadata]
  Broker => NodeId Host Port  (any number of brokers may be returned)
NodeId => int32
Host => string
Port => int32
  TopicMetadata => TopicErrorCode TopicName [PartitionMetadata]
TopicErrorCode => int16
TopicName => string
  PartitionMetadata => PartitionErrorCode PartitionId Leader Replicas Isr
PartitionErrorCode => int16
PartitionId => int32
Leader => int32
Replicas => [int32]
Isr => [int32]


I am seeing extra bytes after the topic error.

I am not sure whether this padding is expected or not.

It may not be very readable, but here is my code (be nice :)).

int offset =0;
int correlationId = 
BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
offset += 4;
int brokerCount = 
BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
offset += 4;
int nodeId = 
BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
offset += 4;
int hostNameLength = 
BitConverter.ToInt16(ReverseBytes(bytes.Skip(offset).Take(2).ToArray()), 0);
offset += 2;

string hostName = Encoding.ASCII.GetString(bytes, offset, 
hostNameLength);
offset += hostNameLength;

int port = 
BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
offset += 4;

int topicErrorCode = 
BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
offset += 4;

int topicNameLength = 
BitConverter.ToInt16(ReverseBytes(bytes.Skip(offset).Take(2).ToArray()), 0);
offset += 2;

string topicName = Encoding.ASCII.GetString(bytes, offset, 
topicNameLength);
offset += topicNameLength;

int partitionErrorCode = 
BitConverter.ToInt16(ReverseBytes(bytes.Skip(offset).Take(2).ToArray()), 0);
offset += 2;

int partitionId = 
BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
offset += 4;

int leaderId = 
BitConverter.ToInt32(ReverseBytes(bytes.Skip(offset).Take(4).ToArray()), 0);
offset += 4;


Any help would be cool, thanks

Heath Ivie
Solutions Architect




Documentation

2016-01-31 Thread Heath Ivie
Hi Folks,

I am working through the protocols to build a c# rest api.

I am seeing inconsistencies in the way the document says it works with how it 
actually works, specifically the fetch messages.

Could someone point me to the current documentation?

Thanks Heath

Sent from Outlook Mobile




Re: Documentation

2016-01-31 Thread Heath Ivie
To piggyback: where can I find the API key values?

Sent from Outlook Mobile<https://aka.ms/qtex0l>
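For reference, the API key is the int16 at the start of each request header, and the values are enumerated in the wire-protocol guide. A few of the common ones, as of the 0.8/0.9-era protocol (verify against the Kafka version you target):

```python
# Request API keys per the Kafka wire-protocol guide (0.8/0.9 era).
API_KEYS = {
    "ProduceRequest": 0,
    "FetchRequest": 1,
    "OffsetRequest": 2,        # later renamed ListOffsets
    "MetadataRequest": 3,
    "OffsetCommitRequest": 8,
    "OffsetFetchRequest": 9,
    "GroupCoordinatorRequest": 10,
}
```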




On Sun, Jan 31, 2016 at 9:25 AM -0800, "Heath Ivie" 
<hi...@autoanything.com<mailto:hi...@autoanything.com>> wrote:

Hi Folks,

I am working through the protocols to build a c# rest api.

I am seeing inconsistencies in the way the document says it works with how it 
actually works, specifically the fetch messages.

Could someone point me to the current documentation?

Thanks Heath

Sent from Outlook Mobile<https://aka.ms/qtex0l>




RE: Partitions and consumer assignment

2016-01-19 Thread Heath Ivie
I get a 404 for this page.

Is there an updated link?

-Original Message-
From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com] 
Sent: Tuesday, January 19, 2016 1:33 PM
To: users@kafka.apache.org
Subject: Re: Partitions and consumer assignment

Thanks Luke. I'll read through that.

-J

On Tue, Jan 19, 2016 at 9:21 AM, Luke Steensen < 
luke.steen...@braintreepayments.com> wrote:

> Hi Jason,
>
> You might find this blog post useful:
>
> http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartiti
> ons-in-a-kafka-cluster/
>
> - Luke
>
>
> On Sat, Jan 16, 2016 at 1:30 PM, Jason Williams 
>  >
> wrote:
>
> > Hi Franco,
> >
> > Thank you for the info! It is my reading of the docs that adding 
> > partitions would require manual rebalancing to spread the existing load?
> >
> > Would it be advisable instead to start out with a partition count 
> > that is 10x the initial consumer count? For example if we anticipate 
> > starting
> with
> > 5 consumers, use a partition count of 50 or 100. That way we could 
> > just
> add
> > consumers during a load spike and remove them when it passes?
> >
> > Sorry for all the questions. I just am not able to find a lot of 
> > information about how folks are handling auto-scaling kind of 
> > situations like this.
> >
> > -J
> >
> > Sent via iPhone
> >
> > > On Jan 16, 2016, at 10:52, Franco Giacosa  wrote:
> > >
> > > Hi Jason,
> > >
> > > You can try to repartition and assign more consumers.
> > >
> > > *Documentation*
> > > "Modifying topics
> > >
> > > You can change the configuration or partitioning of a topic using 
> > > the
> > same
> > > topic tool.
> > > To add partitions you can do
> > >
> > >> bin/kafka-topics.sh --zookeeper zk_host:port/chroot --alter 
> > >> --topic
> > > my_topic_name
> > >   --partitions 40
> > > Be aware that one use case for partitions is to semantically 
> > > partition data, and adding partitions doesn't change the 
> > > partitioning of existing data so this may disturb consumers if 
> > > they rely on that partition. That
> > is
> > > if data is partitioned by hash(key) % number_of_partitions then 
> > > this partitioning will potentially be shuffled by adding 
> > > partitions but
> Kafka
> > > will not attempt to automatically redistribute data in any way."
> > >
> > >
> > > Franco.
> > >
> > >
> > >
> > > 2016-01-16 19:04 GMT+01:00 Jason Williams :
> > >
> > >> Hi Franco,
> > >>
> > >> Thanks for responding but that doesn't really answer my question.
> > >>
> > >> The situation described is it I have N partitions and already N
> > consumers
> > >> in the CG, and then I receive a spike in messages. How is it 
> > >> suggested
> > to
> > >> handle adding more consumption capacity to deal the spike...since
> adding
> > >> consumers when I'm already at N will just add idle consumers?
> > >>
> > >> -J
> > >>
> > >>
> > >> Sent via iPhone
> > >>
> > >>> On Jan 16, 2016, at 03:21, Franco Giacosa 
> wrote:
> > >>>
> > >>> 1 consumer group can have many partitions, if the consumer group 
> > >>> has
> 1
> > >>> consumer and there are N partitions, it will consume from N, if 
> > >>> you
> > have
> > >> a
> > >>> spike you can add up to N more consumers to that consumer group.
> > >>>
> > >>> 2016-01-16 11:32 GMT+01:00 Jason Williams 
> > >>>  >:
> > >>>
> >  Thanks Jens!
> > 
> >  So assuming you've already paired your partition count to the
> consumer
> >  count...if you experience a spike in messages and want to spin 
> >  up
> more
> >  consumers to add temporary processing capacity, what's the 
> >  suggested
> > >> way to
> >  handle this? (since it would seem you can't just add consumers 
> >  as
> they
> >  would remain idle...and adding partitions appears to require 
> >  manual rebalancing).
> > 
> >  -J
> > 
> >  Sent via iPhone
> > 
> > > On Jan 16, 2016, at 01:54, Jens Rantil 
> wrote:
> > >
> > > Hi,
> > >
> > >
> > > You are correct. The others will remain idle. This is why you
> > generally
> >  want to have at least the same number of partitions as consumers.
> > >
> > >
> > >
> > >
> > > Cheers,
> > >
> > > Jens
> > >
> > >
> > >
> > >
> > >
> > > –
> > > Sent from Mailbox
> > >
> > > On Sat, Jan 16, 2016 at 2:34 AM, Jason J. W. Williams 
> > >  wrote:
> > >
> > >> Hi,
> > >> I'm trying to make sure I understand this statement in the docs:
> > >> "Each broker partition is consumed by a single consumer 
> > >> within a
> > given
> > >> consumer group. The consumer must establish its ownership of 
> > >> a
> given
> > >> partition before any consumption can begin."
> > >> If I have:
> > >> * a topic with 1 partition
> > >> * 
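The caveat quoted in this thread (hash(key) % number_of_partitions no longer routes existing keys the same way once partitions are added) is easy to demonstrate. A sketch using CRC32 as a stand-in for Kafka's partitioner (the default is actually murmur2; the key names are made up):

```python
import zlib

def partition_for(key, num_partitions):
    # CRC32 stands in for Kafka's real partitioner; the point is only
    # that the mapping depends on the partition count.
    return zlib.crc32(key.encode()) % num_partitions

keys = [f"order-{i}" for i in range(20)]
before = {k: partition_for(k, 4) for k in keys}   # 4 partitions
after = {k: partition_for(k, 6) for k in keys}    # repartitioned to 6
moved = [k for k in keys if before[k] != after[k]]
print(f"{len(moved)} of {len(keys)} keys changed partition")
```

This is why adding partitions "disturbs consumers if they rely on that partition": Kafka does not move the already-written data, so old and new messages with the same key can land in different partitions.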

RE: Partitions and consumer assignment

2016-01-19 Thread Heath Ivie
My mistake, I missed the carriage return

-Original Message-
From: Heath Ivie [mailto:hi...@autoanything.com] 
Sent: Tuesday, January 19, 2016 2:58 PM
To: users@kafka.apache.org
Subject: RE: Partitions and consumer assignment

I get a 404 for this page.

Is there an updated link?

-Original Message-
From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
Sent: Tuesday, January 19, 2016 1:33 PM
To: users@kafka.apache.org
Subject: Re: Partitions and consumer assignment

Thanks Luke. I'll read through that.

-J

On Tue, Jan 19, 2016 at 9:21 AM, Luke Steensen < 
luke.steen...@braintreepayments.com> wrote:

> Hi Jason,
>
> You might find this blog post useful:
>
> http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartiti
> ons-in-a-kafka-cluster/
>
> - Luke
>
>
> On Sat, Jan 16, 2016 at 1:30 PM, Jason Williams 
> <jasonjwwilli...@gmail.com
> >
> wrote:
>
> > Hi Franco,
> >
> > Thank you for the info! It is my reading of the docs that adding 
> > partitions would require manual rebalancing to spread the existing load?
> >
> > Would it be advisable instead to start out with a partition count 
> > that is 10x the initial consumer count? For example if we anticipate 
> > starting
> with
> > 5 consumers, use a partition count of 50 or 100. That way we could 
> > just
> add
> > consumers during a load spike and remove them when it passes?
> >
> > Sorry for all the questions. I just am not able to find a lot of 
> > information about how folks are handling auto-scaling kind of 
> > situations like this.
> >
> > -J
> >
> > Sent via iPhone
> >
> > > On Jan 16, 2016, at 10:52, Franco Giacosa <fgiac...@gmail.com> wrote:
> > >
> > > Hi Jason,
> > >
> > > You can try to repartition and assign more consumers.
> > >
> > > *Documentation*
> > > "Modifying topics
> > >
> > > You can change the configuration or partitioning of a topic using 
> > > the
> > same
> > > topic tool.
> > > To add partitions you can do
> > >
> > >> bin/kafka-topics.sh --zookeeper zk_host:port/chroot --alter 
> > >> --topic
> > > my_topic_name
> > >   --partitions 40
> > > Be aware that one use case for partitions is to semantically 
> > > partition data, and adding partitions doesn't change the 
> > > partitioning of existing data so this may disturb consumers if 
> > > they rely on that partition. That
> > is
> > > if data is partitioned by hash(key) % number_of_partitions then 
> > > this partitioning will potentially be shuffled by adding 
> > > partitions but
> Kafka
> > > will not attempt to automatically redistribute data in any way."
> > >
> > >
> > > Franco.
> > >
> > >
> > >
> > > 2016-01-16 19:04 GMT+01:00 Jason Williams <jasonjwwilli...@gmail.com>:
> > >
> > >> Hi Franco,
> > >>
> > >> Thanks for responding but that doesn't really answer my question.
> > >>
> > >> The situation described is: I have N partitions and already N
> > consumers
> > >> in the CG, and then I receive a spike in messages. How is it 
> > >> suggested
> > to
> > >> handle adding more consumption capacity to deal with the spike...since
> adding
> > >> consumers when I'm already at N will just add idle consumers?
> > >>
> > >> -J
> > >>
> > >>
> > >> Sent via iPhone
> > >>
> > >>> On Jan 16, 2016, at 03:21, Franco Giacosa <fgiac...@gmail.com>
> wrote:
> > >>>
> > >>> 1 consumer group can have many partitions, if the consumer group 
> > >>> has
> 1
> > >>> consumer and there are N partitions, it will consume from N, if 
> > >>> you
> > have
> > >> a
> > >>> spike you can add up to N more consumers to that consumer group.
> > >>>
> > >>> 2016-01-16 11:32 GMT+01:00 Jason Williams 
> > >>> <jasonjwwilli...@gmail.com
> >:
> > >>>
> > >>>> Thanks Jens!
> > >>>>
> > >>>> So assuming you've already paired your partition count to the
> consumer
> > >>>> count...if you experience a spike in messages and want to spin 
> > >>>> up
> more
> > >>>> consumers to add temporary processing capacity, what's the 
> > &g

RE: 409 Conflict

2016-01-13 Thread Heath Ivie
Hi Ewen,

When I follow the exact sequence with the curl commands below I get this error.

It seems to register fine, because I get the baseuri and the instance id.

This is the error that I get when I try to read from the topic:
{
  "error_code": 40901,
  "message": "Consumer cannot subscribe the the specified target because it has 
already subscribed to other topics."
}

I am making sure that each time I try this I am using a new group, instance and 
topic so that it shouldn't be duplicated.

Could there be a conflict between this latest version and the previous one? I 
just upgraded yesterday.

-Original Message-
From: Ewen Cheslack-Postava [mailto:e...@confluent.io] 
Sent: Tuesday, January 12, 2016 10:38 PM
To: users@kafka.apache.org
Subject: Re: 409 Conflict

Is the consumer registration failing, or the subsequent calls to read from the 
topic? From the error, it sounds like the latter -- a conflict during 
registration should generate a 40902 error.

Can you give more info about the sequence of requests that causes the error? 
The set of commands you gave looks ok at first glance, but since there's only 
one consumer, it shouldn't be possible for it to generate a
409 (conflict) error.

-Ewen

On Tue, Jan 12, 2016 at 4:06 PM, Heath Ivie <hi...@autoanything.com> wrote:

> Additional Info:
>
> When I enter the curl commands from the 2.0 document, I get the same error:
> $ curl -X POST -H "Content-Type: application/vnd.kafka.json.v1+json"
>  --data '{"records":[{"value":{"foo":"bar"}}]}' "
> http://10.1.30.48:8082/topics/jsontest"
> $ curl -X POST -H "Content-Type: application/vnd.kafka.v1+json"  
> --data
> '{"name": "my_consumer_instance", "format": "json", "auto.offset.reset":
> "smallest"}'   http://10.1.30.48:8082/consumers/my_json_consumer
>  $ curl -X GET -H "Accept: application/vnd.kafka.json.v1+json"
> http://10.1.30.48:8082/consumers/my_json_consumer/instances/my_consume
> r_instance/topics/jsontest
>  $ curl -X DELETE
> http://10.1.30.48:8082/consumers/my_json_consumer/instances/my_consume
> r_instance
>
>
>
> http://docs.confluent.io/2.0.0/kafka-rest/docs/intro.html#produce-and-
> consume-json-messages
>
> -Original Message-
> From: Heath Ivie [mailto:hi...@autoanything.com]
> Sent: Tuesday, January 12, 2016 3:08 PM
> To: users@kafka.apache.org
> Subject: 409 Conflict
>
> Hi ,
>
> I am running into an issue where I cannot register new consumers.
>
> The server is consistently returning error code 40901: "Consumer cannot 
> subscribe the the specified target because it has already subscribed 
> to other topics".
>
> I am using different groups, different topics and different names but 
> I cannot figure out what I am doing wrong.
>
> I am using the REST proxy for all of my communication.
>
> I am also letting Kafka select the instance name for me so that it is 
> unique.
>
> Can someone please point me in the right direction?
> Thanks
> Heath
>
>
>



--
Thanks,
Ewen


RE: 409 Conflict

2016-01-13 Thread Heath Ivie
Update: I have completely uninstalled Kafka v10 and v17, reinstalled, and all 
is right with the world.

-Original Message-
From: Heath Ivie [mailto:hi...@autoanything.com] 
Sent: Wednesday, January 13, 2016 11:59 AM
To: users@kafka.apache.org
Subject: RE: 409 Conflict

Hi Ewen,

When I follow the exact sequence with the curl commands below I get this error.

It seems to register fine, because I get the baseuri and the instance id.

This is the error that I get when I try to read from the topic:
{
  "error_code": 40901,
  "message": "Consumer cannot subscribe the the specified target because it has 
already subscribed to other topics."
}

I am making sure that each time I try this I am using a new group, instance and 
topic so that it shouldn't be duplicated.

Could there be a conflict between this latest version and the previous one? I 
just upgraded yesterday.

-Original Message-
From: Ewen Cheslack-Postava [mailto:e...@confluent.io]
Sent: Tuesday, January 12, 2016 10:38 PM
To: users@kafka.apache.org
Subject: Re: 409 Conflict

Is the consumer registration failing, or the subsequent calls to read from the 
topic? From the error, it sounds like the latter -- a conflict during 
registration should generate a 40902 error.

Can you give more info about the sequence of requests that causes the error? 
The set of commands you gave looks ok at first glance, but since there's only 
one consumer, it shouldn't be possible for it to generate a
409 (conflict) error.

-Ewen

On Tue, Jan 12, 2016 at 4:06 PM, Heath Ivie <hi...@autoanything.com> wrote:

> Additional Info:
>
> When I enter the curl commands from the 2.0 document, I get the same error:
> $ curl -X POST -H "Content-Type: application/vnd.kafka.json.v1+json"
>  --data '{"records":[{"value":{"foo":"bar"}}]}' "
> http://10.1.30.48:8082/topics/jsontest"
> $ curl -X POST -H "Content-Type: application/vnd.kafka.v1+json"  
> --data
> '{"name": "my_consumer_instance", "format": "json", "auto.offset.reset":
> "smallest"}'   http://10.1.30.48:8082/consumers/my_json_consumer
>  $ curl -X GET -H "Accept: application/vnd.kafka.json.v1+json"
> http://10.1.30.48:8082/consumers/my_json_consumer/instances/my_consume
> r_instance/topics/jsontest
>  $ curl -X DELETE
> http://10.1.30.48:8082/consumers/my_json_consumer/instances/my_consume
> r_instance
>
>
>
> http://docs.confluent.io/2.0.0/kafka-rest/docs/intro.html#produce-and-
> consume-json-messages
>
> -Original Message-
> From: Heath Ivie [mailto:hi...@autoanything.com]
> Sent: Tuesday, January 12, 2016 3:08 PM
> To: users@kafka.apache.org
> Subject: 409 Conflict
>
> Hi ,
>
> I am running into an issue where I cannot register new consumers.
>
> The server is consistently returning error code 40901: "Consumer cannot 
> subscribe the the specified target because it has already subscribed 
> to other topics".
>
> I am using different groups, different topics and different names but 
> I cannot figure out what I am doing wrong.
>
> I am using the REST proxy for all of my communication.
>
> I am also letting Kafka select the instance name for me so that it is 
> unique.
>
> Can someone please point me in the right direction?
> Thanks
> Heath
>
>
>



--
Thanks,
Ewen


409 Conflict

2016-01-12 Thread Heath Ivie
Hi,

I am running into an issue where I cannot register new consumers.

The server is consistently returning error code 40901: "Consumer cannot subscribe 
the the specified target because it has already subscribed to other topics".

I am using different groups, different topics and different names but I cannot 
figure out what I am doing wrong.

I am using the REST proxy for all of my communication.

I am also letting Kafka select the instance name for me so that it is unique.

Can someone please point me in the right direction?
Thanks
Heath




RE: 409 Conflict

2016-01-12 Thread Heath Ivie
Additional Info:

When I enter the curl commands from the 2.0 document, I get the same error:
$ curl -X POST -H "Content-Type: application/vnd.kafka.json.v1+json"   --data 
'{"records":[{"value":{"foo":"bar"}}]}' "http://10.1.30.48:8082/topics/jsontest"
$ curl -X POST -H "Content-Type: application/vnd.kafka.v1+json"  --data 
'{"name": "my_consumer_instance", "format": "json", "auto.offset.reset": 
"smallest"}'   http://10.1.30.48:8082/consumers/my_json_consumer
 $ curl -X GET -H "Accept: application/vnd.kafka.json.v1+json"
http://10.1.30.48:8082/consumers/my_json_consumer/instances/my_consumer_instance/topics/jsontest
 $ curl -X DELETE  
http://10.1.30.48:8082/consumers/my_json_consumer/instances/my_consumer_instance


http://docs.confluent.io/2.0.0/kafka-rest/docs/intro.html#produce-and-consume-json-messages

-Original Message-
From: Heath Ivie [mailto:hi...@autoanything.com] 
Sent: Tuesday, January 12, 2016 3:08 PM
To: users@kafka.apache.org
Subject: 409 Conflict

Hi,

I am running into an issue where I cannot register new consumers.

The server is consistently returning error code 40901: "Consumer cannot subscribe 
the the specified target because it has already subscribed to other topics".

I am using different groups, different topics and different names but I cannot 
figure out what I am doing wrong.

I am using the REST proxy for all of my communication.

I am also letting Kafka select the instance name for me so that it is unique.

Can someone please point me in the right direction?
Thanks
Heath




RE: Kafka-Rest question

2016-01-08 Thread Heath Ivie
That's cool; I will check that out, but it makes sense.

-Original Message-
From: Ewen Cheslack-Postava [mailto:e...@confluent.io] 
Sent: Thursday, January 07, 2016 8:14 PM
To: users@kafka.apache.org
Subject: Re: Kafka-Rest question

The REST proxy doesn't currently handle topic creation, so you need to either 
do that separately or set the num.partitions config for your brokers if you are 
relying on topic auto creation. The default for num.partitions is 1, which is 
probably why you're seeing the current behavior.

It'd be great to add admin functionality to the proxy so you could do this 
directly (and KIP-4 should make that easier to accomplish).

-Ewen

On Thu, Jan 7, 2016 at 1:46 PM, Heath Ivie <hi...@autoanything.com> wrote:

> I think that the question I should be asking is how do I get more than 
> 1 partition when using the rest client?
>
> -Original Message-----
> From: Heath Ivie [mailto:hi...@autoanything.com]
> Sent: Thursday, January 07, 2016 1:28 PM
> To: users@kafka.apache.org
> Subject: RE: Kafka-Rest question
>
> Thanks Ewen.
>
> I have that in my example, but when there are two consumers each with 
> an autogenerated id, only one can read and get data.
>
> The second one will return an empty array, even with a message being 
> published after the first consumer is finished.
>
> I see what is expected if I use the .NET client, but in the proxy it 
> doesn’t appear to work like that.
>
> -Original Message-
> From: Ewen Cheslack-Postava [mailto:e...@confluent.io]
> Sent: Thursday, January 07, 2016 1:18 PM
> To: users@kafka.apache.org
> Subject: Re: Kafka-Rest question
>
> On Thu, Jan 7, 2016 at 12:54 PM, Heath Ivie <hi...@autoanything.com>
> wrote:
>
> > Hi,
> >
> > I am building my application using the rest proxy and I wanted to 
> > get some clarification on the documentation.
> >
> > "Because consumers are stateful, any consumer instances created with 
> > the REST API are tied to a specific REST proxy instance. A full URL 
> > is provided when the instance is created and it should be used to 
> > construct any subsequent requests. Failing to use the returned URL 
> > for future consumer requests will end up adding new consumers to the 
> > group. If a REST proxy instance is shutdown, it will attempt to 
> > cleanly destroy any consumers before it is terminated."
> >
> > Does the highlighted mean that I have to have multiple deployed 
> > instances in order to have more than 1 consumer group and configure 
> > it with sticky sessions?
> >
>
> You can run multiple consumer instances in one or more consumer groups 
> on a single REST proxy instance. However, you'll need to ensure they 
> have different IDs (which you can have auto-assigned if you like).
>
>
> >
> > I have the following code where I am testing the behavior when I 
> > have
> > 2 consumers in the same group, and I am getting an empty array back 
> > for the second consumer in the same group.
> >
> > This will throw an exception when I attempt to read a message from 
> > the 2nd consumer.
> >
> > Not sure if it would help, but I configured the proxy to have 2 
> > threads per consumer.
> >
> > KafkaMessageQueueWriter writer = new KafkaMessageQueueWriter("
> > http://10.1.30.48:8082");
> >
> >
> >
> > KafkaMessageQueueReader reader1 = new 
> > KafkaMessageQueueReader(new KafkaMessageQueueReaderSettings() { 
> > ConsumerGroupName = "Test50", TopicName = "ShellComparisonTest", 
> > ZookeeperClientPort = 8082, ZookeeperClientPortAddress = "10.1.30.48" });
> > KafkaMessageQueueReader reader2 = new 
> > KafkaMessageQueueReader(new KafkaMessageQueueReaderSettings() { 
> > ConsumerGroupName = "Test51", TopicName = "ShellComparisonTest", 
> > ZookeeperClientPort = 8082, ZookeeperClientPortAddress = "10.1.30.48" });
> > KafkaMessageQueueReader reader3 = new 
> > KafkaMessageQueueReader(new KafkaMessageQueueReaderSettings() { 
> > ConsumerGroupName = "Test50", TopicName = "ShellComparisonTest", 
> > ZookeeperClientPort = 8082, ZookeeperClientPortAddress = "10.1.30.48"
> > });
> >
> >
> >
> > for (int i = 0; i < 10; i++)
> > {
> > //Publish a message to the ShellComparisonTest topic
> > writer.PublishMessage("ShellComparisonTest", new
> > OrderEvent() { OrderId = 5, OrderItemId = 1234 });
> > var @event =
> > reader1.ReadQueue("Shell

Kafka-Rest question

2016-01-07 Thread Heath Ivie
Hi,

I am building my application using the rest proxy and I wanted to get some 
clarification on the documentation.

"Because consumers are stateful, any consumer instances created with the REST 
API are tied to a specific REST proxy instance. A full URL is provided when the 
instance is created and it should be used to construct any subsequent requests. 
Failing to use the returned URL for future consumer requests will end up adding 
new consumers to the group. If a REST proxy instance is shutdown, it will 
attempt to cleanly destroy any consumers before it is terminated."

Does the highlighted passage mean that I have to have multiple deployed instances in 
order to have more than 1 consumer group and configure it with sticky sessions?

I have the following code where I am testing the behavior when I have 2 
consumers in the same group, and I am getting an empty array back for the 
second consumer in the same group.

This will throw an exception when I attempt to read a message from the 2nd 
consumer.

Not sure if it would help, but I configured the proxy to have 2 threads per 
consumer.

KafkaMessageQueueWriter writer = new 
KafkaMessageQueueWriter("http://10.1.30.48:8082");



KafkaMessageQueueReader reader1 = new KafkaMessageQueueReader(new 
KafkaMessageQueueReaderSettings() { ConsumerGroupName = "Test50", TopicName = 
"ShellComparisonTest", ZookeeperClientPort = 8082, ZookeeperClientPortAddress = 
"10.1.30.48" });
KafkaMessageQueueReader reader2 = new KafkaMessageQueueReader(new 
KafkaMessageQueueReaderSettings() { ConsumerGroupName = "Test51", TopicName = 
"ShellComparisonTest", ZookeeperClientPort = 8082, ZookeeperClientPortAddress = 
"10.1.30.48" });
KafkaMessageQueueReader reader3 = new KafkaMessageQueueReader(new 
KafkaMessageQueueReaderSettings() { ConsumerGroupName = "Test50", TopicName = 
"ShellComparisonTest", ZookeeperClientPort = 8082, ZookeeperClientPortAddress = 
"10.1.30.48" });



for (int i = 0; i < 10; i++)
{
//Publish a message to the ShellComparisonTest topic
writer.PublishMessage("ShellComparisonTest", new OrderEvent() { 
OrderId = 5, OrderItemId = 1234 });
var @event = 
reader1.ReadQueue("ShellComparisonTest");
if (!@event.Any()) throw new Exception();

@event = reader2.ReadQueue("ShellComparisonTest");
if (!@event.Any()) throw new Exception();

//Publish a message to the ShellComparisonTest topic so that
//there is one unread message in the queue before trying to
//read from the second consumer in the group
writer.PublishMessage("ShellComparisonTest", new OrderEvent() { 
OrderId = 5, OrderItemId = 1234 });
@event = reader3.ReadQueue("ShellComparisonTest");
if (!@event.Any()) throw new Exception();
}

Is there something that I am missing, or is there a good resource to read?

Thanks

Heath


Warning: This e-mail may contain information proprietary to AutoAnything Inc. 
and is intended only for the use of the intended recipient(s). If the reader of 
this message is not the intended recipient(s), you have received this message 
in error and any review, dissemination, distribution or copying of this message 
is strictly prohibited. If you have received this message in error, please 
notify the sender immediately and delete all copies.


RE: Kafka-Rest question

2016-01-07 Thread Heath Ivie
I think that the question I should be asking is: how do I get more than 1 
partition when using the REST client?

-Original Message-
From: Heath Ivie [mailto:hi...@autoanything.com] 
Sent: Thursday, January 07, 2016 1:28 PM
To: users@kafka.apache.org
Subject: RE: Kafka-Rest question

Thanks Ewen.

I have that in my example, but when there are two consumers each with an 
autogenerated id, only one can read and get data.

The second one will return an empty array, even with a message being published 
after the first consumer is finished.

I see what is expected if I use the .NET client, but in the proxy it doesn’t 
appear to work like that.

-Original Message-
From: Ewen Cheslack-Postava [mailto:e...@confluent.io]
Sent: Thursday, January 07, 2016 1:18 PM
To: users@kafka.apache.org
Subject: Re: Kafka-Rest question

On Thu, Jan 7, 2016 at 12:54 PM, Heath Ivie <hi...@autoanything.com> wrote:

> Hi,
>
> I am building my application using the rest proxy and I wanted to get 
> some clarification on the documentation.
>
> "Because consumers are stateful, any consumer instances created with 
> the REST API are tied to a specific REST proxy instance. A full URL is 
> provided when the instance is created and it should be used to 
> construct any subsequent requests. Failing to use the returned URL for 
> future consumer requests will end up adding new consumers to the 
> group. If a REST proxy instance is shutdown, it will attempt to 
> cleanly destroy any consumers before it is terminated."
>
> Does the highlighted mean that I have to have multiple deployed 
> instances in order to have more than 1 consumer group and configure it 
> with sticky sessions?
>

You can run multiple consumer instances in one or more consumer groups on a 
single REST proxy instance. However, you'll need to ensure they have different 
IDs (which you can have auto-assigned if you like).
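A sketch of what "different IDs" looks like at the request level (assuming the REST proxy v1 `POST /consumers/{group}` API; the field names follow the proxy docs, the id values are hypothetical):

```python
import json

def consumer_create_body(name=None, fmt="json"):
    """JSON body for POST /consumers/{group}. Omitting `name` lets the
    proxy auto-assign the instance id, which guarantees uniqueness
    within the consumer group."""
    body = {"format": fmt}
    if name is not None:
        body["name"] = name  # must be unique within the group if supplied
    return json.dumps(body)

explicit = consumer_create_body("instance-1")  # explicit, unique id
auto = consumer_create_body()                  # proxy auto-assigns the id
```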


>
> I have the following code where I am testing the behavior when I have
> 2 consumers in the same group, and I am getting an empty array back 
> for the second consumer in the same group.
>
> This will throw an exception when I attempt to read a message from 
> the 2nd consumer.
>
> Not sure if it would help, but I configured the proxy to have 2 
> threads per consumer.
>
> KafkaMessageQueueWriter writer = new KafkaMessageQueueWriter("
> http://10.1.30.48:8082");
>
>
>
> KafkaMessageQueueReader reader1 = new 
> KafkaMessageQueueReader(new KafkaMessageQueueReaderSettings() { 
> ConsumerGroupName = "Test50", TopicName = "ShellComparisonTest", 
> ZookeeperClientPort = 8082, ZookeeperClientPortAddress = "10.1.30.48" });
> KafkaMessageQueueReader reader2 = new 
> KafkaMessageQueueReader(new KafkaMessageQueueReaderSettings() { 
> ConsumerGroupName = "Test51", TopicName = "ShellComparisonTest", 
> ZookeeperClientPort = 8082, ZookeeperClientPortAddress = "10.1.30.48" });
> KafkaMessageQueueReader reader3 = new 
> KafkaMessageQueueReader(new KafkaMessageQueueReaderSettings() { 
> ConsumerGroupName = "Test50", TopicName = "ShellComparisonTest", 
> ZookeeperClientPort = 8082, ZookeeperClientPortAddress = "10.1.30.48"
> });
>
>
>
> for (int i = 0; i < 10; i++)
> {
> //Publish a message to the ShellComparisonTest topic
> writer.PublishMessage("ShellComparisonTest", new
> OrderEvent() { OrderId = 5, OrderItemId = 1234 });
> var @event =
> reader1.ReadQueue("ShellComparisonTest");
> if (!@event.Any()) throw new Exception();
>
> @event =
> reader2.ReadQueue("ShellComparisonTest");
> if (!@event.Any()) throw new Exception();
>
> //Publish a message to the ShellComparisonTest topic 
> so that
> //there is one unread message in the queue before trying to
> //read from the second consumer in the group
> writer.PublishMessage("ShellComparisonTest", new
> OrderEvent() { OrderId = 5, OrderItemId = 1234 });
> @event =
> reader3.ReadQueue("ShellComparisonTest");
> if (!@event.Any()) throw new Exception();
> }
>
> Is there something that I am missing, or is there a good resource to read?
>
>
How many partitions does your topic have? Did you create it before this test or 
was it auto-created? One reason you could see this behavior is if you only have 
1 partition. In that case only 1 consumer would see any messages since only 1 
consumer would have any partitions assigned to it.

-Ewen
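The partition-count point above can be illustrated with a small sketch (a simplified model of how a consumer group splits partitions, not Kafka's actual assignor code):

```python
def assign_partitions(partitions, consumers):
    """Simplified model: distribute partitions across the consumers in
    one group; consumers beyond the partition count get nothing, so
    their fetches return empty."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# One partition, two consumers in the group: only one consumer gets data.
one = assign_partitions([0], ["consumer-a", "consumer-b"])
# → {'consumer-a': [0], 'consumer-b': []}
# Two partitions: both consumers receive an assignment.
two = assign_partitions([0, 1], ["consumer-a", "consumer-b"])
# → {'consumer-a': [0], 'consumer-b': [1]}
```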



> Thanks
>
> Heath
>

RE: Kafka-Rest question

2016-01-07 Thread Heath Ivie
Thanks Ewen.

I have that in my example, but when there are two consumers each with an 
autogenerated id, only one can read and get data.

The second one will return an empty array, even with a message being published 
after the first consumer is finished.

I see what is expected if I use the .NET client, but in the proxy it doesn’t 
appear to work like that.

-Original Message-
From: Ewen Cheslack-Postava [mailto:e...@confluent.io] 
Sent: Thursday, January 07, 2016 1:18 PM
To: users@kafka.apache.org
Subject: Re: Kafka-Rest question

On Thu, Jan 7, 2016 at 12:54 PM, Heath Ivie <hi...@autoanything.com> wrote:

> Hi,
>
> I am building my application using the rest proxy and I wanted to get 
> some clarification on the documentation.
>
> "Because consumers are stateful, any consumer instances created with 
> the REST API are tied to a specific REST proxy instance. A full URL is 
> provided when the instance is created and it should be used to 
> construct any subsequent requests. Failing to use the returned URL for 
> future consumer requests will end up adding new consumers to the 
> group. If a REST proxy instance is shutdown, it will attempt to 
> cleanly destroy any consumers before it is terminated."
>
> Does the highlighted mean that I have to have multiple deployed 
> instances in order to have more than 1 consumer group and configure it 
> with sticky sessions?
>

You can run multiple consumer instances in one or more consumer groups on a 
single REST proxy instance. However, you'll need to ensure they have different 
IDs (which you can have auto-assigned if you like).


>
> I have the following code where I am testing the behavior when I have 
> 2 consumers in the same group, and I am getting an empty array back 
> for the second consumer in the same group.
>
> This will throw an exception when I attempt to read a message from 
> the 2nd consumer.
>
> Not sure if it would help, but I configured the proxy to have 2 
> threads per consumer.
>
> KafkaMessageQueueWriter writer = new KafkaMessageQueueWriter("
> http://10.1.30.48:8082");
>
>
>
> KafkaMessageQueueReader reader1 = new 
> KafkaMessageQueueReader(new KafkaMessageQueueReaderSettings() { 
> ConsumerGroupName = "Test50", TopicName = "ShellComparisonTest", 
> ZookeeperClientPort = 8082, ZookeeperClientPortAddress = "10.1.30.48" });
> KafkaMessageQueueReader reader2 = new 
> KafkaMessageQueueReader(new KafkaMessageQueueReaderSettings() { 
> ConsumerGroupName = "Test51", TopicName = "ShellComparisonTest", 
> ZookeeperClientPort = 8082, ZookeeperClientPortAddress = "10.1.30.48" });
> KafkaMessageQueueReader reader3 = new 
> KafkaMessageQueueReader(new KafkaMessageQueueReaderSettings() { 
> ConsumerGroupName = "Test50", TopicName = "ShellComparisonTest", 
> ZookeeperClientPort = 8082, ZookeeperClientPortAddress = "10.1.30.48" 
> });
>
>
>
> for (int i = 0; i < 10; i++)
> {
> //Publish a message to the ShellComparisonTest topic
> writer.PublishMessage("ShellComparisonTest", new
> OrderEvent() { OrderId = 5, OrderItemId = 1234 });
> var @event =
> reader1.ReadQueue("ShellComparisonTest");
> if (!@event.Any()) throw new Exception();
>
> @event =
> reader2.ReadQueue("ShellComparisonTest");
> if (!@event.Any()) throw new Exception();
>
> //Publish a message to the ShellComparisonTest topic 
> so that
> //there is one unread message in the queue before trying to
> //read from the second consumer in the group
> writer.PublishMessage("ShellComparisonTest", new
> OrderEvent() { OrderId = 5, OrderItemId = 1234 });
> @event =
> reader3.ReadQueue("ShellComparisonTest");
> if (!@event.Any()) throw new Exception();
> }
>
> Is there something that I am missing, or is there a good resource to read?
>
>
How many partitions does your topic have? Did you create it before this test or 
was it auto-created? One reason you could see this behavior is if you only have 
1 partition. In that case only 1 consumer would see any messages since only 1 
consumer would have any partitions assigned to it.

-Ewen



> Thanks
>
> Heath
>
>
> Warning: This e-mail may contain information proprietary to 
> AutoAnything Inc. and is intended only for the use of the intended 
> recipient(s). If the reader of this message is not the intended 
> recipient(s), you have received this message in error and any review, 
> dissemination, distribution or copying of this message is strictly 
> prohibited. If you have received this message in error, please notify 
> the sender immediately and delete all copies.
>



--
Thanks,
Ewen


Leader Election

2016-01-06 Thread Heath Ivie
Hi Folks,

I am trying to use the REST proxy, but I have some fundamental questions about 
how the leader election works.

My understanding of how leader election works is that the proxy hides all of 
that complexity and processes my produce/consume requests transparently.

Is that the case or do I need to manage that myself?

Thanks
Heath



Warning: This e-mail may contain information proprietary to AutoAnything Inc. 
and is intended only for the use of the intended recipient(s). If the reader of 
this message is not the intended recipient(s), you have received this message 
in error and any review, dissemination, distribution or copying of this message 
is strictly prohibited. If you have received this message in error, please 
notify the sender immediately and delete all copies.


RE: Local Storage

2015-12-18 Thread Heath Ivie
Thank you very much Gwen

-Original Message-
From: Gwen Shapira [mailto:g...@confluent.io] 
Sent: Thursday, December 17, 2015 3:45 PM
To: users@kafka.apache.org
Subject: Re: Local Storage

Hi,

Kafka *is* a data store. It writes data to files on the OS file system: one 
directory per partition, with a new segment file rolled after a configurable 
interval (you can control this with log.roll.ms). The data format is specific to Kafka.

Hope this helps,

Gwen
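The layout Gwen describes can be sketched like this (the paths and topic name are illustrative, mimicking a broker's log directory rather than reading a real one):

```python
import pathlib
import tempfile

# Mimic the broker's layout: one directory per topic-partition under
# log.dirs; each segment file is named by the base offset (zero-padded
# to 20 digits) of the first message it contains.
log_dirs = pathlib.Path(tempfile.mkdtemp())
partition = log_dirs / "ShellComparisonTest-0"   # <topic>-<partition>
partition.mkdir()
for ext in ("log", "index"):
    (partition / f"{0:020d}.{ext}").touch()
segments = sorted(p.name for p in partition.iterdir())
# → ['00000000000000000000.index', '00000000000000000000.log']
```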

On Thu, Dec 17, 2015 at 3:32 PM, Heath Ivie <hi...@autoanything.com> wrote:

> Maybe someone can answer this question for me, because I cannot seem to 
> find the answer.
>
> What is the data store that Kafka uses when it writes the logs to disk?
>
> I thought I saw a reference to KahaDB, but I am not sure if that is 
> correct.
>
>
> Heath Ivie
> Solutions Architect
>
>
> Warning: This e-mail may contain information proprietary to 
> AutoAnything Inc. and is intended only for the use of the intended 
> recipient(s). If the reader of this message is not the intended 
> recipient(s), you have received this message in error and any review, 
> dissemination, distribution or copying of this message is strictly 
> prohibited. If you have received this message in error, please notify 
> the sender immediately and delete all copies.
>


Local Storage

2015-12-17 Thread Heath Ivie
Maybe someone can answer this question for me, because I cannot seem to find the answer.

What is the data store that Kafka uses when it writes the logs to disk?

I thought I saw a reference to KahaDB, but I am not sure if that is correct.


Heath Ivie
Solutions Architect


Warning: This e-mail may contain information proprietary to AutoAnything Inc. 
and is intended only for the use of the intended recipient(s). If the reader of 
this message is not the intended recipient(s), you have received this message 
in error and any review, dissemination, distribution or copying of this message 
is strictly prohibited. If you have received this message in error, please 
notify the sender immediately and delete all copies.