Xavier,
>
> We also generate sha1 and sha2. Do we have to use different tools to
> generate those too?
>
> Thanks,
>
> Jun
>
> On Wed, Dec 30, 2015 at 2:29 PM, Xavier Stevens wrote:
>
> > Hey Jun,
> >
> > I was expecting that you just used md5sum (GNU v
rather than GPG.
Cheers,
Xavier
On Wed, Dec 30, 2015 at 2:00 PM, Jun Rao wrote:
> Xavier,
>
> The md5 checksum is generated by running "gpg --print-md MD5". Is there a
> command that generates the output that you wanted?
>
> Thanks,
>
> Jun
>
> On Tue, Dec 29
The current md5 checksums of the release downloads all seem to be returning
in an atypical format. Anyone know what's going on there?
Example:
https://dist.apache.org/repos/dist/release/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz.md5
I see:
kafka_2.11-0.9.0.0.tgz: 08 4F B8 0C DC
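The mismatch above is one of output format: "gpg --print-md MD5" prints grouped, uppercase hex (as in the example), while md5sum prints a single lowercase hex string. A minimal Java sketch of the md5sum-style digest (a hypothetical helper for illustration, not the tooling used for the release artifacts):

```java
import java.security.MessageDigest;

public class Md5Hex {
    // Compute an md5sum-style digest: one flat, lowercase hex string,
    // unlike gpg --print-md MD5's grouped uppercase output.
    static String md5Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // prints 5d41402abc4b2a76b9719d911017c592
        System.out.println(md5Hex("hello".getBytes("UTF-8")));
    }
}
```

Running md5sum over the same bytes produces the same hex string, which is what most verification scripts expect to find in the .md5 file.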
I'm all for fewer dependencies, but I would personally vote for slf4j-api.
Just don't use any underlying implementations like logback or the
slf4j-log4j bridge. Then people can hook up whatever they want.
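A sketch of what "slf4j-api only" looks like in a build file (hypothetical Gradle coordinates and versions, shown purely to illustrate which artifacts get declared where):

```groovy
dependencies {
    // The library depends only on the logging API ...
    api "org.slf4j:slf4j-api:1.7.36"

    // ... and never ships a binding. Downstream users pick their own, e.g.:
    // runtimeOnly "ch.qos.logback:logback-classic:1.2.11"
    // runtimeOnly "org.slf4j:slf4j-log4j12:1.7.36"
}
```

This is the design point being argued: the binding choice stays with the application, not the library.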
On Mon, Feb 3, 2014 at 11:08 AM, Neha Narkhede wrote:
> >> Basically my preference would be java.
AutoCloseable would be nice for us as most of our code is using Java 7 at
this point.
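The appeal of AutoCloseable is try-with-resources. A hypothetical sketch (the class name and methods are illustrative, not a real client API):

```java
// Hypothetical producer whose close() is expressed via java.lang.AutoCloseable
// so callers can use try-with-resources (available since Java 7).
public class ByteProducer implements AutoCloseable {
    private boolean closed = false;

    public void send(byte[] payload) {
        if (closed) throw new IllegalStateException("producer is closed");
        // ... hand the bytes to the I/O layer ...
    }

    @Override
    public void close() {
        // flush buffered messages and release network resources here
        closed = true;
    }

    public boolean isClosed() { return closed; }

    public static void main(String[] args) {
        try (ByteProducer p = new ByteProducer()) {
            p.send(new byte[] {1, 2, 3});
        } // close() runs automatically, even if send() throws
    }
}
```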
I like Dropwizard's configuration mapping to POJOs via Jackson, but if you
wanted to stick with property maps I don't care enough to object.
If the producer only dealt with bytes, is there a way we could still d
+1 all of Clark's points above.
On Fri, Jan 24, 2014 at 3:30 PM, Clark Breyman wrote:
> Jay - Thanks for the call for comments. Here's some initial input:
>
> - Make message serialization a client responsibility (making all messages
> byte[]). Reflection-based loading makes it harder to use gen
I can't answer the rest, but the catchy name is from Gregor Samsa, a
character in Kafka's novella The Metamorphosis.
https://en.wikipedia.org/wiki/Gregor_Samsa#Gregor_Samsa
-Xavier
On Tue, Aug 27, 2013 at 6:51 AM, Jonathan Hodges wrote:
> First off, I want to say this is awesome! It h
+1 to making the API use bytes and push serialization into the client. This
is effectively what I am doing currently anyway. I implemented a generic
Encoder which just passes the bytes through.
I also like the idea of the client being written in pure Java. Interacting
with Scala code from Java isn
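The pass-through encoder described above is tiny. A hypothetical sketch of its shape (the real 0.8-era kafka.serializer.Encoder is a Scala trait and differs in detail):

```java
public class IdentityEncoder {
    // Hypothetical minimal shape of an Encoder; illustrative only.
    interface Encoder<T> {
        byte[] toBytes(T value);
    }

    // Pass-through: the client has already serialized the message,
    // so this just forwards the bytes unchanged.
    static final Encoder<byte[]> PASS_THROUGH = new Encoder<byte[]>() {
        @Override
        public byte[] toBytes(byte[] value) {
            return value;
        }
    };
}
```

With a bytes-only API, this adapter disappears entirely: serialization lives in application code, where it belongs.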
Usually these types of errors occur because you're not connecting to the
proper host:port. Double-check your configs, and make sure everything is
running and listening on the host:port you think it is.
Have you tried using the sync producer to work out your bugs? My guess is
the sync producer wo
You should bring up your Zookeeper instances first and then the Kafka
brokers.
On Tue, Apr 23, 2013 at 11:56 AM, Karl Kirch wrote:
> Now to make things even more interesting. I restarted 2 and now it sees
> all 3 nodes.
> I think I've got some sort of weirdness happening with how I'm bringing
>
Not quite in production yet, but we have payloads in the 30KB+ range. I
just added a max.message.size to the broker's server.properties.
-Xavier
On 1/29/13 8:57 AM, S Ahmed wrote:
Neha/Jay,
At linkedin, what is the largest payload size per message you guys have in
production? My app might ha
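The change Xavier describes above is a single broker override. A hypothetical server.properties fragment (property name as given in the thread; the value is illustrative):

```properties
# Raise the per-message size cap (value in bytes, illustrative).
# Payloads in this thread are 30KB+, so leave generous headroom.
max.message.size=1000000
```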