Gigabit Ethernet.
Thanks,
Neha
On Fri, Mar 29, 2013 at 4:44 PM, David Arthur wrote:
> Especially in light of replication (broker-broker communication), I'm
> wondering if all the brokers are in the same rack and what kind of
> networking interfaces are used (Gigabit ethernet, Fibre Channel, etc).
Especially in light of replication (broker-broker communication), I'm
wondering if all the brokers are in the same rack and what kind of
networking interfaces are used (Gigabit ethernet, Fibre Channel, etc).
On 3/29/13 6:53 PM, Jun Rao wrote:
We have multiple Kafka clusters, each has about 10
This means that somehow the broker is not sending the response to the
producer but is instead sending it to the wrong client. Are there any errors in the
broker log?
Thanks,
Jun
On Fri, Mar 29, 2013 at 3:35 PM, Bob Jervis wrote:
> I now have the following settings (in various configs):
>
> In our producer
We have multiple Kafka clusters, each with about 10 brokers right now. I'm not
sure about the network topology. What kind of info do you want to know?
Thanks,
Jun
On Fri, Mar 29, 2013 at 11:47 AM, David Arthur wrote:
> How many brokers are you (LinkedIn) running? What kind of network topology?
>
>
The Kafka log4j appender may not support the layout. Could you open a JIRA for
this?
Thanks,
Jun
On Fri, Mar 29, 2013 at 11:24 AM, Sining Ma wrote:
> Hi
>
>
>
> I am using Kafka 0.7.1 right now.
> I am using the following log4j properties file and trying to send some log
> information to the Kafka server.
BTW, here is how one guy did it. He rebuilds the upstream tarball after the
build so there is no discrepancy for the Debian packaging scripts.
https://github.com/wikimedia-incubator/kafka-debian
On Fri, Mar 29, 2013 at 3:41 PM, Manish Bhatt wrote:
> We are trying that right now for Debian
We are trying that right now for Debian and finding that the current
version makes it nearly impossible to follow packaging best practices.
1) The build process does not allow specification of a target directory,
and thus pollutes the upstream package.
2) The build process downloads packages during the build
I now have the following settings (in various configs):
In our producer configs:
producer.request.timeout.ms=60
This producer just hangs there for 10 minutes before timing out.
Here is the stack dump for that timeout:
java.net.SocketTimeoutException
        at sun.nio.ch.SocketAdaptor$Socke
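For reference, here is a minimal sketch of how such timeout settings are typically
wired into the 0.8 Java producer. The property names are the 0.8 producer ones as I
recall them, and the broker list, topic, and values are placeholders, not the
poster's actual configuration:

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerTimeoutSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker list and serializer, purely for illustration.
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");
        // This value is in milliseconds, so "60" would mean 60 ms, not 60 seconds.
        props.put("request.timeout.ms", "60000");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("test-topic", "hello"));
        producer.close();
    }
}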
We have EL6 packages.
On Fri, Mar 29, 2013 at 2:01 PM, mrevilgnome wrote:
> Has anyone gone through the effort of packaging Kafka for Ubuntu, Debian,
> or CentOS? I'm partially through the process for Ubuntu, and I figured I
> should ask. Thanks.
>
> --Matt
>
--
Jonathan Creasy | Sr.
Has anyone gone through the effort of packaging Kafka for Ubuntu, Debian,
or CentOS? I'm partially through the process for Ubuntu, and I figured I
should ask. Thanks.
--Matt
How many brokers are you (LinkedIn) running? What kind of network topology?
On 3/29/13 2:45 PM, Neha Narkhede wrote:
1. We never run ZooKeeper and a broker on the same hardware. Both need
significant memory to operate efficiently.
2. The 14-drive setup is just for Kafka. We have a separate disk for
1. We never run ZooKeeper and a broker on the same hardware. Both need
significant memory to operate efficiently.
2. The 14-drive setup is just for Kafka. We have a separate disk for the OS, AFAIK.
Thanks,
Neha
On Fri, Mar 29, 2013 at 11:37 AM, Ian Friedman wrote:
> Thanks Jun. Couple more questions
Thanks, Jun. A couple more questions:
1. Do you guys have dedicated hardware for ZooKeeper, or do a few machines run
both a ZK node and a broker? If so, do you keep the ZK and Kafka data
on separate volumes?
2. Is the 14-drive RAID setup just for Kafka data, with a separate drive
for the
Hi,
I am using Kafka 0.7.1 right now.
I am using the following log4j properties file and trying to send some log
information to the Kafka server.
log4j.rootLogger=INFO,file,stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
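As a rough sketch of how such a properties file typically continues when shipping
logs through the Kafka log4j appender: the appender name, topic, and property names
below are assumptions about the 0.7-era KafkaLog4jAppender, not the actual file from
this thread.

log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

# Hypothetical Kafka appender wiring; the appender would also need to be added
# to log4j.rootLogger, and the property names vary across 0.7.x releases.
log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
log4j.appender.KAFKA.Topic=log4j-events
log4j.appender.KAFKA.ZkConnect=localhost:2181
# Per Jun's reply above, a layout configured on this appender may not be honored.
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%m%n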
It's more or less the same. Our new server has 14 SATA disks, each 1 TB.
The disks also have better write latency due to a larger write cache.
Thanks,
Jun
On Fri, Mar 29, 2013 at 8:32 AM, Ian Friedman wrote:
> Hi all,
>
> I'm wondering how up to date the hardware specs listed on this page are:
Hi all,
I'm wondering how up to date the hardware specs listed on this page are:
https://cwiki.apache.org/confluence/display/KAFKA/Operations
We're evaluating hardware for a Kafka broker/ZK quorum buildout and looking for
some tips and/or sample configurations if anyone can help us out with s
Hi,
I've added an example program for using a SimpleConsumer with 0.8.0. It turns
out to be a little more complicated once you add broker failover. I'm not
100% thrilled with how I detect and recover, so if someone has a better way
of doing this, please let me (and this list) know.
https://cwiki.apach
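For context, here is a rough sketch of the leader-lookup step that such a
SimpleConsumer example needs. This is not the wiki code itself; the host, port,
topic, and client id are placeholders:

import java.util.Collections;

import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class LeaderLookupSketch {
    // Ask one known broker for topic metadata and return the metadata for the
    // requested partition; pm.leader() is the broker to fetch from, and a null
    // leader means no leader is currently available for that partition.
    public static PartitionMetadata findPartition(String host, int port,
                                                  String topic, int partition) {
        SimpleConsumer consumer =
                new SimpleConsumer(host, port, 100000, 64 * 1024, "leader-lookup");
        try {
            TopicMetadataRequest request =
                    new TopicMetadataRequest(Collections.singletonList(topic));
            TopicMetadataResponse response = consumer.send(request);
            for (TopicMetadata tm : response.topicsMetadata()) {
                for (PartitionMetadata pm : tm.partitionsMetadata()) {
                    if (pm.partitionId() == partition) {
                        return pm;
                    }
                }
            }
            return null;
        } finally {
            consumer.close();
        }
    }
}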
Hi Anand,
Can you describe your exact test setup? This bug has been quite
elusive so far, so it would be great to have a reproducible test case.
Also, are you using Kafka 0.7 or 0.8? I wonder if you can reproduce
this with Kafka 0.8 as well.
Thanks
Neha
On Fri, Mar 29, 2013 at 8:15 AM, anand nalya wrote:
Hi Jun,
I'm using the async Java producer. It works fine while the messages are in the
100s of thousands but starts failing for anything above a million. Each message
is around 2 KB.
I've tried both a single producer and multiple producers. The rate of this
error is much lower with a single producer than in the case
Chris,
The client id is used for registering JMX beans for monitoring. Because of the
restrictions on bean names, we limit the client id to alphanumeric characters
plus "-" and "_".
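To illustrate (this is not the actual validation code in Kafka, and the exact
pattern it enforces may differ slightly), the restriction amounts to something
like:

import java.util.regex.Pattern;

public final class ClientIdCheck {
    // Letters, digits, '-' and '_' only, per the restriction described above.
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9_-]*");

    public static boolean isLegal(String clientId) {
        return LEGAL.matcher(clientId).matches();
    }

    public static void main(String[] args) {
        System.out.println(isLegal("Web-topic_0"));   // true
        System.out.println(isLegal("Web:topic(0)"));  // false: ':' and parentheses are rejected
    }
}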
Thanks,
Jun
On Fri, Mar 29, 2013 at 5:54 AM, Chris Curtin wrote:
> Hi,
>
> Before I submit an enhancement JIRA, is the
This indicates that the messages sent to the broker are somehow corrupted.
Are you using a Java producer? How many producer instances do you have?
Thanks,
Jun
On Fri, Mar 29, 2013 at 2:46 AM, anand nalya wrote:
> Hi,
>
> I'm running Kafka in distributed mode with 2 nodes. It works fine with
Hi,
Before I submit an enhancement JIRA, is there a reason I can't use a
colon (:) or parentheses in a client name for SimpleConsumer?
I wanted to do something like 'Web:topic(partition)' so I know this is the
Web process for that topic and partition.
Thanks,
Chris
Hi,
I'm running Kafka in distributed mode with 2 nodes. It works fine with slow
ingestion rates, but when I increase the ingestion rate, both nodes
start giving the following error:
[2013-03-29 14:51:45,379] ERROR Closing socket for /192.168.145.183 because
of error (kafka.network.Processor)