I recently ran partition reassignment on some topics. This moved the replicas
of some partitions around the cluster. It was seamless. However, when
it came to purging old logs according to the topic's retention.ms property,
the replica partitions were not cleared. The leader partition,
however,
ah, never mind - I just noticed you do use a schema... Maybe you are
running into this? https://issues.apache.org/jira/browse/KAFKA-3055
On Thu, Sep 15, 2016 at 4:20 PM, Gwen Shapira wrote:
Most people use JSON without schema, so you should probably change
your configuration to:
key.converter.schemas.enable=false
value.converter.schemas.enable=false
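For what it's worth, the difference shows up in the shape of the records themselves. A minimal sketch of the two formats (the field names id/name and the schema name example.record are made up for illustration):

```python
import json

# With schemas.enable=true, Kafka Connect's JsonConverter expects every
# record to be an envelope carrying both the schema and the payload:
with_schema = {
    "schema": {
        "type": "struct",
        "fields": [
            {"field": "id", "type": "int32", "optional": False},
            {"field": "name", "type": "string", "optional": True},
        ],
        "optional": False,
        "name": "example.record",
    },
    "payload": {"id": 42, "name": "alice"},
}

# With schemas.enable=false, the same record is just the bare payload:
without_schema = {"id": 42, "name": "alice"}

# The payload inside the envelope is exactly the schemaless record.
assert json.loads(json.dumps(with_schema))["payload"] == without_schema
print(with_schema["payload"]["id"])  # -> 42
```

If your producer writes bare JSON like the second form while the converter has schemas enabled, the converter will reject the record, which matches the failure described below.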
On Thu, Sep 15, 2016 at 4:04 PM, Srikrishna Alla
wrote:
I am trying to use the JDBC connector to send records from Kafka 0.9 to a DB. I
am using JsonConverter to convert the records. My connector is failing when
it's checking the schema I am using. Please let me know what is the issue
with my JSON schema.
Configuration used:
Kafka Topic Creation issues (via Kafka-Manager with
auto.create.topics.enable = false)
Version: 0.9.0.1
We created a topic "web" via Kafka-Manager (our brokers are configured with
autocreate set to false) and then clicked on Generate Partitions, and
according to the tool, the topic has been created
Is there any guidance on a maximum number of different keys in a compacted
log? Such as total numbers, or "keys need to fit in memory, message data does
not", etc. Is it unreasonable to expect tens or hundreds of millions of
keys in a single topic to be handled gracefully?
Thanks,
Wes
1. You can create N topics
2. You control from producer where each message goes
3. You have consumer that fetches from M different topics:
https://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#subscribe(java.util.Collection)
Isn't this architecture flexible?
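The three points above can be sketched in a toy in-memory model (topic names like "orders" and "payments" are made up; the broker dict stands in for Kafka itself):

```python
from collections import defaultdict

# Toy stand-in for a broker: topic name -> list of messages.
broker = defaultdict(list)

def produce(topic, message):
    # Point 2: the producer decides which topic each message goes to.
    broker[topic].append(message)

def consume(topics):
    # Point 3: a consumer subscribed to M topics fetches from all of them,
    # mirroring KafkaConsumer.subscribe(Collection).
    return {t: list(broker[t]) for t in topics}

# Point 1: messages spread across N topics by an application rule.
produce("orders", "order-1")
produce("payments", "payment-1")
produce("orders", "order-2")

fetched = consume(["orders", "payments"])
print(fetched["orders"])    # -> ['order-1', 'order-2']
print(fetched["payments"])  # -> ['payment-1']
```

In real code the `produce`/`consume` calls would be KafkaProducer.send and a subscribed KafkaConsumer poll loop; the routing decision stays entirely on the producer side either way.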
@Umesh According to these examples it looks like the producer and consumer
specify bootstrap.servers. What is PLAINTEXT? Do I need to change something
here https://github.com/apache/kafka/blob/trunk/config/server.properties ?
Because when I specify port 9092 for both producer and consumer, or just either
No, that is not required when you use the new consumer API. You have to
specify bootstrap.servers,
which will use port 9092 (for PLAINTEXT, usually).
In the old consumer API you need the ZooKeeper server, which listens on 2181.
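To make the distinction concrete, a minimal sketch of the relevant client settings (localhost is just a placeholder host):

```properties
# New producer/consumer API (0.9+): point at the broker itself.
bootstrap.servers=localhost:9092

# Old (pre-0.9) consumer API only: point at ZooKeeper instead.
zookeeper.connect=localhost:2181
```

A new-API producer and consumer both use bootstrap.servers; neither talks to ZooKeeper directly.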
On Thu, 15 Sep 2016 at 17:03 kant kodali wrote:
I haven't changed anything from
https://github.com/apache/kafka/blob/trunk/config/server.properties
and it looks like it is pointing to zookeeper.
Question:
Does the producer client need to point to 9092 and the consumer to 2181? Is that
the standard? Why not have both point to the same thing?
Examine server.properties and see which port you're using in there
On Thu, Sep 15, 2016 at 3:52 PM, kant kodali wrote:
Which port should I use (9091, 9092, or 2181) to send messages through Kafka
when using a client library?
I start Kafka as follows:
sudo bin/zookeeper-server-start.sh config/zookeeper.properties
sudo ./bin/kafka-server-start.sh config/server.properties
and I don't see any process running on 9091 or
Hi,
*get* /controller
is the way to go, though there should have been children.
Cheers,
Francesco
On 13 September 2016 at 15:29, Francesco laTorre <
francesco.lato...@openbet.com> wrote:
> Hi,
>
> Anyone else finding this weird ?
>
> Cheers,
> Francesco
>
> On 12 September 2016 at 15:53,
Hi Ben,
Also, the article that you pointed out clearly shows the setup had multiple
partitions and multiple workers and so on. That's why at this point my biggest
question is: what is a fair setup for Kafka so it's comparable with NATS and
NSQ? And since you suspect the client library, I can give
Yup, I suspect the client library.
On Thu, Sep 15, 2016 at 10:28 AM, kant kodali wrote:
Hi Ben,
I can give that a try, but can you tell me the suspicion or motivation behind it?
In other words, do you think a single partition and single broker should be
comparable to the setup I had with NATS and NSQ, except you suspect the client
library or something?
Thanks,
Kant
On Thu, Sep 15, 2016
Hi Kant,
I was following the other thread. Can you try using a different
benchmarking client for a test?
https://grey-boundary.io/load-testing-apache-kafka-on-aws/
Ben
On Thursday, 15 September 2016, kant kodali wrote:
It sounds like you can implement the 'mapping service' component yourself
using Kafka.
Have all of your messages go to one Kafka topic. Have one consumer group
listening to this 'everything goes here' topic. This consumer group acts as
your mapping service. It looks at each message, and based on
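The idea can be sketched with plain data structures (message fields and routing rules here are hypothetical; in real code the input would come from a consumer poll loop and the output would be producer sends):

```python
# Toy sketch of the 'mapping service': every message lands on one topic,
# and one consumer group inspects each message and routes it onward.
everything_topic = [
    {"type": "order", "body": "order-1"},
    {"type": "alert", "body": "alert-1"},
    {"type": "order", "body": "order-2"},
]

# Hypothetical routing table: message type -> destination topic.
routes = {"order": "orders-topic", "alert": "alerts-topic"}
downstream = {topic: [] for topic in routes.values()}

for message in everything_topic:
    # The mapping service looks at each message and decides where it goes.
    downstream[routes[message["type"]]].append(message["body"])

print(downstream["orders-topic"])  # -> ['order-1', 'order-2']
```

Because the routing table is ordinary application state, it can be reconfigured dynamically, which is the property the AMQ-style setup needs.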
With Kafka, I tried it with 10 messages with a single broker and only one
partition; that looked instantaneous, ~5K messages/sec for a data size of 1KB.
I tried it with 1000 messages; that looked instantaneous as well, ~5K
messages/sec for a data size of 1KB. I tried it with 10K messages with single
I'd post to the mailing list again with a new subject and ask that.
On Thu, Sep 15, 2016 at 1:52 PM, kant kodali wrote:
I used node.js client libraries for all three, and yes, I want to make sure I am
comparing apples to apples, so I made it as equivalent as possible.
Again, the big question is: what is the right setup for Kafka to be comparable
with the others I mentioned in my previous email?
On Thu, Sep 15,
The issue is clearly that you're running out of resources, so I would add
more brokers and/or larger instances.
You're also using Node, which is not the best for performance. A compiled
language such as Java would give you the best performance.
Here's a case study that should help:
Hello,
We're considering migrating an AMQ-based platform to Kafka. However, our
application logic needs an AMQ feature called Dynamic Binding: on AMQ
one publishes messages to an Exchange, which can be dynamically configured to
deliver a copy of the message to several queues, based on
Hi,
I have a pipeline that publishes messages with a v0.8 client; the messages go
to a v0.10 broker first, then MirrorMaker consumes them and publishes them to
other v0.10 brokers. But I got the following message in the MM log:
java.lang.IllegalArgumentException: Invalid timestamp -1
at
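One workaround that may apply here (an assumption worth verifying against your broker version's documentation): have the 0.10 broker assign timestamps itself rather than trusting the producer-supplied one, which a 0.8 client leaves unset (-1). In broker (or per-topic) config:

```properties
# Hedged sketch: let the broker stamp messages on append instead of
# propagating the -1 timestamp sent by pre-0.10 clients.
log.message.timestamp.type=LogAppendTime
```

The default, CreateTime, keeps whatever timestamp the producer sent, which is what MirrorMaker then chokes on.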
Ramp the workload up gradually: start from 10 messages, increase to 100, then
1000, and so on. See if it slows down as the workload increases. If so, you
need more brokers and partitions to handle the workload.
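A minimal sketch of that ramp as a measurement harness (the send_batch body is a placeholder; swap in a real producer call when running against a broker):

```python
import time

def send_batch(n):
    # Stand-in for producing n messages of ~1KB each; replace the loop
    # body with a real producer send when testing against Kafka.
    payload = b"x" * 1024
    for _ in range(n):
        len(payload)  # no-op in this sketch

# Ramp the workload and record throughput at each step.
results = []
for n in [10, 100, 1000, 10000]:
    start = time.perf_counter()
    send_batch(n)
    elapsed = time.perf_counter() - start
    rate = n / elapsed if elapsed > 0 else float("inf")
    results.append((n, rate))

for n, rate in results:
    print(f"{n} messages: {rate:.0f} msg/sec")
```

If msg/sec drops sharply as n grows, the bottleneck is on the broker side and more partitions/brokers are the first thing to try.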
On Thu, Sep 15, 2016 at 12:42 PM, kant kodali wrote:
m4.xlarge
On Thu, Sep 15, 2016 12:33 AM, Ali Akhtar ali.rac...@gmail.com
wrote:
What's the instance size that you're using? With 300k messages your single
broker might not be able to handle it.
On Thu, Sep 15, 2016 at 12:30 PM, kant kodali wrote:
My goal is to test the throughput (#messages per second) given my setup and
with a data size of 1KB. if you guys already have some idea on these numbers
that would be helpful as well.
On Thu, Sep 15, 2016 12:24 AM, kant kodali kanth...@gmail.com
wrote:
172.* are all private IPs for my machines; I double-checked it. I have not
changed any default settings. I don't know how to use kafka-consumer.sh
or kafka-producer.sh because it looks like they want me to specify a group, and I
didn't create any consumer group because I am using a single producer and
Your code seems to be using the public ip of the servers. If all 3 machines
are in the same availability zone on AWS, try using the private ip, and
then they might communicate over the local network.
Did you change any default settings?
Do you get the same results if you run kafka-consumer.sh
They are hosted on AWS, and I don't think there are any network issues because I
tried testing other queuing systems with no issues. However, I am using a node.js
client with the following code. I am not sure if there are any errors or
anything I didn't set in the following code?
Thanks, yeah, that must be it. So topics don't get deleted if all messages
in them expire, right?
Good to know :)
On Thu, Sep 15, 2016 at 11:29 AM, Manikumar Reddy wrote:
It sounds like a network issue. Where are the 3 servers located / hosted?
On Thu, Sep 15, 2016 at 11:51 AM, kant kodali wrote:
Hi,
I have the following setup:
a single Kafka broker and ZooKeeper on Machine 1, a single Kafka producer on
Machine 2, and a single Kafka consumer on Machine 3.
When the producer client sends a message to the Kafka broker by pointing at the
ZooKeeper server, the consumer doesn't seem to get the message right
Looks like you have not changed the default data log directory. By default,
Kafka is configured to store the data logs in the /tmp/ folder, and /tmp gets
cleared on system reboots. Change the log.dirs config property to some other
directory.
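A minimal sketch of the change in server.properties (the path below is an arbitrary example; pick any persistent directory):

```properties
# Store data logs somewhere that survives reboots instead of /tmp.
log.dirs=/var/lib/kafka-logs
```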
On Thu, Sep 15, 2016 at 11:46 AM, Ali Akhtar
I've noticed that, on my own machine, if I start a kafka broker, then
create a topic, then I stop that server and restart it, the topic I created
is still kept.
However, on restarts, it looks like the topic is deleted.
It's also possible that the default retention policy of 24 hours causes the