n with 3 brokers I can write 100MB/s (using Java
clients).
>
> -Dave
>
> -Original Message-
> From: Dominik Safaric [mailto:dominiksafa...@gmail.com]
> Sent: Thursday, August 25, 2016 11:51 AM
> To: users@kafka.apache.org
> Subject: Re: Kafka Producer performance - 400GB of transfer on single instance taking > 72 hours?
I think what Dana is suggesting is that since Python isn't doing a good job of
utilising all the available CPU power, you could run multiple Python processes
to share the load: divide the mongodb collection into, say, 4 parts and process
each part with its own Python process. On the Kafka side, … Or use a …
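A minimal sketch of that approach, assuming a hypothetical `mydb.docs` MongoDB collection and a Kafka topic named `bulk-transfer` (both names made up for illustration). The range-splitting helper is pure stdlib; each worker needs `pymongo` and `kafka-python` installed:

```python
import json
import multiprocessing

def chunk_ranges(total, parts):
    """Split `total` documents into (skip, limit) ranges, one per process."""
    base, extra = divmod(total, parts)
    ranges, skip = [], 0
    for i in range(parts):
        limit = base + (1 if i < extra else 0)
        ranges.append((skip, limit))
        skip += limit
    return ranges

def produce_range(skip, limit):
    # Imports kept local so each worker builds its own (non-picklable)
    # client and producer. Topic/collection names are assumptions.
    from pymongo import MongoClient
    from kafka import KafkaProducer

    coll = MongoClient()["mydb"]["docs"]
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda d: json.dumps(d).encode(),
    )
    for doc in coll.find().skip(skip).limit(limit):
        doc.pop("_id", None)  # ObjectId is not JSON-serialisable
        producer.send("bulk-transfer", doc)
    producer.flush()

if __name__ == "__main__":
    total_docs = 1_000_000  # e.g. coll.estimated_document_count()
    procs = [multiprocessing.Process(target=produce_range, args=r)
             for r in chunk_ranges(total_docs, 4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

With 4 processes each worker gets its own CPU core and its own producer connection, which is the point of the suggestion above.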
Dear Dana,
> I would recommend
> other tools for bulk transfers.
What tools/languages would you rather recommend than Python?
I could for sure accomplish the same by using the native Java Kafka Producer
API, but should this really affect the performance, under the assumption that
the Ka…
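Before switching clients, it may also be worth checking the producer's batching settings, since the defaults favour latency over throughput. A hedged sketch of throughput-oriented kafka-python settings (the values are illustrative, not tuned for any particular cluster):

```python
# Throughput-oriented kafka-python producer settings (illustrative values).
# Larger batches and a small linger let the client fill requests before
# sending; compression trades CPU for network, so it may hurt if the
# producer process is already CPU-bound.
producer_config = {
    "bootstrap_servers": "localhost:9092",
    "acks": 1,                          # don't wait for full ISR acknowledgement
    "linger_ms": 50,                    # wait up to 50 ms to fill a batch
    "batch_size": 256 * 1024,           # 256 KiB batches (default is 16 KiB)
    "buffer_memory": 64 * 1024 * 1024,  # 64 MiB of client-side buffer
    "compression_type": "gzip",         # or None if the sender is CPU-bound
}
# Usage: producer = kafka.KafkaProducer(**producer_config)
```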
Python is generally restricted to a single CPU, and kafka-python will max out a
single CPU well before it maxes out a network card. I would recommend other
tools for bulk transfers. Otherwise, you may find that partitioning your data
set and running separate Python processes for each part will increase the
overall throughput.