@Robert Quinlivan: the producer is just the kafka-console-producer
shell script that ships in the kafka/bin directory (kafka/bin/windows
in my case). Nothing special.
I'll try messing with acks, because this problem is somewhat incidental
to what I'm actually trying to do, which is to see how big the log
directory grows.
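
What I'm planning to try first, for the record - flag names as I
remember them from the 0.10 console producer's --help output, so
double-check on your version:

kafka-console-producer.bat --broker-list localhost:9092 ^
    --topic big_ptns1_repl1_nozip --request-required-acks -1 --sync ^
    < F:\Users\me\Desktop\shakespear\complete_works_no_bare_lines.txt

--sync should at least make failures surface immediately rather than
being buffered.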

It's possible kafkacat or other producers would do a better job than
the console producer, but I'll try that on linux, as getting them
working on windows is more hassle than it's worth.
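
(If kafkacat builds easily on linux, piping the file in via stdin
should give one message per line - a sketch, assuming kafkacat's
default newline delimiter:

kafkacat -P -b localhost:9092 -t big_ptns1_repl1_nozip < complete_works_no_bare_lines.txt )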

thanks all

jan


On 18/04/2017, David Garcia <dav...@spiceworks.com> wrote:
> The “NewShinyProducer” is also deprecated.
>
> On 4/18/17, 5:41 PM, "David Garcia" <dav...@spiceworks.com> wrote:
>
>     The console producer in the 0.10.0.0 release uses the old producer,
>     which doesn’t have “backoff”…it’s really just for testing simple
>     producing:
>
>     object ConsoleProducer {
>
>       def main(args: Array[String]) {
>
>         try {
>             val config = new ProducerConfig(args)
>             val reader = Class.forName(config.readerClass)
>               .newInstance().asInstanceOf[MessageReader]
>             reader.init(System.in, getReaderProps(config))
>
>             val producer =
>               if (config.useOldProducer) {
>                 new OldProducer(getOldProducerProps(config))
>               } else {
>                 new NewShinyProducer(getNewProducerProps(config))
>               }
>
>
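>     (The snippet is truncated here in the original mail.) If you want
>     send failures to surface immediately rather than being buffered,
>     the usual trick with the new producer is a blocking send - a
>     minimal sketch against the 0.10 client API, not code from the
>     console producer itself:
>
>     import java.util.Properties
>     import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
>
>     val props = new Properties()
>     props.put("bootstrap.servers", "localhost:9092")
>     props.put("key.serializer",
>       "org.apache.kafka.common.serialization.StringSerializer")
>     props.put("value.serializer",
>       "org.apache.kafka.common.serialization.StringSerializer")
>     props.put("acks", "all")    // wait for the broker to acknowledge
>     props.put("retries", "10")  // retry transient failures instead of dropping
>
>     val producer = new KafkaProducer[String, String](props)
>     // send(...).get() blocks until the ack arrives, so a TimeoutException
>     // is thrown here instead of being logged asynchronously
>     producer.send(new ProducerRecord[String, String](
>       "big_ptns1_repl1_nozip", "a line")).get()
>     producer.close()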
>
>     On 4/18/17, 5:31 PM, "Robert Quinlivan" <rquinli...@signal.co> wrote:
>
>         I am curious how your producer is configured. The producer
>         maintains an internal buffer of messages to be sent over to the
>         broker. Is it possible you are terminating the producer code in
>         your test before the buffer is exhausted?
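>
>         If the test harness is your own code, flushing before exit
>         would rule that out - a minimal sketch using your topic name
>         (the config is illustrative, not from your setup):
>
>         import java.util.Properties
>         import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
>
>         val props = new Properties()
>         props.put("bootstrap.servers", "localhost:9092")
>         props.put("key.serializer",
>           "org.apache.kafka.common.serialization.StringSerializer")
>         props.put("value.serializer",
>           "org.apache.kafka.common.serialization.StringSerializer")
>
>         val producer = new KafkaProducer[String, String](props)
>         try {
>           scala.io.Source.stdin.getLines().foreach { line =>
>             producer.send(new ProducerRecord[String, String](
>               "big_ptns1_repl1_nozip", line))
>           }
>         } finally {
>           producer.flush()  // block until every buffered record is sent
>           producer.close()  // close() also flushes, but explicit is clearer
>         }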
>
>         On Tue, Apr 18, 2017 at 5:29 PM, jan <rtm4...@googlemail.com> wrote:
>
>         > Thanks to both of you. Some quick points:
>         >
>         > I'd expect there to be backpressure from the producer if the
>         > broker is busy, i.e. the broker would not respond to the console
>         > producer if it was too busy to accept more messages, and the
>         > producer would hang on the socket. Alternatively I'd hope the
>         > console producer would have the sense to back off and retry, but
>         > clearly(?) not.
>         > This behaviour is actually relevant to my old job so I need to
>         > know more.
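>         >
>         > (If the console producer does support retrying, I'd guess flags
>         > like these are the thing to try - names as I remember them from
>         > the 0.10 tool's --help output, so double-check:
>         >
>         > kafka-console-producer.bat --broker-list localhost:9092 ^
>         >     --topic big_ptns1_repl1_nozip ^
>         >     --message-send-max-retries 10 --retry-backoff-ms 500 ^
>         >     < F:\Users\me\Desktop\shakespear\complete_works_no_bare_lines.txt )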
>         >
>         > Perhaps the timeout mentioned in the error msg can just be upped?
>         >
>         > *Is* the claimed timeout relevant?
>         > > Batch containing 8 record(s) expired due to timeout while
>         > > requesting metadata from brokers for big_ptns1_repl1_nozip-0
>         >
>         > Why is the producer expiring records?
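>         >
>         > (E.g. can the request timeout just be raised? Something like
>         > this, if the flag does what its name suggests - again from
>         > --help, so treat as approximate:
>         >
>         > kafka-console-producer.bat --broker-list localhost:9092 ^
>         >     --topic big_ptns1_repl1_nozip --request-timeout-ms 120000 ^
>         >     < F:\Users\me\Desktop\shakespear\complete_works_no_bare_lines.txt )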
>         >
>         > But I'm surprised this happened, because my setup is one machine
>         > with everything running on it. No network. Also Kafka writes to
>         > the disk without an fsync (or its equivalent on windows), which
>         > means the data just gets cached in ram before being lazily
>         > written to disk, and I've got plenty of ram - 16GB vs the 5GB
>         > input file. Kafka adds its overhead so it grows to ~8GB, but
>         > still, it need not hit disk at all (and the file goes into the
>         > windows file cache, not java's heap).
>         > Maybe it is GC holding things up, but I dunno; GC pauses even of
>         > a second or two should not cause a socket failure, just delay the
>         > read, though I'm not an expert on this *at all*.
>         >
>         > I'll go over the answers tomorrow more carefully, but thanks
>         > anyway!
>         >
>         > cheers
>         >
>         > jan
>         >
>         > On 18/04/2017, Serega Sheypak <serega.shey...@gmail.com> wrote:
>         > >> err, isn't it supposed to? Isn't the loss of data a very
>         > >> serious error?
>         > > Kafka can't fix networking issues like latency, flapping
>         > > connections, unavailability or any other weird stuff. Kafka
>         > > promises to persist data once the data reaches Kafka. Getting
>         > > the data to Kafka reliably is your side's responsibility, and
>         > > according to the logs that delivery is what failed.
>         > >
>         > > ~2-3%, yes - the 0.02 below is a fraction, not a percentage.
>         > > You should check the broker logs to figure out what went
>         > > wrong. Everything happens on one machine, as far as I
>         > > understand. Maybe your brokers don't have enough memory and
>         > > they get stuck in GC and stop responding to the producer. The
>         > > async producer then fails to send data; that is why you observe
>         > > data loss on the consumer side.
>         > >
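>         > > (If it is GC, giving the broker more heap and turning on GC
>         > > logging would show it. The env vars below are read by the kafka
>         > > start scripts as far as I know; the sizes are just a guess for
>         > > a 16GB box:
>         > >
>         > > set KAFKA_HEAP_OPTS=-Xms4G -Xmx4G
>         > > set KAFKA_JVM_PERFORMANCE_OPTS=-verbose:gc -XX:+PrintGCDetails
>         > > kafka-server-start.bat config\server.properties )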
>         > >
>         > > 2017-04-18 23:32 GMT+02:00 jan <rtm4...@googlemail.com>:
>         > >
>         > >> Hi Serega,
>         > >>
>         > >> > the data didn't reach the broker. So why should it appear in
>         > >> > consumer?
>         > >>
>         > >> err, isn't it supposed to? Isn't the loss of data a very
>         > >> serious error?
>         > >>
>         > >> > loss rate is more or less similar [...] Not so bad.
>         > >>
>         > >> That made me laugh at least. Is kafka intended to be a
>         > >> reliable message delivery system, or is a 2% data loss
>         > >> officially acceptable?
>         > >>
>         > >> I've been reading the other threads, and one says windows is
>         > >> really not supported, and certainly not for production.
>         > >> Perhaps that's the root of it. Well, I'm hoping to try it on
>         > >> linux shortly, so I'll see if I can replicate the issue, but I
>         > >> would like to know whether it *should* work on windows.
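>         > >>
>         > >> (One quick record-level check, in case byte counts mislead, is
>         > >> comparing line counts on each side - the standard cmd idiom
>         > >> for counting lines:
>         > >>
>         > >> find /c /v "" complete_works_no_bare_lines.txt
>         > >> find /c /v "" single_all_shakespear_OUT.txt )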
>         > >>
>         > >> cheers
>         > >>
>         > >> jan
>         > >>
>         > >> On 18/04/2017, Serega Sheypak <serega.shey...@gmail.com> wrote:
>         > >> > Hi,
>         > >> >
>         > >> > [2017-04-17 18:14:05,868] ERROR Error when sending message
>         > >> > to topic big_ptns1_repl1_nozip with key: null, value: 55
>         > >> > bytes with error:
>         > >> > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>         > >> > org.apache.kafka.common.errors.TimeoutException: Batch
>         > >> > containing 8 record(s) expired due to timeout while
>         > >> > requesting metadata from brokers for big_ptns1_repl1_nozip-0
>         > >> >
>         > >> > the data didn't reach the broker. So why should it appear in
>         > >> > consumer?
>         > >> > loss rate is more or less similar in both runs: ~0.024
>         > >> > (130KB / 5.4MB) vs ~0.03 (150MB / 5GB). Not so bad.
>         > >> >
>         > >> >
>         > >> > 2017-04-18 21:46 GMT+02:00 jan <rtm4...@googlemail.com>:
>         > >> >
>         > >> >> Hi all, I'm something of a kafka n00b.
>         > >> >> I posted the following in the google newsgroup, haven't had
>         > >> >> a reply or even a single read, so I'll try here. My original
>         > >> >> msg, slightly edited, was:
>         > >> >>
>         > >> >> ----
>         > >> >>
>         > >> >> (windows 2K8R2 fully patched, 16GB ram, fairly modern dual
>         > >> >> core xeon server, latest version of java)
>         > >> >>
>         > >> >> I've spent several days trying to sort out unexpected
>         > >> >> behaviour involving kafka and the kafka console producer
>         > >> >> and consumer.
>         > >> >>
>         > >> >> If I set the console producer and console consumer to look
>         > >> >> at the same topic, then I can type lines into the producer
>         > >> >> window and see them appear in the consumer window, so it
>         > >> >> works.
>         > >> >>
>         > >> >> If I try to pipe in large amounts of data to the producer,
>         > >> >> some gets lost and the producer reports errors, e.g.:
>         > >> >>
>         > >> >> [2017-04-17 18:14:05,868] ERROR Error when sending message
>         > >> >> to topic big_ptns1_repl1_nozip with key: null, value: 55
>         > >> >> bytes with error:
>         > >> >> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>         > >> >> org.apache.kafka.common.errors.TimeoutException: Batch
>         > >> >> containing 8 record(s) expired due to timeout while
>         > >> >> requesting metadata from brokers for big_ptns1_repl1_nozip-0
>         > >> >>
>         > >> >> I'm using as input either a file of Shakespeare's complete
>         > >> >> works (about 5.4 meg of ascii), or a much larger file of the
>         > >> >> complete works replicated 900 times to make it about 5GB.
>         > >> >> Lines are ascii and short, and each line should become a
>         > >> >> single record when read in by the console producer. I need
>         > >> >> to do some benchmarking on time and space and this was my
>         > >> >> first try.
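>         > >> >>
>         > >> >> (For the timing side I may also try the bundled perf tool
>         > >> >> rather than the console producer - option names as I recall
>         > >> >> them from the 0.10 tool's help, so treat as approximate; the
>         > >> >> 55-byte record size matches the error log above:
>         > >> >>
>         > >> >> kafka-producer-perf-test.bat --topic big_ptns1_repl1_nozip ^
>         > >> >>     --num-records 1000000 --record-size 55 --throughput -1 ^
>         > >> >>     --producer-props bootstrap.servers=localhost:9092 acks=1 )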
>         > >> >>
>         > >> >> As mentioned, data gets lost. I presume it is expected that
>         > >> >> any data we pipe into the producer should arrive in the
>         > >> >> consumer, so if I do this in one windows console:
>         > >> >>
>         > >> >> kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic big_ptns1_repl1_nozip --zookeeper localhost:2181 > F:\Users\me\Desktop\shakespear\single_all_shakespear_OUT.txt
>         > >> >>
>         > >> >> and this in another:
>         > >> >>
>         > >> >> kafka-console-producer.bat --broker-list localhost:9092 --topic big_ptns1_repl1_nozip < F:\Users\me\Desktop\shakespear\complete_works_no_bare_lines.txt
>         > >> >>
>         > >> >> then the output file "single_all_shakespear_OUT.txt" should
>         > >> >> be identical to the input file
>         > >> >> "complete_works_no_bare_lines.txt", except it's not. For the
>         > >> >> complete works (about 5.4 meg uncompressed) I lost about
>         > >> >> 130K in the output.
>         > >> >> For the replicated shakespeare, which is about 5GB, I lost
>         > >> >> about 150 meg.
>         > >> >>
>         > >> >> This can't be right, surely? It's repeatable, but the
>         > >> >> errors seem to start at different places in the file each
>         > >> >> time.
>         > >> >>
>         > >> >> I've done this using all 3 versions of kafka in the 0.10.x.y
>         > >> >> branch and I get the same problem (the above commands were
>         > >> >> run against the 0.10.0.0 release, so they look a little
>         > >> >> obsolete, but I think they are right for that branch). It's
>         > >> >> cost me some days.
>         > >> >> So, am I making a mistake, and if so, what?
>         > >> >>
>         > >> >> thanks
>         > >> >>
>         > >> >> jan
>         > >> >>
>         > >> >
>         > >>
>         > >
>         >
>
>
>
>         --
>         Robert Quinlivan
>         Software Engineer, Signal
