Re: Unavailable partitions after upgrade to kafka 1.0.0

2018-04-23 Thread Manikumar
Yes, a rolling restart should be fine for 1.0 -> 1.0.1. We can add "unclean.leader.election.enable=true" to server.properties; this requires a broker restart to take effect.

On Tue, Apr 24, 2018 at 12:02 PM, Mika Linnanoja wrote:
> Morning, group.
>
> On Mon, Apr 23, 2018 at 11:19 AM, Mika Linnanoja
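
For reference, a minimal sketch of that change, assuming a stock server.properties layout (the service name below is a placeholder):

    # server.properties -- allow a replica that is not in the ISR to be
    # elected leader (trades durability for availability)
    unclean.leader.election.enable=true

    # apply with a rolling restart, one broker at a time, e.g.:
    #   systemctl restart kafka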

Re: Unavailable partitions after upgrade to kafka 1.0.0

2018-04-23 Thread Mika Linnanoja
Morning, group.

On Mon, Apr 23, 2018 at 11:19 AM, Mika Linnanoja wrote:
>
> If nothing else, let this incident of ours serve as a warning to do
> exactly as the book (upgrade guide) says, not sort of wing it. Thanks for
> fast replies, lively mailing list!
>
> Mika

So yeah, last night further 94

Re: Keeping track of ingest time of messages in pipeline (cluster 1-> mm -> cluster 2 -> ..)

2018-04-23 Thread Uddhav Arote
Any thoughts here?

On 2018/04/23 05:47:24, Uddhav Arote wrote:
> Hi,
>
> The V1 message format is
>
> 1. v1 (supported since 0.10.0)
> 2. Message => Crc MagicByte Attributes Timestamp Key Value
> 3. Crc => int32
> 4. MagicByte => int8
> 5. Attributes => int8
> 6.
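
A quick way to watch the v1 Timestamp field at each hop is the console consumer; a sketch, assuming a local broker and a hypothetical topic name:

    # prints CreateTime/LogAppendTime before each record; with
    # log.message.timestamp.type=LogAppendTime the broker stamps the
    # message at ingest, giving a per-cluster ingest time
    kafka-console-consumer.sh --bootstrap-server localhost:9092 \
      --topic pipeline-topic --from-beginning \
      --property print.timestamp=true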

Topic partitions logSize are way behind the offsets.

2018-04-23 Thread Kim Chew
The following is the output from "kafka-consumer-offset-checker.sh":

    Group           Topic      Pid  Offset  logSize  Lag     Owner
    secorV2_backup  auth-data  0    13433   2        -13431  secorV2_backup_ip-172-30-13-99_3448_243-0
    se
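
The offset checker is the deprecated tool; the same numbers (current offset, log-end offset, lag) can be cross-checked with its replacement. A sketch, with a placeholder broker address and the group name from the output above:

    # for groups committing offsets to Kafka; older ZooKeeper-based
    # groups take --zookeeper instead of --bootstrap-server
    kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
      --describe --group secorV2_backup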

having a problem with SSL and kafka

2018-04-23 Thread Philip Gerow
Hello, I am trying to add only SSL: not Kerberos, not SASL, no JAAS. I know my certs are good and installed… testing from a VM or from my laptop works:

    openssl s_client -debug -connect 172.21.149.190:9093 -tls1_2
    CONNECTED(00000003)
    write to 0x11b27f0 [0x11fdf73] (247 bytes => 247 (0xF7))
    0
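
For comparison, an SSL-only listener usually needs roughly the following on both sides; all paths, passwords, and filenames here are placeholders, not taken from the message:

    # server.properties (broker): SSL listener, no SASL/Kerberos/JAAS
    listeners=SSL://172.21.149.190:9093
    ssl.keystore.location=/etc/kafka/ssl/server.keystore.jks
    ssl.keystore.password=changeit
    ssl.key.password=changeit
    ssl.truststore.location=/etc/kafka/ssl/server.truststore.jks
    ssl.truststore.password=changeit

    # client.properties, for an end-to-end test:
    #   security.protocol=SSL
    #   ssl.truststore.location=/etc/kafka/ssl/client.truststore.jks
    #   ssl.truststore.password=changeit
    kafka-console-producer.sh --broker-list 172.21.149.190:9093 \
      --topic test --producer.config client.properties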

Re: Using Kafka CLI without specifying the URLs every single time?

2018-04-23 Thread Andrew Otto
Us too:
https://github.com/wikimedia/puppet/blob/production/modules/confluent/files/kafka/kafka.sh

This requires that the various kafka-* scripts are in your PATH. And then this gets rendered into /etc/profile.d to set env variables.
https://github.com/wikimedia/puppet/blob/production/modules/con
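
The same idea in miniature, assuming a fixed broker/ZooKeeper list (the variable names here are illustrative, not Wikimedia's):

    # /etc/profile.d/kafka.sh -- export the cluster addresses once
    export KAFKA_BOOTSTRAP_SERVERS="broker1:9092,broker2:9092"
    export KAFKA_ZOOKEEPER_URL="zk1:2181,zk2:2181/kafka"

    # any wrapper or alias can then fill the URLs in, e.g.:
    kafka-topics.sh --zookeeper "$KAFKA_ZOOKEEPER_URL" --list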

Kafka Log Compaction (LogCleaner) and retry

2018-04-23 Thread Rabin Banerjee
Hi All, I have a question about what happens when a message is retried while log compaction is enabled.

    retries            10
    retry.backoff.ms   1000

For example, the data looks like this:

    1 --> State1 (sent)
    1 --> State2 (failed, waiting for retry)
    1 --> State3 (sent)
    1 --> State2 (State2 is now sent as ret
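
For what it's worth, that interleaving can only happen when more than one request is in flight; a sketch of producer settings that keep retries from reordering records, assuming a 0.11+/1.x producer:

    # producer config
    retries=10
    retry.backoff.ms=1000
    # one in-flight request per connection: a retried batch cannot be
    # overtaken by a later batch, so compaction sees the true order
    max.in.flight.requests.per.connection=1
    # on 0.11+ brokers, idempotence additionally de-duplicates retries
    enable.idempotence=true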

Re: Unavailable partitions after upgrade to kafka 1.0.0

2018-04-23 Thread Mika Linnanoja
On Mon, Apr 23, 2018 at 10:51 AM, Brett Rann wrote:
>
> > Mostly updating version variable in our puppet config file (masterless)
> > and applying manually per instance. It works surprisingly well this way.
>
> Sure, we do the same, but with Chef. But we still follow that process. Lock
> in inter bro

Re: Unavailable partitions after upgrade to kafka 1.0.0

2018-04-23 Thread Mika Linnanoja
On Mon, Apr 23, 2018 at 10:29 AM, Manikumar wrote:
>
> What is the replication factor? Was unclean election enabled (it's enabled
> by default in 0.10.0.1)?

RF is 2 for regular topics (global var). Re: unclean elections, whatever the default is, that was on, so I think unclean election was enabled for
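
Both answers can be checked on a live cluster; a sketch, with a placeholder ZooKeeper address:

    # partitions that currently have no live leader
    kafka-topics.sh --zookeeper zk1:2181 --describe --unavailable-partitions

    # per-topic overrides, including unclean.leader.election.enable
    kafka-configs.sh --zookeeper zk1:2181 --describe --entity-type topics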

Re: Kafka behaviour on contolled shutdown of brokers

2018-04-23 Thread Brett Rann
i) No. ii) Yes you do; no, it won't. :)

You used the word "replace", not "add". Is the final state of your cluster 6 nodes, or 12? If you're replacing, you might want to consider just replacing one at a time to avoid having to do reassignments, as sketched below:

a) stop 1 broker, let's say broker "1".
b) start up the
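
A rough shell outline of that one-at-a-time swap, assuming systemd units and a config-managed replacement host (all names are placeholders):

    # on the old host: controlled shutdown of broker "1"
    systemctl stop kafka

    # on the replacement host: reuse the same broker.id so no partition
    # reassignment is needed; the broker re-replicates its partitions
    # and rejoins the ISR on its own
    grep '^broker.id' /etc/kafka/server.properties   # expect: broker.id=1
    systemctl start kafka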

Re: Unavailable partitions after upgrade to kafka 1.0.0

2018-04-23 Thread Brett Rann
> Mostly updating version variable in our puppet config file (masterless)
> and applying manually per instance. It works surprisingly well this way.

Sure, we do the same, but with Chef. But we still follow that process: lock in the inter-broker protocol and log message format to the existing version first (see the sketch below), then upgrade 1
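
A sketch of that lock-in for a 0.10.0.1 -> 1.0.x rolling upgrade, per the upgrade guide (versions assumed from this thread):

    # server.properties, set on every broker BEFORE rolling new binaries:
    inter.broker.protocol.version=0.10.0
    log.message.format.version=0.10.0
    # 1) roll all brokers onto the new binaries with these pinned
    # 2) bump inter.broker.protocol.version=1.0, roll again
    # 3) bump log.message.format.version=1.0, roll once more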

Re: Kafka rebalancing behavior on broker failures

2018-04-23 Thread Brett Rann
Partitions are never automatically moved. They are assigned to broker(s) and stay that way unless reassignments are triggered by external tools (leadership can move automatically, though, if RF > 1). There's more info on partitions and moving partitions at these two links: https://kafka.apache.org/d
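
The external tool in question is kafka-reassign-partitions.sh; a minimal sketch (topic, partition, and broker ids are invented for illustration):

    # reassign.json:
    #   {"version":1,"partitions":[
    #     {"topic":"events","partition":0,"replicas":[4,5]}]}
    kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --reassignment-json-file reassign.json --execute

    # poll until every move reports completion
    kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --reassignment-json-file reassign.json --verify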

Re: Unavailable partitions after upgrade to kafka 1.0.0

2018-04-23 Thread Mika Linnanoja
Hi,

On Mon, Apr 23, 2018 at 10:25 AM, Brett Rann wrote:
> Firstly, 1.0.1 is out and I'd strongly advise you to use that as the
> upgrade path over 1.0.0 if you can because it contains a lot of bugfixes.
> Some critical.

Yeah, it would've just meant starting the whole process from scratch in a

Re: Unavailable partitions after upgrade to kafka 1.0.0

2018-04-23 Thread Manikumar
Hi,

Before Kafka 1.1.0, if unclean leader election is disabled and there are no ISRs, the leader is set to -1 and the ISR will be empty. During an upgrade, if you have single-replica partitions, or if all replicas go out of the ISR, then we get into this situation. From Kafka 0.11.0.0, unclean lead
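
When only a few topics are stuck, the override can also be applied per topic rather than cluster-wide; a sketch with placeholder ZooKeeper and topic names:

    # topic-level override of the cluster default
    kafka-configs.sh --zookeeper zk1:2181 --alter \
      --entity-type topics --entity-name my-topic \
      --add-config unclean.leader.election.enable=true
    # note: per the message above, brokers before 1.1.0 may not act on
    # this change until a restart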

Re: Unavailable partitions after upgrade to kafka 1.0.0

2018-04-23 Thread Brett Rann
Firstly, 1.0.1 is out and I'd strongly advise you to use that as the upgrade path over 1.0.0 if you can, because it contains a lot of bugfixes, some critical. With unclean leader elections enabled, it should have resolved itself when the affected broker came back online and all partitions were available. So

UNKNOWN_PRODUCER_ID When using apache kafka streams (scala)

2018-04-23 Thread Nitay Kufert
Hey, I described the problem I am having with Kafka Streams 1.1.0 on StackOverflow, so I hope it's fine to cross-reference: https://stackoverflow.com/questions/49968123/unknown-producer-id-when-using-apache-kafka-streams-scala -- Nitay Kufert, Backend Developer, ironSource

Re: Unavailable partitions after upgrade to kafka 1.0.0

2018-04-23 Thread Mika Linnanoja
On Mon, Apr 23, 2018 at 9:59 AM, Enrique Medina Montenegro <e.medin...@gmail.com> wrote:
> What type of storage do you have for your setup?

Ah, the most important detail, promptly forgotten! 5 TB EBS GP2 (regular SSD) volumes as the kafka data directory, 20 GB EBS GP2 as root, per instance

Re: Unavailable partitions after upgrade to kafka 1.0.0

2018-04-23 Thread Enrique Medina Montenegro
What type of storage do you have for your setup?

On April 23, 2018 at 8:04:46 AM, Mika Linnanoja wrote:
> Hello,
>
> Last week I upgraded one relatively large kafka (EC2, 10 brokers, ~30 TB
> data, 100-300 Mbps in/out per instance) 0.10.0.1 cluster to 1.0, and saw
> some issues. Out of ~100 t