kafka_2.10-0.8.1.1:: Need help regarding a split configuration situation between kafka brokers and topic metadata stored in zookeeper

2018-06-06 Thread Jagbir Hooda
Hi there,

Wondering if anyone has run into an unusual situation where ZooKeeper has
the topic metadata, but the broker doesn't have the corresponding log
files.

Below is what we noticed in zookeeper:

--
kafka@kafka-3:~$ /opt/kafka/kafka_2.10-0.8.1.1/bin/zookeeper-shell.sh
localhost:2181 get /brokers/topics/T_60036/partitions/0/state
Connecting to localhost:2181

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
{"controller_epoch":6,"leader":1,"version":1,"leader_epoch":0,"isr":[1,2]}
cZxid = 0x80013308c
ctime = Wed Jun 06 04:55:37 UTC 2018
mZxid = 0x80013308c
mtime = Wed Jun 06 04:55:37 UTC 2018
pZxid = 0x80013308c
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 74
numChildren = 0

kafka@kafka-3:~$ /opt/kafka/kafka_2.10-0.8.1.1/bin/zookeeper-shell.sh
localhost:2181 get /config/topics/T_60036
Connecting to localhost:2181

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
{"version":1,"config":{}}
cZxid = 0x800132992
ctime = Wed Jun 06 04:55:13 UTC 2018
mZxid = 0x800132992
mtime = Wed Jun 06 04:55:13 UTC 2018
pZxid = 0x800132992
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 25
numChildren = 0
==

But there are no log files for this topic:

--
kafka@kafka-3:~$ ls -l /var/kafka/topics/T_60036*
ls: cannot access /var/kafka/topics/T_60036*: No such file or directory
==

Since this version of Kafka has no topic deletion utility, will it be OK to
delete the ZooKeeper entries ("/config/topics/T_60036",
"/brokers/topics/T_60036") without restarting or jeopardizing the cluster?
Right now our producer keeps getting a
"kafka.common.FailedToSendMessageException" because of this split
configuration.
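
Concretely, the manual cleanup I have in mind looks roughly like this (an
untested sketch; since a plain "delete" only removes empty znodes, it works
bottom-up from the leaf nodes shown above):

--
# untested sketch: remove the stale topic znodes bottom-up with the bundled shell
/opt/kafka/kafka_2.10-0.8.1.1/bin/zookeeper-shell.sh localhost:2181 delete /brokers/topics/T_60036/partitions/0/state
/opt/kafka/kafka_2.10-0.8.1.1/bin/zookeeper-shell.sh localhost:2181 delete /brokers/topics/T_60036/partitions/0
/opt/kafka/kafka_2.10-0.8.1.1/bin/zookeeper-shell.sh localhost:2181 delete /brokers/topics/T_60036/partitions
/opt/kafka/kafka_2.10-0.8.1.1/bin/zookeeper-shell.sh localhost:2181 delete /brokers/topics/T_60036
/opt/kafka/kafka_2.10-0.8.1.1/bin/zookeeper-shell.sh localhost:2181 delete /config/topics/T_60036
==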

I would really appreciate it if you could share your insights.

Jsh

Notes:
--
Version: kafka_2.10-0.8.1.1
Cluster Configuration: 4 kafka brokers + 4 zookeeper
Topic Partition: 1
Topic Replica: 1
--


kafka Group coordinator error on remote server

2018-06-06 Thread Yao Chen
Hi,
 My code works fine with a local Kafka (2.11 build), but when running the code 
against Kafka on a remote server (managed with Cloudera), the following error 
message occurs:
Group coordinator cahive-master02:9092 (id: 2147483391 rack: null) is unavailable or invalid, will attempt rediscovery


I tried setting advertised.hostname=localhost, which did not help; instead another error occurred:



[Producer clientId=producer-1] Error while fetching metadata with correlation id 362 : {ciovInput_v1=LEADER_NOT_AVAILABLE}
[Producer clientId=producer-1] Error while fetching metadata with correlation id 363 : {ciovInput_v1=LEADER_NOT_AVAILABLE}
[Producer clientId=producer-1] Error while fetching metadata with correlation id 364 : {ciovInput_v1=LEADER_NOT_AVAILABLE}
[Producer clientId=producer-1] Error while fetching metadata with correlation id 365 : {ciovInput_v1=LEADER_NOT_AVAILABLE} ...
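
For reference, the broker-side change I am considering next is roughly the
following (just a sketch with assumed values; cahive-master02 is only the
hostname from the error above, and I am not sure it is the right externally
resolvable name on this Cloudera-managed cluster):

# sketch, assumed values: advertise an externally resolvable name to clients
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://cahive-master02:9092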

Thanks

FINAL REMINDER: Apache EU Roadshow 2018 in Berlin next week!

2018-06-06 Thread sharan

Hello Apache Supporters and Enthusiasts

This is a final reminder that our Apache EU Roadshow will be held in 
Berlin next week on 13th and 14th June 2018. We will have 28 different 
sessions running over 2 days that cover some great topics. So if you are 
interested in Microservices, Internet of Things (IoT), Cloud, Apache 
Tomcat or Apache Http Server then we have something for you.


https://foss-backstage.de/sessions/apache-roadshow

We will be co-located with FOSS Backstage, so if you are interested in 
topics such as incubator, the Apache Way, open source governance, legal, 
trademarks or simply open source communities then there will be 
something there for you too.  You can attend any of the talks, presentations 
and workshops from the Apache EU Roadshow or FOSS Backstage.


You can find details of the combined Apache EU Roadshow and FOSS 
Backstage conference schedule below:


https://foss-backstage.de/schedule?day=2018-06-13

Ticket prices go up on 8th June 2018 and we have a last minute discount 
code that anyone can use before the deadline:


15% discount code: ASF15_discount
valid until June 7, 23:55 CET

You can register at the following link:

https://foss-backstage.de/tickets

Our Apache booth and lounge will be open from 11th - 14th June for 
meetups, hacking or to simply relax between sessions. And we will be 
posting regular updates on social media throughout next week so please 
follow us on Twitter @ApacheCon


Thank you for your continued support and we look forward to seeing you 
in Berlin!


Thanks
Sharan Foga, VP Apache Community Development

http://apachecon.com/

PLEASE NOTE: You are receiving this message because you are subscribed 
to a user@ or dev@ list of one or more Apache Software Foundation projects.





Re: FINAL REMINDER: Apache EU Roadshow 2018 in Berlin next week!

2018-06-06 Thread NB Bisht
made my ticket free

On Thu, Jun 7, 2018 at 12:27 AM,  wrote:

> Hello Apache Supporters and Enthusiasts
>
> This is a final reminder that our Apache EU Roadshow will be held in
> Berlin next week on 13th and 14th June 2018. We will have 28 different
> sessions running over 2 days that cover some great topics. So if you are
> interested in Microservices, Internet of Things (IoT), Cloud, Apache Tomcat
> or Apache Http Server then we have something for you.
>
> https://foss-backstage.de/sessions/apache-roadshow
>
> We will be co-located with FOSS Backstage, so if you are interested in
> topics such as incubator, the Apache Way, open source governance, legal,
> trademarks or simply open source communities then there will be something
> there for you too.  You can attend any of talks, presentations and
> workshops from the Apache EU Roadshow or FOSS Backstage.
>
> You can find details of the combined Apache EU Roadshow and FOSS Backstage
> conference schedule below:
>
> https://foss-backstage.de/schedule?day=2018-06-13
>
> Ticket prices go up on 8th June 2018 and we have a last minute discount
> code that anyone can use before the deadline:
>
> 15% discount code: ASF15_discount
> valid until June 7, 23:55 CET
>
> You can register at the following link:
>
> https://foss-backstage.de/tickets
>
> Our Apache booth and lounge will be open from 11th - 14th June for
> meetups, hacking or to simply relax between sessions. And we will be
> posting regular updates on social media throughout next week so please
> follow us on Twitter @ApacheCon
>
> Thank you for your continued support and we look forward to seeing you in
> Berlin!
>
> Thanks
> Sharan Foga, VP Apache Community Development
>
> http://apachecon.com/
>
> PLEASE NOTE: You are receiving this message because you are subscribed to
> a user@ or dev@ list of one or more Apache Software Foundation projects.
>
>
>


NetworkException: The server disconnected before a response was received.

2018-06-06 Thread Kang Minwoo
Hello, Users

Recently I have been using Kafka and have been hitting a NetworkException.
Here is the exception:

---

org.apache.kafka.common.errors.NetworkException: The server disconnected before 
a response was received.

---

The producer encountered this exception, but the broker does not show any unusual error logs.
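
One thing I am considering is making the producer more tolerant of transient
disconnects, along these lines (a sketch with assumed values, not what we
currently run):

---
# assumed values, just a sketch: allow the client to retry after a disconnect
retries=5
retry.backoff.ms=500
request.timeout.ms=30000
---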

Can you give me any tips?

Best regards,
Minwoo Kang


cleanup.policy - doesn't accept compact,delete

2018-06-06 Thread Jayaraman, AshokKumar (CCI-Atlanta-CON)
Hi,

We are on Kafka version 1.0.0.  Per the new feature below, a topic can allow 
both compact and delete.  I tried several combinations, but they all fail to 
accept any value that is not either compact or delete.  Is this feature valid 
in releases since 0.10.2?  If this feature is not available, how can we clean 
up the growing compacted topic?

https://issues.apache.org/jira/browse/KAFKA-4015

$ ./kafka-configs.sh --zookeeper <>:2181 --alter --entity-type topics 
--entity-name stream_output --add-config cleanup.policy=compact,delete
Error while executing config command requirement failed: Invalid entity config: 
all configs to be added must be in the format "key=val".
java.lang.IllegalArgumentException: requirement failed: Invalid entity config: 
all configs to be added must be in the format "key=val".
at scala.Predef$.require(Predef.scala:233)
at kafka.admin.ConfigCommand$.parseConfigsToBeAdded(ConfigCommand.scala:128)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:78)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:65)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)


$ ./kafka-configs.sh --zookeeper <>:2181 --alter --entity-type topics 
--entity-name ash_stream_output --add-config cleanup.policy=compact_delete
Error while executing config command Invalid value compact_delete for 
configuration cleanup.policy: String must be one of: compact, delete
org.apache.kafka.common.config.ConfigException: Invalid value compact_delete 
for configuration cleanup.policy: String must be one of: compact, delete
at org.apache.kafka.common.config.ConfigDef$ValidString.ensureValid(ConfigDef.java:851)
at org.apache.kafka.common.config.ConfigDef$ValidList.ensureValid(ConfigDef.java:827)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:427)
at kafka.log.LogConfig$.validate(LogConfig.scala:331)
at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:524)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:90)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:65)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)


$ ./kafka-configs.sh --zookeeper <>:2181 --alter --entity-type topics 
--entity-name ash_stream_output --add-config cleanup.policy=compact_and_delete
Error while executing config command Invalid value compact_delete for 
configuration cleanup.policy: String must be one of: compact, delete
org.apache.kafka.common.config.ConfigException: Invalid value compact_delete 
for configuration cleanup.policy: String must be one of: compact, delete
at org.apache.kafka.common.config.ConfigDef$ValidString.ensureValid(ConfigDef.java:851)
at org.apache.kafka.common.config.ConfigDef$ValidList.ensureValid(ConfigDef.java:827)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:427)
at kafka.log.LogConfig$.validate(LogConfig.scala:331)
at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:524)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:90)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:65)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)


Thanks & Regards,
Ashok



Re: cleanup.policy - doesn't accept compact,delete

2018-06-06 Thread Manikumar
As described in the usage description, to group values that contain commas,
we need to use square brackets.

ex: --add-config cleanup.policy=[compact,delete]
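
Applied to the command from the original message, it would look roughly like
this (a sketch; the ZooKeeper host is elided as in the original, and the value
is quoted so the shell does not treat the brackets as a glob pattern):

$ ./kafka-configs.sh --zookeeper <>:2181 --alter --entity-type topics \
  --entity-name stream_output --add-config 'cleanup.policy=[compact,delete]'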

On Thu, Jun 7, 2018 at 8:49 AM, Jayaraman, AshokKumar (CCI-Atlanta-CON) <
ashokkumar.jayara...@cox.com> wrote:

> Hi,
>
> We are on Kafka version 1.0.0.  Per the below new feature, a topic can
> allow both compact and delete.  I tried all the combinations, but they all
> fail to accept values that are not either compact OR delete.   Is this
> feature valid in updated releases, since 0.10.2?If this is not a
> feature available, how to cleanup the growing compacted topic scenario?
>
> https://issues.apache.org/jira/browse/KAFKA-4015
>
> $ ./kafka-configs.sh --zookeeper <>:2181--alter --entity-type
> topics --entity-name stream_output --add-config
> cleanup.policy=compact,delete
> Error while executing config command requirement failed: Invalid entity
> config: all configs to be added must be in the format "key=val".
> java.lang.IllegalArgumentException: requirement failed: Invalid entity
> config: all configs to be added must be in the format "key=val".
> at scala.Predef$.require(Predef.scala:233)
> at kafka.admin.ConfigCommand$.parseConfigsToBeAdded(
> ConfigCommand.scala:128)
> at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:78)
> at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:65)
> at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
>
>
> $ ./kafka-configs.sh --zookeeper <>:2181 --alter --entity-type
> topics --entity-name ash_stream_output --add-config
> cleanup.policy=compact_delete
> Error while executing config command Invalid value compact_delete for
> configuration cleanup.policy: String must be one of: compact, delete
> org.apache.kafka.common.config.ConfigException: Invalid value
> compact_delete for configuration cleanup.policy: String must be one of:
> compact, delete
> at org.apache.kafka.common.config.ConfigDef$ValidString.
> ensureValid(ConfigDef.java:851)
> at org.apache.kafka.common.config.ConfigDef$ValidList.
> ensureValid(ConfigDef.java:827)
> at org.apache.kafka.common.config.ConfigDef.parse(
> ConfigDef.java:427)
> at kafka.log.LogConfig$.validate(LogConfig.scala:331)
> at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:524)
> at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:90)
> at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:65)
> at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
>
>
> $ ./kafka-configs.sh --zookeeper <>:2181 --alter --entity-type
> topics --entity-name ash_stream_output --add-config
> cleanup.policy=compact_and_delete
> Error while executing config command Invalid value compact_delete for
> configuration cleanup.policy: String must be one of: compact, delete
> org.apache.kafka.common.config.ConfigException: Invalid value
> compact_delete for configuration cleanup.policy: String must be one of:
> compact, delete
> at org.apache.kafka.common.config.ConfigDef$ValidString.
> ensureValid(ConfigDef.java:851)
> at org.apache.kafka.common.config.ConfigDef$ValidList.
> ensureValid(ConfigDef.java:827)
> at org.apache.kafka.common.config.ConfigDef.parse(
> ConfigDef.java:427)
> at kafka.log.LogConfig$.validate(LogConfig.scala:331)
> at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:524)
> at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:90)
> at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:65)
> at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
>
>
> Thanks & Regards,
> Ashok
>
>