Jenkins build is back to normal : kafka-trunk-jdk8 #1130

2016-12-27 Thread Apache Jenkins Server
See 



[jira] [Created] (KAFKA-4571) Consumer fails to retrieve messages if started before producer

2016-12-27 Thread Sergiu Hlihor (JIRA)
Sergiu Hlihor created KAFKA-4571:


 Summary: Consumer fails to retrieve messages if started before 
producer
 Key: KAFKA-4571
 URL: https://issues.apache.org/jira/browse/KAFKA-4571
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.10.1.1
 Environment: Ubuntu Desktop 16.04 LTS, Oracle Java 8 1.8.0_101, Core 
i7 4770K
Reporter: Sergiu Hlihor


In a configuration where the topic was never created before, starting the consumer 
before the producer leads to no messages being consumed (KafkaConsumer.poll() 
always returns a ConsumerRecords instance with a count of 0).

Starting another consumer in the same group on the same topic after messages were 
produced still does not consume them. Starting another consumer with a different 
groupId appears to work.

In the consumer logs I see: WARN  NetworkClient - Error while fetching metadata 
with correlation id 1 : {measurements021=LEADER_NOT_AVAILABLE} 

Both the producer and the consumer were launched from inside the same JVM. 

The configuration used is the standard one found in the Kafka distribution. If this 
is a configuration issue, please suggest any change I should make.
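
For reference, a minimal sketch of the startup order described above (the topic name is taken from the warning above; the broker address, group id, and serializers are placeholder assumptions, not the exact test code):

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ConsumerBeforeProducer {
    public static void main(String[] args) throws Exception {
        String topic = "measurements021"; // assumed from the LEADER_NOT_AVAILABLE warning above

        Properties cp = new Properties();
        cp.put("bootstrap.servers", "localhost:9092");
        cp.put("group.id", "test-group");
        cp.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        cp.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Consumer is started first, against a topic that does not exist yet.
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);
        consumer.subscribe(Collections.singletonList(topic));

        Properties pp = new Properties();
        pp.put("bootstrap.servers", "localhost:9092");
        pp.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        pp.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Producer is started afterwards; topic auto-creation happens here.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {
            producer.send(new ProducerRecord<>(topic, "key", "value")).get();
        }

        // Reported behaviour: poll() keeps returning 0 records for this group.
        for (int i = 0; i < 10; i++) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            System.out.println("polled " + records.count() + " records");
        }
        consumer.close();
    }
}
{code}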

Thank you





[jira] [Resolved] (KAFKA-4092) retention.bytes should not be allowed to be less than segment.bytes

2016-12-27 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4092.
--
   Resolution: Fixed
 Reviewer: Ewen Cheslack-Postava
Fix Version/s: 0.10.2.0

> retention.bytes should not be allowed to be less than segment.bytes
> ---
>
> Key: KAFKA-4092
> URL: https://issues.apache.org/jira/browse/KAFKA-4092
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Minor
> Fix For: 0.10.2.0
>
>
> Right now retention.bytes can be as small as the user wants, but it isn't 
> actually enforced for the active segment if retention.bytes is smaller than 
> segment.bytes. We shouldn't allow retention.bytes to be less than 
> segment.bytes, and should validate that at startup.
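
A minimal sketch of the kind of startup check being asked for here (plain Java over a Properties object; the key names match the topic-level configs, but the wiring is illustrative and not the actual broker fix):

{code:java}
import java.util.Properties;

public class RetentionConfigCheck {
    // Illustrative defaults mirroring the broker defaults for these settings.
    static final long DEFAULT_SEGMENT_BYTES = 1024L * 1024L * 1024L; // 1 GiB
    static final long DEFAULT_RETENTION_BYTES = -1L;                 // -1 = unlimited

    static void validate(Properties props) {
        long segmentBytes = Long.parseLong(
                props.getProperty("segment.bytes", String.valueOf(DEFAULT_SEGMENT_BYTES)));
        long retentionBytes = Long.parseLong(
                props.getProperty("retention.bytes", String.valueOf(DEFAULT_RETENTION_BYTES)));

        // -1 means "no size-based retention", so only non-negative values are checked.
        if (retentionBytes >= 0 && retentionBytes < segmentBytes) {
            throw new IllegalArgumentException(
                    "retention.bytes (" + retentionBytes + ") must not be less than segment.bytes ("
                            + segmentBytes + "), otherwise the active segment is never eligible for deletion");
        }
    }

    public static void main(String[] args) {
        Properties topicConfig = new Properties();
        topicConfig.setProperty("segment.bytes", "1073741824");
        topicConfig.setProperty("retention.bytes", "536870912"); // smaller than segment.bytes
        try {
            validate(topicConfig);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
{code}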





[jira] [Commented] (KAFKA-4092) retention.bytes should not be allowed to be less than segment.bytes

2016-12-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15781163#comment-15781163
 ] 

ASF GitHub Bot commented on KAFKA-4092:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1796


> retention.bytes should not be allowed to be less than segment.bytes
> ---
>
> Key: KAFKA-4092
> URL: https://issues.apache.org/jira/browse/KAFKA-4092
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Minor
>
> Right now retention.bytes can be as small as the user wants, but it isn't 
> actually enforced for the active segment if retention.bytes is smaller than 
> segment.bytes. We shouldn't allow retention.bytes to be less than 
> segment.bytes, and should validate that at startup.





[GitHub] kafka pull request #1796: KAFKA-4092: retention.bytes should not be allowed ...

2016-12-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1796




[jira] [Commented] (KAFKA-3601) fail fast when newer client connecting to older server

2016-12-27 Thread Chris Pennello (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15780762#comment-15780762
 ] 

Chris Pennello commented on KAFKA-3601:
---

KAFKA-3600 is closed via https://github.com/apache/kafka/pull/1251, but I 
didn't see the particular poll call with {{Long.MAX_VALUE}} timeout removed in 
that diff.  Perhaps it got removed elsewhere, or perhaps it's still present.

> fail fast when newer client connecting to older server
> --
>
> Key: KAFKA-3601
> URL: https://issues.apache.org/jira/browse/KAFKA-3601
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Chris Pennello
>Assignee: Ashish K Singh
>
> I know that connecting with a newer client to an older server is forbidden, 
> but I would like to suggest that it predictably fail 
> noisily, explicitly, and with specific detail indicating why the failure 
> occurred.
> As-is, trying to connect to a v0.8.1.1 cluster with a v0.9.1 client yields a 
> hang when trying to get a coordinator metadata update.
> (This may be related to KAFKA-1894.  I certainly did note 
> {{poll(Long.MAX_VALUE)}} and wept many tears.  At least we could include a 
> TODO-commented constant in the code with a non-forever timeout, right?)
> {noformat}
>java.lang.Thread.State: RUNNABLE
>   at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>   at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>   at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
>   at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
>   - locked <1c8cadab> (a sun.nio.ch.Util$2)
>   - locked <2324ec49> (a java.util.Collections$UnmodifiableSet)
>   - locked <3f3f5e0b> (a sun.nio.ch.EPollSelectorImpl)
>   at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
>   at org.apache.kafka.common.network.Selector.select(Selector.java:425)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:254)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:184)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:886)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
>   ... my code that calls poll...
> {noformat}
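
As an application-level stopgap, rather than the server-side fail-fast requested above, the blocking poll can be interrupted from the outside with KafkaConsumer.wakeup(). A minimal sketch, with broker address, topic, group id, and timeout chosen arbitrarily:

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class BoundedPoll {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("group.id", "bounded-poll-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("some-topic"));

        // Watchdog: if the first poll() is still blocked after 30s (e.g. while waiting
        // for coordinator metadata from a broker that cannot answer), interrupt it.
        // wakeup() is the one KafkaConsumer method that is safe to call from another thread.
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
        watchdog.schedule(consumer::wakeup, 30, TimeUnit.SECONDS);

        try {
            consumer.poll(10000); // may block far longer than its timeout while awaiting metadata
            System.out.println("poll returned normally");
        } catch (WakeupException e) {
            System.err.println("poll did not complete within 30s; broker too old or unreachable?");
        } finally {
            watchdog.shutdownNow();
            consumer.close();
        }
    }
}
{code}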





Re: custom offsets in ProduceRequest

2016-12-27 Thread radai
IIUC, if you replicate from a single source cluster to a single target
cluster, the topic has the same number of partitions on both, and no one
writes directly to the target cluster (so master --> slave), the offsets
would be preserved.

But in the general case, how would you handle the case where multiple
producers "claim" the same offset?


On Mon, Dec 26, 2016 at 4:52 AM, Andrey L. Neporada <
anepor...@yandex-team.ru> wrote:

> Hi all!
>
> Suppose you have two Kafka clusters and want to replicate topics from the
> primary cluster to the secondary one.
> It would be very convenient for readers if the message offsets for
> replicated topics were the same as for the primary topics.
>
> As far as I know, there is currently no way to achieve this.
> I wonder whether it is possible/reasonable to add a message offset to ProduceRequest?
>
>
> —
> Andrey Neporada
>
>
>
>


[jira] [Commented] (KAFKA-4477) Node reduces its ISR to itself, and doesn't recover. Other nodes do not take leadership, cluster remains sick until node is restarted.

2016-12-27 Thread Niles Hiray (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15780738#comment-15780738
 ] 

Niles Hiray commented on KAFKA-4477:


We are facing this issue on our systems too. Does 0.10.1.1 completely resolve 
this issue? Is downgrade a better option?

> Node reduces its ISR to itself, and doesn't recover. Other nodes do not take 
> leadership, cluster remains sick until node is restarted.
> --
>
> Key: KAFKA-4477
> URL: https://issues.apache.org/jira/browse/KAFKA-4477
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.0
> Environment: RHEL7
> java version "1.8.0_66"
> Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
> Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
>Reporter: Michael Andre Pearce (IG)
>Assignee: Apurva Mehta
>Priority: Critical
>  Labels: reliability
> Attachments: 2016_12_15.zip, issue_node_1001.log, 
> issue_node_1001_ext.log, issue_node_1002.log, issue_node_1002_ext.log, 
> issue_node_1003.log, issue_node_1003_ext.log, kafka.jstack, 
> state_change_controller.tar.gz
>
>
> We have encountered a critical issue that has recurred in different 
> physical environments. We haven't worked out what is going on. We do, though, 
> have a nasty workaround to keep the service alive. 
> We have not had this issue on clusters still running 0.9.0.1.
> We have noticed a node randomly shrinking the ISRs for the partitions it owns 
> down to itself; moments later we see other nodes having disconnects, 
> followed finally by application issues, where producing to these partitions is 
> blocked.
> It seems that only restarting the Kafka Java process resolves the 
> issue.
> We have had this occur multiple times, and from all network and machine 
> monitoring the machine never left the network or had any other glitches.
> Below are logs seen during the issue.
> Node 7:
> [2016-12-01 07:01:28,112] INFO Partition 
> [com_ig_trade_v1_position_event--demo--compacted,10] on broker 7: Shrinking 
> ISR for partition [com_ig_trade_v1_position_event--demo--compacted,10] from 
> 1,2,7 to 7 (kafka.cluster.Partition)
> All other nodes:
> [2016-12-01 07:01:38,172] WARN [ReplicaFetcherThread-0-7], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@5aae6d42 
> (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to 7 was disconnected before the response was 
> read
> All clients:
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> After this occurs, we then suddenly see on the sick machine an increasing 
> number of CLOSE_WAITs and open file descriptors.
> As a workaround to keep the service up, we are currently putting in an automated 
> process that tails the logs and matches the regex below; where new_partitions ends up 
> as just the node itself, we restart the node. 
> "\[(?P.+)\] INFO Partition \[.*\] on broker .* Shrinking ISR for 
> partition \[.*\] from (?P.+) to (?P.+) 
> \(kafka.cluster.Partition\)"
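
A sketch of that log check in Java (the named groups in the original workaround regex were lost in formatting above, so the group names here are my own guesses; only new_partitions is named in the report, and the broker-id comparison is illustrative):

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ShrinkingIsrWatch {
    // Guessed reconstruction of the workaround regex described above.
    private static final Pattern SHRINKING_ISR = Pattern.compile(
            "\\[(?<timestamp>.+)\\] INFO Partition \\[.*\\] on broker (?<broker>\\d+): "
                    + "Shrinking ISR for partition \\[.*\\] from (?<oldIsr>.+) to (?<newIsr>.+) "
                    + "\\(kafka\\.cluster\\.Partition\\)");

    /** Returns true when the new ISR has shrunk to just the broker that logged the line. */
    static boolean isrShrunkToSelf(String logLine) {
        Matcher m = SHRINKING_ISR.matcher(logLine);
        return m.matches() && m.group("newIsr").trim().equals(m.group("broker"));
    }

    public static void main(String[] args) {
        // Sample line taken from the node 7 log excerpt above.
        String line = "[2016-12-01 07:01:28,112] INFO Partition "
                + "[com_ig_trade_v1_position_event--demo--compacted,10] on broker 7: Shrinking ISR for "
                + "partition [com_ig_trade_v1_position_event--demo--compacted,10] from 1,2,7 to 7 "
                + "(kafka.cluster.Partition)";
        System.out.println(isrShrunkToSelf(line)); // true -> the workaround would restart broker 7
    }
}
{code}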





[jira] [Commented] (KAFKA-2260) Allow specifying expected offset on produce

2016-12-27 Thread Bill Warshaw (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15780697#comment-15780697
 ] 

Bill Warshaw commented on KAFKA-2260:
-

[~ijuma] it seems like we would be able to satisfy this use case with KIP-98 by 
specifying a global {{PID}} for {{Producer}} instances in a distributed 
application.  An application would have to use a {{Producer}} with this 
specific {{PID}} to publish any messages which needed a sequential guarantee.  
Does that make sense?

Is there a timeline for KIP-98?

> Allow specifying expected offset on produce
> ---
>
> Key: KAFKA-2260
> URL: https://issues.apache.org/jira/browse/KAFKA-2260
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Kirwin
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
> Attachments: KAFKA-2260.patch, expected-offsets.patch
>
>
> I'd like to propose a change that adds a simple CAS-like mechanism to the 
> Kafka producer. This update has a small footprint, but enables a bunch of 
> interesting uses in stream processing or as a commit log for process state.
> h4. Proposed Change
> In short:
> - Allow the user to attach a specific offset to each message produced.
> - The server assigns offsets to messages in the usual way. However, if the 
> expected offset doesn't match the actual offset, the server should fail the 
> produce request instead of completing the write.
> This is a form of optimistic concurrency control, like the ubiquitous 
> check-and-set -- but instead of checking the current value of some state, it 
> checks the current offset of the log.
> h4. Motivation
> Much like check-and-set, this feature is only useful when there's very low 
> contention. Happily, when Kafka is used as a commit log or as a 
> stream-processing transport, it's common to have just one producer (or a 
> small number) for a given partition -- and in many of these cases, predicting 
> offsets turns out to be quite useful.
> - We get the same benefits as the 'idempotent producer' proposal: a producer 
> can retry a write indefinitely and be sure that at most one of those attempts 
> will succeed; and if two producers accidentally write to the end of the 
> partition at once, we can be certain that at least one of them will fail.
> - It's possible to 'bulk load' Kafka this way -- you can write a list of n 
> messages consecutively to a partition, even if the list is much larger than 
> the buffer size or the producer has to be restarted.
> - If a process is using Kafka as a commit log -- reading from a partition to 
> bootstrap, then writing any updates to that same partition -- it can be sure 
> that it's seen all of the messages in that partition at the moment it does 
> its first (successful) write.
> There's a bunch of other similar use-cases here, but they all have roughly 
> the same flavour.
> h4. Implementation
> The major advantage of this proposal over other suggested transaction / 
> idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a 
> currently-unused field, adds no new APIs, and requires very little new code 
> or additional work from the server.
> - Produced messages already carry an offset field, which is currently ignored 
> by the server. This field could be used for the 'expected offset', with a 
> sigil value for the current behaviour. (-1 is a natural choice, since it's 
> already used to mean 'next available offset'.)
> - We'd need a new error and error code for a 'CAS failure'.
> - The server assigns offsets to produced messages in 
> {{ByteBufferMessageSet.validateMessagesAndAssignOffsets}}. After this 
> change, this method would assign offsets in the same way -- but if they 
> don't match the offset in the message, we'd return an error instead of 
> completing the write.
> - To avoid breaking existing clients, this behaviour would need to live 
> behind some config flag. (Possibly global, but probably more useful 
> per-topic?)
> I understand all this is unsolicited and possibly strange: happy to answer 
> questions, and if this seems interesting, I'd be glad to flesh this out into 
> a full KIP or patch. (And apologies if this is the wrong venue for this sort 
> of thing!)
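
To make the proposed semantics concrete, here is a self-contained toy sketch of the check-and-set append described above. It is not a Kafka API, just an illustration of how the expected-offset comparison would behave:

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Toy in-memory "partition log" illustrating the proposed check-and-set append. */
public class ExpectedOffsetLog {
    // The proposal reuses -1 ("next available offset") to mean "no expectation".
    public static final long ANY_OFFSET = -1L;

    private final List<String> messages = new ArrayList<>();

    /**
     * Appends a message, failing if expectedOffset is set and does not match the
     * offset the log would assign (mirrors the proposed produce-request behaviour).
     */
    public synchronized long append(String message, long expectedOffset) {
        long nextOffset = messages.size();
        if (expectedOffset != ANY_OFFSET && expectedOffset != nextOffset) {
            throw new IllegalStateException(
                    "CAS failure: expected offset " + expectedOffset + " but next offset is " + nextOffset);
        }
        messages.add(message);
        return nextOffset;
    }

    public static void main(String[] args) {
        ExpectedOffsetLog log = new ExpectedOffsetLog();
        log.append("a", ANY_OFFSET);     // legacy behaviour, always succeeds
        log.append("b", 1);              // expectation matches, succeeds
        try {
            log.append("c", 1);          // stale expectation: another write won the race
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // producer would now re-read and retry
        }
    }
}
{code}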





Experiencing trouble with KafkaCSVMetricsReporter

2016-12-27 Thread Dongjin Lee
In short: the resulting csv files from the brokers are filled with zeros only, 
although the broker cluster is running correctly.

Hello. I am trying some benchmarks with KAFKA-4514[^1]. However, my 
KafkaCSVMetricsReporter is not working properly. I would like to ask whether someone 
on this mailing list has experienced a similar case.
Let me explain. I configured a Kafka cluster with 3 zookeeper instances and 3 
Kafka broker instances. After confirming all is working correctly, I added the 
following properties to the server.properties file and restarted the brokers.
> kafka.metrics.polling.interval.secs=5
> kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
> kafka.csv.metrics.reporter.enabled=true
After re-running the brokers and the producer, I acquired some csv files from the 
kafka_metrics directory. But there are two weird things:
1. The Kafka brokers repeatedly try to create csv files and fail with an IOException. 
Why do they try to create files when the resulting csv files already exist?
2. All cells of the resulting csv files are filled with 0, except their headers. Since 
the producer generated so many messages, they (e.g., BytesPerSec) cannot be 0.
Any advice or comments are welcome. Thanks in advance.
Regards,
Dongjin
[^1]: https://issues.apache.org/jira/browse/KAFKA-4514

[jira] [Created] (KAFKA-4570) How to transfer extended fields in producing or consuming requests.

2016-12-27 Thread zander (JIRA)
zander created KAFKA-4570:
-

 Summary: How to transfer extended fields in producing or consuming 
requests.
 Key: KAFKA-4570
 URL: https://issues.apache.org/jira/browse/KAFKA-4570
 Project: Kafka
  Issue Type: Wish
  Components: clients
Affects Versions: 0.10.1.1
Reporter: zander
Priority: Critical


We encounter a problem: we cannot transfer extended fields in producing 
or consuming requests to the broker.
We want to validate the producers or consumers in a custom way other than 
using SSL.
In general, for example in JMS, it is possible to transfer user-related fields to the 
server.
But it seems that Kafka does not support this; its protocol is very tight and 
unable to carry user-defined fields.

So is there any way to achieve this goal?





[jira] [Commented] (KAFKA-4180) Shared authentication with multiple active Kafka producers/consumers

2016-12-27 Thread Edoardo Comar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15780212#comment-15780212
 ] 

Edoardo Comar commented on KAFKA-4180:
--

Now that https://issues.apache.org/jira/browse/KAFKA-4259 has been resolved and 
merged into trunk, this can be merged too.

> Shared authentication with multiple active Kafka producers/consumers
> 
>
> Key: KAFKA-4180
> URL: https://issues.apache.org/jira/browse/KAFKA-4180
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , security
>Affects Versions: 0.10.0.1
>Reporter: Guillaume Grossetie
>Assignee: Mickael Maison
>  Labels: authentication, jaas, loginmodule, plain, producer, 
> sasl, user
>
> I'm using Kafka 0.10.0.1 with an SASL authentication on the client:
> {code:title=kafka_client_jaas.conf|borderStyle=solid}
> KafkaClient {
> org.apache.kafka.common.security.plain.PlainLoginModule required
> username="guillaume"
> password="secret";
> };
> {code}
> When using multiple Kafka producers the authentication is shared [1]. In 
> other words, it's not currently possible to have multiple Kafka producers with 
> different credentials in a single JVM process.
> Am I missing something? How can I have multiple active Kafka producers with 
> different credentials?
> My use case is that I have an application that sends messages to multiple 
> clusters (one cluster for logs, one cluster for metrics, one cluster for 
> business data).
> [1] 
> https://github.com/apache/kafka/blob/69ebf6f7be2fc0e471ebd5b7a166468017ff2651/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L35
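
For context, a minimal sketch of what per-client credentials could look like once the sasl.jaas.config client property from KAFKA-4259 and the LoginManager change proposed here are both in place (broker address, listener, credentials, and serializers are placeholders):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class TwoCredentialProducers {
    private static Properties baseProps() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9093");   // placeholder SASL listener
        p.put("security.protocol", "SASL_PLAINTEXT");
        p.put("sasl.mechanism", "PLAIN");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return p;
    }

    public static void main(String[] args) {
        // Each producer carries its own JAAS configuration instead of sharing the
        // JVM-wide kafka_client_jaas.conf, so credentials no longer collide.
        Properties logsProps = baseProps();
        logsProps.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"logs-user\" password=\"logs-secret\";");

        Properties metricsProps = baseProps();
        metricsProps.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"metrics-user\" password=\"metrics-secret\";");

        try (KafkaProducer<String, String> logsProducer = new KafkaProducer<>(logsProps);
             KafkaProducer<String, String> metricsProducer = new KafkaProducer<>(metricsProps)) {
            // ... send to the two clusters with different credentials ...
        }
    }
}
{code}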





[jira] [Commented] (KAFKA-4180) Shared authentication with multiple active Kafka producers/consumers

2016-12-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15780202#comment-15780202
 ] 

ASF GitHub Bot commented on KAFKA-4180:
---

GitHub user edoardocomar opened a pull request:

https://github.com/apache/kafka/pull/2293

KAFKA-4180 : Authentication with multiple actives Kafka

producers/consumers

Changed caching in LoginManager to allow one LoginManager per client
JAAS configuration.
Added test to End2EndAuthorization for SASL Plain and Gssapi with two
consumers with different credentials.

developed with @mimaison

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edoardocomar/kafka KAFKA-4180d

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2293.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2293


commit 64e827c365b938e977a15d5f3b2a4103297f9092
Author: Edoardo Comar 
Date:   2016-10-06T16:14:21Z

KAFKA-4180 : Authentication with multiple actives Kafka
producers/consumers

Changed caching in LoginManager to allow one LoginManager per client
JAAS configuration.
Added test to End2EndAuthorization for SASL Plain and Gssapi with two
consumers with different credentials.

developed with @mimaison




> Shared authentication with multiple active Kafka producers/consumers
> 
>
> Key: KAFKA-4180
> URL: https://issues.apache.org/jira/browse/KAFKA-4180
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , security
>Affects Versions: 0.10.0.1
>Reporter: Guillaume Grossetie
>Assignee: Mickael Maison
>  Labels: authentication, jaas, loginmodule, plain, producer, 
> sasl, user
>
> I'm using Kafka 0.10.0.1 with an SASL authentication on the client:
> {code:title=kafka_client_jaas.conf|borderStyle=solid}
> KafkaClient {
> org.apache.kafka.common.security.plain.PlainLoginModule required
> username="guillaume"
> password="secret";
> };
> {code}
> When using multiple Kafka producers the authentication is shared [1]. In 
> other words, it's not currently possible to have multiple Kafka producers with 
> different credentials in a single JVM process.
> Am I missing something? How can I have multiple active Kafka producers with 
> different credentials?
> My use case is that I have an application that sends messages to multiple 
> clusters (one cluster for logs, one cluster for metrics, one cluster for 
> business data).
> [1] 
> https://github.com/apache/kafka/blob/69ebf6f7be2fc0e471ebd5b7a166468017ff2651/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L35





[GitHub] kafka pull request #2293: KAFKA-4180 : Authentication with multiple actives ...

2016-12-27 Thread edoardocomar
GitHub user edoardocomar opened a pull request:

https://github.com/apache/kafka/pull/2293

KAFKA-4180 : Authentication with multiple actives Kafka

producers/consumers

Changed caching in LoginManager to allow one LoginManager per client
JAAS configuration.
Added test to End2EndAuthorization for SASL Plain and Gssapi with two
consumers with different credentials.

developed with @mimaison

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edoardocomar/kafka KAFKA-4180d

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2293.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2293


commit 64e827c365b938e977a15d5f3b2a4103297f9092
Author: Edoardo Comar 
Date:   2016-10-06T16:14:21Z

KAFKA-4180 : Authentication with multiple actives Kafka
producers/consumers

Changed caching in LoginManager to allow one LoginManager per client
JAAS configuration.
Added test to End2EndAuthorization for SASL Plain and Gssapi with two
consumers with different credentials.

developed with @mimaison






[jira] [Commented] (KAFKA-4180) Shared authentication with multiple active Kafka producers/consumers

2016-12-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15780150#comment-15780150
 ] 

ASF GitHub Bot commented on KAFKA-4180:
---

Github user edoardocomar closed the pull request at:

https://github.com/apache/kafka/pull/1989


> Shared authentication with multiple active Kafka producers/consumers
> 
>
> Key: KAFKA-4180
> URL: https://issues.apache.org/jira/browse/KAFKA-4180
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , security
>Affects Versions: 0.10.0.1
>Reporter: Guillaume Grossetie
>Assignee: Mickael Maison
>  Labels: authentication, jaas, loginmodule, plain, producer, 
> sasl, user
>
> I'm using Kafka 0.10.0.1 with an SASL authentication on the client:
> {code:title=kafka_client_jaas.conf|borderStyle=solid}
> KafkaClient {
> org.apache.kafka.common.security.plain.PlainLoginModule required
> username="guillaume"
> password="secret";
> };
> {code}
> When using multiple Kafka producers the authentication is shared [1]. In 
> other words, it's not currently possible to have multiple Kafka producers with 
> different credentials in a single JVM process.
> Am I missing something? How can I have multiple active Kafka producers with 
> different credentials?
> My use case is that I have an application that sends messages to multiple 
> clusters (one cluster for logs, one cluster for metrics, one cluster for 
> business data).
> [1] 
> https://github.com/apache/kafka/blob/69ebf6f7be2fc0e471ebd5b7a166468017ff2651/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L35





[GitHub] kafka pull request #1989: Kafka 4180 - Shared authentification with multiple...

2016-12-27 Thread edoardocomar
Github user edoardocomar closed the pull request at:

https://github.com/apache/kafka/pull/1989




[GitHub] kafka pull request #2292: MINOR: Update rocksDB dependency to 4.13.5

2016-12-27 Thread jozi-k
GitHub user jozi-k opened a pull request:

https://github.com/apache/kafka/pull/2292

MINOR: Update rocksDB dependency to 4.13.5



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jozi-k/kafka update-rocksdb-4.13.5

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2292


commit 3247ad3052909214cf305051c18d5a65dc12a916
Author: jozi-k 
Date:   2016-12-27T08:24:49Z

MINOR: Update rocksDB dependency to 4.13.5



