[jira] [Resolved] (KAFKA-3972) kafka java consumer poll returns 0 records after seekToBeginning

2016-08-07 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3972.
--
Resolution: Invalid
  Assignee: Ewen Cheslack-Postava

> kafka java consumer poll returns 0 records after seekToBeginning
> 
>
> Key: KAFKA-3972
> URL: https://issues.apache.org/jira/browse/KAFKA-3972
> Project: Kafka
>  Issue Type: Task
>  Components: consumer
>Affects Versions: 0.10.0.0
> Environment: docker image elasticsearch:latest, kafka scala version 
> 2.11, kafka version 0.10.0.0
>Reporter: don caldwell
>Assignee: Ewen Cheslack-Postava
>  Labels: kafka, polling
>
> kafkacat successfully returns rows for the topic, but the following Java 
> source reliably fails to produce rows. I suspect I am missing something 
> simple in my setup, but I have been unable to find a way out. I am using the 
> current Docker image and Docker network commands to connect the processes in 
> my cluster. The properties are:
> bootstrap.servers: kafka01:9092,kafka02:9092,kafka03:9092
> group.id: dhcp1
> topic: dhcp
> enable.auto.commit: false
> auto.commit.interval.ms: 1000
> session.timeout.ms: 3
> key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
> value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
> The Kafka consumer follows. One thing I find curious is that, although I 
> seem to successfully make the call to seekToBeginning(), when I print offsets 
> on failure I get large offsets for all partitions, although I had expected 
> them to be 0 or at least some small number.
> Here is the code:
> import org.apache.kafka.clients.consumer.ConsumerConfig;
> import org.apache.kafka.clients.consumer.ConsumerRecord;
> import org.apache.kafka.clients.consumer.ConsumerRecords;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.errors.TimeoutException;
> import org.apache.kafka.common.protocol.types.SchemaException;
> import org.apache.kafka.common.KafkaException;
> import org.apache.kafka.common.Node;
> import org.apache.kafka.common.PartitionInfo;
> import org.apache.kafka.common.TopicPartition;
> import java.io.FileInputStream;
> import java.io.FileNotFoundException;
> import java.io.IOException;
> import java.lang.Integer;
> import java.lang.System;
> import java.lang.Thread;
> import java.lang.InterruptedException;
> import java.util.Arrays;
> import java.util.ArrayList;
> import java.util.Collections;
> import java.util.List;
> import java.util.Map;
> import java.util.Properties;
> public class KConsumer {
>     private Properties prop;
>     private String topic;
>     private Integer polln;
>     private KafkaConsumer consumer;
>     private String[] pna = {ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
>                             ConsumerConfig.GROUP_ID_CONFIG,
>                             ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,
>                             ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG,
>                             ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG,
>                             ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
>                             ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG};
>     public KConsumer(String pf) throws FileNotFoundException, IOException {
>         this.setProperties(pf);
>         this.newClient();
>     }
>     public void setProperties(String p) throws FileNotFoundException, IOException {
>         this.prop = new Properties();
>         this.prop.load(new FileInputStream(p));
>         this.topic = this.prop.getProperty("topic");
>         this.polln = new Integer(this.prop.getProperty("polln"));
>     }
>     public void setTopic(String t) {
>         this.topic = t;
>     }
>     public String getTopic() {
>         return this.topic;
>     }
>     public void newClient() {
>         System.err.println("creating consumer");
>         Properties kp = new Properties();
>         for(String p : pna) {
>             String v = this.prop.getProperty(p);
>             if(v != null) {
>                 kp.put(p, v);
>             }
>         }
>         //this.consumer = new KafkaConsumer<>(this.prop);
>         this.consumer = new KafkaConsumer<>(kp);
>         //this.consumer.subscribe(Collections.singletonList(this.topic));
>         System.err.println("subscribing to " + this.topic);
>         this.consumer.subscribe(Arrays.asList(this.topic));
>         //this.seekToBeginning();
>     }
>     public void close() {
>         this.consumer.close();
>         this.consumer = null;
>     }
>     public void seekToBeginning() {
>         if(this.topic == null) {
>             System.err.println("KConsumer: topic not set");
>             System.exit(1);
>         }
>         System.err.println
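
The usual cause of this symptom when subscribe() is used is that seekToBeginning() runs before the group has assigned any partitions to the consumer, so the seek has no effect and the first poll() resumes from the committed offsets instead. A minimal sketch of the commonly recommended pattern, seeking from a rebalance listener once partitions are actually assigned, follows; the broker list, topic and group.id are copied from the properties above, everything else is illustrative and an assumption about what the truncated seekToBeginning() above was doing:

{code:java}
import java.util.Arrays;
import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekToBeginningExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka01:9092,kafka02:9092,kafka03:9092");
        props.put("group.id", "dhcp1");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("dhcp"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Nothing to do before a rebalance in this sketch.
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Only here does the consumer know its partitions; seeking any
                // earlier is a no-op (or an error) because nothing is assigned yet.
                consumer.seekToBeginning(partitions);
            }
        });

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%d:%d %s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
{code}

With this pattern the seek is applied to a concrete assignment, so subsequent poll() calls start from offset 0 (or from the earliest offset still retained in the log) rather than from the group's committed positions.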

[jira] [Commented] (KAFKA-3967) Excessive Network IO between Kafka brokers

2016-08-07 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411376#comment-15411376
 ] 

Ewen Cheslack-Postava commented on KAFKA-3967:
--

[~Krishna82] Have you accounted for all replicas? What is the replication 
factor for the topic? Producing 2-5 MB/sec is already 7.2-18 GB/hr of data 
before replication; with a common 3-replica setup each message is also copied 
to two followers, so roughly twice that in inter-broker traffic is expected 
before any overhead. You didn't say how you're getting the 2-5 MB/sec number, 
but if it is just the size of the messages recorded in your app, there is 
overhead both in writing messages to Kafka (e.g. fields you may not be 
accounting for) and in the protocol (request/response overhead).

> Excessive Network IO between Kafka brokers 
> ---
>
> Key: KAFKA-3967
> URL: https://issues.apache.org/jira/browse/KAFKA-3967
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.2
>Reporter: Krishna
>
> Excessive network IO between Kafka brokers running on AWS in different AZs 
> compared to the actual message volume. 
> We are producing 2-5 MB/sec of messages, however Kafka seems to be moving 
> 20 GB/hr over the network. Each node has around 12 GB of message log. Is this 
> natural behavior? I believe only new messages should get replicated to the 
> non-leader nodes, however here it seems that the entire log is being 
> re-synced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3947) kafka-reassign-partitions.sh should support dumping current assignment

2016-08-07 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411318#comment-15411318
 ] 

Ewen Cheslack-Postava commented on KAFKA-3947:
--

[~kawamuray] Thanks for the contribution. The CLI tools are considered public 
API, so additions like this should be proposed on the mailing list and 
ultimately need a KIP to be accepted. This allows everyone to discuss and 
settle on the best approach before we commit to supporting and maintaining 
compatibility for an interface.

That said, I wonder whether the partition reassignment tool is the right place 
for this feature. Why not use the kafka-topics command? It seems like that 
already gives you all the info you want (though perhaps not currently in the 
format you want).

> kafka-reassign-partitions.sh should support dumping current assignment
> --
>
> Key: KAFKA-3947
> URL: https://issues.apache.org/jira/browse/KAFKA-3947
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.10.0.0
>Reporter: Yuto Kawamura
>Assignee: Yuto Kawamura
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> When building my own tool to perform reassignment of partitions, I realized 
> that there's no way to dump the current partition assignment in a 
> machine-parsable format such as JSON.
> Giving the {{\-\-generate}} option to the kafka-reassign-partitions.sh script 
> does dump the current assignment of the topics given by 
> {{\-\-topics-to-assign-json-file}}, but it's very inconvenient because:
> - I want the dump to contain all topics. That is, I want to skip generating 
> the list of current topics just to pass it to the generate command.
> - The output is concatenated with the proposed reassignment, so I can't 
> simply do something like: {{kafka-reassign-partitions.sh --generate ... > 
> current-assignment.json}}
> - There should be no need to ask Kafka to generate a reassignment just to get 
> the current assignment in the first place.
> Hence I'd like to add a {{\-\-dump}} option to kafka-reassign-partitions.sh.
> I was wondering whether this functionality should be provided by 
> {{kafka-reassign-partitions.sh}} or {{kafka-topics.sh}}, but now I think 
> {{kafka-reassign-partitions.sh}} is the more appropriate place, as the 
> resulting JSON should be in the format of {{\-\-reassignment-json-file}}, 
> which belongs to this command.
> Will follow up with a patch implementing this shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1562: KAFKA-3908; Set ReceiveBufferSize for socket used ...

2016-08-07 Thread lindong28
GitHub user lindong28 reopened a pull request:

https://github.com/apache/kafka/pull/1562

KAFKA-3908; Set ReceiveBufferSize for socket used by Processor



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-3908

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1562.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1562


commit 9ad0909e8cfd821dd7cea19efdc6ce993f95ace7
Author: Dong Lin 
Date:   2016-06-27T22:37:59Z

KAFKA-3908; Set SendBufferSize for socket used by Processor




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3908) Set SendBufferSize for socket used by Processor

2016-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411177#comment-15411177
 ] 

ASF GitHub Bot commented on KAFKA-3908:
---

Github user lindong28 closed the pull request at:

https://github.com/apache/kafka/pull/1562


> Set SendBufferSize for socket used by Processor
> ---
>
> Key: KAFKA-3908
> URL: https://issues.apache.org/jira/browse/KAFKA-3908
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
>
> SREs should be able to control the receive buffer size of the sockets used 
> to receive requests from clients, for the same reason we set the receive 
> buffer size for all other sockets in the server and client. However, we 
> currently only set the send buffer size of this socket.
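
For context, the change under discussion concerns the plain java.nio socket options on the connections handled by the broker's Processor. Below is a minimal stand-alone sketch of those calls; this is not the broker's actual Scala code, and the port and buffer sizes are placeholders standing in for the socket.send.buffer.bytes / socket.receive.buffer.bytes style of configuration:

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class BufferSizeSketch {
    public static void main(String[] args) throws IOException {
        int sendBufferBytes = 100 * 1024;     // placeholder values
        int receiveBufferBytes = 100 * 1024;

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9092));

        // Blocks until a client connects.
        SocketChannel accepted = server.accept();

        // The point of the ticket: both directions should be configurable
        // on this socket, not just the send buffer.
        accepted.socket().setSendBufferSize(sendBufferBytes);
        accepted.socket().setReceiveBufferSize(receiveBufferBytes);
    }
}
{code}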



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1562: KAFKA-3908; Set ReceiveBufferSize for socket used ...

2016-08-07 Thread lindong28
Github user lindong28 closed the pull request at:

https://github.com/apache/kafka/pull/1562


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3908) Set SendBufferSize for socket used by Processor

2016-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411178#comment-15411178
 ] 

ASF GitHub Bot commented on KAFKA-3908:
---

GitHub user lindong28 reopened a pull request:

https://github.com/apache/kafka/pull/1562

KAFKA-3908; Set ReceiveBufferSize for socket used by Processor



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-3908

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1562.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1562


commit 9ad0909e8cfd821dd7cea19efdc6ce993f95ace7
Author: Dong Lin 
Date:   2016-06-27T22:37:59Z

KAFKA-3908; Set SendBufferSize for socket used by Processor




> Set SendBufferSize for socket used by Processor
> ---
>
> Key: KAFKA-3908
> URL: https://issues.apache.org/jira/browse/KAFKA-3908
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
>
> SREs should be able to control the receive buffer size of the sockets used 
> to receive requests from clients, for the same reason we set the receive 
> buffer size for all other sockets in the server and client. However, we 
> currently only set the send buffer size of this socket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.0-jdk7 #185

2016-08-07 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3479: Add new consumer metrics documentation

--
[...truncated 6404 lines...]

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.util.TableTest > basicOperations PASSED

org.apache.kafka.connect.runtime.AbstractHerderTest > connectorStatus PASSED

org.apache.kafka.connect.runtime.AbstractHerderTest > taskStatus PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > stopBeforeStarting PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > standardStartup PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > cancelBeforeStopping PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionRevocation PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testStartPaused PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPause PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionAssignment PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testWakeupInCommitSyncCausesRetry PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testStartPaused PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPause PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testFailureInPoll PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommitFailure PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > 
testSendRecordsConvertsData PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSendRecordsRetries 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > 
testSendRecordsTaskCommitRecordFail PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSlowTaskStart PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalanceFailedConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPaused PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumed PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testUnknownConnectorPaused PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPausedRunningTaskOnly PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumedRunningTaskOnly PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs PASSED

org.apache.kafka.connect.r

Re: [kafka-clients] [VOTE] 0.10.0.1 RC2

2016-08-07 Thread Harsha Chintalapani
+1 (binding)
1. Ran a 3-node cluster
2. Ran a few tests creating, producing, and consuming from secure and
non-secure clients.

Thanks,
Harsha

On Fri, Aug 5, 2016 at 8:50 PM Manikumar Reddy 
wrote:

> +1 (non-binding).
> verified quick start and artifacts.
>
> On Sat, Aug 6, 2016 at 5:45 AM, Joel Koshy  wrote:
>
> > +1 (binding)
> >
> > Thanks Ismael!
> >
> > On Thu, Aug 4, 2016 at 6:54 AM, Ismael Juma  wrote:
> >
> >> Hello Kafka users, developers and client-developers,
> >>
> >> This is the third candidate for the release of Apache Kafka 0.10.0.1.
> >> This is a bug fix release and it includes fixes and improvements from 53
> >> JIRAs (including a few critical bugs). See the release notes for more
> >> details:
> >>
> >> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/RELEASE_NOTES.html
> >>
> >> When compared to RC1, RC2 contains a fix for a regression where an older
> >> version of slf4j-log4j12 was also being included in the libs folder of
> the
> >> binary tarball (KAFKA-4008). Thanks to Manikumar Reddy for reporting the
> >> issue.
> >>
> >> *** Please download, test and vote by Monday, 8 August, 8am PT ***
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> >> http://kafka.apache.org/KEYS
> >>
> >> * Release artifacts to be voted upon (source and binary):
> >> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/
> >>
> >> * Maven artifacts to be voted upon:
> >> https://repository.apache.org/content/groups/staging
> >>
> >> * Javadoc:
> >> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/javadoc/
> >>
> >> * Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc2 tag:
> >> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> >> f8f56751744ba8e55f90f5c4f3aed8c3459447b2
> >>
> >> * Documentation:
> >> http://kafka.apache.org/0100/documentation.html
> >>
> >> * Protocol:
> >> http://kafka.apache.org/0100/protocol.html
> >>
> >> * Successful Jenkins builds for the 0.10.0 branch:
> >> Unit/integration tests: *
> https://builds.apache.org/job/kafka-0.10.0-jdk7/182/
> >> *
> >> System tests: *
> https://jenkins.confluent.io/job/system-test-kafka-0.10.0/138/
> >> *
> >>
> >> Thanks,
> >> Ismael
> >>
> >> --
> >> You received this message because you are subscribed to the Google
> Groups
> >> "kafka-clients" group.
> >> To unsubscribe from this group and stop receiving emails from it, send
> an
> >> email to kafka-clients+unsubscr...@googlegroups.com.
> >> To post to this group, send email to kafka-clie...@googlegroups.com.
> >> Visit this group at https://groups.google.com/group/kafka-clients.
> >> To view this discussion on the web visit https://groups.google.com/d/ms
> >> gid/kafka-clients/CAD5tkZYMMxDEjg_2jt4x-mVZZHgJ6EC6HKSf4Hn%2
> >> Bi59DbTdVoQ%40mail.gmail.com
> >> <
> https://groups.google.com/d/msgid/kafka-clients/CAD5tkZYMMxDEjg_2jt4x-mVZZHgJ6EC6HKSf4Hn%2Bi59DbTdVoQ%40mail.gmail.com?utm_medium=email&utm_source=footer
> >
> >> .
> >> For more options, visit https://groups.google.com/d/optout.
> >>
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "kafka-clients" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to kafka-clients+unsubscr...@googlegroups.com.
> > To post to this group, send email to kafka-clie...@googlegroups.com.
> > Visit this group at https://groups.google.com/group/kafka-clients.
> > To view this discussion on the web visit https://groups.google.com/d/
> > msgid/kafka-clients/CAAOfhrAUcmrFRH2PpsLLmv579WDOi
> > oMOcpy1LBrLJfdWff5iFA%40mail.gmail.com
> > <
> https://groups.google.com/d/msgid/kafka-clients/CAAOfhrAUcmrFRH2PpsLLmv579WDOioMOcpy1LBrLJfdWff5iFA%40mail.gmail.com?utm_medium=email&utm_source=footer
> >
> > .
> >
> > For more options, visit https://groups.google.com/d/optout.
> >
>


[GitHub] kafka pull request #1710: KAFKA-4025 - make sure file.encoding system proper...

2016-08-07 Thread radai-rosenblatt
GitHub user radai-rosenblatt opened a pull request:

https://github.com/apache/kafka/pull/1710

KAFKA-4025 - make sure file.encoding system property is set to UTF-8 when 
calling rat



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/radai-rosenblatt/kafka fix-build-on-windows

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1710.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1710


commit 35563f203e993bc044bc408f50cad35dea2203bb
Author: radai-rosenblatt 
Date:   2016-08-07T02:26:15Z

KAFKA-4025 - make sure file.encoding system property is set to UTF-8 when 
invoking the rat ant task




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4025) build fails on windows due to rat target output encoding

2016-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411099#comment-15411099
 ] 

ASF GitHub Bot commented on KAFKA-4025:
---

GitHub user radai-rosenblatt opened a pull request:

https://github.com/apache/kafka/pull/1710

KAFKA-4025 - make sure file.encoding system property is set to UTF-8 when 
calling rat



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/radai-rosenblatt/kafka fix-build-on-windows

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1710.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1710


commit 35563f203e993bc044bc408f50cad35dea2203bb
Author: radai-rosenblatt 
Date:   2016-08-07T02:26:15Z

KAFKA-4025 - make sure file.encoding system property is set to UTF-8 when 
invoking the rat ant task




> build fails on windows due to rat target output encoding
> 
>
> Key: KAFKA-4025
> URL: https://issues.apache.org/jira/browse/KAFKA-4025
> Project: Kafka
>  Issue Type: Bug
> Environment: windows 7, either regular command prompt or git bash
>Reporter: radai rosenblatt
>Priority: Minor
> Attachments: windows build debug output.txt
>
>
> Kafka runs a rat report during the build, using [the rat ant report 
> task|http://creadur.apache.org/rat/apache-rat-tasks/report.html], which has 
> no output encoding parameter.
> This means that the resulting XML report is produced using the system-default 
> encoding, which is OS-dependent:
> the rat ant task code instantiates the output writer like so 
> ([org.apache.rat.anttasks.Report.java|http://svn.apache.org/repos/asf/creadur/rat/tags/apache-rat-project-0.11/apache-rat-tasks/src/main/java/org/apache/rat/anttasks/Report.java]
>  line 196):
> {noformat}
> out = new PrintWriter(new FileWriter(reportFile));{noformat}
> which eventually leads to {{Charset.defaultCharset()}}, which relies on the 
> file.encoding system property. This causes an issue if the default encoding 
> isn't UTF-8 (which it isn't on Windows), as the code called by 
> printUnknownFiles() in rat.gradle defaults to UTF-8 when reading the report 
> XML, causing the build to fail with:
> {noformat}
> com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException: 
> Invalid byte 1 of 1-byte UTF-8 sequence.{noformat}
> (see complete output of {{gradlew --debug --stacktrace rat}} in attached file)
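
The root cause is easy to reproduce in isolation: java.io.FileWriter always writes in the platform default charset, while naming the charset explicitly removes any dependence on file.encoding. A minimal sketch of the difference (the file name and content are placeholders, not the actual rat code):

{code:java}
import java.io.File;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingSketch {
    public static void main(String[] args) throws IOException {
        File reportFile = new File("rat-report.xml");  // placeholder path

        // What the rat ant task effectively does: FileWriter always uses the
        // platform default charset (driven by file.encoding), e.g. windows-1252
        // on a typical Windows JVM.
        System.out.println("default charset: " + Charset.defaultCharset());
        PrintWriter platformDefault = new PrintWriter(new FileWriter(reportFile));
        platformDefault.println("<rat-report>\u00e9</rat-report>");  // non-ASCII bytes differ per charset
        platformDefault.close();

        // Encoding-independent alternative: name the charset explicitly, so a
        // UTF-8 reader always gets valid UTF-8 regardless of file.encoding.
        PrintWriter explicitUtf8 = new PrintWriter(
                new OutputStreamWriter(new FileOutputStream(reportFile), StandardCharsets.UTF_8));
        explicitUtf8.println("<rat-report>\u00e9</rat-report>");
        explicitUtf8.close();
    }
}
{code}

Forcing file.encoding=UTF-8 when invoking the rat ant task, as the attached pull request does, makes the first form behave like the second.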



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4025) build fails on windows due to rat target output encoding

2016-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411096#comment-15411096
 ] 

ASF GitHub Bot commented on KAFKA-4025:
---

Github user radai-rosenblatt closed the pull request at:

https://github.com/apache/kafka/pull/1708


> build fails on windows due to rat target output encoding
> 
>
> Key: KAFKA-4025
> URL: https://issues.apache.org/jira/browse/KAFKA-4025
> Project: Kafka
>  Issue Type: Bug
> Environment: windows 7, either regular command prompt or git bash
>Reporter: radai rosenblatt
>Priority: Minor
> Attachments: windows build debug output.txt
>
>
> Kafka runs a rat report during the build, using [the rat ant report 
> task|http://creadur.apache.org/rat/apache-rat-tasks/report.html], which has 
> no output encoding parameter.
> This means that the resulting XML report is produced using the system-default 
> encoding, which is OS-dependent:
> the rat ant task code instantiates the output writer like so 
> ([org.apache.rat.anttasks.Report.java|http://svn.apache.org/repos/asf/creadur/rat/tags/apache-rat-project-0.11/apache-rat-tasks/src/main/java/org/apache/rat/anttasks/Report.java]
>  line 196):
> {noformat}
> out = new PrintWriter(new FileWriter(reportFile));{noformat}
> which eventually leads to {{Charset.defaultCharset()}}, which relies on the 
> file.encoding system property. This causes an issue if the default encoding 
> isn't UTF-8 (which it isn't on Windows), as the code called by 
> printUnknownFiles() in rat.gradle defaults to UTF-8 when reading the report 
> XML, causing the build to fail with:
> {noformat}
> com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException: 
> Invalid byte 1 of 1-byte UTF-8 sequence.{noformat}
> (see complete output of {{gradlew --debug --stacktrace rat}} in attached file)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1708: KAFKA-4025 - make sure file.encoding system proper...

2016-08-07 Thread radai-rosenblatt
Github user radai-rosenblatt closed the pull request at:

https://github.com/apache/kafka/pull/1708


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3479) Add new consumer metrics documentation

2016-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411084#comment-15411084
 ] 

ASF GitHub Bot commented on KAFKA-3479:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1361


> Add new consumer metrics documentation
> --
>
> Key: KAFKA-3479
> URL: https://issues.apache.org/jira/browse/KAFKA-3479
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.1
>Reporter: Jason Gustafson
>Assignee: Kaufman Ng
> Fix For: 0.10.1.0, 0.10.0.1
>
>
> We're missing documentation for the new consumer metrics defined in 
> NetworkClient, Fetcher, AbstractCoordinator, and ConsumerCoordinator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1361: KAFKA-3479: consumer metrics doc

2016-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1361


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3479) Add new consumer metrics documentation

2016-08-07 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3479.
--
   Resolution: Fixed
Fix Version/s: 0.10.0.1
   0.10.1.0

Issue resolved by pull request 1361
[https://github.com/apache/kafka/pull/1361]

> Add new consumer metrics documentation
> --
>
> Key: KAFKA-3479
> URL: https://issues.apache.org/jira/browse/KAFKA-3479
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.1
>Reporter: Jason Gustafson
>Assignee: Kaufman Ng
> Fix For: 0.10.1.0, 0.10.0.1
>
>
> We're missing documentation for the new consumer metrics defined in 
> NetworkClient, Fetcher, AbstractCoordinator, and ConsumerCoordinator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3770) KStream job should be able to specify linger.ms

2016-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411080#comment-15411080
 ] 

ASF GitHub Bot commented on KAFKA-3770:
---

Github user gfodor closed the pull request at:

https://github.com/apache/kafka/pull/1463


> KStream job should be able to specify linger.ms
> ---
>
> Key: KAFKA-3770
> URL: https://issues.apache.org/jira/browse/KAFKA-3770
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Greg Fodor
>Assignee: Greg Fodor
>
> The default linger.ms of 100 ms hardcoded into the StreamsConfig class is 
> problematic for jobs that have lots of tasks, since this latency can accrue. 
> It seems useful to be able to override linger.ms in the StreamsConfig. 
> Attached is a PR which allows this.
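
For reference, a minimal sketch of the kind of override this asks for, assuming a Streams version in which producer settings placed in the streams Properties take precedence over the built-in defaults (the application id, broker list and linger value are placeholders):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class LingerOverrideSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder identifiers for the streams job and cluster.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-stream-job");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Ask the embedded producer to linger 5 ms instead of the Streams
        // default of 100 ms; the value is only an example.
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5");

        // These are the properties you would hand to KafkaStreams; the
        // producer-level keys are forwarded to the embedded producer.
        StreamsConfig config = new StreamsConfig(props);
        System.out.println("linger.ms override: "
                + config.originals().get(ProducerConfig.LINGER_MS_CONFIG));
    }
}
{code}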



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3770) KStream job should be able to specify linger.ms

2016-08-07 Thread Greg Fodor (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Fodor resolved KAFKA-3770.
---
Resolution: Fixed

Fixed via KAFKA-3786

> KStream job should be able to specify linger.ms
> ---
>
> Key: KAFKA-3770
> URL: https://issues.apache.org/jira/browse/KAFKA-3770
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Greg Fodor
>Assignee: Greg Fodor
>
> The default linger.ms of 100 ms hardcoded into the StreamsConfig class is 
> problematic for jobs that have lots of tasks, since this latency can accrue. 
> It seems useful to be able to override linger.ms in the StreamsConfig. 
> Attached is a PR which allows this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1463: KAFKA-3770: KStream job should be able to specify ...

2016-08-07 Thread gfodor
Github user gfodor closed the pull request at:

https://github.com/apache/kafka/pull/1463


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1709: MINOR: Doc individual partition must fit on the se...

2016-08-07 Thread bitfurry
GitHub user bitfurry opened a pull request:

https://github.com/apache/kafka/pull/1709

MINOR: Doc individual partition must fit on the server that host it



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bitfurry/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1709.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1709


commit eb4c75f20e0fbb76e90d6b378b3f5b0084d9331d
Author: sahil kharb 
Date:   2016-08-07T10:44:35Z

MINOR: Doc individual partition must fit on the server that host it




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk7 #1466

2016-08-07 Thread Apache Jenkins Server
See