Kafka Connect Source questions

2017-10-11 Thread Stephane Maarek
Hi,

 

I had a look at the Connect Source Worker code and have two questions:
1. When a Source Task commits offsets, does it perform any compaction / optimisation 
before sending them off? E.g. if I read 1000 messages from 1 source partition, 
will the offset flush send 1000 entries to the offset storage, or just 1 (the 
last one)? (See the sketch after these questions.)
2. I don’t really understand why WorkerSourceTask tries to flush outstanding 
messages before committing the offsets (cf. 
https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L328
 ). I would expect the commit to only cover the offsets of the messages we 
know for sure have been flushed at the moment the commit is requested. That 
would avoid one massive timeout when the source task polls a lot of messages 
and the producer is overwhelmed / can’t complete the flush within the 
5-second timeout.
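
To make question 1 concrete, here is a minimal Java sketch of the compaction 
I have in mind (an assumption on my side, not the actual Connect runtime code):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import org.apache.kafka.connect.source.SourceRecord;

    public class OffsetCompaction {
        // "Last wins": keep only the latest offset per source partition, so a
        // 1000-message poll from one partition flushes a single entry.
        static Map<Map<String, ?>, Map<String, ?>> compact(List<SourceRecord> polled) {
            Map<Map<String, ?>, Map<String, ?>> toFlush = new HashMap<>();
            for (SourceRecord record : polled)
                toFlush.put(record.sourcePartition(), record.sourceOffset());
            return toFlush;
        }
    }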
 

Thanks a lot for the responses. I may open JIRAs based on the answers to these 
questions, if that would help bring some performance improvements.

 

Stephane



Re: [VOTE] 1.0.0 RC0

2017-10-11 Thread Vahid S Hashemian
Hi Guozhang,

Thanks for running the release.

I tested building from source and the quickstarts on Linux, Mac, and 
Windows 64 (with Java 8 and Gradle 4.2.1).

Everything worked well on Linux and Mac, but I ran into some issues on my 
Windows 64 VM:

I reported one issue in KAFKA-6055, but it's an easy one to fix (a PR is 
already submitted).

With that fix in place I continued my testing but ran into another issue 
after build. When trying to start a broker 
(bin\windows\kafka-server-start.bat config\server.properties) I get this 
error:

[2017-10-11 21:45:11,642] FATAL  (kafka.Kafka$)
java.lang.IllegalArgumentException: Unknown signal: HUP
at sun.misc.Signal.<init>(Unknown Source)
at kafka.Kafka$.registerHandler$1(Kafka.scala:67)
at kafka.Kafka$.registerLoggingSignalHandler(Kafka.scala:73)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)

This seems to have been introduced by a recent commit (
https://github.com/apache/kafka/commit/8256f882c92daa1470382502ab94cbe2c16028f1#diff-ef81cee39236d0121040043e4d69d330
) and for some reason that fix does not work on Windows.
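
One possible guard, as a rough Java sketch (my assumption, not the actual fix): 
skip registering the HUP handler on operating systems that don't support that 
signal, since new Signal("HUP") is what throws the IllegalArgumentException.

    import java.util.Locale;

    import sun.misc.Signal;

    public class SignalGuard {
        // HUP does not exist on Windows, so only register the handler elsewhere.
        static void registerLoggingSignalHandlerIfSupported() {
            String os = System.getProperty("os.name").toLowerCase(Locale.ROOT);
            if (!os.contains("windows")) {
                Signal.handle(new Signal("HUP"),
                    signal -> System.out.println("Caught signal: " + signal.getName()));
            }
        }
    }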

Thanks.
--Vahid





From:   Guozhang Wang 
To: "dev@kafka.apache.org" , 
"us...@kafka.apache.org" , kafka-clients 

Date:   10/10/2017 06:34 PM
Subject:[VOTE] 1.0.0 RC0



Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 1.0.0.

It's worth noting that starting in this version we are using a different
version protocol with three digits: *major.minor.bug-fix*

Any and all testing is welcome, but the following areas are worth
highlighting:

1. Client developers should verify that their clients can produce/consume
to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
2. Performance and stress testing. Heroku and LinkedIn have helped with
this in the past (and issues have been found and fixed).
3. End users can verify that their apps work correctly with the new 
release.

This is a major version release of Apache Kafka. It includes 29 new KIPs.
See the release notes and release plan
(*https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913*)
for more details. A few feature highlights:

* Java 9 support with significantly faster TLS and CRC32C implementations
(KIP)
* JBOD improvements: disk failure only disables failed disk but not the
broker (KIP-112/KIP-113)
* Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
KIP-188, KIP-196)
* Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
and drop compatibility "Evolving" annotations

Release notes for the 1.0.0 release:
*http://home.apache.org/~guozhang/kafka-1.0.0-rc0/RELEASE_NOTES.html*



*** Please download, test and vote by Friday, October 13, 8pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):
*http://home.apache.org/~guozhang/kafka-1.0.0-rc0/*

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

[jira] [Created] (KAFKA-6056) LogCleaner always cleaning into 1 Segment might exceed relative offset range

2017-10-11 Thread Jan Filipiak (JIRA)
Jan Filipiak created KAFKA-6056:
---

 Summary: LogCleaner always cleaning into 1 Segment might exceed 
relative offset range
 Key: KAFKA-6056
 URL: https://issues.apache.org/jira/browse/KAFKA-6056
 Project: Kafka
  Issue Type: Bug
  Components: core, log
Affects Versions: 0.11.0.0
Reporter: Jan Filipiak
Priority: Minor


After investigating an issue where compaction stopped for some time, we found 
that the LogCleaner always cleans into a single segment per size group, which 
can be a problem.

Usually the Log enforces a maximum distance between the min and max offset in a 
LogSegment: if that distance would be exceeded in maybeRoll(), a new log segment 
is rolled. I assume this is because relative offsets might be stored as 
integers. The LogCleaner, on the other hand, never rolls a new LogSegment, as it 
only ever uses one segment to clean into.

A lengthy discussion about this can be found in the Slack community:

https://confluentcommunity.slack.com/archives/C49R61XMM/p150691444105

The observed stacktrace is as follows:

https://gist.github.com/brettrann/ce52343692696a45d5b9f4df723bcd14

I could imagine also enforcing Integer.MAX_VALUE as the maximum offset distance 
in groupSegmentsBySize in the LogCleaner, to make sure a segment doesn't exceed 
this limit (see the sketch below).
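
A minimal Java-style sketch of that idea (illustrative only; the real 
groupSegmentsBySize is Scala and also caps group size in bytes and index size):

{code}
import java.util.ArrayList;
import java.util.List;

class SegmentInfo {
    final long baseOffset;
    final long lastOffset;
    SegmentInfo(long baseOffset, long lastOffset) {
        this.baseOffset = baseOffset;
        this.lastOffset = lastOffset;
    }
}

class Grouping {
    // Start a new group whenever adding the next segment would make the
    // group's offset span exceed Integer.MAX_VALUE, so relative offsets
    // in the cleaned segment still fit into an int.
    static List<List<SegmentInfo>> groupByOffsetSpan(List<SegmentInfo> segments) {
        List<List<SegmentInfo>> groups = new ArrayList<>();
        List<SegmentInfo> group = new ArrayList<>();
        long groupBase = -1;
        for (SegmentInfo s : segments) {
            if (group.isEmpty()) {
                groupBase = s.baseOffset;
            } else if (s.lastOffset - groupBase > Integer.MAX_VALUE) {
                groups.add(group);
                group = new ArrayList<>();
                groupBase = s.baseOffset;
            }
            group.add(s);
        }
        if (!group.isEmpty())
            groups.add(group);
        return groups;
    }
}
{code}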

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5988) Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE

2017-10-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved KAFKA-5988.
---
Resolution: Won't Fix

> Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE
> 
>
> Key: KAFKA-5988
> URL: https://issues.apache.org/jira/browse/KAFKA-5988
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: siva santhalingam
>Priority: Minor
>
> StreamThread#STREAM_THREAD_ID_SEQUENCE is used for naming (numbering) 
> StreamThreads.
> It is used in create(), which is called from a loop in the KafkaStreams ctor.
> We can remove STREAM_THREAD_ID_SEQUENCE and pass the loop index to create().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Limit Jenkins jobs on H0 to H13

2017-10-11 Thread Ted Yu
qnode3 experiences 'No space left on device'


See https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2001/console


Can this server be taken out of the PR run?


On Wed, Sep 20, 2017 at 4:34 AM, Ismael Juma  wrote:

> Hi Ted,
>
> Thanks for following up with INFRA on the issues we've been seeing. I asked
> a clarifying comment in that ticket (for some reason it only allowed me to
> add an internal comment).
>
> Ismael
>
> On Wed, Sep 20, 2017 at 2:31 AM, Ted Yu  wrote:
>
> > Hi,
> > See Gavin's comment:
> >
> > https://issues.apache.org/jira/browse/INFRA-15084?page=
> > com.atlassian.jira.plugin.system.issuetabpanels:comment-
> > tabpanel&focusedCommentId=16172575#comment-16172575
> >
> > Can someone with admin permission modify the Jenkins job(s) ?
> >
> > Thanks
> >
>


Jenkins build is back to normal : kafka-trunk-jdk9 #118

2017-10-11 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #4064: MINOR: add unit test for StateStoreSerdes

2017-10-11 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4064

MINOR: add unit test for StateStoreSerdes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka minor-add-state-serdes-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4064.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4064


commit 2b50b4a5f50d935985cd7e08d5e3f3facf7582cf
Author: Matthias J. Sax 
Date:   2017-10-11T23:29:59Z

MINOR: add unit test for StateStoreSerdes




---


Build failed in Jenkins: kafka-trunk-jdk8 #2129

2017-10-11 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: Push JMX metric name mangling into the JmxReporter 
(KIP-190)

[wangguoz] MINOR: Redesign of Streams page to include video & logos

--
[...truncated 369.98 KB...]
kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.Topic

Build failed in Jenkins: kafka-trunk-jdk7 #2881

2017-10-11 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: Push JMX metric name mangling into the JmxReporter 
(KIP-190)

[wangguoz] MINOR: Redesign of Streams page to include video & logos

--
[...truncated 367.08 KB...]
kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateRequestPerPartitionPerBroker STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateRequestPerPartitionPerBroker PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSkipSendMarkersWhenLeaderNotFound STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSkipSendMarkersWhenLeaderNotFound PASSED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
STARTED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
PASSED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn STARTED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError PASSED

kafka.coordinator.transaction.Tr

[GitHub] kafka pull request #4063: MINOR: improve Store parameter checks

2017-10-11 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4063

MINOR: improve Store parameter checks



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
minor-improve-store-parameter-checks

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4063.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4063


commit ae386e603980f6752394634847615c4211a2c320
Author: Matthias J. Sax 
Date:   2017-10-11T22:51:39Z

MINOR: improve Store parameter checks




---


[GitHub] kafka pull request #4062: KAFKA-6055: Fix a JVM misconfiguration that affect...

2017-10-11 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/4062

KAFKA-6055: Fix a JVM misconfiguration that affects Windows tools



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-6055

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4062.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4062


commit 7954c3db11fb2547ba66ba177de99393bdaf4fb6
Author: Vahid Hashemian 
Date:   2017-10-11T22:52:14Z

KAFKA-6055: Fix a JVM misconfiguration that affects Windows tools




---


[jira] [Created] (KAFKA-6055) Running tools on Windows fail due to misconfigured JVM config

2017-10-11 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-6055:
--

 Summary: Running tools on Windows fail due to misconfigured JVM 
config
 Key: KAFKA-6055
 URL: https://issues.apache.org/jira/browse/KAFKA-6055
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Vahid Hashemian
Assignee: Vahid Hashemian
Priority: Blocker
 Fix For: 1.0.0


This affects the current trunk and 1.0.0 RC0.

When running any of the Windows commands under {{bin/windows}} the following 
error is returned:

{code}
Missing +/- setting for VM option 'ExplicitGCInvokesConcurrent'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
{code}
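
This JVM error means a boolean -XX option was passed without the required +/- 
prefix. Presumably the Windows scripts need the flag in its enabled form, e.g. 
(my assumption of the likely one-character fix, pending the PR):

{code}
-XX:ExplicitGCInvokesConcurrent     <-- rejected: boolean -XX flags need + or -
-XX:+ExplicitGCInvokesConcurrent    <-- accepted: enables the option
{code}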



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk9 #117

2017-10-11 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: Updates on release.py before 1.0.0

[rajinisivaram] MINOR: Push JMX metric name mangling into the JmxReporter 
(KIP-190)

--
[...truncated 1.40 MB...]

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED

kafka.network.SocketServerTest > configureNewConnectionException PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > processNewResponseException STARTED

kafka.network.SocketServerTest > processNewResponseException PASSED

kafka.network.SocketServerTest > processCompletedSendException STARTED

kafka.network.SocketServerTest > processCompletedSendException PASSED

kafka.network.SocketServerTest > processDisconnectedException STARTED

kafka.network.SocketServerTest > processDisconnectedException PASSED

kafka.network.SocketServerTest > sendCancelledKeyException STARTED

kafka.network.SocketServerTest > sendCancelledKeyException PASSED

kafka.network.SocketServerTest > processCompletedReceiveException STARTED

kafka.network.SocketServerTest > processCompletedReceiveException PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown STARTED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > pollException STARTED

kafka.network.SocketServerTest > pollException PASSED

kafka.network.SocketServerTest > testSslSocketServer STARTED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterShutdown STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterShutdown PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected STARTED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

Build failed in Jenkins: kafka-trunk-jdk8 #2128

2017-10-11 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: Updates on release.py before 1.0.0

--
[...truncated 1.77 MB...]
org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic STARTED

org.apache.kafka.streams.in

[GitHub] kafka-site pull request #96: Fixed video paramas

2017-10-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/96


---


[GitHub] kafka-site issue #96: Fixed video paramas

2017-10-11 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/96
  
LGTM. Merged to `asf-site`.


---


Jenkins build is back to normal : kafka-1.0-jdk7 #25

2017-10-11 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.0.0 RC0

2017-10-11 Thread Ted Yu
Looks like the following change is needed for some downstream project to
compile their code (which was using 0.11.0.1):

-import org.apache.kafka.common.protocol.SecurityProtocol;
+import org.apache.kafka.common.security.auth.SecurityProtocol;

I took a look at docs/upgrade.html but didn't see any mention of it.

Should this be documented?

On Tue, Oct 10, 2017 at 6:34 PM, Guozhang Wang  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 1.0.0.
>
> It's worth noting that starting in this version we are using a different
> version protocol with three digits: *major.minor.bug-fix*
>
> Any and all testing is welcome, but the following areas are worth
> highlighting:
>
> 1. Client developers should verify that their clients can produce/consume
> to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> 2. Performance and stress testing. Heroku and LinkedIn have helped with
> this in the past (and issues have been found and fixed).
> 3. End users can verify that their apps work correctly with the new
> release.
>
> This is a major version release of Apache Kafka. It includes 29 new KIPs.
> See the release notes and release plan
> (*https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913*)
> for more details. A few feature highlights:
>
> * Java 9 support with significantly faster TLS and CRC32C implementations
> (KIP)
> * JBOD improvements: disk failure only disables failed disk but not the
> broker (KIP-112/KIP-113)
> * Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
> KIP-188, KIP-196)
> * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
> and drop compatibility "Evolving" annotations
>
> Release notes for the 1.0.0 release:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/RELEASE_NOTES.html
> *
>
>
>
> *** Please download, test and vote by Friday, October 13, 8pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> *
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/javadoc/
> *
>
> * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc0 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> 2f97bc6a9ee269bf90b019e50b4eeb43df2f1143
>
> * Documentation:
> Note the documentation can't be pushed live due to changes that will not go
> live until the release. You can manually verify by downloading
> http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> kafka_2.11-1.0.0-site-docs.tgz
>
> * Successful Jenkins builds for the 1.0.0 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/20/
>
>
> /**
>
>
> Thanks,
> -- Guozhang
>


[GitHub] kafka-site pull request #96: Fixed video paramas

2017-10-11 Thread manjuapu
Github user manjuapu commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/96#discussion_r144151112
  
--- Diff: css/styles.css ---
@@ -1074,7 +1074,6 @@ nav .btn {
 .sticky-top {
 white-space: nowrap;
 overflow-y: hidden;
-overflow-x: scroll;
--- End diff --

@guozhangwang Yes, these changes are different. I am changing 
&modestbranding=1&controls=2& to rel=0& for all the videos. Does this make 
sense?


---


[GitHub] kafka pull request #4059: Redesign of Streams page to include video & logos

2017-10-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4059


---


[GitHub] kafka-site pull request #96: Fixed video paramas

2017-10-11 Thread guozhangwang
Github user guozhangwang commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/96#discussion_r144150407
  
--- Diff: css/styles.css ---
@@ -1074,7 +1074,6 @@ nav .btn {
 .sticky-top {
 white-space: nowrap;
 overflow-y: hidden;
-overflow-x: scroll;
--- End diff --

Are these changes on this file intentional? I'm asking because we already 
have lots of edits on this commit:

https://github.com/apache/kafka-site/pull/92/files

And I'm just double checking to see if any of them were unintentionally 
reverted in this PR.


---


[GitHub] kafka pull request #4061: KAFKA-6032: Unit Tests should be independent of lo...

2017-10-11 Thread gilles-degols
GitHub user gilles-degols opened a pull request:

https://github.com/apache/kafka/pull/4061

KAFKA-6032: Unit Tests should be independent of locale settings



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gilles-degols/kafka 
KAFKA-6032-Unit-Tests-should-be-independent-of-locale-settings

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4061.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4061


commit d80207ec7a74cf5cffacdf2cf9cd7d43f60880a5
Author: Gilles Degols 
Date:   2017-10-11T21:30:06Z

Unit Tests should be independent of locale settings




---


[GitHub] kafka pull request #3980: MINOR: Push JMX metric name mangling into the JmxR...

2017-10-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3980


---


Re: [VOTE] 1.0.0 RC0

2017-10-11 Thread Guozhang Wang
Thanks for the check Ted.

I just made the jars available at mvn staging now:

https://repository.apache.org/content/groups/staging/org/apache/kafka/


Guozhang

On Tue, Oct 10, 2017 at 6:43 PM, Ted Yu  wrote:

> Guozhang:
> I took a brief look under the staging tree.
> e.g.
> https://repository.apache.org/content/groups/staging/org/
> apache/kafka/kafka-clients/
>
> I don't see 1.0.0 jars.
>
> Would the jars be populated later?
>
> Thanks
>
> On Tue, Oct 10, 2017 at 6:34 PM, Guozhang Wang  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 1.0.0.
> >
> > It's worth noting that starting in this version we are using a different
> > version protocol with three digits: *major.minor.bug-fix*
> >
> > Any and all testing is welcome, but the following areas are worth
> > highlighting:
> >
> > 1. Client developers should verify that their clients can produce/consume
> > to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> > 2. Performance and stress testing. Heroku and LinkedIn have helped with
> > this in the past (and issues have been found and fixed).
> > 3. End users can verify that their apps work correctly with the new
> > release.
> >
> > This is a major version release of Apache Kafka. It includes 29 new KIPs.
> > See the release notes and release plan
> > (*https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913*)
> > for more details. A few feature highlights:
> >
> > * Java 9 support with significantly faster TLS and CRC32C implementations
> > (KIP)
> > * JBOD improvements: disk failure only disables failed disk but not the
> > broker (KIP-112/KIP-113)
> > * Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
> > KIP-188, KIP-196)
> > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
> > and drop compatibility "Evolving" annotations
> >
> > Release notes for the 1.0.0 release:
> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/RELEASE_NOTES.html
> > *
> >
> >
> >
> > *** Please download, test and vote by Friday, October 13, 8pm PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> > *
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/javadoc/
> > *
> >
> > * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc0 tag:
> >
> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > 2f97bc6a9ee269bf90b019e50b4eeb43df2f1143
> >
> > * Documentation:
> > Note the documentation can't be pushed live due to changes that will not
> go
> > live until the release. You can manually verify by downloading
> > http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> > kafka_2.11-1.0.0-site-docs.tgz
> >
> > * Successful Jenkins builds for the 1.0.0 branch:
> > Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/20/
> >
> >
> > /**
> >
> >
> > Thanks,
> > -- Guozhang
> >
>



-- 
-- Guozhang


[GitHub] kafka pull request #4054: HOTFIX: Updates on release.py before 1.0.0

2017-10-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4054


---


[GitHub] kafka pull request #4060: MINOR: Add Kafka Streams upgrade workflow

2017-10-11 Thread bbejeck
GitHub user bbejeck opened a pull request:

https://github.com/apache/kafka/pull/4060

MINOR: Add Kafka Streams upgrade workflow



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbejeck/kafka MINOR_add_upgrade_workflow

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4060.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4060


commit d9d4e41e914b1d2c1c594e7178abd542d0756c91
Author: Bill Bejeck 
Date:   2017-10-11T21:04:42Z

MINOR: add Kafka Streams upgrade workflow




---


[GitHub] kafka pull request #4059: Redesign of Streams page to include video & logos

2017-10-11 Thread manjuapu
GitHub user manjuapu opened a pull request:

https://github.com/apache/kafka/pull/4059

Redesign of Streams page to include video & logos

@guozhangwang Please review.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka redesign-streams-page

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4059.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4059


commit 580a17e30aca7fa03186145c7dd383850c1349cd
Author: Manjula K 
Date:   2017-10-11T19:58:02Z

Redisgn of Streams page to include video & logos




---


Re: Kafka Consumer - org.apache.kafka.common.errors.TimeoutException: Failed to get offsets by times in 305000 ms

2017-10-11 Thread Ted Yu
I don't see values for the Consumer Properties.

Can you try out 0.11.0.1?

See
http://search-hadoop.com/m/Kafka/uyzND1qxYr5prjxv?subj=Incorrect+consumer+offsets+after+broker+restart+0+11+0+0

On Wed, Oct 11, 2017 at 11:37 AM, SenthilKumar K 
wrote:

> Hi All, recently we started seeing a Kafka Consumer error with a timeout.
> What could be the cause here?
>
> Version : kafka_2.11-0.11.0.0
>
> Consumer Properties:
>
> *bootstrap.servers, enable.auto.commit,auto.commit.interval.ms
> ,session.timeout.ms
> ,group.id
> ,key.deserializer,value.deserializer,max.poll.records*
>
> --Senthil
>


[GitHub] kafka-site issue #96: Fixed video paramas

2017-10-11 Thread manjuapu
Github user manjuapu commented on the issue:

https://github.com/apache/kafka-site/pull/96
  
@guozhangwang Can you please review?


---


[GitHub] kafka-site pull request #96: Fixed video paramas

2017-10-11 Thread manjuapu
GitHub user manjuapu opened a pull request:

https://github.com/apache/kafka-site/pull/96

Fixed video paramas



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/manjuapu/kafka-site streams-updates

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/96.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #96


commit d415111bcb15b5a4f7549ab410c85088a41d79f1
Author: Manjula K 
Date:   2017-10-10T02:42:05Z

Change apache-kafka image permission as image not appearing in twitter

commit b5c717ad1ca107f838f8d9baee963d853a294a04
Author: Manjula K 
Date:   2017-10-11T18:45:15Z

Fixed video paramas




---


Kafka Consumer - org.apache.kafka.common.errors.TimeoutException: Failed to get offsets by times in 305000 ms

2017-10-11 Thread SenthilKumar K
Hi All, recently we started seeing a Kafka Consumer error with a timeout.
What could be the cause here?

Version : kafka_2.11-0.11.0.0

Consumer Properties:

*bootstrap.servers, enable.auto.commit,auto.commit.interval.ms
,session.timeout.ms
,group.id
,key.deserializer,value.deserializer,max.poll.records*

--Senthil


[GitHub] kafka pull request #4058: MINOR: Fixed format string

2017-10-11 Thread mimaison
GitHub user mimaison opened a pull request:

https://github.com/apache/kafka/pull/4058

MINOR: Fixed format string

Use Scala string templates instead of format

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mimaison/kafka minor_AFT_logging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4058.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4058


commit 5ffd9bf721e8f8e4f3081df11c1493278d1f37cc
Author: Mickael Maison 
Date:   2017-10-11T17:09:40Z

MINOR: Fixed format string

Use Scala string templates instead of format




---


[GitHub] kafka pull request #4057: KAFKA-6053: Fix NoSuchMethodError when creating Pr...

2017-10-11 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/4057

KAFKA-6053: Fix NoSuchMethodError when creating ProducerRecords with older 
client versions



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAFKA-6053-fix-no-such-method-error-in-producer-record

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4057.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4057


commit 221a96da1dc6221dd3f61786d5fc1119b6848a7f
Author: Apurva Mehta 
Date:   2017-10-11T16:58:44Z

Fix NoSuchMethodError when creating ProducerRecords with older client 
versions




---


[jira] [Created] (KAFKA-6054) ERROR "SubscriptionInfo - unable to decode subscription data: version=2" when upgrading from 0.10.0.0 to 0.10.2.1

2017-10-11 Thread James Cheng (JIRA)
James Cheng created KAFKA-6054:
--

 Summary: ERROR "SubscriptionInfo - unable to decode subscription 
data: version=2" when upgrading from 0.10.0.0 to 0.10.2.1
 Key: KAFKA-6054
 URL: https://issues.apache.org/jira/browse/KAFKA-6054
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.1
Reporter: James Cheng


We upgraded an app from kafka-streams 0.10.0.0 to 0.10.2.1. We did a rolling 
upgrade of the app, so at one point there were both 0.10.0.0-based instances 
and 0.10.2.1-based instances running.

We observed the following stack trace:

{code}
2017-10-11 07:02:19.964 [StreamThread-3] ERROR o.a.k.s.p.i.a.SubscriptionInfo -
unable to decode subscription data: version=2
org.apache.kafka.streams.errors.TaskAssignmentException: unable to decode
subscription data: version=2
at 
org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.decode(SubscriptionInfo.java:113)
at 
org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:235)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:260)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:404)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$900(AbstractCoordinator.java:81)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:358)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:340)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)

{code}

I spoke with [~mjsax] and he said this is a known issue that happens when you 
have both 0.10.0.0 instances and 0.10.2.1 instances running at the same time, 
because the internal version number of the protocol changed when adding 
Interactive Queries. Matthias asked me to file this JIRA.
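
For illustration, a minimal self-contained sketch of the failure mode 
(hypothetical names, not the actual Streams source): the old instance only 
understands subscription version 1, so a version-2 buffer written by a newer 
instance fails the version check during assignment.

{code}
import java.nio.ByteBuffer;

public class SubscriptionVersionCheck {
    static final int LATEST_SUPPORTED_VERSION = 1; // what a 0.10.0.0 instance knows

    static int decodeVersion(ByteBuffer data) {
        data.rewind();
        int version = data.getInt();
        if (version > LATEST_SUPPORTED_VERSION) {
            // mirrors the TaskAssignmentException in the stack trace above
            throw new IllegalStateException(
                "unable to decode subscription data: version=" + version);
        }
        return version;
    }

    public static void main(String[] args) {
        ByteBuffer newFormat = ByteBuffer.allocate(4).putInt(2); // version 2
        decodeVersion(newFormat); // throws: version 2 > supported 1
    }
}
{code}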



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6053) NoSuchMethodError when creating ProducerRecord in upgrade system tests

2017-10-11 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-6053:
---

 Summary: NoSuchMethodError when creating ProducerRecord in upgrade 
system tests
 Key: KAFKA-6053
 URL: https://issues.apache.org/jira/browse/KAFKA-6053
 Project: Kafka
  Issue Type: Bug
Reporter: Apurva Mehta
Assignee: Apurva Mehta


This patch https://github.com/apache/kafka/pull/4029 used a new constructor for 
{{ProducerRecord}} which doesn't exist in older clients. Hence system tests 
which use older clients fail with: 

{noformat}
Exception in thread "main" java.lang.NoSuchMethodError: 
org.apache.kafka.clients.producer.ProducerRecord.<init>(Ljava/lang/String;Ljava/lang/Integer;Ljava/lang/Long;Ljava/lang/Object;Ljava/lang/Object;)V
at 
org.apache.kafka.tools.VerifiableProducer.send(VerifiableProducer.java:232)
at 
org.apache.kafka.tools.VerifiableProducer.run(VerifiableProducer.java:462)
at 
org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:500)
{"timestamp":1507711495458,"name":"shutdown_complete"}
{"timestamp":1507711495459,"name":"tool_data","sent":0,"acked":0,"target_throughput":1,"avg_throughput":0.0}
amehta-macbook-pro:worker6 apurva$
{noformat}

A trivial fix is to only use the new constructor if a message create time is 
explicitly passed to the VerifiableProducer, since older versions of the test 
would never use it.
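
A hedged sketch of that workaround (helper name hypothetical, not the actual 
VerifiableProducer code):

{code}
import org.apache.kafka.clients.producer.ProducerRecord;

// Only touch the 5-argument constructor when a create time was explicitly
// supplied, so runs against older client jars never hit the missing method.
static ProducerRecord<String, String> buildRecord(
        String topic, Integer partition, Long createTime, String key, String value) {
    if (createTime != null)
        return new ProducerRecord<>(topic, partition, createTime, key, value);
    return new ProducerRecord<>(topic, partition, key, value);
}
{code}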



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6052) Consumers not polling when isolation.level=read_committed

2017-10-11 Thread Ansel Zandegran (JIRA)
Ansel Zandegran created KAFKA-6052:
--

 Summary: Consumers not polling when isolation.level=read_committed
 Key: KAFKA-6052
 URL: https://issues.apache.org/jira/browse/KAFKA-6052
 Project: Kafka
  Issue Type: Bug
  Components: consumer, producer 
Affects Versions: 0.11.0.0
 Environment: Windows 10. All processes running in embedded mode
Reporter: Ansel Zandegran
 Attachments: logFile.log

I am trying to send a transactional record with exactly-once semantics. These 
are my producer, consumer, and broker setups.
{code}
public void sendWithTTemp(String topic, EHEvent event) {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
              "localhost:9092,localhost:9093,localhost:9094");
    // props.put("bootstrap.servers",
    //           "34.240.248.190:9092,52.50.95.30:9092,52.50.95.30:9092");
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
    props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
    props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
    props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
    props.put("transactional.id", "TID" + transactionId.incrementAndGet());
    props.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, "5000");

    Producer<String, String> producer =
        new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());

    Logger.log(this, "Initializing transaction...");
    producer.initTransactions();
    Logger.log(this, "Initializing done.");

    try {
        Logger.log(this, "Begin transaction...");
        producer.beginTransaction();
        Logger.log(this, "Begin transaction done.");
        Logger.log(this, "Sending events...");
        producer.send(new ProducerRecord<>(topic,
                                           event.getKey().toString(),
                                           event.getValue().toString()));
        Logger.log(this, "Sending events done.");
        Logger.log(this, "Committing...");
        producer.commitTransaction();
        Logger.log(this, "Committing done.");
    } catch (ProducerFencedException | OutOfOrderSequenceException
             | AuthorizationException e) {
        producer.close();
        e.printStackTrace();
    } catch (KafkaException e) {
        producer.abortTransaction();
        e.printStackTrace();
    }

    producer.close();
}
{code}

*In Consumer*
I have set isolation.level=read_committed
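
Roughly like this (a minimal sketch of the consumer side; the group id and 
topic name are placeholders):

{code}
Properties cprops = new Properties();
cprops.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
           "localhost:9092,localhost:9093,localhost:9094");
cprops.put(ConsumerConfig.GROUP_ID_CONFIG, "eh-events");
cprops.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
cprops.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

Consumer<String, String> consumer =
    new KafkaConsumer<>(cprops, new StringDeserializer(), new StringDeserializer());
consumer.subscribe(Collections.singletonList(topic));
ConsumerRecords<String, String> records = consumer.poll(1000); // comes back empty
{code}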
*In 3 Brokers*
I'm running with the following properties
{code}
Properties props = new Properties();
props.setProperty("broker.id", "" + i);
props.setProperty("listeners", "PLAINTEXT://:909" + (2 + i));
props.setProperty("log.dirs", Configuration.KAFKA_DATA_PATH + "\\B" + i);
props.setProperty("num.partitions", "1");
props.setProperty("zookeeper.connect", "localhost:2181");
props.setProperty("zookeeper.connection.timeout.ms", "6000");
props.setProperty("min.insync.replicas", "2");
props.setProperty("offsets.topic.replication.factor", "2");
props.setProperty("offsets.topic.num.partitions", "1");
props.setProperty("transaction.state.log.num.partitions", "2");
props.setProperty("transaction.state.log.replication.factor", "2");
props.setProperty("transaction.state.log.min.isr", "2");
{code}

I am not getting any records in the consumer. When I set 
isolation.level=read_uncommitted, I get the records. I assume that the records 
are not getting committed. What could be the problem? Log attached.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4056: KAFKA-6051 Close the ReplicaFetcherBlockingSend ea...

2017-10-11 Thread mayt
GitHub user mayt opened a pull request:

https://github.com/apache/kafka/pull/4056

KAFKA-6051 Close the ReplicaFetcherBlockingSend earlier on shutdown

Rearranged the testAddPartitionDuringDeleteTopic() test to preserve the
likelihood of the race condition.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mayt/kafka KAFKA-6051

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4056.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4056


commit 36c1fa6ca3bab4dc070910cba9223f4141982d82
Author: Maytee Chinavanichkit 
Date:   2017-10-11T10:35:54Z

KAFKA-6051 Close the ReplicaFetcherBlockingSend earlier on shutdown

Rearranged the testAddPartitionDuringDeleteTopic() test to preserve the
likelihood of the race condition.




---


[jira] [Created] (KAFKA-6051) ReplicaFetcherThread should close the ReplicaFetcherBlockingSend earlier on shutdown

2017-10-11 Thread Maytee Chinavanichkit (JIRA)
Maytee Chinavanichkit created KAFKA-6051:


 Summary: ReplicaFetcherThread should close the 
ReplicaFetcherBlockingSend earlier on shutdown
 Key: KAFKA-6051
 URL: https://issues.apache.org/jira/browse/KAFKA-6051
 Project: Kafka
  Issue Type: Bug
Reporter: Maytee Chinavanichkit


The ReplicaFetcherBlockingSend works as designed and will block until it is 
able to get data. This becomes a problem when we are gracefully shutting down a 
broker. The controller will attempt to shut down the fetchers and elect new 
leaders. When the last fetched partition is removed as part of the 
{{replicaManager.becomeLeaderOrFollower}} call, the broker will proceed to shut 
down any idle ReplicaFetcherThread. The shutdown process here can block until 
the last fetch request completes. This blocking delay is a big problem because 
the {{replicaStateChangeLock}} and {{mapLock}} in {{AbstractFetcherManager}} 
are still held, causing latency spikes on multiple brokers.

At this point in time, we do not need the last response as the fetcher is 
shutting down. We should close the leaderEndpoint early during 
{{initiateShutdown()}} instead of after {{super.shutdown()}}.
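
A sketch of what that change could look like in ReplicaFetcherThread, based on 
the description above (the exact merged patch may differ):

{code}
// Sketch only: assumes leaderEndpoint is the ReplicaFetcherBlockingSend
// used by this fetcher thread, per the description above.
override def initiateShutdown(): Boolean = {
  val justShutdown = super.initiateShutdown()
  // Close the blocking send as soon as shutdown is initiated, so an
  // in-flight fetch request cannot delay the rest of the shutdown sequence.
  if (justShutdown)
    leaderEndpoint.close()
  justShutdown
}
{code}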


For example, we can see here that the shutdown blocked the broker from 
processing further replica changes for ~500 ms:

{code}
[2017-09-01 18:11:42,879] INFO [ReplicaFetcherThread-0-2], Shutting down 
(kafka.server.ReplicaFetcherThread) 
[2017-09-01 18:11:43,314] INFO [ReplicaFetcherThread-0-2], Stopped 
(kafka.server.ReplicaFetcherThread) 
[2017-09-01 18:11:43,314] INFO [ReplicaFetcherThread-0-2], Shutdown completed 
(kafka.server.ReplicaFetcherThread)
{code}





[GitHub] kafka pull request #4055: MINOR: Update `config/consumer.properties` to have...

2017-10-11 Thread omkreddy
GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/4055

MINOR: Update `config/consumer.properties` to have new consumer properties



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka update-consumer-props

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4055.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4055


commit c5f3ef17fee9eaae47e0bfa66125e806dfd4dfaf
Author: Manikumar Reddy 
Date:   2017-10-11T09:30:27Z

MINOR: Update `config/consumer.properties` to have new consumer properties




---


[jira] [Created] (KAFKA-6050) Cannot alter default topic config

2017-10-11 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6050:
--

 Summary: Cannot alter default topic config
 Key: KAFKA-6050
 URL: https://issues.apache.org/jira/browse/KAFKA-6050
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley


The command to describe the default topic config
{noformat}
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --describe --entity-type topics --entity-name ''
{noformat}

returns without error, but the equivalent command to alter the default topic 
config:

{noformat}
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name '' --add-config retention.ms=1000
{noformat}

returns an error:

{noformat}
Error while executing config command Topic name "" is illegal, it 
contains a character other than ASCII alphanumerics, '.', '_' and '-'
org.apache.kafka.common.errors.InvalidTopicException: Topic name "" is 
illegal, it contains a character other than ASCII alphanumerics, '.', '_' and 
'-'
at org.apache.kafka.common.internals.Topic.validate(Topic.java:45)
at kafka.admin.AdminUtils$.validateTopicConfig(AdminUtils.scala:578)
at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:595)
at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:52)
at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:63)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:103)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:70)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
{noformat}





Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-11 Thread Ted Yu
+1

On Mon, Oct 2, 2017 at 10:51 PM, Paolo Patierno  wrote:

> Hi all,
>
> I didn't see any further discussion around this KIP, so I'd like to start
> the vote for it.
>
> Just for reference : 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-204+%3A+adding+records+deletion+operation+to+the+new+Admin+Client+API
>
>
> Thanks,
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>


Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-11 Thread Paolo Patierno
Hi all,


Since I started the vote on KIP-204 on October 3rd, I haven't seen any votes 
on it. I know you are busy with the 1.0.0 release; this is just a reminder. 


Thanks.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Paolo Patierno 
Sent: Tuesday, October 3, 2017 5:51 AM
To: dev@kafka.apache.org
Subject: [VOTE] KIP-204 : adding records deletion operation to the new Admin 
Client API

Hi all,

I didn't see any further discussion around this KIP, so I'd like to start the 
vote for it.

Just for reference : 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-204+%3A+adding+records+deletion+operation+to+the+new+Admin+Client+API


Thanks,

Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


[VOTE] KIP-201: Rationalising policy interfaces

2017-10-11 Thread Tom Bentley
I would like to start a vote on KIP-201, which proposes to replace the
existing policy interfaces with a single new policy interface that also
extends policy support to cover new and existing APIs in the AdminClient.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-201%3A+Rationalising+Policy+interfaces

Thanks for your time.

Tom