Why is this feature in the release notes?

   - [KAFKA-264 <https://issues.apache.org/jira/browse/KAFKA-264>] - Change
   the consumer side load balancing and distributed co-ordination to use a
   consumer co-ordinator

I thought this was already done in 2015.

On Thu, Oct 6, 2016 at 4:55 PM, Vahid S Hashemian <vahidhashem...@us.ibm.com> wrote:

> Jason,
>
> Thanks a lot for managing this release.
>
> I ran the quick start (Steps 2-8) with this release candidate on Ubuntu,
> Windows, and Mac, and the results mostly look great.
> Below are some hopefully minor items and gaps I noticed with respect to
> the existing quick start documentation (and the updated quick start that
> leverages the new consumer).
> They may very well be carryovers from previous releases, or perhaps
> specific to my local environments.
> Hopefully others can confirm.
>
>
> Windows
>
> Since there are separate scripts for the Windows platform, it would
> probably help to clarify that in the quick start section, e.g. "On
> Windows, replace `bin/` with `bin\windows\` (and the `.sh` suffix with
> `.bat`)". Or even have a separate quick start for Windows, since a number
> of the commands are different on Windows.
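> For example, Step 3's topic creation (command paraphrased from the quick
> start) would go from
>
>   bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
>
> on Linux/Mac to
>
>   bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
>
> on Windows.
>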
> There is no `connect-standalone.sh` equivalent for Windows under the
> bin\windows folder (Step 7).
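> For reference, the Step 7 command in question is roughly this (properties
> file names as I recall them from the quick start):
>
>   bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
>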
> Step 8 is also not tailored for Windows terminals. I skipped this step.
> When I try to consume messages using the new consumer (Step 5), I get an
> exception on the broker side. The old consumer works fine.
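> These are roughly the two invocations I compared (topic name assumed to
> be `test`, as in the quick start):
>
>   bin\windows\kafka-console-consumer.bat --zookeeper localhost:2181 --topic test --from-beginning
>   bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning
>
> The first (old consumer) works; the second (new consumer) triggers this
> broker-side exception: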
>
> java.io.IOException: Map failed
>         at sun.nio.ch.FileChannelImpl.map(Unknown Source)
>         at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:61)
>         at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:51)
>         at kafka.log.LogSegment.<init>(LogSegment.scala:67)
>         at kafka.log.Log.loadSegments(Log.scala:255)
>         at kafka.log.Log.<init>(Log.scala:108)
>         at kafka.log.LogManager.createLog(LogManager.scala:362)
>         at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:94)
>         at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
>         at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
>         at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>         at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:174)
>         at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:168)
>         at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
>         at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:242)
>         at kafka.cluster.Partition.makeLeader(Partition.scala:168)
>         at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:740)
>         at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:739)
>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>         at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
>         at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:739)
>         at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:685)
>         at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:148)
>         at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
>         at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>         at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.OutOfMemoryError: Map failed
>         at sun.nio.ch.FileChannelImpl.map0(Native Method)
>         ... 29 more
>
> This issue seems to break the broker, and I have to clear out the log
> directories before I can bring the broker back up again.
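> In case it helps narrow this down: the failing frame is the memory-mapped
> index file in kafka.log.AbstractIndex, so one guess (unconfirmed) is that
> the JVM cannot allocate the mapped region, e.g. due to address-space
> limits with a 32-bit JRE. Two experiments I may try are a 64-bit JVM, and
> shrinking the mapped index size in config/server.properties:
>
>   # hypothetical workaround only, not a recommendation; the default is 10485760 (10 MB)
>   log.index.size.max.bytes=1048576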
>
>
> Ubuntu / Mac
>
> At Step 8, the output I'm seeing after going through the instructions in
> sequence is this (each word appearing only once):
>
> all     1
> lead    1
> to      1
> hello   1
> streams 2
> join    1
> kafka   3
> summit  1
>
> which is different from what I see in the documentation (where words repeat).
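>
> For context, I read the output topic roughly like this (deserializer
> properties as I recall them from the Streams quick start):
>
>   bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
>       --topic streams-wordcount-output --from-beginning \
>       --property print.key=true \
>       --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
>       --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer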
>
>
> --Vahid
>
>
>
>
> From:   Jason Gustafson <ja...@confluent.io>
> To:     users@kafka.apache.org, d...@kafka.apache.org, kafka-clients
> <kafka-clie...@googlegroups.com>
> Date:   10/04/2016 04:13 PM
> Subject:        Re: [VOTE] 0.10.1.0 RC0
>
>
>
> One clarification: this is a minor release, not a major one.
>
> -Jason
>
> On Tue, Oct 4, 2016 at 4:01 PM, Jason Gustafson <ja...@confluent.io> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 0.10.1.0. This is
> > a major release that includes great new features including throttled
> > replication, secure quotas, time-based log searching, and queryable state
> > for Kafka Streams. A full list of the content can be found here:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.1.
> > Since this is a major release, we will give people more time to try it
> > out and give feedback.
> >
> > Release notes for the 0.10.1.0 release:
> > http://home.apache.org/~jgus/kafka-0.10.1.0-rc0/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Monday, Oct 10, 9am PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~jgus/kafka-0.10.1.0-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~jgus/kafka-0.10.1.0-rc0/javadoc/
> >
> > * Tag to be voted upon (off 0.10.1 branch) is the 0.10.1.0 tag:
> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b86130bad1a1a4a3d1dbe5c486977e6968b3ebc6
> >
> > * Documentation:
> > http://kafka.apache.org/0101/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/0101/protocol.html
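> >
> > For anyone verifying the downloaded artifacts against the KEYS file above,
> > a rough sketch (the exact artifact file name may differ; this assumes the
> > Scala 2.11 binary tarball):
> >
> >   gpg --import KEYS
> >   gpg --verify kafka_2.11-0.10.1.0.tgz.asc kafka_2.11-0.10.1.0.tgz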
> >
> > Note that integration/system testing on Jenkins has been a major problem
> > this release cycle. In order to validate this RC, we need to get these
> > tests stable again. Any help we can get from the community will be
> > greatly appreciated.
> >
> > Thanks,
> >
> > Jason
> >
>
