Re: Kafka-Spark Integration - build failing with sbt

2017-06-17 Thread karan alang
Hey Jozef,

Thanks for the quick response.
Yes, you are right: the spark-sql dependency was missing. I added it and it
worked fine.
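
For anyone searching the archives: the added line in build.sbt would look
roughly like this (a sketch only; sparkVers is the "2.1.0" value defined in
the build file quoted below, and whether "provided" is right depends on how
the job is deployed):

// build.sbt (sketch): spark-sql alongside the existing Spark dependencies
libraryDependencies += "org.apache.spark" %% "spark-sql" % sparkVers % "provided"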

regds,
Karan Alang

On Sat, Jun 17, 2017 at 2:24 PM, Jozef.koval 
wrote:

> Hey Karan,
> I believe you are missing the spark-sql dependency.
>
> Jozef
>
> Sent from ProtonMail , encrypted email based in
> Switzerland.
>
>
>  Original Message 
> Subject: Re: Kafka-Spark Integration - build failing with sbt
> Local Time: June 17, 2017 10:52 PM
> UTC Time: June 17, 2017 8:52 PM
> From: karan.al...@gmail.com
> To: users@kafka.apache.org, Jozef.koval 
>
>
> Thanks, I was able to get this working.
> Here is what I added in the build.sbt file:
> 
> --
>
> scalaVersion := "2.11.7"
>
> val sparkVers = "2.1.0"
>
> // Base Spark-provided dependencies
>
> libraryDependencies ++= Seq(
>
>   "org.apache.spark" %% "spark-core" % sparkVers % "provided",
>
>   "org.apache.spark" %% "spark-streaming" % sparkVers % "provided",
>
>   "org.apache.spark" % "spark-streaming-kafka-0-8_2.11" % sparkVers)
>
> 
> 
>
> However, I'm running into an additional issue when compiling the file using
> sbt; it gives the errors shown below.
>
> Please note: I've added the jars in Eclipse and it works; however, when I
> use sbt to compile, it fails.
>
> What needs to be done?
>
> I've also tried adding the jars to the CLASSPATH, but I still get the same
> error.
>
> 
> -
>
> [info] Compiling 1 Scala source to /Users/karanalang/Documents/
> Technology/Coursera_spark_scala/spark_kafka_code/target/
> scala-2.11/classes...
>
> [error] /Users/karanalang/Documents/Technology/Coursera_spark_
> scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:6:
> object SQLContext is not a member of package org.apache.spark.sql
>
> [error] import org.apache.spark.sql.SQLContext
>
> [error]^
>
> [error] /Users/karanalang/Documents/Technology/Coursera_spark_
> scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:10:
> object SparkSession is not a member of package org.apache.spark.sql
>
> [error] import org.apache.spark.sql.SparkSession
>
> [error]^
>
> [error] /Users/karanalang/Documents/Technology/Coursera_spark_
> scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:11:
> object types is not a member of package org.apache.spark.sql
>
> [error] import org.apache.spark.sql.types.StructField
>
> [error] ^
>
> [error] /Users/karanalang/Documents/Technology/Coursera_spark_
> scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:12:
> object types is not a member of package org.apache.spark.sql
>
> [error] import org.apache.spark.sql.types.StringType
>
> [error] ^
>
> [error] /Users/karanalang/Documents/Technology/Coursera_spark_
> scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:13:
> object types is not a member of package org.apache.spark.sql
>
> [error] import org.apache.spark.sql.types.StructType
>
> [error] ^
>
> [error] /Users/karanalang/Documents/Technology/Coursera_spark_
> scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:14:
> object Row is not a member of package org.apache.spark.sql
>
> [error] import org.apache.spark.sql.Row
>
> [error]^
>
>
>
> On Sat, Jun 17, 2017 at 12:35 AM, Jozef.koval 
> wrote:
>
>> Hi Karan,
>>
>> spark-streaming-kafka is the artifact for old Spark (version < 1.6.3)
>> spark-streaming-kafka-0-8 is the artifact for current Spark (version >= 2.0)
>>
>> Jozef
>>
>> n.b. there is also a version for Kafka 0.10+; see [this](
>> https://spark.apache.org/docs/latest/streaming-kafka-integration.html)
>>
>> Sent from [ProtonMail](https://protonmail.ch), encrypted email based in
>> Switzerland.
>>
>>
>>  Original Message 
>> Subject: Kafka-Spark Integration - build failing with sbt
>> Local Time: June 17, 2017 1:50 AM
>> UTC Time: June 16, 2017 11:50 PM
>> From: karan.al...@gmail.com
>> To: users@kafka.apache.org
>>
>> I'm trying to compile Kafka & Spark Streaming integration code, i.e.
>> reading from Kafka using Spark Streaming,
>> and the sbt build is failing with this error -
>>
>> [error] (*:update) sbt.ResolveException: unresolved dependency:
>> org.apache.spark#spark-streaming-kafka_2.11;2.1.0: not found
>>
>> Scala version -> 2.10.7
>> Spark Version -> 2.1.0
>> Kafka version -> 0.9
>> sbt version -> 0.13
>>
>> Contents of the sbt files are shown below ->
>>
>> 1)
>> vi spark_kafka_code/project/plugins.sbt
>>
>> addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")
>>
>> 2)
>> vi spark_kafka_code/sparkkafka.sbt
>>
>> import AssemblyKeys._
>> assemblySettings
>>
>> name := "SparkKafka Project"
>>
>> ve

Re: Kafka-Spark Integration - build failing with sbt

2017-06-17 Thread Jozef.koval
Hey Karan,
I believe you are missing the spark-sql dependency.

Jozef

Sent from [ProtonMail](https://protonmail.ch), encrypted email based in 
Switzerland.

 Original Message 
Subject: Re: Kafka-Spark Integration - build failing with sbt
Local Time: June 17, 2017 10:52 PM
UTC Time: June 17, 2017 8:52 PM
From: karan.al...@gmail.com
To: users@kafka.apache.org, Jozef.koval 

Thanks, I was able to get this working.
Here is what I added in the build.sbt file:
--

scalaVersion := "2.11.7"

val sparkVers = "2.1.0"

// Base Spark-provided dependencies

libraryDependencies ++= Seq(

"org.apache.spark" %% "spark-core" % sparkVers % "provided",

"org.apache.spark" %% "spark-streaming" % sparkVers % "provided",

"org.apache.spark" % "spark-streaming-kafka-0-8_2.11" % sparkVers)



However, I'm running into an additional issue when compiling the file using
sbt; it gives the errors shown below.

Please note: I've added the jars in Eclipse and it works; however, when I use
sbt to compile, it fails.

What needs to be done?

I've also tried adding the jars to the CLASSPATH, but I still get the same error.

-

[info] Compiling 1 Scala source to 
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/target/scala-2.11/classes...

[error] 
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:6:
 object SQLContext is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.SQLContext

[error] ^

[error] 
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:10:
 object SparkSession is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.SparkSession

[error] ^

[error] 
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:11:
 object types is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.types.StructField

[error] ^

[error] 
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:12:
 object types is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.types.StringType

[error] ^

[error] 
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:13:
 object types is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.types.StructType

[error] ^

[error] 
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:14:
 object Row is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.Row

[error] ^

On Sat, Jun 17, 2017 at 12:35 AM, Jozef.koval  wrote:
Hi Karan,

spark-streaming-kafka is the artifact for old Spark (version < 1.6.3)
spark-streaming-kafka-0-8 is the artifact for current Spark (version >= 2.0)

Jozef

n.b. there is also a version for Kafka 0.10+; see 
[this](https://spark.apache.org/docs/latest/streaming-kafka-integration.html)

Sent from [ProtonMail](https://protonmail.ch), encrypted email based in 
Switzerland.

 Original Message 
Subject: Kafka-Spark Integration - build failing with sbt
Local Time: June 17, 2017 1:50 AM
UTC Time: June 16, 2017 11:50 PM
From: karan.al...@gmail.com
To: users@kafka.apache.org

I'm trying to compile Kafka & Spark Streaming integration code, i.e. reading
from Kafka using Spark Streaming,
and the sbt build is failing with this error -

[error] (*:update) sbt.ResolveException: unresolved dependency:
org.apache.spark#spark-streaming-kafka_2.11;2.1.0: not found

Scala version -> 2.10.7
Spark Version -> 2.1.0
Kafka version -> 0.9
sbt version -> 0.13

Contents of the sbt files are shown below ->

1)
vi spark_kafka_code/project/plugins.sbt

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")

2)
vi spark_kafka_code/sparkkafka.sbt

import AssemblyKeys._
assemblySettings

name := "SparkKafka Project"

version := "1.0"
scalaVersion := "2.11.7"

val sparkVers = "2.1.0"

// Base Spark-provided dependencies
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % sparkVers % "provided",
"org.apache.spark" %% "spark-streaming" % sparkVers % "provided",
"org.apache.spark" %% "spark-streaming-kafka" % sparkVers)

mergeStrategy in assembly := {
case m if m.toLowerCase.endsWith("manifest.mf") => MergeStrategy.discard
case m if m.toLowerCase.startsWith("META-INF") => MergeStrategy.discard
case "reference.conf" => MergeStrategy.concat
case m if m.endsWith("UnusedStubClass.class") => MergeStrategy.discard
case _ => MergeStrategy.fi

Scala type mismatch after upgrade to 0.10.2.1

2017-06-17 Thread Björn Häuser
Hi!

I am maintaining an application which is built on Kafka and uses the
kafka-streams library.

As the subject says, after trying to upgrade from 0.10.1.1 to 0.10.2.1, I am
getting the following compilation error:

[error]  found   : service.streams.transformers.FilterMainCoverSupplier
[error]  required: org.apache.kafka.streams.kstream.TransformerSupplier[_ >: 
String, _ >: ?0(in value x$1), org.apache.kafka.streams.KeyValue[?,?]]
[error] Note: String <: Any (and 
service.streams.transformers.FilterMainCoverSupplier <: 
org.apache.kafka.streams.kstream.TransformerSupplier[String,dto.ContentDataDto,org.apache.kafka.streams.KeyValue[String,dto.ContentDataDto]]),
 but Java-defined trait TransformerSupplier is invariant in type K.
[error] You may wish to investigate a wildcard type such as `_ <: Any`. (SLS 
3.2.10)
[error] Note: dto.ContentDataDto <: Any (and 
service.streams.transformers.FilterMainCoverSupplier <: 
org.apache.kafka.streams.kstream.TransformerSupplier[String,dto.ContentDataDto,org.apache.kafka.streams.KeyValue[String,dto.ContentDataDto]]),
 but Java-defined trait TransformerSupplier is invariant in type V.
[error] You may wish to investigate a wildcard type such as `_ <: Any`. (SLS 
3.2.10)
[error]   .transform(filterMainCover, 
FilterMainCoverSupplier.StateStoreName)

The definition of the Transformer is as follows:

class FilterMainCover extends Transformer[String, ContentDataDto, 
KeyValue[String, ContentDataDto]] {
}

The definition of the TransformerSupplier is as follows:

class FilterMainCoverSupplier extends TransformerSupplier[String, 
ContentDataDto, KeyValue[String, ContentDataDto]] {

  override def get(): Transformer[String, ContentDataDto, KeyValue[String, 
ContentDataDto]] = new FilterMainCover()

}


I went through the Confluent examples and could see that it is supposed to just
work. Anyone got an idea what I am doing wrong?
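
For reference, a workaround that is sometimes suggested for this kind of
Java-wildcard inference failure is to spell out the type parameters (and the
supplier's static type) explicitly, so scalac does not have to infer them
through the wildcards. A rough, untested sketch using the names from the
error output above ("stream" is assumed to be a KStream[String, ContentDataDto]):

import org.apache.kafka.streams.KeyValue
import org.apache.kafka.streams.kstream.{KStream, TransformerSupplier}

// pin the supplier's type instead of letting scalac infer it through the wildcards
val filterMainCover: TransformerSupplier[String, ContentDataDto, KeyValue[String, ContentDataDto]] =
  new FilterMainCoverSupplier()

// pass transform's type parameters explicitly
val filtered: KStream[String, ContentDataDto] =
  stream.transform[String, ContentDataDto](filterMainCover, FilterMainCoverSupplier.StateStoreName)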

Thanks
Björn



Re: Kafka-Spark Integration - build failing with sbt

2017-06-17 Thread karan alang
Thanks, I was able to get this working.
Here is what I added in the build.sbt file:
--

scalaVersion := "2.11.7"

val sparkVers = "2.1.0"

// Base Spark-provided dependencies

libraryDependencies ++= Seq(

  "org.apache.spark" %% "spark-core" % sparkVers % "provided",

  "org.apache.spark" %% "spark-streaming" % sparkVers % "provided",

  "org.apache.spark" % "spark-streaming-kafka-0-8_2.11" % sparkVers)



However, I'm running into an additional issue when compiling the file using
sbt; it gives the errors shown below.

Please note: I've added the jars in Eclipse and it works; however, when I
use sbt to compile, it fails.

What needs to be done?

I've also tried adding the jars to the CLASSPATH, but I still get the same
error.

-

[info] Compiling 1 Scala source to
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/target/scala-2.11/classes...

[error]
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:6:
object SQLContext is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.SQLContext

[error]^

[error]
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:10:
object SparkSession is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.SparkSession

[error]^

[error]
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:11:
object types is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.types.StructField

[error] ^

[error]
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:12:
object types is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.types.StringType

[error] ^

[error]
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:13:
object types is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.types.StructType

[error] ^

[error]
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/src/main/scala/spark/kafka/SparkKafkaDS.scala:14:
object Row is not a member of package org.apache.spark.sql

[error] import org.apache.spark.sql.Row

[error]^



On Sat, Jun 17, 2017 at 12:35 AM, Jozef.koval 
wrote:

> Hi Karan,
>
> spark-streaming-kafka is the artifact for old Spark (version < 1.6.3)
> spark-streaming-kafka-0-8 is the artifact for current Spark (version >= 2.0)
>
> Jozef
>
> n.b. there is also a version for Kafka 0.10+; see [this](
> https://spark.apache.org/docs/latest/streaming-kafka-integration.html)
>
> Sent from [ProtonMail](https://protonmail.ch), encrypted email based in
> Switzerland.
>
>  Original Message 
> Subject: Kafka-Spark Integration - build failing with sbt
> Local Time: June 17, 2017 1:50 AM
> UTC Time: June 16, 2017 11:50 PM
> From: karan.al...@gmail.com
> To: users@kafka.apache.org
>
> I'm trying to compile Kafka & Spark Streaming integration code, i.e. reading
> from Kafka using Spark Streaming,
> and the sbt build is failing with this error -
>
> [error] (*:update) sbt.ResolveException: unresolved dependency:
> org.apache.spark#spark-streaming-kafka_2.11;2.1.0: not found
>
> Scala version -> 2.10.7
> Spark Version -> 2.1.0
> Kafka version -> 0.9
> sbt version -> 0.13
>
> Contents of the sbt files are shown below ->
>
> 1)
> vi spark_kafka_code/project/plugins.sbt
>
> addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")
>
> 2)
> vi spark_kafka_code/sparkkafka.sbt
>
> import AssemblyKeys._
> assemblySettings
>
> name := "SparkKafka Project"
>
> version := "1.0"
> scalaVersion := "2.11.7"
>
> val sparkVers = "2.1.0"
>
> // Base Spark-provided dependencies
> libraryDependencies ++= Seq(
> "org.apache.spark" %% "spark-core" % sparkVers % "provided",
> "org.apache.spark" %% "spark-streaming" % sparkVers % "provided",
> "org.apache.spark" %% "spark-streaming-kafka" % sparkVers)
>
> mergeStrategy in assembly := {
> case m if m.toLowerCase.endsWith("manifest.mf") => MergeStrategy.discard
> case m if m.toLowerCase.startsWith("META-INF") => MergeStrategy.discard
> case "reference.conf" => MergeStrategy.concat
> case m if m.endsWith("UnusedStubClass.class") => MergeStrategy.discard
> case _ => MergeStrategy.first
> }
>
> i launch sbt, and then try to create an eclipse project, complete error
> is as shown below -
>
> -
>
> sbt
> [info] Loading global plugins 

Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-17 Thread Viktor Somogyi
Got it, thanks Hans!

On Sat, Jun 17, 2017 at 11:11 AM, Hans Jespersen  wrote:

>
> Offset commit is something that is done in the act of consuming (or
> reading) Kafka messages.
> Yes, technically it is a write to the Kafka consumer offset topic, but it's
> much easier for administrators to think of ACLs in terms of whether the
> user is allowed to write (Produce) or read (Consume) messages, and not the
> lower-level semantics, namely that consuming is actually reading AND
> writing (albeit only to the offset topic).
>
> -hans
>
>
>
>
> > On Jun 17, 2017, at 10:59 AM, Viktor Somogyi <
> viktor.somo...@cloudera.com> wrote:
> >
> > Hi Vahid,
> >
> > +1 for OffsetFetch from me too.
> >
> > I also wanted to ask about the strangeness of the permissions, like why
> > OffsetCommit is a Read operation instead of Write, which would intuitively
> > make more sense to me. Perhaps an expert could shed some light on this? :)
> >
> > Viktor
> >
> > On Tue, Jun 13, 2017 at 2:38 PM, Vahid S Hashemian <
> > vahidhashem...@us.ibm.com > wrote:
> >
> >> Hi Michal,
> >>
> >> Thanks a lot for your feedback.
> >>
> >> Your statement about Heartbeat is fair and makes sense. I'll update the
> >> KIP accordingly.
> >>
> >> --Vahid
> >>
> >>
> >>
> >>
> >> From:Michal Borowiecki 
> >> To:users@kafka.apache.org, Vahid S Hashemian <
> >> vahidhashem...@us.ibm.com>, d...@kafka.apache.org
> >> Date:06/13/2017 01:35 AM
> >> Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
> >> Permission of OffsetFetch
> >> --
> >>
> >>
> >>
> >> Hi Vahid,
> >>
> >> +1 wrt OffsetFetch.
> >>
> >> The "Additional Food for Thought" mentions Heartbeat as a non-mutating
> >> action. I don't think that's true as the GroupCoordinator updates the
> >> latestHeartbeat field for the member and adds a new object to the
> >> heartbeatPurgatory, see completeAndScheduleNextHeartbeatExpiration()
> >> called from handleHeartbeat()
> >>
> >> NB added dev mailing list back into CC as it seems to have been lost
> along
> >> the way.
> >>
> >> Cheers,
> >>
> >> Michał
> >>
> >>
> >> On 12/06/17 18:47, Vahid S Hashemian wrote:
> >> Hi Colin,
> >>
> >> Thanks for the feedback.
> >>
> >> To be honest, I'm not sure either why Read was selected instead of Write
> >> for mutating APIs in the initial design (I asked Ewen on the
> corresponding
> >> JIRA and he seemed unsure too).
> >> Perhaps someone who was involved in the design can clarify.
> >>
> >> Thanks.
> >> --Vahid
> >>
> >>
> >>
> >>
> >> From:   Colin McCabe 
> >> To: users@kafka.apache.org
> >> Date:   06/12/2017 10:11 AM
> >> Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
> >> Permission of OffsetFetch
> >>
> >>
> >>
> >> Hi Vahid,
> >>
> >> I think you make a valid point that the ACLs controlling group
> >> operations are not very intuitive.
> >>
> >> This is probably a dumb question, but why are we using Read for mutating
> >> APIs?  Shouldn't that be Write?
> >>
> >> The distinction between Describe and Read makes a lot of sense for
> >> Topics.  A group isn't really something that you "read" from in the same
> >> way as a topic, so it always felt kind of weird there.
> >>
> >> best,
> >> Colin
> >>
> >>
> >> On Thu, Jun 8, 2017, at 11:29, Vahid S Hashemian wrote:
> >>
> >> Hi all,
> >>
> >> I'm resending my earlier note hoping it would spark some conversation
> >> this
> >> time around :)
> >>
> >> Thanks.
> >> --Vahid
> >>
> >>
> >>
> >>
> >> From:   "Vahid S Hashemian" 
> >> To: dev , "Kafka User" 
> >>
> >> Date:   05/30/2017 08:33 AM
> >> Subject:KIP-163: Lower the Minimum Required ACL Permission of
> >> OffsetFetch
> >>
> >>
> >>
> >> Hi,
> >>
> >> I started a new KIP to improve the minimum required ACL permissions of
> >> some of the APIs:
> >>
> >>
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch
> >>
> >>
> >>
> >> The KIP is to address KAFKA-4585.
> >>
> >> Feedback and suggestions are welcome!
> >>
> >> Thanks.
> >> --Vahid
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> --

Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-17 Thread Hans Jespersen

Offset commit is something that is done in the act of consuming (or reading)
Kafka messages.
Yes, technically it is a write to the Kafka consumer offset topic, but it's
much easier for administrators to think of ACLs in terms of whether the user
is allowed to write (Produce) or read (Consume) messages, and not the
lower-level semantics, namely that consuming is actually reading AND writing
(albeit only to the offset topic).

-hans




> On Jun 17, 2017, at 10:59 AM, Viktor Somogyi  
> wrote:
> 
> Hi Vahid,
> 
> +1 for OffsetFetch from me too.
> 
> I also wanted to ask about the strangeness of the permissions, like why
> OffsetCommit is a Read operation instead of Write, which would intuitively
> make more sense to me. Perhaps an expert could shed some light on this? :)
> 
> Viktor
> 
> On Tue, Jun 13, 2017 at 2:38 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com > wrote:
> 
>> Hi Michal,
>> 
>> Thanks a lot for your feedback.
>> 
>> Your statement about Heartbeat is fair and makes sense. I'll update the
>> KIP accordingly.
>> 
>> --Vahid
>> 
>> 
>> 
>> 
>> From:Michal Borowiecki 
>> To:users@kafka.apache.org, Vahid S Hashemian <
>> vahidhashem...@us.ibm.com>, d...@kafka.apache.org
>> Date:06/13/2017 01:35 AM
>> Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
>> Permission of OffsetFetch
>> --
>> 
>> 
>> 
>> Hi Vahid,
>> 
>> +1 wrt OffsetFetch.
>> 
>> The "Additional Food for Thought" mentions Heartbeat as a non-mutating
>> action. I don't think that's true as the GroupCoordinator updates the
>> latestHeartbeat field for the member and adds a new object to the
>> heartbeatPurgatory, see completeAndScheduleNextHeartbeatExpiration()
>> called from handleHeartbeat()
>> 
>> NB added dev mailing list back into CC as it seems to have been lost along
>> the way.
>> 
>> Cheers,
>> 
>> Michał
>> 
>> 
>> On 12/06/17 18:47, Vahid S Hashemian wrote:
>> Hi Colin,
>> 
>> Thanks for the feedback.
>> 
>> To be honest, I'm not sure either why Read was selected instead of Write
>> for mutating APIs in the initial design (I asked Ewen on the corresponding
>> JIRA and he seemed unsure too).
>> Perhaps someone who was involved in the design can clarify.
>> 
>> Thanks.
>> --Vahid
>> 
>> 
>> 
>> 
>> From:   Colin McCabe 
>> To: users@kafka.apache.org
>> Date:   06/12/2017 10:11 AM
>> Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
>> Permission of OffsetFetch
>> 
>> 
>> 
>> Hi Vahid,
>> 
>> I think you make a valid point that the ACLs controlling group
>> operations are not very intuitive.
>> 
>> This is probably a dumb question, but why are we using Read for mutating
>> APIs?  Shouldn't that be Write?
>> 
>> The distinction between Describe and Read makes a lot of sense for
>> Topics.  A group isn't really something that you "read" from in the same
>> way as a topic, so it always felt kind of weird there.
>> 
>> best,
>> Colin
>> 
>> 
>> On Thu, Jun 8, 2017, at 11:29, Vahid S Hashemian wrote:
>> 
>> Hi all,
>> 
>> I'm resending my earlier note hoping it would spark some conversation
>> this
>> time around :)
>> 
>> Thanks.
>> --Vahid
>> 
>> 
>> 
>> 
>> From:   "Vahid S Hashemian" 
>> To: dev , "Kafka User" 
>> 
>> Date:   05/30/2017 08:33 AM
>> Subject:KIP-163: Lower the Minimum Required ACL Permission of
>> OffsetFetch
>> 
>> 
>> 
>> Hi,
>> 
>> I started a new KIP to improve the minimum required ACL permissions of
>> some of the APIs:
>> 
>> 
>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch
>> 
>> 
>> 
>> The KIP is to address KAFKA-4585.
>> 
>> Feedback and suggestions are welcome!
>> 
>> Thanks.
>> --Vahid
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> --
>> Michal Borowiecki
>> Senior Software Engineer L4
>> T: +44 208 742 1600 / +44 203 249 8448
>> E: michal.borowie...@openbet.com
>> W: www.openbet.com

Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-17 Thread Viktor Somogyi
Hi Vahid,

+1 for OffsetFetch from me too.

I also wanted to ask about the strangeness of the permissions, like why
OffsetCommit is a Read operation instead of Write, which would intuitively
make more sense to me. Perhaps an expert could shed some light on this? :)

Viktor

On Tue, Jun 13, 2017 at 2:38 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Michal,
>
> Thanks a lot for your feedback.
>
> Your statement about Heartbeat is fair and makes sense. I'll update the
> KIP accordingly.
>
> --Vahid
>
>
>
>
> From:Michal Borowiecki 
> To:users@kafka.apache.org, Vahid S Hashemian <
> vahidhashem...@us.ibm.com>, d...@kafka.apache.org
> Date:06/13/2017 01:35 AM
> Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
> Permission of OffsetFetch
> --
>
>
>
> Hi Vahid,
>
> +1 wrt OffsetFetch.
>
> The "Additional Food for Thought" mentions Heartbeat as a non-mutating
> action. I don't think that's true as the GroupCoordinator updates the
> latestHeartbeat field for the member and adds a new object to the
> heartbeatPurgatory, see completeAndScheduleNextHeartbeatExpiration()
> called from handleHeartbeat()
>
> NB added dev mailing list back into CC as it seems to have been lost along
> the way.
>
> Cheers,
>
> Michał
>
>
> On 12/06/17 18:47, Vahid S Hashemian wrote:
> Hi Colin,
>
> Thanks for the feedback.
>
> To be honest, I'm not sure either why Read was selected instead of Write
> for mutating APIs in the initial design (I asked Ewen on the corresponding
> JIRA and he seemed unsure too).
> Perhaps someone who was involved in the design can clarify.
>
> Thanks.
> --Vahid
>
>
>
>
> From:   Colin McCabe 
> To: users@kafka.apache.org
> Date:   06/12/2017 10:11 AM
> Subject:Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
> Permission of OffsetFetch
>
>
>
> Hi Vahid,
>
> I think you make a valid point that the ACLs controlling group
> operations are not very intuitive.
>
> This is probably a dumb question, but why are we using Read for mutating
> APIs?  Shouldn't that be Write?
>
> The distinction between Describe and Read makes a lot of sense for
> Topics.  A group isn't really something that you "read" from in the same
> way as a topic, so it always felt kind of weird there.
>
> best,
> Colin
>
>
> On Thu, Jun 8, 2017, at 11:29, Vahid S Hashemian wrote:
>
> Hi all,
>
> I'm resending my earlier note hoping it would spark some conversation
> this
> time around :)
>
> Thanks.
> --Vahid
>
>
>
>
> From:   "Vahid S Hashemian" 
> To: dev , "Kafka User" 
>
> Date:   05/30/2017 08:33 AM
> Subject:KIP-163: Lower the Minimum Required ACL Permission of
> OffsetFetch
>
>
>
> Hi,
>
> I started a new KIP to improve the minimum required ACL permissions of
> some of the APIs:
>
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch
>
>
>
> The KIP is to address KAFKA-4585.
>
> Feedback and suggestions are welcome!
>
> Thanks.
> --Vahid
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> --
> Michal Borowiecki
> Senior Software Engineer L4
> T: +44 208 742 1600 / +44 203 249 8448
> E: michal.borowie...@openbet.com
> W: www.openbet.com
> OpenBet Ltd
> Chiswick Park Building 9
> 566 Chiswick High Rd
> London
> W4 5XT
> UK
> 
>
>
>
>


Re: Single Key Aggregation

2017-06-17 Thread Sameer Kumar
Continued from my last mail...

The code snippet that I shared was taken after joining the impression and
notification logs. Here I am picking the line item and concatenating it with
the date. You can also see there is a check for a TARGETED_LINE_ITEM; I am
not emitting the data otherwise.

-Sameer.
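
For illustration, the re-keying fix suggested in the quoted reply below
(setting the category / line item as the key before grouping, so every
instance aggregates the same logical key via the repartition topic) would
look roughly like this. This is only a sketch: AdLog and getItmClmbLId are
taken from the snippet further down, while "logs" (a KStream[String, AdLog])
and "adLogSerde" (a Serde[AdLog]) are assumptions.

import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.kstream.{KTable, KeyValueMapper}

// make the line item / category the record key, then count per key
val countsPerLineItem: KTable[String, java.lang.Long] =
  logs.groupBy[String](
      new KeyValueMapper[String, AdLog, String] {
        override def apply(key: String, value: AdLog): String =
          String.valueOf(value.getAdClickLog.getItmClmbLId)   // line item id taken from the value
      },
      Serdes.String(), adLogSerde)
    .count("li-count-store")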

On Sat, Jun 17, 2017 at 3:55 PM, Sameer Kumar 
wrote:

> The example I gave was just for illustration. I have impression logs and
> notification logs. Notification logs are essentially tied to impressions
> served. An impression would serve multiple items.
>
> I was just trying to aggregate across a single line item, which means I am
> always generating a single key, and since the data is streaming, my counter
> should keep on increasing.
>
> What I saw was that while on Machine1 the counter was 100, on another
> machine it was at 1. I saw that as inconsistent.
>
>
> -Sameer.
>
> On Fri, Jun 16, 2017 at 10:47 PM, Matthias J. Sax 
> wrote:
>
>> I just double checked your example code from an earlier email. There you
>> are using:
>>
>> stream.flatMap(...)
>>   .groupBy((k, v) -> k, Serdes.String(), Serdes.Integer())
>>   .reduce((value1, value2) -> value1 + value2,
>>
>> In your last email, you say that you want to count on a category that is
>> contained in your value, while your key is something different.
>>
>> If you want to count by category, you need to set the category as the key
>> in your .groupBy() -- in your code snippet, you actually don't change the
>> key (that seems to be the root cause of the issue you see).
>>
>> Does this help?
>>
>>
>> -Matthias
>>
>>
>> On 6/15/17 11:48 PM, Sameer Kumar wrote:
>> > OK, let me try to explain it again.
>> >
>> > Let's say my source processor has a different key, and the value it
>> > contains carries an identifier type which basically denotes the
>> > category, and I am counting on different categories. A specific case
>> > would be that I do a filter and output only a specific category, say
>> > cat1. Now, starting from the source processor, which is being processed
>> > across multiple nodes, cat1 would be written across multiple nodes, and
>> > since the state store is local, it will be starting from 1 on each of
>> > the nodes.
>> >
>> > Does that clarify the doubt? I think after a processor the same gets
>> > written to an intermediate Kafka topic and this should be taken care
>> > of, but this still happens, so I am not sure about it.
>> >
>> > Processor 1
>> > key1, value:{xyzcat1}
>> > key2, value:{xyzcat2}
>> > key3, value:{xyzcat3}
>> >
>> > Processor 2
>> > Machine1
>> > key: cat1, 1
>> >
>> > Machine2
>> > key:cat1,1
>> >
>> >
>> > -Sameer.
>> >
>> > On Thu, Jun 15, 2017 at 10:27 PM, Eno Thereska 
>> > wrote:
>> >
>> >> I'm not sure if I fully understand this but let me check:
>> >>
>> >> - if you start 2 instances, one instance will process half of the
>> >> partitions, the other instance will process the other half
>> >> - for any given key, like key 100, it will only be processed on one of
>> the
>> >> instances, not both.
>> >>
>> >> Does this help?
>> >>
>> >> Eno
>> >>
>> >>
>> >>
>> >>> On 15 Jun 2017, at 07:40, Sameer Kumar 
>> wrote:
>> >>>
>> >>> Also, I am writing a single key in the output all the time. I believe
>> >>> machine2 will have to write a key and since a state store is local it
>> >>> wouldn't know about the counter state on another machine. So, I guess
>> >> this
>> >>> will happen.
>> >>>
>> >>> -Sameer.
>> >>>
>> >>> On Thu, Jun 15, 2017 at 11:11 AM, Sameer Kumar <
>> sam.kum.w...@gmail.com>
>> >>> wrote:
>> >>>
>>  The input topic contains 60 partitions and data is distributed well
>> >> across
>>  different partitions on different keys. While consumption, I am doing
>> >> some
>>  filtering and writing only single key data.
>> 
>>  Output would be something of the form:- Machine 1
>> 
>>  2017-06-13 16:49:10 INFO  LICountClickImprMR2:116 - licount
>>  k=P:LIS:1236667:2017_06_13:I,v=651
>>  2017-06-13 16:49:30 INFO  LICountClickImprMR2:116 - licount
>>  k=P:LIS:1236667:2017_06_13:I,v=652
>> 
>>  Machine 2
>>  2017-06-13 16:49:10 INFO  LICountClickImprMR2:116 - licount
>>  k=P:LIS:1236667:2017_06_13:I,v=1
>>  2017-06-13 16:49:30 INFO  LICountClickImprMR2:116 - licount
>>  k=P:LIS:1236667:2017_06_13:I,v=2
>> 
>>  I am sharing a snippet of code,
>> 
>>  private KTable extractLICount(KStream<
>> >> Windowed,
>>  AdLog> joinedImprLogs) {
>> KTable liCount = joinedImprLogs.flatMap((key,
>> value)
>>  -> {
>>   List> l = new ArrayList<>();
>>   if (value == null) {
>> return l;
>>   }
>>   String date = new SimpleDateFormat("_MM_dd").format(new
>>  Date(key.window().end()));
>>   // Lineitemids
>>   if (value != null && value.getAdLogType() == 3) {
>> // log.info("Invalid data: " + value);
>> return l;
>>   }
>>   if (value.getAdLogType() == 2) 

Re: Single Key Aggregation

2017-06-17 Thread Sameer Kumar
The example I gave was just for illustration. I have impression logs and
notification logs. Notification logs are essentially tied to impressions
served. An impression would serve multiple items.

I was just trying to aggregate across a single line item, which means I am
always generating a single key, and since the data is streaming, my counter
should keep on increasing.

What I saw was that while on Machine1 the counter was 100, on another
machine it was at 1. I saw that as inconsistent.


-Sameer.

On Fri, Jun 16, 2017 at 10:47 PM, Matthias J. Sax 
wrote:

> I just double checked your example code from an earlier email. There you
> are using:
>
> stream.flatMap(...)
>   .groupBy((k, v) -> k, Serdes.String(), Serdes.Integer())
>   .reduce((value1, value2) -> value1 + value2,
>
> In your last email, you say that you want to count on a category that is
> contained in your value, while your key is something different.
>
> If you want to count by category, you need to set the category as the key
> in your .groupBy() -- in your code snippet, you actually don't change the
> key (that seems to be the root cause of the issue you see).
>
> Does this help?
>
>
> -Matthias
>
>
> On 6/15/17 11:48 PM, Sameer Kumar wrote:
> > OK, let me try to explain it again.
> >
> > Let's say my source processor has a different key, and the value it
> > contains carries an identifier type which basically denotes the
> > category, and I am counting on different categories. A specific case
> > would be that I do a filter and output only a specific category, say
> > cat1. Now, starting from the source processor, which is being processed
> > across multiple nodes, cat1 would be written across multiple nodes, and
> > since the state store is local, it will be starting from 1 on each of
> > the nodes.
> >
> > Does that clarify the doubt? I think after a processor the same gets
> > written to an intermediate Kafka topic and this should be taken care
> > of, but this still happens, so I am not sure about it.
> >
> > Processor 1
> > key1, value:{xyzcat1}
> > key2, value:{xyzcat2}
> > key3, value:{xyzcat3}
> >
> > Processor 2
> > Machine1
> > key: cat1, 1
> >
> > Machine2
> > key:cat1,1
> >
> >
> > -Sameer.
> >
> > On Thu, Jun 15, 2017 at 10:27 PM, Eno Thereska 
> > wrote:
> >
> >> I'm not sure if I fully understand this but let me check:
> >>
> >> - if you start 2 instances, one instance will process half of the
> >> partitions, the other instance will process the other half
> >> - for any given key, like key 100, it will only be processed on one of
> the
> >> instances, not both.
> >>
> >> Does this help?
> >>
> >> Eno
> >>
> >>
> >>
> >>> On 15 Jun 2017, at 07:40, Sameer Kumar  wrote:
> >>>
> >>> Also, I am writing a single key in the output all the time. I believe
> >>> machine2 will have to write a key and since a state store is local it
> >>> wouldn't know about the counter state on another machine. So, I guess
> >> this
> >>> will happen.
> >>>
> >>> -Sameer.
> >>>
> >>> On Thu, Jun 15, 2017 at 11:11 AM, Sameer Kumar  >
> >>> wrote:
> >>>
>  The input topic contains 60 partitions and data is distributed well
> >> across
>  different partitions on different keys. While consumption, I am doing
> >> some
>  filtering and writing only single key data.
> 
>  Output would be something of the form:- Machine 1
> 
>  2017-06-13 16:49:10 INFO  LICountClickImprMR2:116 - licount
>  k=P:LIS:1236667:2017_06_13:I,v=651
>  2017-06-13 16:49:30 INFO  LICountClickImprMR2:116 - licount
>  k=P:LIS:1236667:2017_06_13:I,v=652
> 
>  Machine 2
>  2017-06-13 16:49:10 INFO  LICountClickImprMR2:116 - licount
>  k=P:LIS:1236667:2017_06_13:I,v=1
>  2017-06-13 16:49:30 INFO  LICountClickImprMR2:116 - licount
>  k=P:LIS:1236667:2017_06_13:I,v=2
> 
>  I am sharing a snippet of code,
> 
>  private KTable extractLICount(KStream<
> >> Windowed,
>  AdLog> joinedImprLogs) {
> KTable liCount = joinedImprLogs.flatMap((key,
> value)
>  -> {
>   List> l = new ArrayList<>();
>   if (value == null) {
> return l;
>   }
>   String date = new SimpleDateFormat("_MM_dd").format(new
>  Date(key.window().end()));
>   // Lineitemids
>   if (value != null && value.getAdLogType() == 3) {
> // log.info("Invalid data: " + value);
> return l;
>   }
>   if (value.getAdLogType() == 2) {
> long lineitemid = value.getAdClickLog().getItmClmbLId();
> if (lineitemid == TARGETED_LI) {
>   String liKey = String.format("P:LIS:%s:%s:C", lineitemid,
> >> date);
>   l.add(new KeyValue<>(liKey, 1));
> }
> return l;
>   } else if (value.getAdLogType() == 1){
> 
> long[] lineitemids = value.getAdImprLog().getItmClmbLIds();
> if (value.getAdImprLog().isVisible()) {
>   for (int i = 0; i < lineite

Multiple consumers for the same topic/group in different threads of the same JVM

2017-06-17 Thread Cédric Chantepie
Hi,

Doing some benchmarks with multiple consumers for the same topic/group in
different threads of the same JVM, it seems that the throughput achieved with
a single consumer in the group is divided once there are two consumers in the
same group.

Since there is no memory or CPU issue, I am wondering whether there could be
some other kind of contention in such a case.
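
For context, the usual layout for this kind of benchmark is a dedicated
KafkaConsumer instance per thread (the consumer itself is not thread-safe); a
minimal sketch of such a setup, with made-up topic/group names:

import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer

// one consumer instance per thread, both in the same group, so the topic's
// partitions are split between the two instances
def startConsumer(id: Int): Thread = {
  val t = new Thread(new Runnable {
    override def run(): Unit = {
      val props = new Properties()
      props.put("bootstrap.servers", "localhost:9092")
      props.put("group.id", "bench-group")
      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      val consumer = new KafkaConsumer[String, String](props)
      consumer.subscribe(Collections.singletonList("bench-topic"))
      while (true) {
        val records = consumer.poll(100)
        // count records/bytes here for the throughput measurement
      }
    }
  }, "consumer-" + id)
  t.start()
  t
}

Seq(1, 2).foreach(startConsumer)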

Thanks in advance

Re: Kafka-Spark Integration - build failing with sbt

2017-06-17 Thread Jozef.koval
Hi Karan,

spark-streaming-kafka is the artifact for old Spark (version < 1.6.3)
spark-streaming-kafka-0-8 is the artifact for current Spark (version >= 2.0)

Jozef

n.b. there is also a version for Kafka 0.10+; see 
[this](https://spark.apache.org/docs/latest/streaming-kafka-integration.html)
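
Concretely, for Spark 2.x the connector artifact name carries the Kafka API
version, so the dependency line would look something like this (a sketch,
reusing the sparkVers = "2.1.0" value from the thread):

// Spark 2.x: pick the connector matching the Kafka consumer API you use
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-8" % sparkVers
// or, for the Kafka 0.10+ consumer API:
// libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-10" % sparkVers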

Sent from [ProtonMail](https://protonmail.ch), encrypted email based in 
Switzerland.

 Original Message 
Subject: Kafka-Spark Integration - build failing with sbt
Local Time: June 17, 2017 1:50 AM
UTC Time: June 16, 2017 11:50 PM
From: karan.al...@gmail.com
To: users@kafka.apache.org

I'm trying to compile Kafka & Spark Streaming integration code, i.e. reading
from Kafka using Spark Streaming,
and the sbt build is failing with this error -

[error] (*:update) sbt.ResolveException: unresolved dependency:
org.apache.spark#spark-streaming-kafka_2.11;2.1.0: not found

Scala version -> 2.10.7
Spark Version -> 2.1.0
Kafka version -> 0.9
sbt version -> 0.13

Contents of the sbt files are shown below ->

1)
vi spark_kafka_code/project/plugins.sbt

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")

2)
vi spark_kafka_code/sparkkafka.sbt

import AssemblyKeys._
assemblySettings

name := "SparkKafka Project"

version := "1.0"
scalaVersion := "2.11.7"

val sparkVers = "2.1.0"

// Base Spark-provided dependencies
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % sparkVers % "provided",
"org.apache.spark" %% "spark-streaming" % sparkVers % "provided",
"org.apache.spark" %% "spark-streaming-kafka" % sparkVers)

mergeStrategy in assembly := {
case m if m.toLowerCase.endsWith("manifest.mf") => MergeStrategy.discard
case m if m.toLowerCase.startsWith("META-INF") => MergeStrategy.discard
case "reference.conf" => MergeStrategy.concat
case m if m.endsWith("UnusedStubClass.class") => MergeStrategy.discard
case _ => MergeStrategy.first
}

i launch sbt, and then try to create an eclipse project, complete error
is as shown below -

-

sbt
[info] Loading global plugins from /Users/karanalang/.sbt/0.13/plugins
[info] Loading project definition from
/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/project
[info] Set current project to SparkKafka Project (in build
file:/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/)
> eclipse
[info] About to create Eclipse project files for your project(s).
[info] Updating
{file:/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/}spark_kafka_code...
[info] Resolving org.apache.spark#spark-streaming-kafka_2.11;2.1.0 ...
[warn] module not found:
org.apache.spark#spark-streaming-kafka_2.11;2.1.0
[warn]  local: tried
[warn]
/Users/karanalang/.ivy2/local/org.apache.spark/spark-streaming-kafka_2.11/2.1.0/ivys/ivy.xml
[warn]  activator-launcher-local: tried
[warn]
/Users/karanalang/.activator/repository/org.apache.spark/spark-streaming-kafka_2.11/2.1.0/ivys/ivy.xml
[warn]  activator-local: tried
[warn]
/Users/karanalang/Documents/Technology/SCALA/activator-dist-1.3.10/repository/org.apache.spark/spark-streaming-kafka_2.11/2.1.0/ivys/ivy.xml
[warn]  public: tried
[warn]
https://repo1.maven.org/maven2/org/apache/spark/spark-streaming-kafka_2.11/2.1.0/spark-streaming-kafka_2.11-2.1.0.pom
[warn]  typesafe-releases: tried
[warn]
http://repo.typesafe.com/typesafe/releases/org/apache/spark/spark-streaming-kafka_2.11/2.1.0/spark-streaming-kafka_2.11-2.1.0.pom
[warn]  typesafe-ivy-releasez: tried
[warn]
http://repo.typesafe.com/typesafe/ivy-releases/org.apache.spark/spark-streaming-kafka_2.11/2.1.0/ivys/ivy.xml
[info] Resolving jline#jline;2.12.1 ...
[warn] ::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::
[warn] :: org.apache.spark#spark-streaming-kafka_2.11;2.1.0: not found
[warn] ::
[warn]
[warn] Note: Unresolved dependencies path:
[warn] org.apache.spark:spark-streaming-kafka_2.11:2.1.0
(/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/sparkkafka.sbt#L12-16)
[warn] +- sparkkafka-project:sparkkafka-project_2.11:1.0
[trace] Stack trace suppressed: run last *:update for the full output.
[error] (*:update) sbt.ResolveException: unresolved dependency:
org.apache.spark#spark-streaming-kafka_2.11;2.1.0: not found
[info] Updating
{file:/Users/karanalang/Documents/Technology/Coursera_spark_scala/spark_kafka_code/}spark_kafka_code...
[info] Resolving org.apache.spark#spark-streaming-kafka_2.11;2.1.0 ...
[warn] module not found:
org.apache.spark#spark-streaming-kafka_2.11;2.1.0
[warn]  local: tried
[warn]
/Users/karanalang/.ivy2/local/org.apache.spark/spark-streaming-kafka_2.11/2.1.0/ivys/ivy.xml
[warn]  activator-launcher-local: tried
[warn]
/Users/karanalang/.activator/repository/org.apache.spark/spark-streaming-kafka_2.11/2.1.0/ivys/ivy.xml
[warn]  activator-local: tried
[warn]
/Users/karanalang/Documents/Technology/SCALA/activator-dis