Hey Jozef,
Thanks for the quick response. Yes, you are right: the spark-sql dependency was missing. Added that and it worked fine.
Regards,
Karan Alang
On Sat, Jun 17, 2017 at 2:24 PM, Jozef.koval
wrote:
> Hey Karan,
> I believe you are missing the spark-sql dependency.
>
> Jozef
>
Hey Karan,
I believe you are missing the spark-sql dependency.
Jozef
Original Message
Subject: Re: Kafka-Spark Integration - build failing with sbt
Local Time: June 17, 2017 10:52 PM
Hi!
I am maintaining an application built on Kafka which uses the kafka-streams library.
As the subject says, after trying to upgrade from 0.10.1.1 to 0.10.2.1, I am
getting the following compilation error:
[error] found : service.streams.transformers.FilterMainCoverSupplier
[erro
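For anyone hitting the same wall, here is a sketch of the shape the 0.10.2.x transform()/TransformerSupplier generics expect; mismatches there surface as "found: ... required: ..." errors like the one above. FilterMainCoverSupplier's real body is unknown in this thread, and Event with its isMainCover field is an invented stand-in.
--
import org.apache.kafka.streams.KeyValue
import org.apache.kafka.streams.kstream.{Transformer, TransformerSupplier}
import org.apache.kafka.streams.processor.ProcessorContext

final case class Event(isMainCover: Boolean) // invented stand-in for the real value type

class FilterMainCoverSupplier extends TransformerSupplier[String, Event, KeyValue[String, Event]] {
  override def get(): Transformer[String, Event, KeyValue[String, Event]] =
    new Transformer[String, Event, KeyValue[String, Event]] {
      override def init(context: ProcessorContext): Unit = ()
      override def transform(key: String, value: Event): KeyValue[String, Event] =
        if (value.isMainCover) new KeyValue(key, value) else null // null drops the record
      override def punctuate(timestamp: Long): KeyValue[String, Event] = null // pre-1.0 API hook
      override def close(): Unit = ()
    }
}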
Thanks, I was able to get this working.
Here is what I added in the build.sbt file:
--
scalaVersion := "2.11.7"
val sparkVers = "2.1.0"
// Base Spark-provided dependencies
libraryDependencies ++= Seq(
"org
Got it, thanks Hans!
On Sat, Jun 17, 2017 at 11:11 AM, Hans Jespersen wrote:
>
> Offset commit is something that is done in the act of consuming (or
> reading) Kafka messages.
> Yes technically it is a write to the Kafka consumer offset topic but it's
> much easier for administrators to think of
Offset commit is something that is done in the act of consuming (or reading)
Kafka messages.
Yes, technically it is a write to the Kafka consumer offset topic, but it's much
easier for administrators to think of ACLs in terms of whether the user is allowed
to write (Produce) or read (Consume) messages.
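As a concrete illustration (principal, topic, and group names here are made up), the kafka-acls tool's --consumer convenience option encodes exactly this view: it grants Read and Describe on the topic plus Read on the group, the latter covering offset commits:
--
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice \
  --consumer --topic page-views --group my-group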
Hi Vahid,
+1 for OffsetFetch from me too.
I also wanted to ask about the strangeness of the permissions, like why
OffsetCommit is a Read operation instead of Write, which would intuitively make
more sense to me. Perhaps an expert could shed some light on this? :)
Viktor
On Tue, Jun 13, 2017 at 2:38 P
Continued from my last mail...
The code snippet that I shared was after joining impression and
notification logs. Here I am picking the line item and concatenating it
with date. You can also see there is a check for a TARGETED_LINE_ITEM; I am
not emitting the data otherwise.
-Sameer.
On Sat, Jun
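A minimal pure-Scala sketch of the emit logic described above; JoinedEvent, its fields, and the line-item id are invented for illustration, not Sameer's actual code:
--
object RekeyByLineItemAndDate {
  final case class JoinedEvent(lineItem: String, date: String)

  val TARGETED_LINE_ITEM = "li-42" // hypothetical id; the real value is not in the thread

  // Key the joined record by lineItem + date; emit nothing for other line items.
  def keyFor(e: JoinedEvent): Option[(String, JoinedEvent)] =
    if (e.lineItem == TARGETED_LINE_ITEM) Some(s"${e.lineItem}_${e.date}" -> e)
    else None
}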
The example I gave was just for illustration. I have impression logs and
notification logs. Notification logs are essentially tied to impressions
served. An impression would serve multiple items.
I was just trying to aggregate across a single line item; this means I am
always generating a single key.
Hi,
While benchmarking multiple consumers for the same topic/group in different
threads of the same JVM, it seems that the throughput of a single consumer per
group gets divided when there are two in the same group.
Not having memory or CPU issues, I am wondering whether there could b
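For reference, a minimal sketch of the setup being benchmarked, one KafkaConsumer per thread in the same group (broker, topic, and group names are placeholders). Since a group's partitions are split across its members, each of two consumers polls fewer partitions than a lone consumer would:
--
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer

object GroupBench {
  def consumerThread(): Thread = new Thread(new Runnable {
    override def run(): Unit = {
      val props = new Properties()
      props.put("bootstrap.servers", "localhost:9092")
      props.put("group.id", "bench-group")
      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      val consumer = new KafkaConsumer[String, String](props)
      consumer.subscribe(Collections.singletonList("bench-topic"))
      while (true) {
        val records = consumer.poll(100) // partitions are divided among group members
        println(s"${Thread.currentThread.getName}: ${records.count()} records")
      }
    }
  })

  def main(args: Array[String]): Unit =
    (1 to 2).foreach(_ => consumerThread().start()) // two consumers, same group, same JVM
}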
Hi Karan,
spark-streaming-kafka is for old Spark (version < 1.6.3)
spark-streaming-kafka-0-8 is for current Spark (version >= 2.0)
Jozef
N.B. there is also a version for Kafka 0.10+ (spark-streaming-kafka-0-10); see
[this](https://spark.apache.org/docs/latest/streaming-kafka-integration.html)
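In build.sbt terms (assuming Spark 2.1.0 and Scala 2.11, as elsewhere in the thread):
--
// for Kafka 0.8.2.1+ brokers:
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.1.0"
// for Kafka 0.10+ brokers (new consumer API):
// libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.1.0"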