Hello, I have a program which requires the 0.10.1.0 Streams API. The jar is
packaged by Maven with all dependencies. I tried to consume a Kafka
topic from a Kafka 0.9 cluster.
It gives this error:
org.apache.kafka.common.protocol.types.SchemaException: Error reading
field 'topic_metadata': Error
How can I prevent it?
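For anyone hitting this later: the SchemaException on 'topic_metadata' typically means a newer Kafka client is talking to an older broker, since pre-0.10 brokers do not understand the newer metadata request format (and the Streams API itself requires 0.10+ brokers, so using it against a 0.9 cluster needs a broker upgrade). If the cluster must stay on 0.9 and Streams can be dropped, a hedged sketch of the Maven fix is to pin the client to the broker's line (verify the exact version against your cluster):

```xml
<!-- Sketch: pin kafka-clients to the broker's (0.9) line instead of 0.10.x.
     The alternative is to upgrade the brokers to 0.10+. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>0.9.0.1</version>
</dependency>
```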
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
Hi, I have an Ubuntu box with 4GB memory and dual cores. Do you think that
will be enough to run Spark Streaming and Kafka? I am trying to install
Spark and Kafka in standalone mode so I can debug them in an IDE. Do I need to
install Hadoop?
Thanks!
J
Hi Gurus,
Please help.
But please don't tell me to use updateStateByKey, because I need a
global variable (something like the clock time) across the micro
batches that does not depend on key. In my case it is not acceptable to
maintain a state for each key, since keys arrive at different times.
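For anyone after the same thing, a hedged sketch: assuming the per-batch update happens on the driver (e.g. inside foreachRDD, whose closure runs once per micro-batch on the driver), a plain driver-side singleton can serve as the global, instead of keyed state. All names below are hypothetical:

```scala
// Driver-side global shared across micro-batches (single driver JVM assumed).
// In Spark Streaming this would be advanced from foreachRDD, e.g.:
//   stream.foreachRDD { (rdd, batchTime) => GlobalClock.advanceTo(batchTime.milliseconds) }
object GlobalClock {
  private var current: Long = 0L
  // Monotonic update: never move the clock backwards.
  def advanceTo(t: Long): Unit = synchronized { if (t > current) current = t }
  def now: Long = synchronized { current }
}

// Simulating three batch callbacks (out of order on purpose):
Seq(1000L, 3000L, 2000L).foreach(GlobalClock.advanceTo)
println(GlobalClock.now) // prints 3000
```

Note this value lives only in the driver; executor-side code sees it only if its value is shipped into closures explicitly (for example via a broadcast refreshed per batch).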
Hi Guys,
I have struggled for a while with this seemingly simple thing:
I have a sequence of timestamps and want to create a DataFrame with one column.
Seq[java.sql.Timestamp]
//import collection.breakOut
var seqTimestamp = scala.collection.Seq(listTs: _*)
seqTimestamp: Seq[java.sql.Timestamp] =
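A hedged sketch of one way to get there, assuming Spark 1.3+ with a SQLContext/SparkSession in scope. The toDF call is shown as a comment since it needs a running Spark context; the Tuple1 wrapper is what allows a one-column DataFrame from a non-Product type like Timestamp:

```scala
import java.sql.Timestamp

// A plain Scala Seq is all that's needed; no breakOut or Seq(xs: _*) copy required.
val listTs = List(new Timestamp(1440000000000L), new Timestamp(1440000060000L))
val seqTimestamp: Seq[Timestamp] = listTs

// Wrap each value in Tuple1 so the implicits can build a one-column DataFrame:
val rows = seqTimestamp.map(Tuple1.apply)

// With Spark in scope (hypothetical names):
//   import sqlContext.implicits._
//   val df = rows.toDF("ts")   // one column named "ts", type TimestampType
println(rows.length)
```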
Hello Sparkers,
I kept getting this error:
java.lang.ClassCastException: scala.Tuple2 cannot be cast to
org.apache.spark.mllib.regression.LabeledPoint
I have tried the following to convert v._1 to a double:
Method 1:
(if (v._1) 1d else 0d)
Method 2:
def bool2Double(b: Boolean): Double =
  if (b) 1d else 0d
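For later readers, a hedged sketch of where that ClassCastException usually comes from: the RDD still contains Tuple2 values, and a cast cannot turn a Tuple2 into a LabeledPoint; the tuples have to be mapped explicitly. The tuple shape below, (Boolean, Array[Double]), is an assumption, and the MLlib call is left as a comment since it needs MLlib on the classpath:

```scala
// Assumed input shape: (label: Boolean, features: Array[Double]).
val pairs = Seq((true, Array(1.0, 2.0)), (false, Array(3.0, 4.0)))

def bool2Double(b: Boolean): Double = if (b) 1d else 0d

// Map, don't cast: each Tuple2 becomes an explicit (label, features) pair.
// With MLlib available this would instead build LabeledPoints:
//   pairs.map { case (b, xs) => LabeledPoint(bool2Double(b), Vectors.dense(xs)) }
val labeled = pairs.map { case (b, xs) => (bool2Double(b), xs) }

println(labeled.map(_._1))
```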
Hi, I am following the instructions on this website:
http://www.infoobjects.com/spark-with-avro/
I installed the spark-avro library from https://github.com/databricks/spark-avro
on a machine which only has the Hive Gateway client role on a Hadoop cluster.
Somehow I got an error reading the Avro file.
Never mind. I found my Spark is still 1.2, but the Avro library requires 1.3.
I will try again.
Hi gurus,
I am trying to set up a real Linux machine (not a VM) where I will install Spark
and also Hadoop. I plan on learning about clusters.
I found Ubuntu has desktop and server versions. Does it matter which one?
Thanks!!
J
Hello, I am new. I did not seem to find the answer after a brief search.
Please help.
Thanks!
J
Hi Gurus,
I have not looked at the code yet. I wonder if StreamingLinearRegressionWithSGD
http://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/regression/StreamingLinearRegressionWithSGD.html
is equivalent to
LinearRegressionWithSGD
Hi Gurus,
Sorry for my naive question. I am new.
I seem to remember reading somewhere that Spark is still batch learning, but Spark
Streaming could allow online learning.
I cannot find this on the website now:
http://spark.apache.org/docs/latest/streaming-programming-guide.html
I know MLLib uses
Thank you Tobias!
On Mon, Nov 24, 2014 at 5:13 PM, Tobias Pfeiffer t...@preferred.jp wrote:
Hello, I am trying to read a Kafka stream into a text file by running Spark from
my IDE (IntelliJ IDEA). The code is similar to a previous thread on
persisting a stream to a text file.
I am new to Spark and Scala. I believe Spark is in local mode, as the
console shows:
14/11/21 14:17:11 INFO
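A hedged sketch of the usual approach, assuming Spark 1.x with spark-streaming-kafka on the classpath. The Spark calls are shown as comments since they need a live StreamingContext, and all names (ZooKeeper quorum, group, topic, paths) are hypothetical:

```scala
// With a StreamingContext `ssc` (hypothetical setup):
//   val stream = KafkaUtils.createStream(ssc, "localhost:2181", "my-group", Map("mytopic" -> 1))
//   stream.map(_._2).saveAsTextFiles("file:///tmp/kafka-out/batch", "txt")
//
// saveAsTextFiles emits one directory per micro-batch, named <prefix>-<batchTimeMs>.<suffix>:
def batchDir(prefix: String, batchTimeMs: Long, suffix: String): String =
  s"$prefix-$batchTimeMs.$suffix"

println(batchDir("file:///tmp/kafka-out/batch", 1416608231000L, "txt"))
```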
Use the right email list.
-- Forwarded message --
From: Joanne Contact joannenetw...@gmail.com
Date: Fri, Nov 21, 2014 at 2:32 PM
Subject: Persist kafka streams to text file
To: u...@spark.incubator.apache.org
Hello I am trying to read kafka stream to a text file by running spark