Try to use 2.10.3
I have run into the same problem.
On Feb 19, 2014 4:21 AM, Tao Xiao xiaotao.cs@gmail.com wrote:
Hi,
I'm learning Spark 0.9 from its tutorial. To write my first application
in Scala, I followed the instructions of "A Standalone App in
synchronization protocol between replicas. If possible, could you
please give me one concrete workload that really needs synchronization
between replicas?
I have browsed through the Flux paper and didn't find such a concrete
workload.
Thanks
Dachuan.
On Feb 19, 2014 1:24 AM, Tathagata Das
I don't have a conclusive answer but I would like to discuss this.
If one node's CPU is slower than the other's, windowing in absolute time won't
cause any trouble, because the data are well partitioned.
On Feb 19, 2014 1:06 AM, Aries Kong aries.ko...@gmail.com wrote:
hi all,
It seems that the
On Wed, Feb 19, 2014 at 5:05 AM, dachuan hdc1...@gmail.com wrote:
I don't know the final answer, but I'd like to discuss your problem.
How many nodes are you using, and how many NetworkReceivers have you
started?
Which specific class is not serializable?
thanks,
dachuan.
On Wed, Feb 19, 2014 at 9:42 AM, Sourav Chandra
sourav.chan...@livestream.com
snippet:
val ssc = new StreamingContext(...)
(1 to 4).foreach { i =>
  val stream = KafkaUtils.createStream(...)
    .flatMap(...)
    .reduceByKeyAndWindow(...)
    .filter(...)
    .foreach(saveToCassandra())
}
ssc.start()
I am using 2 nodes
Thanks,
Sourav
On Wed, Feb 19, 2014 at 8:21 PM, dachuan hdc1
otherwise it
runs out of space)
Q2:
The paper talks about incremental reduce. I'd like to know what it is. I
use reduce to get an aggregate of counts. What is this
incremental reduce?
Thanks
-A
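To answer Q2 as I understand it: the "incremental reduce" in the paper corresponds to the reduceByKeyAndWindow overload in Spark Streaming that also takes an inverse reduce function. Instead of re-reducing every batch in the window on each slide, Spark adds the batch that just entered the window and subtracts the batch that just left it. A minimal sketch (the pair stream and the durations are placeholders):

```scala
// Plain windowed count: re-reduces all batches in the window on each slide.
val counts = pairs.reduceByKeyAndWindow(
  (a: Int, b: Int) => a + b, Seconds(30), Seconds(10))

// Incremental version: the second function "undoes" the reduce for
// data leaving the window, so only the delta is computed each slide.
val incCounts = pairs.reduceByKeyAndWindow(
  (a: Int, b: Int) => a + b,  // add counts entering the window
  (a: Int, b: Int) => a - b,  // subtract counts leaving the window
  Seconds(30), Seconds(10))
```

Note the incremental form requires checkpointing to be enabled, since Spark must keep old batches around until they slide out of the window.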
--
Dachuan Huang
Cellphone: 614-390-7234
2015 Neil Avenue
Ohio State University
I wish I could also understand the cryptic sbt configuration file, because
only that way can I compile more programs correctly, for example
programs in GraphX, Spark Streaming, and any new dependencies.
On Wed, Feb 19, 2014 at 9:45 PM, Tao Xiao xiaotao.cs@gmail.com wrote:
dachuan
;2.10: not found error.
Your GraphX question is one stage later than mine. Do you know how
sbt finds scala-library?
thanks,
dachuan.
On Mon, Feb 17, 2014 at 1:10 PM, xben x...@free.fr wrote:
Sorry, I meant the following line
libraryDependencies += "org.apache.spark" %% "spark-graphx"
the stream is checkpointed to
would grow forever. Is it every batch_duration?
ie:
For val streamingContext = new StreamingContext(conf, Seconds(10)) the
checkpointed data would be rewritten every 10 seconds.
Thanks
-Adrian
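As a sketch of how that interval can be controlled (the checkpoint path is a placeholder): by default Spark picks a checkpoint interval for a stateful stream on its own (a multiple of the batch duration), and DStream.checkpoint lets you set it explicitly:

```scala
val ssc = new StreamingContext(conf, Seconds(10))
ssc.checkpoint("hdfs://namenode:8020/checkpoints")  // placeholder path

val stream = ...  // some stateful DStream
// Checkpoint this stream's data every 30s instead of at the default rate:
stream.checkpoint(Seconds(30))
```

I believe the interval must be a multiple of the batch duration; please correct me if the default differs.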
Thanks for your reply.
I changed scalaVersion := "2.10" to scalaVersion := "2.10.3" and then
everything worked.
So this is a documentation bug :)
dachuan.
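For anyone hitting the same error, a working build.sbt from the quick start looks roughly like this (note that values must be quoted strings in sbt; the Spark version shown is the 0.9.0-incubating release and may differ for you):

```scala
name := "Simple Project"

version := "1.0"

// Must be a full patch version such as 2.10.3; plain "2.10" is not
// a resolvable Scala release, which is what caused the error above.
scalaVersion := "2.10.3"

// %% appends the Scala binary version, resolving spark-core_2.10
libraryDependencies += "org.apache.spark" %% "spark-core" % "0.9.0-incubating"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
```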
On Tue, Feb 18, 2014 at 6:50 PM, Denny Lee denny.g@gmail.com wrote:
What version of Scala are you using? For example, if you're using
, Andrew Ash wrote:
Dachuan,
Where did you find that faulty documentation? I'd like to get it fixed.
Thanks!
Andrew
On Tue, Feb 18, 2014 at 4:15 PM, dachuan hdc1...@gmail.com wrote:
/stderr
closed: Bad file descriptor
13/11/19 18:03:13 ERROR Worker: key not found: app-20131119180313-0001/0
Why is the worker killed as soon as it is started? I should mention I
don't have this problem when using pyspark.
thanks!
Umar
?
thanks,
dachuan.
What's the difference between slaves, workers, and
executors?
My understanding is that slaves and workers are interchangeable?
Thanks.
am wrong.
thanks,
dachuan.
this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/PageView-streaming-sample-lost-page-views-tp1126p1128.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/PageView-streaming-sample-lost-page-views-tp1126p1143.html
-user-list.1001560.n3.nabble.com/PageView-streaming-sample-lost-page-views-tp1126p1149.html
Fault-tolerant question:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Fault-tolerant-question-tp1008.html
code?
3. Can anybody share a real-world streaming example, for example including
source code and cluster configuration details?
thanks,
dachuan.
2-exp-12-15-2013-fifo-2--3.pdf
.
thanks,
dachuan.
HBase in
the Spark Shell (0.8)? I'd also like a Java example to look at once I've
confirmed basic connectivity from the shell. Thx!
-sudarshan
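A sketch of reading an HBase table from the shell, modeled on the HBaseTest example shipped with Spark (the table name is a placeholder, and the HBase jars must already be on the shell's classpath, e.g. via SPARK_CLASSPATH or ADD_JARS in 0.8):

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

val hconf = HBaseConfiguration.create()
hconf.set(TableInputFormat.INPUT_TABLE, "my_table")  // placeholder table

// Expose the table as an RDD of (row key, row result) pairs
val rdd = sc.newAPIHadoopRDD(hconf, classOf[TableInputFormat],
  classOf[ImmutableBytesWritable], classOf[Result])
rdd.count()
```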
they are in streaming.dstream.WindowedDStream.
thanks.
:
org.apache.spark.streaming.examples.clickstream.PageView... can you
check the PageView class in the examples and make sure it has the
@serializable directive? I seem to remember having to add it.
good luck,
Thunder
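For reference, a sketch of what the advice above looks like (the fields here are illustrative, not the actual example's):

```scala
// Scala's @serializable annotation marks a class for Java serialization,
// which Spark needs when shipping objects between nodes:
@serializable
class PageView(val url: String, val status: Int)

// The equivalent, now-preferred form is to extend Serializable:
class PageView2(val url: String, val status: Int) extends Serializable
```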
On Tue, Oct 29, 2013 at 6:54 AM, dachuan hdc1...@gmail.com wrote:
Hi
, and follow the workflow, the first
class is SparkContext. And finally two Actors are alive (DriverActor and
Client).
I am happy to share my notes (which are in OneNote format) if you need them.
thanks,
dachuan.
hi, all,
sorry to ask this simple question, but any idea how to join the dev
mailing list? I sent an empty email to d...@spark.incubator.apache.org,
but I got rejected.
thanks.
On Fri, Oct 25, 2013 at 4:06 PM, dachuan hdc1...@gmail.com wrote:
Hi, I just started reading spark code
material about the memory management?
thanks,
dachuan.
On Thu, Oct 17, 2013 at 2:58 PM, Ameet Kini ameetk...@gmail.com wrote:
I'm using the scala 2.10 branch of Spark in standalone mode, and am
finding that the executor gets started with the default 512M even after
setting spark.executor.memory
)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
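One thing worth checking (a guess, since I don't know your setup): on the 2.10 branch, executor memory has to be set on the SparkConf before the SparkContext is created; setting the property afterwards has no effect. A sketch with placeholder master URL and app name:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("spark://host:7077")      // placeholder
  .setAppName("MemTest")               // placeholder
  .set("spark.executor.memory", "2g")  // must precede SparkContext creation
val sc = new SparkContext(conf)
```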
: Permission denied
[error] (examples/*:assembly) java.io.IOException: Permission denied
[error] Total time: 446 s, completed Oct 17, 2013 3:46:22 PM
What's going on here?
thanks!
months.
On Thu, Oct 17, 2013 at 3:11 PM, dachuan hdc1...@gmail.com wrote:
I'm sorry if this doesn't answer your question directly, but I tried
Spark 0.9.0 with HDFS 1.0.4 just now, and it works.
On Thu, Oct 17, 2013 at 6:05 PM, Koert Kuipers ko...@tresata.com wrote:
after upgrading from