When I try to execute my task with Spark, it starts to copy the jars it
needs to HDFS and finally fails; I don't know exactly why. I have
checked HDFS and it does copy the files, so that part seems to work.
I changed the log level to debug, but there's nothing else to help.
What else does Spark
I was adding some bad jars, I guess. I deleted all the jars and copied
them again, and now it works.
2015-01-08 14:15 GMT+01:00 Guillermo Ortiz konstt2...@gmail.com:
When I try to execute my task with Spark it starts to copy the jars it
needs to HDFS and it finally fails, I don't know exactly why. I
I'm trying to make some operation with windows and intervals.
I get data every 15 seconds, and I want a window of 60 seconds
with a batch interval of 15 seconds.
I'm injecting data with ncat. If I inject 3 logs in the same interval
I get into the "do something" each 15 seconds during one
the println(4...)?? Shouldn't it execute all
the code every 15 seconds, since that's what's defined on the context
(val ssc = new StreamingContext(sparkConf, Seconds(15));)?
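For what it's worth, the setup described above can be sketched roughly like this (a minimal sketch only; the host, port, and app name are placeholders, not taken from the thread):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WindowSketch {
  def main(args: Array[String]): Unit = {
    // Batch interval of 15 seconds, as in the snippet above.
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("WindowTest")
    val ssc = new StreamingContext(sparkConf, Seconds(15))

    // Placeholder source; data arrives roughly every 15 seconds.
    val lines = ssc.socketTextStream("localhost", 12345)

    // A 60-second window sliding every 15 seconds: each batch sees the
    // last four batches' worth of data. Both window length and slide
    // duration must be multiples of the batch interval.
    val windowed = lines.window(Seconds(60), Seconds(15))
    windowed.foreachRDD(rdd => println(s"elements in current window: ${rdd.count()}"))

    ssc.start()
    ssc.awaitTermination()
  }
}
```

With these durations each element stays visible for 60 / 15 = 4 consecutive batches, which is why a single injected log appears in several "do something" passes.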
2014-12-26 10:56 GMT+01:00 Guillermo Ortiz konstt2...@gmail.com:
I'm trying to make some operation with windows and intervals.
I
Oh, I didn't understand what I was doing, my fault (too many parties
this Xmas). I thought windows worked in another, weird way. Sorry for the
questions.
2014-12-26 13:42 GMT+01:00 Guillermo Ortiz konstt2...@gmail.com:
I'm trying to understand why it's not working, so I typed some printlns.
I'm a newbie with Spark... a simple question:
val errorLines = lines.filter(_.contains("h"))
val mapErrorLines = errorLines.map(line => ("key", line))
val grouping = mapErrorLines.groupByKeyAndWindow(Seconds(8), Seconds(4))
I get something like:
604: ---
605:
, Guillermo Ortiz konstt2...@gmail.com
wrote:
I'm a newbie with Spark... a simple question:
val errorLines = lines.filter(_.contains("h"))
val mapErrorLines = errorLines.map(line => ("key", line))
val grouping = mapErrorLines.groupByKeyAndWindow(Seconds(8), Seconds(4))
I get something like:
604
and do something for each element.
}
I think it must be pretty basic... argh.
2014-12-17 18:43 GMT+01:00 Guillermo Ortiz konstt2...@gmail.com:
What I would like to do is count the number of elements and, if
the count is greater than some number, iterate over all of them and store
them in MySQL.
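One possible shape for that "count, then iterate and store" step, as a sketch only: the JDBC URL, credentials, table, and the helper name are made-up placeholders, not anything from the thread.

```scala
import java.sql.DriverManager
import org.apache.spark.streaming.dstream.DStream

object StoreSketch {
  // Hypothetical helper: for each batch, count the elements and, if the
  // count exceeds a threshold, write them to MySQL via plain JDBC.
  def storeIfLarge(grouping: DStream[(String, Iterable[String])], threshold: Long): Unit = {
    grouping.foreachRDD { rdd =>
      if (rdd.count() > threshold) {
        rdd.foreachPartition { partition =>
          // Open one connection per partition, not per element.
          val conn = DriverManager.getConnection(
            "jdbc:mysql://localhost/test", "user", "pass") // placeholders
          val stmt = conn.prepareStatement(
            "INSERT INTO lines (k, line) VALUES (?, ?)")
          partition.foreach { case (key, lines) =>
            lines.foreach { line =>
              stmt.setString(1, key)
              stmt.setString(2, line)
              stmt.executeUpdate()
            }
          }
          conn.close()
        }
      }
    }
  }
}
```

The count happens on the driver (rdd.count()), while the inserts run inside foreachPartition on the executors, so the connection must be created there rather than captured from the driver.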
Why doesn't it work? I guess it's the same with \n.
2014-12-13 12:56 GMT+01:00 Guillermo Ortiz konstt2...@gmail.com:
I got it, thanks... a silly question: why, if I do
out.write("hello" + System.currentTimeMillis() + "\n"); it doesn't
detect anything, and if I do
out.println("hello
Thanks.
2014-12-14 12:20 GMT+01:00 Gerard Maas gerard.m...@gmail.com:
Are you using a buffered PrintWriter? That probably has a different flushing
behaviour. Try doing out.flush() after out.write(...) and you will get the
same result.
This is Spark unrelated btw.
-kr, Gerard.
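The flushing difference Gerard describes can be reproduced without sockets or Spark at all; a small self-contained sketch (the object name and strings are illustrative):

```scala
import java.io.{BufferedWriter, ByteArrayOutputStream, OutputStreamWriter, PrintWriter}

object FlushDemo {
  def main(args: Array[String]): Unit = {
    val bytes = new ByteArrayOutputStream()
    // autoFlush = true only flushes on println/printf/format, NOT on write().
    val out = new PrintWriter(new BufferedWriter(new OutputStreamWriter(bytes)), true)

    out.write("hello " + System.currentTimeMillis())
    // Nothing reached the underlying stream yet: still sitting in the buffer.
    println("after write(): " + bytes.size() + " bytes") // prints 0

    out.flush()
    // Now the buffered characters are pushed through.
    println("after flush(): " + bytes.size() + " bytes")
  }
}
```

This is why out.println(...) appeared to "work": with autoFlush enabled, println flushes the buffer for you, while out.write(...) leaves the data buffered until an explicit flush() or close().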
ak...@sigmoidanalytics.com wrote:
socketTextStream is a socket client which will read from a TCP ServerSocket.
Thanks
Best Regards
On Fri, Dec 12, 2014 at 7:21 PM, Guillermo Ortiz konstt2...@gmail.com
wrote:
I don't understand what Spark Streaming's socketTextStream is waiting for...
is it like
Hi,
I'm a newbie with Spark; I'm just trying to use Spark Streaming and
filter some data sent with a Java socket, but it's not working... it
works when I use ncat.
Why is it not working?
My Spark code is just this:
val sparkConf = new SparkConf().setMaster("local[2]").setAppName("Test")
val
which will be sent to whichever client connects
on 12345. I have tested it and it is working with Spark Streaming
(socketTextStream).
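For reference, a minimal sketch of the server side: a ServerSocket that writes newline-terminated lines to whichever client connects, which is the shape of data socketTextStream expects. The port, messages, and object name are placeholders, not the poster's actual code.

```scala
import java.io.PrintWriter
import java.net.ServerSocket

object LineServer {
  def main(args: Array[String]): Unit = {
    val server = new ServerSocket(12345)
    println("waiting for a client on 12345...")
    val client = server.accept() // blocks until Spark's receiver connects

    // autoFlush = true so each println is pushed to the socket immediately.
    val out = new PrintWriter(client.getOutputStream, true)
    for (i <- 1 to 5) {
      out.println(s"hello $i") // newline-terminated and flushed
      Thread.sleep(1000)
    }
    client.close()
    server.close()
  }
}
```

A common pitfall here is exactly the one from the earlier messages: if the server uses write() on a buffered writer without flushing, the lines never leave the buffer and socketTextStream appears to receive nothing.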
Thanks
Best Regards
On Fri, Dec 12, 2014 at 6:25 PM, Guillermo Ortiz konstt2...@gmail.com
wrote:
Hi,
I'm a newbie with Spark; I'm just trying to use
Hello,
I'm a newbie with Spark but I've been working with Hadoop for a while.
I have two questions.
Is there any case where MR is better than Spark? I don't know in which
cases I should use Spark rather than MR. When is MR faster than Spark?
The other question is: I know Java; is it worth it to learn
Hi,
I'm starting with Spark and I'm just trying to understand: if I want to
use Spark Streaming, should I feed it with Flume or Kafka? I think
there's no official Flume sink for Spark Streaming, and it seems
that Kafka fits better since it gives you reliability.
Could someone give a good
or something else) and make it available for a variety
of apps via Kafka.
Hope this helps!
Hari
On Wed, Nov 19, 2014 at 8:10 AM, Guillermo Ortiz konstt2...@gmail.com
wrote:
Hi,
I'm starting with Spark and I'm just trying to understand: if I want to
use Spark Streaming, should I feed
, Guillermo Ortiz konstt2...@gmail.com
wrote:
Thank you for your answer; I don't know if I typed the question
correctly, but your answer helps me.
I'm going to ask the question again to check whether you understood me.
I have this topology:
DataSource1, ..., DataSourceN --> Kafka --> Spark Streaming
I would like to define the names of my output files in Spark. I have a process
which writes many files, and I would like to name them; is it possible? I
guess that it's not possible with the saveAsTextFile method.
It would be something similar to the MultipleOutputs of Hadoop.
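One common workaround (a sketch only, not the only way): use saveAsHadoopFile with a MultipleTextOutputFormat subclass that derives the file name from the record's key. The class names, keys, and output path below are placeholders.

```scala
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
import org.apache.spark.SparkContext

// Write each key's records to a file named after the key, similar in
// spirit to Hadoop's MultipleOutputs.
class KeyAsFileName extends MultipleTextOutputFormat[Any, Any] {
  override def generateFileNameForKeyValue(key: Any, value: Any, name: String): String =
    key.toString // one output file per distinct key
}

object NamedOutputs {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[2]", "NamedOutputs")
    val pairs = sc.parallelize(Seq(("errors", "line1"), ("warnings", "line2")))
    // Files under /tmp/out will be named "errors", "warnings", etc.
    pairs.saveAsHadoopFile("/tmp/out", classOf[String], classOf[String],
      classOf[KeyAsFileName])
    sc.stop()
  }
}
```

The key doubles as the file name, so the process that "writes many files" just has to emit the desired name as the key of each record.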