I am a student of telecommunications engineering, and this year I worked with
Spark. It is a world that I like, and I would like to know whether there are
jobs in this area.
Thanks for all
Regards
Note: CCing user@spark.apache.org
First, you must check whether the RDD is empty:

messages.foreachRDD { rdd =>
  if (!rdd.isEmpty) {
    // process only non-empty batches here
  }
}
Now you can obtain an instance of a SQLContext:
val sqlContext = SQLContextSingleton.getInstance(rdd.sparkContext)
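The SQLContextSingleton used here is not part of Spark itself; the Spark Streaming programming guide suggests defining it yourself as a lazily instantiated singleton, roughly like this (a sketch, assuming Spark 1.3):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

// Lazily instantiated singleton SQLContext, following the pattern in the
// Spark Streaming programming guide. Reusing one instance avoids creating
// a fresh SQLContext for every micro-batch.
object SQLContextSingleton {
  @transient private var instance: SQLContext = _

  def getInstance(sparkContext: SparkContext): SQLContext = synchronized {
    if (instance == null) {
      instance = new SQLContext(sparkContext)
    }
    instance
  }
}
```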
I am trying to save some data to Cassandra from a Spark Streaming application:
messages.foreachRDD { rdd =>
  . . .
  rdd.saveToCassandra("test", "test")
}
When I run it, the app closes when I receive data, or it can't connect to
Cassandra.
Any ideas? Thanks
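For reference, with the DataStax spark-cassandra-connector the usual pattern is to call saveToCassandra on the RDD itself after importing the connector's implicits. A sketch, assuming spark.cassandra.connection.host is set in the SparkConf and that the keyspace, table, and column names here ("test", "test", "key", "value") match your actual schema:

```scala
import com.datastax.spark.connector._ // adds saveToCassandra to RDDs

messages.foreachRDD { rdd =>
  if (!rdd.isEmpty) {
    // Save the (key, value) pairs from the direct stream; the column
    // names must match the Cassandra table definition.
    rdd.saveToCassandra("test", "test", SomeColumns("key", "value"))
  }
}
```

If the app dies as soon as data arrives, a mismatch between the saved tuple and the table's columns is a common cause; if it can't connect, checking the connection host in the SparkConf is usually the first step.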
--
Atte. Sergio Jiménez
I have a counter column family in Cassandra. I want to update these counters
from a Spark Streaming application. How can I update Cassandra counters
with Spark?
Thanks.
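As I understand the spark-cassandra-connector's behavior, writing to a counter column increments it by the saved value rather than overwriting it, so counters can be updated by saving (key, delta) pairs. A sketch with a hypothetical table and column names:

```scala
import com.datastax.spark.connector._ // adds saveToCassandra to RDDs

// Hypothetical schema:
//   CREATE TABLE test.page_hits (page text PRIMARY KEY, hits counter);
// Each (page, 1L) pair written below adds 1 to that page's counter.
messages.foreachRDD { rdd =>
  val increments = rdd.map { case (_, page) => (page, 1L) }
  increments.saveToCassandra("test", "page_hits", SomeColumns("page", "hits"))
}
```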
Hi,
I am trying to create a dashboard for an Apache Spark job. I need to run Spark
Streaming 24/7 and, when an ajax request arrives, answer it with the current
state of the job. I have created the client and the Spark program. I tried to
create the response service with Play, but this runs the program
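One lightweight alternative to Play for this is the JDK's built-in HTTP server: the streaming job updates shared state from foreachRDD, and a small handler answers the ajax request with that state as JSON. A runnable sketch; the /status endpoint and the processed counter are illustrative, not part of Spark:

```scala
import java.net.InetSocketAddress
import java.util.concurrent.atomic.AtomicLong
import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}

object StatusServer {
  // Shared state; the streaming job would update this from foreachRDD.
  val processed = new AtomicLong(0L)

  // Starts an HTTP server answering GET /status with the current state.
  def start(port: Int): HttpServer = {
    val server = HttpServer.create(new InetSocketAddress(port), 0)
    server.createContext("/status", new HttpHandler {
      override def handle(exchange: HttpExchange): Unit = {
        val body = s"""{"processed": ${processed.get}}""".getBytes("UTF-8")
        exchange.sendResponseHeaders(200, body.length.toLong)
        exchange.getResponseBody.write(body)
        exchange.close()
      }
    })
    server.start()
    server
  }
}
```

The streaming side would then call something like StatusServer.processed.addAndGet(rdd.count()) inside foreachRDD, so the endpoint always reflects the latest batch, independently of the streaming loop.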
> Thanks
> Best Regards
>
> On Fri, Apr 24, 2015 at 11:20 PM, Sergio Jiménez Barrio <
> drarse.a...@gmail.com> wrote:
>
>> Hi,
>>
>> I need to compare whether the count of received messages is 0 or not, but
>> messages.count() returns a DStream[Long].
But if I use messages.count().print, this shows a single number :/
2015-04-24 20:22 GMT+02:00 Sean Owen :
> It's not a Long. It's an infinite stream of Longs.
>
> On Fri, Apr 24, 2015 at 2:20 PM, Sergio Jiménez Barrio
> wrote:
> > It isn't the sum. This
> data so far but may have data in the future.
> That's why I say you can count records received to date.
>
> On Fri, Apr 24, 2015 at 1:57 PM, Sergio Jiménez Barrio
> wrote:
> > My problem is that I need to know if I have a DStream with data. If in this
> > second I didn'
Hi,
I need to compare whether the count of received messages is 0 or not, but
messages.count() returns a DStream[Long]. I tried this solution:
val cuenta = messages.count().foreachRDD { rdd =>
  rdd.first()
}
But th
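Two details trip up the attempt above: foreachRDD returns Unit, so cuenta never holds a number, and in general rdd.first() throws on an empty RDD. Since each batch of messages.count() is an RDD containing exactly one Long, the check can instead be done inside foreachRDD (a sketch):

```scala
messages.count().foreachRDD { rdd =>
  val n = rdd.first() // safe here: count() emits one element per batch
  if (n == 0L) {
    // no messages received in this batch
  } else {
    // process the batch
  }
}
```

Alternatively, calling rdd.isEmpty on the original messages stream inside foreachRDD avoids the count entirely.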
Spark Documentation
Thanks for all!
2015-04-23 10:29 GMT+02:00 Sergio Jiménez Barrio :
> Thank you very much, Tathagata!
>
>
> On Wednesday, April 22, 2015, Tathagata Das
> wrote:
>
>> Aaah, that. That is probably a limitation of the SQLContext (cc'ing Yin
>
Thank you very much, Tathagata!
On Wednesday, April 22, 2015, Tathagata Das
wrote:
> Aaah, that. That is probably a limitation of the SQLContext (cc'ing Yin
> for more information).
>
>
> On Wed, Apr 22, 2015 at 7:07 AM, Sergio Jiménez Barrio <
> drars
Sorry, this is the error:
[error] /home/sergio/Escritorio/hello/streaming.scala:77: Implementation
restriction: case classes cannot have more than 22 parameters.
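In Scala 2.10 (which Spark 1.3 builds against), case classes are limited to 22 fields; the limit was lifted in Scala 2.11. A common workaround in Spark SQL is to skip the case class and build the DataFrame from Rows with an explicit StructType schema. A sketch; the field names and the comma-split are placeholders for your real parsing:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Declare the wide schema explicitly instead of with a case class.
val schema = StructType(
  (1 to 30).map(i => StructField(s"field$i", StringType, nullable = true))
)

messages.foreachRDD { rdd =>
  val rows = rdd.map { case (_, line) => Row.fromSeq(line.split(",").toSeq) }
  val sqlContext = SQLContextSingleton.getInstance(rdd.sparkContext)
  val df = sqlContext.createDataFrame(rows, schema)
  df.registerTempTable("wide_table")
}
```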
2015-04-22 16:06 GMT+02:00 Sergio Jiménez Barrio :
> I tried the solution from the guide, but I exceeded the size limit of a case
> class.
> What about sqlContext.createDataFrame(rdd)?
>> On 22 Apr 2015 23:04, "Sergio Jiménez Barrio"
>> wrote:
>>
>>> Hi,
>>>
>>> I am using Kafka with Spark Streaming to send JSON to Apache Spark:
>>>
>>> val messages = KafkaUtils.c
Hi,
I am using Kafka with Spark Streaming to send JSON to Apache Spark:
val messages = KafkaUtils.createDirectStream[String, String,
StringDecoder, StringDecoder](ssc, kafkaParams, topicsSet)
Now, I want to parse the created DStream into a DataFrame, but I don't know if
Spark 1.3 has an easy way for this.
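In Spark 1.3 the shortest route I know of is SQLContext.jsonRDD, applied once per batch. A sketch, assuming the Kafka message values (the second element of each pair) are JSON strings:

```scala
messages.foreachRDD { rdd =>
  if (!rdd.isEmpty) {
    val sqlContext = SQLContextSingleton.getInstance(rdd.sparkContext)
    // createDirectStream yields (key, value) pairs; the value is the JSON.
    val df = sqlContext.jsonRDD(rdd.map(_._2))
    df.printSchema()
  }
}
```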
the user list.
>
> On Mon, Apr 6, 2015 at 6:53 AM, Sergio Jiménez Barrio <
> drarse.a...@gmail.com> wrote:
>
>> Hi!,
>>
>> I tried your solution, and I saw that the first row is null. Is this
>> important? Can I work with null rows? Some rows have