Hi,
I'm also trying to use the insertInto method, but I end up getting an assertion error.
Is there any workaround for this?
rgds
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Trying-to-run-SparkSQL-over-Spark-Streaming-tp12530p21316.html
Since this is the case, is there any way to run a join over data received
from two different streams?
Thanks
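For reference, Spark Streaming does support joins on pair DStreams within a batch interval. A minimal sketch (the socket sources, ports, and key extraction here are illustrative assumptions, not from this thread; requires a Spark Streaming 1.x dependency to run):

```scala
// Sketch: joining two DStreams by key with the Spark Streaming 1.x API.
// Host/port values and the comma-delimited key format are assumptions.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object JoinSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("JoinSketch")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Two key-value streams; the sources here are placeholders.
    val streamA = ssc.socketTextStream("localhost", 9001).map(l => (l.split(",")(0), l))
    val streamB = ssc.socketTextStream("localhost", 9002).map(l => (l.split(",")(0), l))

    // join() pairs records from the two streams by key, one batch at a time;
    // it does not join across different batch intervals.
    val joined = streamA.join(streamB)
    joined.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Note that this only joins records that arrive in the same batch interval on both streams; joining across intervals would need windowing or state.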
Hi,
On Mon, Aug 25, 2014 at 7:11 PM, praveshjain1991 praveshjain1...@gmail.com
wrote:
If you want to issue an SQL statement on streaming data, you must have both
the registerAsTable() and the sql() call *within* the foreachRDD(...) block,
or -- as you experienced -- the table name will be
Hi again,
On Tue, Aug 26, 2014 at 10:13 AM, Tobias Pfeiffer t...@preferred.jp wrote:
On Mon, Aug 25, 2014 at 7:11 PM, praveshjain1991
praveshjain1...@gmail.com wrote:
If you want to issue an SQL statement on streaming data, you must have both
the registerAsTable() and the sql() call
is lost and, from what I understand, there is then no way to learn about the
column names of the returned data, as this information is only encoded in the
SchemaRDD.
Why is this bad??
Hi,
On Thu, Aug 21, 2014 at 3:11 PM, praveshjain1991 praveshjain1...@gmail.com
wrote:
The part that you mentioned: "the variable `result` is of type
DStream[Row]. That is, the meta-information from the SchemaRDD is lost and,
from what I understand, there is then no way to learn about the column names"
Oh right. Got it. Thanks
Also found this link on that discussion:
https://github.com/thunderain-project/StreamSQL
Does this provide more features than Spark?
))
val teenagers = sqc.sql("SELECT name FROM data WHERE age >= 13 AND age <= 19")
ssc.start()
ssc.awaitTermination()
}
}
Any suggestions welcome. Thanks.
Hi,
On Thu, Aug 21, 2014 at 2:19 PM, praveshjain1991 praveshjain1...@gmail.com
wrote:
Using Spark SQL with batch data works fine, so I'm thinking it has to do
with how I'm calling streamingcontext.start(). Any ideas what the issue is?
Here is the code:
Please have a look at