Thanks Ryan! In this case I will have a Dataset<Row>, so is there a way to
convert a Row to a JSON string?
Thanks
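Absent a built-in Row.toJson() (which is what the question is asking for), one way is to build the JSON string by hand inside a map() over the rows, using the row's schema field names and values. A minimal sketch of that serialization logic, using a plain Map in place of a Spark Row to keep it self-contained (the helper name RowToJson is made up, only strings, numbers, booleans, and nulls are handled, and nested structs would need more work):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class RowToJson {
    // Build a JSON object string from ordered field names/values — roughly
    // what one would do per Row using row.schema().fieldNames() and row.get(i).
    static String toJson(Map<String, Object> fields) {
        return fields.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\":" + render(e.getValue()))
                .collect(Collectors.joining(",", "{", "}"));
    }

    // Render a single value: numbers/booleans unquoted, null literal,
    // everything else as an escaped JSON string.
    private static String render(Object v) {
        if (v == null) return "null";
        if (v instanceof Number || v instanceof Boolean) return v.toString();
        return "\"" + v.toString().replace("\\", "\\\\").replace("\"", "\\\"") + "\"";
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("name", "kant");
        row.put("count", 42);
        System.out.println(toJson(row));
    }
}
```

A production version would more likely use a real JSON library (e.g. Jackson) against the Row's schema, but the shape of the per-row conversion is the same.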
On Sat, Sep 9, 2017 at 5:14 PM, Shixiong(Ryan) Zhu wrote:
It's because "toJSON" doesn't support Structured Streaming. The current
implementation will convert the Dataset to an RDD, which is not supported
by streaming queries.
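Given that explanation, a common workaround (a sketch, not something stated in this thread) is to produce the JSON with the to_json and struct SQL functions instead of toJSON: they operate column-wise on the Dataset, so the query never drops down to an RDD. This is a fragment that assumes an existing streaming Dataset<Row> named dataset:

```java
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.struct;
import static org.apache.spark.sql.functions.to_json;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;

// Pack all columns into a single struct, serialize that struct to a JSON
// string column, and read the one remaining column out as Dataset<String>.
Dataset<String> json = dataset
        .select(to_json(struct(col("*"))).alias("value"))
        .as(Encoders.STRING());
```

The resulting Dataset<String> can then be written with writeStream() as in the code below, since no RDD conversion is involved.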
On Sat, Sep 9, 2017 at 4:40 PM, kant kodali wrote:
> yes it is a streaming dataset. so what is the problem
toJSON on Row object.
On Sat, Sep 9, 2017 at 4:18 PM, Felix Cheung wrote:
> toJSON on Dataset/DataFrame?
>
> --
> *From:* kant kodali
> *Sent:* Saturday, September 9, 2017 4:15:49 PM
> *To:* user @spark
> *Subject:* How to convert Row to JSON in Java?
yes it is a streaming dataset. so what is the problem with the following code?
Dataset<String> ds = dataset.toJSON().map(
    (MapFunction<String, String>) s -> someFunctionThatReturnsAString(s),
    Encoders.STRING());
StreamingQuery query = ds.writeStream().start();
query.awaitTermination();
On Sat, Sep 9, 2017 at 4:20 PM, Felix Cheung wrote:
What is newDS?
If it is a streaming Dataset/DataFrame (since you have writeStream there) then
there seems to be an issue preventing toJSON from working.
From: kant kodali
Sent: Saturday, September 9, 2017 4:04:33 PM
To: user @spark
Subject:
toJSON on Dataset/DataFrame?
From: kant kodali
Sent: Saturday, September 9, 2017 4:15:49 PM
To: user @spark
Subject: How to convert Row to JSON in Java?
Hi All,
How to convert Row to JSON in Java? It would be nice to have .toJson()
method in the Row class.
Thanks,
kant
Hi All,
I have the following code and I am not sure what's wrong with it. Why can't I
call newDS.toJSON() (which returns a Dataset<String>) here? I am using Spark
2.2.0, so I am wondering if there is a workaround?
Dataset<String> ds = newDS.toJSON().map(
    (MapFunction<String, String>) s -> someFunctionThatReturnsAString(s),
    Encoders.STRING());
StreamingQuery query = ds.writeStream().start();
query.awaitTermination();
I am running a Spark Streaming application on a cluster composed of three
nodes, each one with a worker and three executors (so a total of 9
executors). I am using Spark standalone mode (version 2.1.1).
The application is run with a spark-submit command with option "--deploy-mode
client" and
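For a standalone cluster like the one described, such a launch might look like the following (a sketch only; the master URL, class name, jar name, and core counts are placeholders, not taken from the thread):

```shell
# Client deploy mode against a standalone master (spark:// URL).
# 9 executors of 1 core each can be obtained on a 9-core cluster by
# capping per-executor cores and total cores.
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode client \
  --executor-cores 1 \
  --total-executor-cores 9 \
  --class com.example.StreamingApp \
  streaming-app.jar
```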
Hello,
you might get the information you are looking for from this hidden API:
http://<master-host>:<web-ui-port>/json/
Hope it helps,
Davide
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
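Assuming this refers to the JSON endpoint of the standalone master's web UI (port 8080 by default; the host below is a placeholder), it can be queried with a plain HTTP GET, for example:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MasterStatus {
    public static void main(String[] args) throws Exception {
        // Placeholder address: substitute your master's web UI host and port.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://master-host:8080/json/"))
                .GET()
                .build();
        // The body is a JSON document describing the cluster state
        // (workers, cores, running applications, ...).
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```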
Hi,
Naga has kindly suggested here that I should push the file into an RDD and get
rid of the header. But my partitions have hundreds of files in them, and just
opening and processing the files using RDDs is an old way of working.
I think the Spark community has moved on from RDDs to DataFrames to
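For the header problem specifically, the DataFrame reader can discard headers itself rather than doing it by hand on an RDD. A sketch, assuming the files are CSV (the thread does not actually say) and with a placeholder path:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// With header=true, the first line of each file is read as column names
// instead of data, so no manual header stripping is needed per file.
SparkSession spark = SparkSession.builder()
        .appName("read-with-header")
        .getOrCreate();
Dataset<Row> df = spark.read()
        .option("header", "true")
        .csv("/path/to/partitioned/files");
```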