Why don't you do a normal .saveAsTextFiles?
Thanks
Best Regards
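A minimal sketch of that suggestion, assuming a JavaDStream<String> like the newinputStream in the code below and an output path every node can reach (the path here is illustrative). Going through dstream() reaches the underlying DStream's saveAsTextFiles, which writes one directory per batch interval with one part-file per partition:

```java
import org.apache.spark.streaming.api.java.JavaDStream;

public class SaveSketch {
    // Let Spark Streaming write each batch itself instead of opening
    // files inside map(). Output lands as prefix-<batchTimeMs>.txt
    // directories on a shared filesystem such as HDFS, so it works the
    // same in local and cluster mode.
    static void save(JavaDStream<String> newinputStream) {
        newinputStream.dstream().saveAsTextFiles("hdfs:///user/logs/msglog", "txt");
    }
}
```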
On Mon, Jun 22, 2015 at 11:55 PM, anshu shukla anshushuk...@gmail.com
wrote:
Thanks for the reply!
Yes, it should write on any machine of the cluster. Can you please help me with how to do this? Previously I was writing using collect(), so some of my tuples went missing while writing.
Thanks a lot.
I only want to log a timestamp and a unique message id, not the full RDD.
On Tue, Jun 23, 2015 at 12:41 PM, Akhil Das ak...@sigmoidanalytics.com
wrote:
Why don't you do a normal .saveAsTextFiles?
On Mon, Jun 22, 2015 at 11:55 PM, anshu shukla anshushuk...@gmail.com
wrote:
This runs perfectly on the local system, but in cluster mode it does not write to the file. Any suggestions, please?
// msgid is a long counter
JavaDStream<String> newinputStream = inputStream.map(new Function<String, String>() {
    @Override
    public String call(String v1) throws Exception {
        String
Can we not write data to a text file in parallel, with multiple
executors running in parallel?
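Since the goal described above is only a timestamp plus a unique message id per tuple (not the full RDD), one option is to have map() emit a small log line and let Spark write those lines out, rather than opening files inside map(). A minimal sketch of such a record builder; the class and method names here are made up for illustration:

```java
public class MsgLogLine {
    // Build the small record the thread describes: a unique message id
    // and a timestamp, instead of the full tuple. In the stream this
    // string would be returned from the map() function and then written
    // out by Spark itself, so every executor writes its partition of the
    // batch in parallel.
    static String formatRecord(long msgId, long timestampMillis) {
        return msgId + "," + timestampMillis;
    }

    public static void main(String[] args) {
        // → 42,1435000000000
        System.out.println(formatRecord(42L, 1435000000000L));
    }
}
```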
--
Thanks Regards,
Anshu Shukla
Is spoutLog just a non-Spark file writer? If you run that in the map call
on a cluster, it's going to be writing to the local filesystem of whichever
executor it's being run on. I'm not sure if that's what you intended.
On Mon, Jun 22, 2015 at 1:35 PM, anshu shukla anshushuk...@gmail.com
wrote:
Thanks for the reply!
Yes, it should write on any machine of the cluster. Can you please help me with how to do this? Previously I was writing using collect(), so some of my tuples went missing while writing.
// previous logic, which just created the file on the master -