Re: combineByKey

2019-04-05 Thread Madabhattula Rajesh Kumar
. > > The type-safety that you're after (that eventually makes life *easier*) is > best supported by Dataset (would have prevented the .id vs .Id error). > Although there are some performance tradeoffs vs RDD and DataFrame... > > > > > > > On Fri, Apr 5, 2019 at 2:11 A
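A hedged illustration of the reply's point (the Dataset conversion below is an assumption, not code from the thread; `spark` is the usual SparkSession name): with a Dataset of the case class, a misspelled field fails at compile time instead of at runtime.
import spark.implicits._
val ds = messages.toSeq.toDS()   // Dataset[Messages]
ds.map(_.Id)                     // compiles: the case class field is Id
// ds.map(_.id)                  // would not compile: value id is not a member of Messages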

combineByKey

2019-04-05 Thread Madabhattula Rajesh Kumar
Hi, is there any issue in the below code? case class Messages(timeStamp: Int, Id: String, value: String) val messages = Array( Messages(0, "d1", "t1"), Messages(0, "d1", "t1"), Messages(0, "d1", "t1"), Messages(0, "d1", "t1"), Messages(0, "d1", "t2"), Messages(0,
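For reference, a minimal combineByKey sketch over the quoted data, assuming the intent is to collect the distinct values per device Id (the grouping key and Set aggregation are assumptions):
val rdd = sc.parallelize(messages)
val combined = rdd
  .map(m => (m.Id, m.value))                       // note the capital I in .Id
  .combineByKey(
    (v: String) => Set(v),                         // createCombiner
    (acc: Set[String], v: String) => acc + v,      // mergeValue
    (a: Set[String], b: Set[String]) => a ++ b)    // mergeCombiners
combined.collect().foreach(println)                // e.g. (d1,Set(t1, t2))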

spark jobserver

2017-03-05 Thread Madabhattula Rajesh Kumar
Hi, I am getting the below exception when I start the job-server: ./server_start.sh: line 41: kill: (11482) - No such process. Please let me know how to resolve this error. Regards, Rajesh

SimpleConfigObject

2017-03-02 Thread Madabhattula Rajesh Kumar
Hi, how do I read a JSON string from a SimpleConfigObject? SimpleConfigObject({"ID":"123","fileName":"123.txt"}) Regards, Rajesh
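Assuming this is the SimpleConfigObject from Typesafe Config (com.typesafe.config), a minimal sketch: convert it to a Config to read individual fields, or render it back to a JSON string.
import com.typesafe.config.{ConfigFactory, ConfigObject, ConfigRenderOptions}
val obj: ConfigObject =
  ConfigFactory.parseString("""{"ID":"123","fileName":"123.txt"}""").root()
val id       = obj.toConfig.getString("ID")              // "123"
val fileName = obj.toConfig.getString("fileName")        // "123.txt"
val json     = obj.render(ConfigRenderOptions.concise()) // compact JSON string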

Continuous or Categorical

2017-03-01 Thread Madabhattula Rajesh Kumar
Hi, given a set of values (for example, column values in a CSV file), how do I check whether they are continuous or categorical? Is any statistical test available? Regards, Rajesh
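There is no definitive test, but here is a rough heuristic sketch (the threshold and rules are my assumptions): treat non-numeric columns, or numeric columns with few distinct values relative to the row count, as categorical.
import scala.util.Try
def looksCategorical(values: Seq[String], maxDistinctRatio: Double = 0.05): Boolean = {
  val nonNumeric = values.exists(v => Try(v.trim.toDouble).isFailure)
  val distinctRatio = values.distinct.size.toDouble / values.size
  nonNumeric || distinctRatio < maxDistinctRatio   // categorical if either holds
}
looksCategorical(Seq("red", "blue", "red"))        // true
looksCategorical(Seq("1.2", "3.4", "5.6", "7.8"))  // false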

MultiLabelBinarizer

2017-02-08 Thread Madabhattula Rajesh Kumar
Hi, do we have an equivalent of the below preprocessing function in Spark ML? from sklearn.preprocessing import MultiLabelBinarizer Regards, Rajesh
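Spark ML has no direct MultiLabelBinarizer; a common substitute (an assumption, not an official equivalent) is CountVectorizer with binary=true over an array-of-strings label column (Spark 2.0+):
import org.apache.spark.ml.feature.CountVectorizer
val df = spark.createDataFrame(Seq(
  (0, Seq("a", "b")),
  (1, Seq("b", "c")))).toDF("id", "labels")
val model = new CountVectorizer()
  .setInputCol("labels")
  .setOutputCol("features")
  .setBinary(true)      // 0/1 multi-hot instead of counts
  .fit(df)
model.transform(df).show(false)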

ML version of Kmeans

2017-01-31 Thread Madabhattula Rajesh Kumar
Hi, I am not able to find a predict method on the "ML" version of KMeans. The MLlib version has a predict method: KMeansModel.predict(point: Vector). How do I predict the cluster for new vectors in the ML version of KMeans? Regards, Rajesh
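A minimal sketch of the ML-API pattern (column names are assumptions): prediction happens through transform() on a DataFrame rather than a point-wise predict().
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.linalg.Vectors
val train = spark.createDataFrame(Seq(
  Tuple1(Vectors.dense(0.0, 0.0)),
  Tuple1(Vectors.dense(9.0, 9.0)))).toDF("features")
val model = new KMeans().setK(2).setSeed(1L).fit(train)
val fresh = spark.createDataFrame(Seq(
  Tuple1(Vectors.dense(0.5, 0.5)))).toDF("features")
model.transform(fresh).select("features", "prediction").show()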

PrefixSpan

2017-01-24 Thread Madabhattula Rajesh Kumar
Hi, please point me to an explanation of the internal functionality of PrefixSpan, with examples. Regards, Rajesh
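A minimal usage sketch of MLlib's PrefixSpan (the tiny dataset and thresholds are illustrative assumptions); it mines frequent sequential patterns from sequences of itemsets:
import org.apache.spark.mllib.fpm.PrefixSpan
val sequences = sc.parallelize(Seq(
  Array(Array(1, 2), Array(3)),
  Array(Array(1), Array(3, 2), Array(1, 2))), 2).cache()
val model = new PrefixSpan()
  .setMinSupport(0.5)        // pattern must appear in >= 50% of sequences
  .setMaxPatternLength(5)
  .run(sequences)
model.freqSequences.collect().foreach { fs =>
  println(fs.sequence.map(_.mkString("[", ",", "]")).mkString(",") + " -> " + fs.freq)
}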

Need help :- org.apache.spark.SparkException :- No such file or directory

2016-09-29 Thread Madabhattula Rajesh Kumar
Hi Team, I am getting the below exception in Spark jobs. Please let me know how to fix this issue. *Below is my cluster configuration :- * I am using SparkJobServer to trigger the jobs. Below is my configuration in SparkJobServer. - num-cpu-cores = 4 - memory-per-node = 4G I have 4 workers

Re: LabeledPoint creation

2016-09-08 Thread Madabhattula Rajesh Kumar
s <aka.f...@gmail.com> wrote: > It has 4 categories > a = 1 0 0 > b = 0 0 0 > c = 0 1 0 > d = 0 0 1 > > -- > Oleksiy Dyagilev > > On Wed, Sep 7, 2016 at 10:42 AM, Madabhattula Rajesh Kumar < > mrajaf...@gmail.com> wrote: > >> Hi, >&g

Forecasting algorithms in spark ML

2016-09-07 Thread Madabhattula Rajesh Kumar
Hi, Please let me know supported Forecasting algorithms in spark ML Regards, Rajesh

Re: LabeledPoint creation

2016-09-07 Thread Madabhattula Rajesh Kumar
Hi, any help on the above use case? Regards, Rajesh On Tue, Sep 6, 2016 at 5:40 PM, Madabhattula Rajesh Kumar < mrajaf...@gmail.com> wrote: > Hi, > > I am new to Spark ML, trying to create a LabeledPoint from categorical > dataset(example code from spark). For this,

LabeledPoint creation

2016-09-06 Thread Madabhattula Rajesh Kumar
Hi, I am new to Spark ML, trying to create a LabeledPoint from a categorical dataset (example code from Spark). For this, I am using the one-hot encoding feature. Below is my code: val df = sparkSession.createDataFrame(Seq( (0, "a"), (1, "b"), (2,
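For reference, a sketch of the usual two-step encoding from the Spark examples (StringIndexer then OneHotEncoder, pre-3.0 API); the resulting vector column can then feed a LabeledPoint or an ML Pipeline:
import org.apache.spark.ml.feature.{OneHotEncoder, StringIndexer}
val df = sparkSession.createDataFrame(Seq(
  (0, "a"), (1, "b"), (2, "c"), (3, "a"))).toDF("id", "category")
val indexed = new StringIndexer()
  .setInputCol("category").setOutputCol("categoryIndex")
  .fit(df).transform(df)
val encoded = new OneHotEncoder()
  .setInputCol("categoryIndex").setOutputCol("categoryVec")
  .transform(indexed)
encoded.show()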

Re: parallel processing with JDBC

2016-08-15 Thread Madabhattula Rajesh Kumar
; *Disclaimer:* Use it at your own risk. Any and all responsibility for any > loss, damage or destruction of data or any other property which may arise > from relying on this email's technical content is explicitly disclaimed. > The author will in no case be liable for any monetary dama

Re: parallel processing with JDBC

2016-08-15 Thread Madabhattula Rajesh Kumar
G FROM >> scratchpad.dummy) >> *WHERE ID >= 1601 >> AND ID < 1701* >> >> HTH >> >> >> >> >> >> >> >> Dr Mich Talebzadeh >> >> >> >> Li

Re: parallel processing with JDBC

2016-08-15 Thread Madabhattula Rajesh Kumar
Hi Mich, I have the below question. I want to join two tables and return the result based on the input value. In this case, how do we need to specify the lower-bound and upper-bound values? select t1.id, t1.name, t2.course, t2.qualification from t1, t2 where t1.transactionid=*1* and t1.id = t2.id
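One approach (an assumption, not from the thread; the URL and bounds are hypothetical): push the join down as a dbtable subquery and partition on a numeric column of the result, so lowerBound/upperBound describe that column's range, not the filter value:
val joined = sqlContext.read.format("jdbc").options(Map(
  "url" -> "jdbc:oracle:thin:@host:1521:SID",
  "dbtable" ->
    """(select t1.id, t1.name, t2.course, t2.qualification
        from t1, t2
        where t1.transactionid = 1 and t1.id = t2.id) q""",
  "partitionColumn" -> "id",
  "lowerBound" -> "1",           // approx. min(id) of the joined result
  "upperBound" -> "100000",      // approx. max(id) of the joined result
  "numPartitions" -> "4")).load()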

SparkSQL parallelism

2016-02-11 Thread Madabhattula Rajesh Kumar
Hi, I have a Spark cluster with one master and 3 worker nodes. I have written the below code to fetch records from Oracle using Spark SQL: val sqlContext = new org.apache.spark.sql.SQLContext(sc) val employees = sqlContext.read.format("jdbc").options( Map("url" ->

spark-cassandra

2016-02-03 Thread Madabhattula Rajesh Kumar
Hi, I am using Spark Jobserver to submit the jobs. I am using the spark-cassandra connector to connect to Cassandra. I am getting the below exception through Spark Jobserver. If I submit the job through the *Spark-Submit *command it works fine. Please let me know how to solve this issue. Exception in

SQL

2016-01-26 Thread Madabhattula Rajesh Kumar
Hi, to read data from Oracle I am using sqlContext. Below is the method signature. Do the lowerBound and upperBound values have to be the actual lower and upper values of the column in the table, or can we give any numbers? Please clarify. sqlContext.read.format("jdbc").options( Map("url" -> "jdbcURL",
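A hedged clarification sketch (general JDBC-source behaviour; column name is hypothetical): the bounds do not filter rows, they only control how the partitionColumn range is split into per-partition queries, so values outside the real min/max merely skew partition sizes:
val employees = sqlContext.read.format("jdbc").options(Map(
  "url" -> "jdbcURL",
  "dbtable" -> "employees",
  "partitionColumn" -> "emp_id",   // hypothetical numeric column
  "lowerBound" -> "0",
  "upperBound" -> "10000",
  "numPartitions" -> "4")).load()
// Spark issues 4 queries covering (-inf,2500), [2500,5000), [5000,7500), [7500,+inf);
// rows outside [0,10000) still load, they just land in the first or last partition.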

Re: Clarification on Data Frames joins

2016-01-24 Thread Madabhattula Rajesh Kumar
Hi, Any suggestions on this approach? Regards, Rajesh On Sat, Jan 23, 2016 at 11:24 PM, Madabhattula Rajesh Kumar < mrajaf...@gmail.com> wrote: > Hi, > > I have a big database table(1 million plus records) in oracle. I need to > query records based on input numbers. For t

Clarification on Data Frames joins

2016-01-23 Thread Madabhattula Rajesh Kumar
Hi, I have a big database table (1 million+ records) in Oracle. I need to query records based on input numbers. For this use case, I am doing the below steps. I am creating two data frames. DF1 = I am computing this DF1 using a SQL query. It has one million+ records. DF2 = I have a list of
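One option for this pattern (a sketch under assumptions; the id values and names are hypothetical): put the input numbers in a small DataFrame and broadcast it, so the million-row side is never shuffled:
import org.apache.spark.sql.functions.broadcast
import sqlContext.implicits._
val inputIds = Seq(1001L, 1002L, 1003L).toDF("id")   // hypothetical input numbers
val matched  = df1.join(broadcast(inputIds), "id")   // df1 = the big table
matched.show()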

Re: Concurrent Spark jobs

2016-01-19 Thread Madabhattula Rajesh Kumar
Hi, just a thought. Can we use Spark Job Server and trigger jobs through REST APIs? In this case, all jobs will share the same context and run in parallel. If anyone has other thoughts, please share. Regards, Rajesh On Tue, Jan 19, 2016 at 10:28 PM, emlyn wrote: > We

spark job server

2016-01-16 Thread Madabhattula Rajesh Kumar
Hi, I am not able to start the spark job server. I am facing the below error. Please let me know how to resolve this issue. I have configured one master and two workers in cluster mode. ./server_start.sh *./server_start.sh: line 52: kill: (19621) - No such process ./server_start.sh: line 78:

Re: spark job server

2016-01-16 Thread Madabhattula Rajesh Kumar
2.6 but didn't find > either compute-classpath.sh or server_start.sh > > Cheers > > On Sat, Jan 16, 2016 at 5:33 AM, Madabhattula Rajesh Kumar < > mrajaf...@gmail.com> wrote: > >> Hi, >> >> I am not able to start spark job sever. I am facing below error. Plea

java.sql.SQLException: Unsupported type -101

2015-12-25 Thread Madabhattula Rajesh Kumar
Hi, I'm not able to read an "Oracle Table - TIMESTAMP(6) WITH TIME ZONE datatype" column using Spark SQL. I'm getting the below exception. Please let me know how to resolve this issue. *Exception :-* Exception in thread "main" java.sql.SQLException: Unsupported type -101 at
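A common workaround (an assumption; table and column names are hypothetical): cast the unsupported Oracle type inside a dbtable subquery so the JDBC layer only ever sees a supported type:
val df = sqlContext.read.format("jdbc").options(Map(
  "url" -> "jdbc:oracle:thin:@host:1521:SID",
  "dbtable" ->
    "(select id, cast(event_ts as timestamp) as event_ts from events) t"
)).load()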

Re: Stand Alone Cluster - Strange issue

2015-12-22 Thread Madabhattula Rajesh Kumar
c 22, 2015 at 9:34 AM, Madabhattula Rajesh Kumar < > mrajaf...@gmail.com> wrote: > >> Hi, >> >> I have a standalone cluster. One Master + One Slave. I'm getting below >> "NULL POINTER" exception. >> >> Could you please help me on this issue.

Stand Alone Cluster - Strange issue

2015-12-22 Thread Madabhattula Rajesh Kumar
Hi, I have a standalone cluster: one master + one slave. I'm getting the below "NULL POINTER" exception. Could you please help me with this issue. *Code Block :-* val accum = sc.accumulator(0) sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x) *==> This line is giving the exception.* Exception :-

Re: spark-submit for dependent jars

2015-12-21 Thread Madabhattula Rajesh Kumar
2015 at 7:18 PM, satish chandra j <jsatishchan...@gmail.com> wrote: > Hi Rajesh, > Could you please try giving your cmd as mentioned below: > > ./spark-submit --master local --class <class-name> --jars <dependent-jars> <application-jar> > > Regards, > Satish Chandra > > On Mon, Dec 21, 2015 at

spark-submit for dependent jars

2015-12-21 Thread Madabhattula Rajesh Kumar
Hi, how do I add dependent jars to the spark-submit command? For example: the Oracle JDBC jar. Could you please help me resolve this issue. I have a standalone cluster: one master and one slave. I have used the below command and it is not working. ./spark-submit --master local --class test.Main
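For reference, a hedged example of the usual form (paths and jar names are hypothetical); dependent jars go in a comma-separated --jars list before the application jar:
./spark-submit --master local --class test.Main \
  --jars /path/to/ojdbc6.jar,/path/to/other-dep.jar \
  /path/to/your-app.jar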

Creation of RDD in foreachAsync is failing

2015-12-11 Thread Madabhattula Rajesh Kumar
Hi, I have the below query; please help me solve it. I have 2 ids. I want to join these ids to a table. This table contains some blob data, so I cannot join these 2 ids to this table in one step. I'm planning to join this table in chunks. For example, in each step I will join 5000
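A driver-side sketch of the chunking idea (the names, sizes, and filter-join are assumptions): RDDs/DataFrames cannot be created inside executor-side closures such as foreachAsync, so split the id list on the driver and union the per-chunk results:
val ids: Seq[Long] = (1L to 20000L)                    // hypothetical id list
val perChunk = ids.grouped(5000).map { chunk =>
  bigTableDF.filter(bigTableDF("id").isin(chunk: _*))  // bigTableDF = the blob table
}
val result = perChunk.reduce(_ unionAll _)             // Spark 1.x; use union in 2.x+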

Re: How to use collections inside foreach block

2015-12-10 Thread Madabhattula Rajesh Kumar
ote: > >> Your list is defined on the driver, whereas function specified in forEach >> will be evaluated on each executor. >> You might want to add an accumulator or handle a Sequence of list from >> each partition. >> >> On Wed, Dec 9, 2015 at 11:19 AM, Mad

How to use collections inside foreach block

2015-12-08 Thread Madabhattula Rajesh Kumar
Hi, I have the below query; please help me solve it. I have 2000 ids. I want to join these ids to a table. This table contains some blob data, so I cannot join these 2000 ids to this table in one step. I'm planning to join this table in chunks. For example, in each step I will join 5000 ids.
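A sketch of the point made in the reply: a collection defined on the driver is copied to each executor, so mutations inside foreach never reach the driver. A collection accumulator (Spark 2.x API shown, as an assumption) is the usual alternative:
val acc = sc.collectionAccumulator[Int]("collectedIds")
sc.parallelize(1 to 10).foreach(x => acc.add(x))   // runs on executors
println(acc.value.size)                            // 10, visible on the driver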

Spark SQL IN Clause

2015-12-04 Thread Madabhattula Rajesh Kumar
Hi, what are the best practices for using the "IN" clause in Spark SQL? Use Case :- Read the table based on numbers. I have a List of numbers, for example 1 million of them. Regards, Rajesh
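For a list that large, an IN clause is impractical; a common alternative (a sketch, with bigDF and the column name assumed) is to register the numbers as a table and use a semi join:
import sqlContext.implicits._
val idsDF = (1L to 1000000L).toDF("id")
idsDF.registerTempTable("input_ids")   // Spark 1.x; createOrReplaceTempView in 2.x+
bigDF.registerTempTable("big_table")
val out = sqlContext.sql(
  "select b.* from big_table b left semi join input_ids i on b.id = i.id")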

Re: How to test https://issues.apache.org/jira/browse/SPARK-10648 fix

2015-12-03 Thread Madabhattula Rajesh Kumar
> > On Thu, Dec 3, 2015 at 12:39 AM, Madabhattula Rajesh Kumar < > mrajaf...@gmail.com> wrote: > >> Hi Team, >> >> Looks like this issue is fixed in 1.6 release. How to test this fix? Is >> any jar is available? So I can add that jar in dependency and test t

How to test https://issues.apache.org/jira/browse/SPARK-10648 fix

2015-12-03 Thread Madabhattula Rajesh Kumar
Hi Team, it looks like this issue is fixed in the 1.6 release. How do I test this fix? Is any jar available, so that I can add it as a dependency and test the fix? Or is there any other way I can test this fix on the 1.5.2 code base? Could you please let me know the steps. Thank you for your support. Regards,

Re: Spark 1.6 Build

2015-11-24 Thread Madabhattula Rajesh Kumar
com> wrote: > you can refer..: > https://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/building-spark.html#building-with-buildmvn > > > On Tue, Nov 24, 2015 at 7:16 AM, Madabhattula Rajesh Kumar < > mrajaf...@gmail.com> wrote: > >> Hi, >> &

Re: Spark 1.6 Build

2015-11-24 Thread Madabhattula Rajesh Kumar
ark+1+6+0+Release+Preview > > On Tue, Nov 24, 2015 at 9:31 AM, Madabhattula Rajesh Kumar < > mrajaf...@gmail.com> wrote: > >> Hi Prem, >> >> Thank you for the details. I'm not able to build. I'm facing some issues. >> >> Any repository link, where I can d

Spark 1.6 Build

2015-11-24 Thread Madabhattula Rajesh Kumar
Hi, I'm not able to build Spark 1.6 from source. Could you please share the steps to build Spark 1.6. Regards, Rajesh

Spark 1.5.3 release

2015-11-19 Thread Madabhattula Rajesh Kumar
Hi, please let me know the Spark 1.5.3 release date details. Regards, Rajesh

Re: Spark sql jdbc fails for Oracle NUMBER type columns

2015-11-05 Thread Madabhattula Rajesh Kumar
rom: Richard Hillegas/San Francisco/IBM@IBMUS > > To: Madabhattula Rajesh Kumar <mrajaf...@gmail.com> > > Cc: "user@spark.apache.org" <user@spark.apache.org>, > > "u...@spark.incubator.apache.org" <u...@spark.incubator.apache.org> > > Date: 1

Spark sql jdbc fails for Oracle NUMBER type columns

2015-11-05 Thread Madabhattula Rajesh Kumar
Hi, is this issue fixed in the 1.5.1 version? Regards, Rajesh

Spark Titan

2015-06-21 Thread Madabhattula Rajesh Kumar
Hi, how do I connect to the Titan database from Spark? Are any out-of-the-box APIs available? Regards, Rajesh

Pyspark saveAsTextFile exceptions

2015-03-13 Thread Madabhattula Rajesh Kumar
Hi Team, I'm getting the below exception when saving the results into Hadoop. *Code :* rdd.saveAsTextFile("hdfs://localhost:9000/home/rajesh/data/result.rdd") Could you please help me resolve this issue. 15/03/13 17:19:31 INFO spark.SparkContext: Starting job: saveAsTextFile at

Re: GraphX path traversal

2015-03-03 Thread Madabhattula Rajesh Kumar
Hi, could you please let me know how to do this, or any suggestions? Regards, Rajesh On Mon, Mar 2, 2015 at 4:47 PM, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: Hi, I have a below edge list. How to find the parents path for every vertex? Example : Vertex 1 path : 2, 3, 4, 5, 6

Re: GraphX path traversal

2015-03-03 Thread Madabhattula Rajesh Kumar
understand the question. Could you restate what you are trying to do. Sent from my iPhone On 2 Mar 2015, at 11:17, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: Hi, I have a below edge list. How to find the parents path for every vertex? Example : Vertex 1 path : 2, 3, 4, 5, 6 Vertex

Re: GraphX path traversal

2015-03-03 Thread Madabhattula Rajesh Kumar
)) *But I'm expecting the below output: * (4,Set(5, 6)) (1,Set(2, 3, 4, 5, 6)) (6,Set()) (3,Set(4, 5, 6)) (5,Set(6)) (2,Set(3, 4, 5, 6)) Could you please point out where I'm going wrong. Regards, Rajesh On Tue, Mar 3, 2015 at 8:42 PM, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: Hi Robin, Thank

GraphX path traversal

2015-03-02 Thread Madabhattula Rajesh Kumar
Hi, I have the below edge list. How do I find the parents path for every vertex? Example : Vertex 1 path : 2, 3, 4, 5, 6 Vertex 2 path : 3, 4, 5, 6 Vertex 3 path : 4,5,6 vertex 4 path : 5,6 vertex 5 path : 6 Could you please let me know how to do this, or any suggestions? Source Vertex
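A Pregel-based sketch (my assumption of one way to do it, suited only to small DAGs since the sets grow with path length): each vertex accumulates the set of vertices reachable through its outgoing edges by pulling its successor's set until nothing changes; the chain below matches the sets shown in the replies above.
import org.apache.spark.graphx._
val edges = sc.parallelize(Seq(
  Edge(1L, 2L, ()), Edge(2L, 3L, ()), Edge(3L, 4L, ()),
  Edge(4L, 5L, ()), Edge(5L, 6L, ())))
val graph = Graph.fromEdges(edges, Set.empty[VertexId])
val reach = graph.pregel(Set.empty[VertexId])(
  (_, attr, msg) => attr ++ msg,                  // absorb newly learned vertices
  t => {                                          // tell src about dst + dst's set
    val news = (t.dstAttr + t.dstId) -- t.srcAttr
    if (news.nonEmpty) Iterator((t.srcId, news)) else Iterator.empty
  },
  (a, b) => a ++ b)
reach.vertices.collect().sortBy(_._1).foreach(println)
// e.g. (1,Set(2, 3, 4, 5, 6)), (4,Set(5, 6)), (6,Set())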

GraphX vs GraphLab

2015-01-12 Thread Madabhattula Rajesh Kumar
Hi Team, has anyone done a comparison (pros and cons) study between GraphX and GraphLab? Could you please let me know any links for this comparison. Regards, Rajesh

Re: when will the spark 1.3.0 be released?

2014-12-17 Thread Madabhattula Rajesh Kumar
Hi All, when will Spark 1.2.0 be released, and what are the features in Spark 1.2.0? Regards, Rajesh On Wed, Dec 17, 2014 at 11:14 AM, Andrew Ash and...@andrewash.com wrote: Releases are roughly every 3mo so you should expect around March if the pace stays steady. 2014-12-16 22:56

Re: JSON Input files

2014-12-15 Thread Madabhattula Rajesh Kumar
, 2014 at 7:21 PM, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: Thank you Yanbo Regards, Rajesh On Sun, Dec 14, 2014 at 3:15 PM, Yanbo yanboha...@gmail.com wrote: Pay attention to your JSON file, try to change it like following. Each record represent as a JSON string. {NAME

Re: JSON Input files

2014-12-15 Thread Madabhattula Rajesh Kumar
object (other than the fact that it must fit into memory on the executors). On Mon, Dec 15, 2014 at 7:22 AM, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: Hi Peter, Thank you for the clarification. Now we need to store each JSON object into one line. Is there any limitation

Re: JSON Input files

2014-12-14 Thread Madabhattula Rajesh Kumar
...@datastax.com wrote: One solution can be found here: https://spark.apache.org/docs/1.1.0/sql-programming-guide.html#json-datasets - Helena @helenaedelson On Dec 13, 2014, at 11:18 AM, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: Hi Team, I have a large JSON file in Hadoop. Could you

Re: JSON Input files

2014-12-14 Thread Madabhattula Rajesh Kumar
, } {NAME : Device 2, GROUP : 2, SITE : sss, DIRECTION : North, } On 14 Dec 2014, at 5:01 PM, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: { Device 1 : {NAME : Device 1, GROUP : 1, SITE : qqq, DIRECTION : East, } Device 2

JSON Input files

2014-12-13 Thread Madabhattula Rajesh Kumar
Hi Team, I have a large JSON file in Hadoop. Could you please let me know: 1. How to read the JSON file 2. How to parse the JSON file. Please share any example program based on Scala. Regards, Rajesh
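A minimal Scala sketch of the standard approach (Spark 1.x API; the path is hypothetical and the field names follow this thread). Note the constraint discussed in the replies above: each line must hold one complete JSON record:
val df = sqlContext.read.json("hdfs://localhost:9000/user/rajesh/devices.json")
df.printSchema()
df.select("NAME", "GROUP", "SITE", "DIRECTION").show()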

Re: Spark and Stanford CoreNLP

2014-11-24 Thread Madabhattula Rajesh Kumar
Hello, I'm new to Stanford CoreNLP. Could anyone share good training material and examples (Java or Scala) on NLP? Regards, Rajesh On Mon, Nov 24, 2014 at 9:38 PM, Ian O'Connell i...@ianoconnell.com wrote: object MyCoreNLP { @transient lazy val coreNLP = new coreNLP() } and then refer

To find distances to reachable source vertices using GraphX

2014-11-03 Thread Madabhattula Rajesh Kumar
Hi All, I'm trying to understand the example program at the below link. When I run this program, I'm getting *java.lang.NullPointerException* at the highlighted line below. *https://gist.github.com/ankurdave/4a17596669b36be06100* val updatedDists =

GraphX : Vertices details in Triangles

2014-11-03 Thread Madabhattula Rajesh Kumar
Hi All, I'm new to GraphX. I'm studying Triangle Count use cases. I'm able to get the number of triangles in a graph, but I'm not able to collect the vertex details of each triangle. *For example* : I'm playing with one of the GraphX graph examples. Vertices and Edges: val vertexArray = Array( (1L,
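triangleCount() only returns per-vertex counts. One way to list the member vertices of each triangle (a sketch under assumptions, workable only for graphs whose neighbour map fits on the driver): broadcast each vertex's neighbour set and emit canonical (a, b, c) triples:
import org.apache.spark.graphx._
val nbrs = sc.broadcast(
  graph.collectNeighborIds(EdgeDirection.Either)
       .mapValues(_.toSet).collectAsMap())
val triangles = graph.edges
  .map(e => if (e.srcId < e.dstId) (e.srcId, e.dstId) else (e.dstId, e.srcId))
  .distinct()
  .flatMap { case (a, b) =>
    (nbrs.value(a) intersect nbrs.value(b))
      .filter(_ > b)                      // canonical order avoids duplicates
      .map(c => (a, b, c))
  }
triangles.collect().foreach(println)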

Re: To find distances to reachable source vertices using GraphX

2014-11-03 Thread Madabhattula Rajesh Kumar
edited the Gist with a workaround. Does that fix the problem? Ankur http://www.ankurdave.com/ On Mon, Nov 3, 2014 at 12:23 AM, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: Hi All, I'm trying to understand below link example program. When I run this program, I'm getting

Re: Spark Streaming + Actors

2014-09-26 Thread Madabhattula Rajesh Kumar
Hi Team, could you please respond to my below request. Regards, Rajesh On Thu, Sep 25, 2014 at 11:38 PM, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: Hi Team, Can I use Actors in Spark Streaming based on events type? Could you please review below Test program and let me know

Spark Streaming + Actors

2014-09-25 Thread Madabhattula Rajesh Kumar
Hi Team, can I use Actors in Spark Streaming based on event type? Could you please review the below test program and let me know if there is anything I need to change with respect to best practices. import akka.actor.Actor import akka.actor.{ActorRef, Props} import org.apache.spark.SparkConf import

Spark : java.io.NotSerializableException: org.apache.hadoop.hbase.client.Result

2014-09-24 Thread Madabhattula Rajesh Kumar
Hi Team, I'm getting the below exception. Could you please help me to resolve this issue. Below is my piece of code: val rdd = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat], classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable], classOf[org.apache.hadoop.hbase.client.Result]) var s = rdd.map(x
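The usual fix (an assumption for this sketch): HBase Result objects are not serializable, so extract plain values inside the map before anything that serializes them, such as collect:
import org.apache.hadoop.hbase.util.Bytes
val rows = rdd.map { case (_, result) =>
  Bytes.toString(result.getRow)    // extract plain serializable values here
}
rows.collect().foreach(println)    // no Result objects cross the wire now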

Spark Hbase

2014-09-24 Thread Madabhattula Rajesh Kumar
Hi Team, could you please point me to an example program for reading HBase columns and values from Spark. Regards, Rajesh
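A minimal end-to-end read sketch (the table, column family, and qualifier names are assumptions):
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
val conf = HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE, "mytable")
val hbaseRdd = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
  classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
  classOf[org.apache.hadoop.hbase.client.Result])
hbaseRdd.map { case (_, r) =>
  Bytes.toString(r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col1")))
}.collect().foreach(println)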

Re: Spark Hbase

2014-09-24 Thread Madabhattula Rajesh Kumar
/HBaseConverters.scala Cheers On Wed, Sep 24, 2014 at 9:39 AM, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: Hi Team, Could you please point me the example program for Spark HBase to read columns and values Regards, Rajesh

Re: Iterate over ArrayBuffer

2014-09-04 Thread Madabhattula Rajesh Kumar
Hi Deep, if your requirement is to read the values from an ArrayBuffer, use the below code:
scala> import scala.collection.mutable.ArrayBuffer
import scala.collection.mutable.ArrayBuffer
scala> var a = ArrayBuffer(5,3,1,4)
a: scala.collection.mutable.ArrayBuffer[Int] = ArrayBuffer(5, 3, 1, 4)
scala>

Re: Number of elements in ArrayBuffer

2014-09-02 Thread Madabhattula Rajesh Kumar
Hi Deep, please find below the results for ArrayBuffer in the Scala REPL:
scala> import scala.collection.mutable.ArrayBuffer
import scala.collection.mutable.ArrayBuffer
scala> val a = ArrayBuffer(5,3,1,4)
a: scala.collection.mutable.ArrayBuffer[Int] = ArrayBuffer(5, 3, 1, 4)
scala> a.head
res2: Int = 5

Re: spark sql

2014-08-02 Thread Madabhattula Rajesh Kumar
Hi Team, could you please help me to resolve the above compilation issue. Regards, Rajesh On Sat, Aug 2, 2014 at 2:02 AM, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: Hi Team, I'm not able to print the values from Spark Sql JavaSchemaRDD. Please find below my code

Re: Hbase

2014-08-01 Thread Madabhattula Rajesh Kumar
());* * System.out.println(Value : + sb);* * }* *return null;* *}* } } Hope it helps. Thanks Best Regards On Fri, Aug 1, 2014 at 4:44 PM, Madabhattula Rajesh Kumar mrajaf...@gmail.com wrote: Hi Akhil, Thank you for your response. I'm facing below issues

javasparksql Hbase

2014-07-28 Thread Madabhattula Rajesh Kumar
Hi Team, could you please let me know an example program/link for Java Spark SQL to join 2 HBase tables. Regards, Rajesh

spark checkpoint details

2014-07-27 Thread Madabhattula Rajesh Kumar
Hi Team, could you please help me with the below query. I'm using JavaStreamingContext to read streaming files from an HDFS shared directory. When I start the Spark streaming job, it reads files from the HDFS shared directory and does some processing. When I stop and restart the job, it again reads old
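A Scala sketch of the usual pattern (the paths and app name are assumptions; the Java API mirrors it): recreate the context from a checkpoint with getOrCreate so a restarted job resumes instead of reprocessing files it already handled:
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
val checkpointDir = "hdfs://localhost:9000/user/rajesh/checkpoint"  // assumed path
def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("EventsJob")
  val ssc = new StreamingContext(conf, Seconds(10))
  ssc.checkpoint(checkpointDir)
  ssc.textFileStream("hdfs://localhost:9000/user/rajesh/EventsDirectory/").print()
  ssc
}
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()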

Re: Need help on spark Hbase

2014-07-16 Thread Madabhattula Rajesh Kumar
Rajesh Kumar mrajaf...@gmail.com wrote: Hi Team, Could you please help me to resolve the issue. *Issue *: I'm not able to connect to HBase from spark-submit. Below is my code. When I execute the below program standalone, I'm able to connect to HBase and do the operation. When I execute

Re: Possible bug in Spark Streaming :: TextFileStream

2014-07-14 Thread Madabhattula Rajesh Kumar
Hi Team, does this issue affect the JavaStreamingContext.textFileStream(hdfsfolderpath) API also? Please confirm. If yes, could you please help me to fix this issue? I'm using Spark version 1.0.0. Regards, Rajesh On Tue, Jul 15, 2014 at 5:42 AM, Tathagata Das tathagata.das1...@gmail.com wrote: Oh

how to convert JavaDStream<String> to JavaRDD<String>

2014-07-09 Thread Madabhattula Rajesh Kumar
Hi Team, could you please help me to resolve the below query. My use case is: I'm using JavaStreamingContext to read text files from a Hadoop HDFS directory. JavaDStream<String> lines_2 = ssc.textFileStream("hdfs://localhost:9000/user/rajesh/EventsDirectory/"); How to convert the JavaDStream<String> result
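A DStream has no single-RDD conversion, since it is a sequence of RDDs, one per batch. The usual pattern (Scala shown as a sketch; JavaDStream.foreachRDD is the Java mirror) is:
lines_2.foreachRDD { rdd =>
  // rdd is the RDD[String] for this batch interval
  rdd.take(5).foreach(println)
}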