Error starting HiveServer2: could not start ThriftBinaryCLIService

2016-07-15 Thread ram kumar
Hi all, I started the Hive Thrift Server with the command ./sbin/start-thriftserver.sh --master yarn --hiveconf hive.server2.thrift.port=10003. The Thrift server started on that particular node without any error. When doing the same, except pointing to a different node to start the server,
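
For reference, start-thriftserver.sh takes --hiveconf as key=value pairs, and the bind host is one thing worth checking when the same command fails on a different node. A sketch of the usual invocation, assuming the stock sbin layout; the bind-host property value shown is an illustrative placeholder:

    cd $SPARK_HOME
    ./sbin/start-thriftserver.sh --master yarn \
      --hiveconf hive.server2.thrift.port=10003 \
      --hiveconf hive.server2.thrift.bind.host=other-node.example.com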

Re: Getting error in inputfile | inputFile

2016-07-15 Thread ram kumar
check the "*inputFile*" variable name lol On Fri, Jul 15, 2016 at 12:12 PM, RK Spark wrote: > I am using Spark version is 1.5.1, I am getting errors in first program of > spark,ie.e., word count. Please help me to solve this > > *scala> val inputfile =

Re: Spark with HBase Error - Py4JJavaError

2016-07-07 Thread ram kumar
Hi Puneet, Have you tried appending --jars $SPARK_HOME/lib/spark-examples-*.jar to the execution command? Ram On Thu, Jul 7, 2016 at 5:19 PM, Puneet Tripathi <puneet.tripa...@dunnhumby.com> wrote: > Guys, please can anyone help with the issue below? > Puneet
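
For context, the spark-examples jar in Spark 1.x bundles the HBase input-format helpers that the Python HBase examples rely on. A sketch of the suggested invocation; the script name is hypothetical, and the wildcard assumes the shell glob matches a single jar:

    ./bin/spark-submit \
      --jars $SPARK_HOME/lib/spark-examples-*.jar \
      hbase_read.py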

Re: Error joining dataframes

2016-05-18 Thread ram kumar
Divya Gehlot <divya.htco...@gmail.com> wrote: > Can you try var df_join = df1.join(df2, df1("Id") === df2("Id"), "fullouter").drop(df1("Id")) > On May 18, 2016 2:16 PM, "ram kumar" <ramkumarro...@gmail.

Re: Error joining dataframes

2016-05-18 Thread ram kumar
> +----+----+----+----+
> |  id|   A|  id|   B|
> +----+----+----+----+
> |   1|   0|null|null|
> |   2|   0|   2|   0|
> |null|null|   3|   0|
> +----+----+----+----+
>
> df1: org.apache.spark.sql.DataFrame = [id: int, A: int]
> df2: org.apache.spark.sql.DataFrame =

Re: Error joining dataframes

2016-05-18 Thread ram kumar
0)).toDF("id", "A") > val df2 = Seq((1, 0), (2, 0), (3, 0)).toDF("id", "B") > df1.join(df2, df1("id") === df2("id"), "outer").show > > // maropu > > > On Wed, May 18, 2016 at 3:29 PM, ram kumar <ramkumarro..

Re: Error joining dataframes

2016-05-18 Thread ram kumar
> On 17 May 2016 at 21:52, Bijay Kumar Pathak <bkpat...@mtu.edu> wrote: >> Hi, >> Try this one: >> df_join = df1.join(df2, 'Id', "fullouter") >> Thanks, >> Bijay >> On Tue, May 17, 2016 at

Re: Error joining dataframes

2016-05-18 Thread ram kumar
: > Hi, > > Try this one: > > > df_join = df1.*join*(df2, 'Id', "fullouter") > > Thanks, > Bijay > > > On Tue, May 17, 2016 at 9:39 AM, ram kumar <ramkumarro...@gmail.com> > wrote: > >> Hi, >> >> I tried to join two dat

Error joining dataframes

2016-05-17 Thread ram kumar
Hi, I tried to join two dataframes:

df_join = df1.join(df2, df1("Id") === df2("Id"), "fullouter")
df_join.registerTempTable("join_test")

When querying "Id" from "join_test":

0: jdbc:hive2://> select Id from join_test;
Error: org.apache.spark.sql.AnalysisException: Reference 'Id' is
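
The replies above converge on two fixes for this ambiguity: join on the column name so only one Id column survives, or drop one of the duplicate columns after an expression join. A runnable sketch in the Scala shell; the sample rows are illustrative, and the Seq-plus-join-type overload assumes Spark 1.6 or later:

    import sqlContext.implicits._

    val df1 = Seq((1, 0), (2, 0)).toDF("Id", "A")
    val df2 = Seq((2, 0), (3, 0)).toDF("Id", "B")

    // Fix 1: join on the column name; the result carries a single Id column
    val joined = df1.join(df2, Seq("Id"), "fullouter")

    // Fix 2: expression join, then drop one duplicate column (note: rows
    // that exist only in df1 end up with a null Id once df1("Id") is dropped)
    val joined2 = df1.join(df2, df1("Id") === df2("Id"), "fullouter").drop(df1("Id"))

    joined.registerTempTable("join_test")
    sqlContext.sql("select Id from join_test").show()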

Fwd: AuthorizationException while exposing via JDBC client (beeline)

2016-04-28 Thread ram kumar
Hi, I wrote a spark job which registers a temp table, and when I expose it via beeline (JDBC client):

$ ./bin/beeline
beeline> !connect jdbc:hive2://IP:10003 -n ram -p
0: jdbc:hive2://IP> show

AuthorizationException while exposing via JDBC client (beeline)

2016-04-27 Thread ram kumar
Hi, I wrote a spark job which registers a temp table, and when I expose it via beeline (JDBC client):

$ ./bin/beeline
beeline> !connect jdbc:hive2://IP:10003 -n ram -p
0: jdbc:hive2://IP> show

How to stop hivecontext

2016-04-15 Thread ram kumar
Hi, I started a HiveContext as val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc); and I want to stop this SQL context. Thanks
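
HiveContext in Spark 1.x has no stop() method of its own; what can be stopped is the SparkContext beneath it, which releases the executors and session state. A minimal sketch, assuming spark-shell's sc:

    val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
    // ... run queries ...
    sc.stop()   // stops the underlying SparkContext; the HiveContext is unusable afterwards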

Exposing temp table via Hive Thrift server

2016-04-14 Thread ram kumar
Hi, In spark-shell (Scala), we import org.apache.spark.sql.hive.thriftserver._ to start the Hive Thrift server programmatically for a particular hive context, as HiveThriftServer2.startWithContext(hiveContext), to expose a registered temp table for that particular session. We used pyspark for
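
A sketch of the Scala sequence the thread describes; the temp table and port setting are illustrative, not from the original mail:

    import org.apache.spark.sql.hive.HiveContext
    import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

    val hiveContext = new HiveContext(sc)
    hiveContext.range(10).registerTempTable("my_temp")        // hypothetical temp table
    hiveContext.setConf("hive.server2.thrift.port", "10003")  // illustrative port
    HiveThriftServer2.startWithContext(hiveContext)           // beeline can now see my_temp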

Importing hive thrift server

2016-04-12 Thread ram kumar
Hi, In spark-shell, we start the hive thrift server by importing: import org.apache.spark.sql.hive.thriftserver._ Is there a package for importing it from pyspark? Thanks

Re: Could not load shims in class org.apache.hadoop.hive.schshim.FairSchedulerShim

2016-04-05 Thread ram kumar
I am facing this same issue. Can anyone help me with this? Thanks On Mon, Dec 7, 2015 at 9:14 AM, Shige Song wrote: > Hard to tell. > On Mon, Dec 7, 2015 at 11:35 AM, zhangjp <592426...@qq.com> wrote: >> Hi all, >> I'm using the Spark prebuilt version 1.5.2+hadoop2.6 and

Re: Can't able to access temp table via jdbc client

2016-04-05 Thread ram kumar
On 5 April 2016 at 05:52, ram kumar <ramkumarro...@gmail.com> wrote: >> Hi, >> I started a hi

Exposing dataframe via thrift server

2016-03-30 Thread ram kumar
Hi, I started the thrift server:

cd $SPARK_HOME
./sbin/start-thriftserver.sh

Then, the JDBC client:

$ ./bin/beeline
Beeline version 1.5.2 by Apache Hive
beeline> !connect jdbc:hive2://ip:1
show tables;
+------------+--------------+
| tableName  | isTemporary  |
+------------+--------------+
|

exception while running job as pyspark

2016-03-16 Thread ram kumar
Hi, I get the following error when running a pyspark job: {{{ An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3

Re: Doubt on data frame

2016-03-11 Thread ram kumar
Cheers > On Fri, Mar 11, 2016 at 5:02 AM, ram kumar <ramkumarro...@gmail.com> wrote: >> Hi, >> I registered a dataframe as a table using registerTempTable and I didn't close the Spark context. >> Will the table be available for a longer time? >> Thanks

Doubt on data frame

2016-03-11 Thread ram kumar
Hi, I registered a dataframe as a table using registerTempTable and I didn't close the Spark context. Will the table be available for a longer time? Thanks
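
A temp table registered this way lives only in the memory of its SQLContext; it is never written out, so it stays visible exactly as long as that context (and its SparkContext) stays alive. A small sketch, assuming spark-shell:

    import sqlContext.implicits._

    val df = Seq((1, "a"), (2, "b")).toDF("id", "value")
    df.registerTempTable("my_table")   // scoped to this SQLContext, nothing written to disk
    sqlContext.sql("select count(*) from my_table").show()   // works while the context lives
    sc.stop()   // after this, the temp table is gone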

Re: spark streaming - checkpoint

2015-06-29 Thread ram kumar
When using yarn-cluster, it works fine. On Mon, Jun 29, 2015 at 12:07 PM, ram kumar ramkumarro...@gmail.com wrote: > SPARK_CLASSPATH=$CLASSPATH:/usr/hdp/2.2.0.0-2041/hadoop-mapreduce/* in spark-env.sh > I think I am facing the same issue: https://issues.apache.org/jira/browse/SPARK-6203 > On Mon

Re: spark streaming - checkpoint

2015-06-29 Thread ram kumar
SPARK_CLASSPATH=$CLASSPATH:/usr/hdp/2.2.0.0-2041/hadoop-mapreduce/* in spark-env.sh. I think I am facing the same issue: https://issues.apache.org/jira/browse/SPARK-6203 On Mon, Jun 29, 2015 at 11:38 AM, ram kumar ramkumarro...@gmail.com wrote: I am using Spark 1.2.0.2.2.0.0-82 (git revision

spark streaming - checkpoint

2015-06-26 Thread ram kumar
Hi,

JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(1));
ssc.checkpoint(checkPointDir);
JavaStreamingContextFactory factory = new JavaStreamingContextFactory() {
    public JavaStreamingContext create() {
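
The snippet above is cut off mid-factory. For reference, the same checkpoint-or-create pattern sketched in Scala, assuming conf and checkPointDir are defined as in the thread; the batch interval is illustrative:

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    def createContext(): StreamingContext = {
      val ssc = new StreamingContext(conf, Seconds(10))
      ssc.checkpoint(checkPointDir)
      // the DStream graph must be defined here, before the context is returned
      ssc
    }

    // restores from the checkpoint if one exists, otherwise calls createContext
    val ssc = StreamingContext.getOrCreate(checkPointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()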