Returns Null when reading data from XML Ask Question

2018-02-27 Thread Sateesh Karuturi
I am trying to parse data from an XML file in Spark using the Databricks
spark-xml library.

Here is my code:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object printschema {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("printschema").setMaster("local")
    conf.set("spark.debug.maxToStringFields", "1000")
    val context = new SparkContext(conf)
    val sqlContext = new SQLContext(context)
    import sqlContext.implicits._
    val df = sqlContext.read.format("com.databricks.spark.xml")
      .option("rowTag", "us-bibliographic-data-application")
      .option("treatEmptyValuesAsNulls", true)
      .load("/Users/praveen/Desktop/ipa0105.xml")
    val q1 = df
      .withColumn("document", $"application-reference.document-id.doc-number".cast(sql.types.StringType))
      .withColumn("document_number", $"application-reference.document-id.doc-number".cast(sql.types.StringType))
      .select("document", "document_number")
    for (l <- q1) {
      val m1 = l.get(0)
      val m2 = l.get(1)
      println(m1, m2)
    }
  }
}


When I run the code from Scala IDE/IntelliJ IDEA it works fine, and here is
my output:

(14789882,14789882)
(14755945,14755945)
(14755919,14755919)
(14755034,14755034)

But when I build a jar and run it using spark-submit, it simply returns null
values.

OUTPUT :

NULL,NULL
NULL,NULL
NULL,NULL


Please help me out.

Thanks in advance.
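One thing worth checking is whether spark-submit resolves the same spark-xml
version that the IDE does; a hedged sketch of pinning the package explicitly
at submit time (the coordinates, class name, and jar name are assumptions and
need to match your Scala and spark-xml versions):

spark-submit \
  --class printschema \
  --master local \
  --packages com.databricks:spark-xml_2.10:0.4.1 \
  printschema.jar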


Getting Exception while running Drill-mondrian cube

2017-06-13 Thread Sateesh Karuturi
Hello..,

I am trying to execute a Mondrian schema that is integrated with Apache
Drill. While running this schema I am getting a *customer_w_ter* table not
found exception.

here is my schema example:

[schema XML not preserved by the mailing-list archive]


Please help me out.


Re: write dataframe to phoenix

2017-03-27 Thread Sateesh Karuturi
Hello Modi,

Thanks for the response.

I am running the code via the spark-submit command, and I have included the
following jars on the Spark classpath, but I am still getting the exception:

phoenix-4.8.0-HBase-1.1-client.jar
phoenix-spark-4.8.0-HBase-1.1.jar
phoenix-core-4.8.0-HBase-1.1.jar

On Mon, Mar 27, 2017 at 10:30 PM, Dhaval Modi <dhavalmod...@gmail.com>
wrote:

> Hi Sateesh,
>
> If you are running from spark shell, then please include Phoenix spark jar
> in classpath.
>
> Kindly refer to url that Sandeep provide.
>
>
> Regards,
> Dhaval
>
>
> On Mar 27, 2017 21:20, "Sateesh Karuturi" <sateesh.karutu...@gmail.com>
> wrote:
>
> Thanks Sandeep for your response.
>
> This is the exception what i am getting:
>
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
> stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 
> (TID 411, ip-x-xx-xxx.ap-southeast-1.compute.internal): 
> java.lang.RuntimeException: java.sql.SQLException: No suitable driver found 
> for jdbc:phoenix:localhost:2181:/hbase-unsecure;
> at 
> org.apache.phoenix.mapreduce.PhoenixOutputFormat.getRecordWriter(PhoenixOutputFormat.java:58)
> at 
> org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1030)
> at 
> org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1014)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> at org.apache.spark.scheduler.Task.run(Task.scala:88)
> at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>
>
> On Mon, Mar 27, 2017 at 8:17 PM, Sandeep Nemuri <nhsande...@gmail.com>
> wrote:
>
>> What is the error you are seeing ?
>>
>> Ref: https://phoenix.apache.org/phoenix_spark.html
>>
>> df.write \
>>   .format("org.apache.phoenix.spark") \
>>   .mode("overwrite") \
>>   .option("table", "TABLE1") \
>>   .option("zkUrl", "localhost:2181") \
>>   .save()
>>
>>
>>
>> On Mon, Mar 27, 2017 at 10:19 AM, Sateesh Karuturi <
>> sateesh.karutu...@gmail.com> wrote:
>>
>>> Please anyone help me out how to write dataframe to phoenix in java?
>>>
>>> here is my code:
>>>
>>> pos_offer_new_join.write().format("org.apache.phoenix.spark"
>>> ).mode(SaveMode.Overwrite)
>>>
>>> .options(ImmutableMap.of("driver",
>>> "org.apache.phoenix.jdbc.PhoenixDriver","zkUrl",
>>>
>>> "jdbc:phoenix:localhost:2181","table","RESULT"))
>>>
>>> .save();
>>>
>>>
>>> but i am not able to write data to phoenix.
>>>
>>>
>>> Thanks.
>>>
>>>
>>>
>>
>>
>> --
>> *  Regards*
>> *  Sandeep Nemuri*
>>
>
>
>


Re: write dataframe to phoenix

2017-03-27 Thread Sateesh Karuturi
Thanks Sandeep for your response.

This is the exception that I am getting:

org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3
in stage 3.0 (TID 411,
ip-x-xx-xxx.ap-southeast-1.compute.internal):
java.lang.RuntimeException: java.sql.SQLException: No suitable driver
found for jdbc:phoenix:localhost:2181:/hbase-unsecure;
at 
org.apache.phoenix.mapreduce.PhoenixOutputFormat.getRecordWriter(PhoenixOutputFormat.java:58)
at 
org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1030)
at 
org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1014)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)


On Mon, Mar 27, 2017 at 8:17 PM, Sandeep Nemuri <nhsande...@gmail.com>
wrote:

> What is the error you are seeing ?
>
> Ref: https://phoenix.apache.org/phoenix_spark.html
>
> df.write \
>   .format("org.apache.phoenix.spark") \
>   .mode("overwrite") \
>   .option("table", "TABLE1") \
>   .option("zkUrl", "localhost:2181") \
>   .save()
>
>
>
> On Mon, Mar 27, 2017 at 10:19 AM, Sateesh Karuturi <
> sateesh.karutu...@gmail.com> wrote:
>
>> Please anyone help me out how to write dataframe to phoenix in java?
>>
>> here is my code:
>>
>> pos_offer_new_join.write().format("org.apache.phoenix.spark"
>> ).mode(SaveMode.Overwrite)
>>
>> .options(ImmutableMap.of("driver",
>> "org.apache.phoenix.jdbc.PhoenixDriver","zkUrl",
>>
>> "jdbc:phoenix:localhost:2181","table","RESULT"))
>>
>> .save();
>>
>>
>> but i am not able to write data to phoenix.
>>
>>
>> Thanks.
>>
>>
>>
>
>
> --
> *  Regards*
> *  Sandeep Nemuri*
>


write dataframe to phoenix

2017-03-26 Thread Sateesh Karuturi
Can anyone please help me with how to write a dataframe to Phoenix in Java?

here is my code:

pos_offer_new_join.write().format("org.apache.phoenix.spark")
    .mode(SaveMode.Overwrite)
    .options(ImmutableMap.of("driver", "org.apache.phoenix.jdbc.PhoenixDriver",
        "zkUrl", "jdbc:phoenix:localhost:2181", "table", "RESULT"))
    .save();


but I am not able to write data to Phoenix.


Thanks.
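For comparison, the phoenix_spark.html example quoted elsewhere in these
threads passes "zkUrl" as just the ZooKeeper quorum rather than a full JDBC
URL. A hedged variant of the write (the quorum value is an assumption; the
table name is taken from the post):

pos_offer_new_join.write().format("org.apache.phoenix.spark")
    .mode(SaveMode.Overwrite)
    .options(ImmutableMap.of("zkUrl", "localhost:2181", "table", "RESULT"))
    .save();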


Re: write Dataframe to phoenix

2017-03-20 Thread Sateesh Karuturi
Thanks for the response, NaHeon.

I added the phoenix-spark jar in pom.xml and I am able to read data from
Phoenix. The problem is an exception while writing the Dataframe to Phoenix.

On Mon, Mar 20, 2017 at 1:23 PM, NaHeon Kim <honey.and...@gmail.com> wrote:

> Did you check your project has dependency on phoenix-spark jar? : )
> See Spark setup at http://phoenix.apache.org/phoenix_spark.html
>
> Regards,
> NaHeon
>
> 2017-03-20 15:31 GMT+09:00 Sateesh Karuturi <sateesh.karutu...@gmail.com>:
>
>>
>> I am trying to write Dataframe to Phoenix.
>>
>> Here is my code:
>>
>>
>> df.write.format("org.apache.phoenix.spark").mode(SaveMode.Overwrite)
>>   .options(collection.immutable.Map(
>>     "zkUrl" -> "localhost:2181/hbase-unsecure",
>>     "table" -> "TEST")).save();
>>
>> and i am getting following exception:
>>
>>
>> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
>> in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0
>> (TID 411, ip-x-xx-xxx.ap-southeast-1.compute.internal):
>> java.lang.RuntimeException: java.sql.SQLException: No suitable driver found
>> for jdbc:phoenix:localhost:2181:/hbase-unsecure;
>> at org.apache.phoenix.mapreduce.PhoenixOutputFormat.getRecordWriter(PhoenixOutputFormat.java:58)
>> at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1030)
>> at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1014)
>> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>> at org.apache.spark.scheduler.Task.run(Task.scala:88)
>> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>
>>
>>
>


write Dataframe to phoenix

2017-03-20 Thread Sateesh Karuturi
I am trying to write Dataframe to Phoenix.

Here is my code:


df.write.format("org.apache.phoenix.spark").mode(SaveMode.Overwrite)
  .options(collection.immutable.Map(
    "zkUrl" -> "localhost:2181/hbase-unsecure",
    "table" -> "TEST")).save();

and I am getting the following exception:


org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0
(TID 411, ip-x-xx-xxx.ap-southeast-1.compute.internal):
java.lang.RuntimeException: java.sql.SQLException: No suitable driver
found for jdbc:phoenix:localhost:2181:/hbase-unsecure;
at org.apache.phoenix.mapreduce.PhoenixOutputFormat.getRecordWriter(PhoenixOutputFormat.java:58)
at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1030)
at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1014)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)


Phoenix-spark read example in spark-2.0.0

2017-03-18 Thread Sateesh Karuturi
Hello friends..,

I am very new to Apache Phoenix. I started by running the sample phoenix-spark
example on Spark 1.6, which was successful, and now I want to run the same
example on Spark 2.0.0. Does Phoenix provide support for Spark 2.0.0?

Previously I used this code:

DataFrame fromPhx = context.read().format("org.apache.phoenix.spark")
    .options(ImmutableMap.of("driver", "org.apache.phoenix.jdbc.PhoenixDriver",
        "zkUrl", "jdbc:phoenix:localhost:2181", "table", "SAMPLE"))
    .load();


In Spark 2.0.0:

org.apache.spark.sql.Dataset<Row> df = spark.read().format("org.apache.phoenix.spark")
    .options(ImmutableMap.of("driver", "org.apache.phoenix.jdbc.PhoenixDriver",
        "zkUrl", "jdbc:phoenix:localhost:2181", "table", "SAMPLE"))
    .load();


Is this correct, or do I need to change any code?


please help me out.


Getting Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/phoenix/jdbc/PhoenixDriver Exception

2017-03-16 Thread Sateesh Karuturi
Hello folks..,

I am trying to run a sample phoenix-spark application, and while running it
I am getting the following exception:

Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/phoenix/jdbc/PhoenixDriver


Here is my sample code:

package com.inndata.spark.sparkphoenix;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

import com.google.common.collect.ImmutableMap;

import java.io.Serializable;

public class SparkConnection implements Serializable {

    public static void main(String args[]) {
        SparkConf sparkConf = new SparkConf();
        sparkConf.setAppName("spark-phoenix-df");
        sparkConf.setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(sparkConf);
        SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

        /*DataFrame df = sqlContext.read()
            .format("org.apache.phoenix.spark")
            .option("table", "TABLE1")
            .option("zkUrl", "localhost:2181")
            .load();
        df.count();*/

        DataFrame fromPhx = sqlContext.read().format("jdbc")
            .options(ImmutableMap.of("driver", "org.apache.phoenix.jdbc.PhoenixDriver",
                "url", "jdbc:phoenix:ZK_QUORUM:2181:/hbase-secure",
                "dbtable", "TABLE1"))
            .load();

        fromPhx.show();
    }
}


I have included the phoenix-spark jar in the Spark library as well as in the
spark-submit command, and I also added *spark.executor.extraClassPath* and
*spark.driver.extraClassPath* in spark-env.sh.


Getting Exception in thread "main" java.lang.ClassNotFoundException: Failed to find data source: org.apache.phoenix.spark. Please find packages at http://spark-packages.org Exception

2017-03-15 Thread Sateesh Karuturi
Hello folks..,

I am trying to execute a sample spark-phoenix application, but I am getting
the following exception:

Exception in thread "main" java.lang.ClassNotFoundException: Failed to
find data source: org.apache.phoenix.spark. Please find packages at
http://spark-packages.org

here is my code:

package com.inndata.spark.sparkphoenix;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

import java.io.Serializable;

/**
 *
 */
public class SparkConnection implements Serializable {

public static void main(String args[]) {
SparkConf sparkConf = new SparkConf();
sparkConf.setAppName("spark-phoenix-df");
sparkConf.setMaster("local[*]");
JavaSparkContext sc = new JavaSparkContext(sparkConf);
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

DataFrame df = sqlContext.read()
.format("org.apache.phoenix.spark")
.option("table", "ORDERS")
.option("zkUrl", "localhost:2181")
.load();
df.count();

}
}

and here is my pom.xml (dependencies section):

<dependencies>
  <dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-core</artifactId>
    <version>4.8.0-HBase-1.2</version>
  </dependency>
  <dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.10.6</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-spark</artifactId>
    <version>4.8.0-HBase-1.2</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.6.2</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.6.2</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.3</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.3</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.3</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.2.4</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-hadoop-compat</artifactId>
    <version>1.2.4</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-hadoop2-compat</artifactId>
    <version>1.2.4</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>1.2.4</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-it</artifactId>
    <version>1.2.4</version>
    <type>test-jar</type>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
  </dependency>
</dependencies>

here is the stackoverflow link:


http://stackoverflow.com/questions/42816998/getting-failed-to-find-data-source-org-apache-phoenix-spark-please-find-packag

please help me out.


access Broadcast Variables in Spark java

2016-12-20 Thread Sateesh Karuturi
I need to process a Spark broadcast variable using the Java RDD API. This is
the code I have tried so far. It is only sample code, to check whether it
works or not; in my case I need to work on two csv files.


SparkConf conf = new SparkConf().setAppName("BroadcastVariable").setMaster("local");
JavaSparkContext ctx = new JavaSparkContext(conf);
Map<Integer, String> map = new HashMap<>();
map.put(1, "aa");
map.put(2, "bb");
map.put(9, "ccc");
Broadcast<Map<Integer, String>> broadcastVar = ctx.broadcast(map);
List<Integer> list = new ArrayList<>();
list.add(1);
list.add(2);
list.add(9);
JavaRDD<Integer> listrdd = ctx.parallelize(list);
JavaRDD<Map<Integer, String>> mapr = listrdd.map(x -> broadcastVar.value());
System.out.println(mapr.collect());

and it prints output like this:

[{1=aa, 2=bb, 9=ccc}, {1=aa, 2=bb, 9=ccc}, {1=aa, 2=bb, 9=ccc}]

and my requirement is:

 [{aa, bb, ccc}]

Is it possible to do it in the way I need?

please help me out.
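A minimal sketch of one way to get that output, assuming the intent is to
look up each element of the RDD in the broadcast map, so each element maps to
its own value rather than to a whole copy of the map:

JavaRDD<String> values = listrdd.map(x -> broadcastVar.value().get(x));
// collect() now returns [aa, bb, ccc] instead of one copy of the map per element
System.out.println(values.collect());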


Getting empty values while receiving from kafka Spark streaming

2016-09-18 Thread Sateesh Karuturi
I am very new to *Spark streaming* and I am implementing a small exercise:
sending *XML* data from *kafka*, which then needs to be received as
*streaming* data through *spark streaming*. I have tried all possible ways,
but every time I am getting *empty values*.

There is no problem on the Kafka side; the only problem is receiving the
streaming data on the Spark side. Here is how I am implementing it:

package com.package;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class SparkStringConsumer {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("kafka-sandbox")
            .setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(2000));
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "localhost:9092");
        Set<String> topics = Collections.singleton("mytopic");
        JavaPairInputDStream<String, String> directKafkaStream =
            KafkaUtils.createDirectStream(ssc, String.class, String.class,
                StringDecoder.class, StringDecoder.class, kafkaParams, topics);
        directKafkaStream.foreachRDD(rdd -> {
            System.out.println("--- New RDD with " + rdd.partitions().size()
                + " partitions and " + rdd.count() + " records");
            rdd.foreach(record -> System.out.println(record._2));
        });
        ssc.start();
        ssc.awaitTermination();
    }
}

And I am using the following versions: Zookeeper 3.4.6, Scala 2.11,
Spark 2.0, Kafka 0.8.2.


Spark Streaming from existing RDD

2016-01-29 Thread Sateesh Karuturi
Can anyone please help me with how to create a DStream from an existing RDD?
My code is:

JavaSparkContext ctx = new JavaSparkContext(conf);
JavaRDD rddd = ctx.parallelize(arraylist);

Now I need to use this *rddd* as input to a *JavaStreamingContext*.
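A minimal, hedged sketch of one common approach is a queue-backed stream:
wrap the existing RDD in a queue and let the streaming context consume it as
batches. The String element type, the 2-second batch interval, and the
java.util / org.apache.spark.streaming imports are assumptions.

Queue<JavaRDD<String>> queue = new LinkedList<>();
queue.add(rddd);
JavaStreamingContext ssc = new JavaStreamingContext(ctx, new Duration(2000));
// each RDD in the queue becomes one batch of the resulting DStream
JavaDStream<String> stream = ssc.queueStream(queue);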


Stream S3 server to Cassandra

2016-01-28 Thread Sateesh Karuturi
Hello, can anyone please help me with how to stream XML files from an S3
server to a Cassandra DB using Spark Streaming in Java? Presently I am using
Spark Core to do that job, but the problem is that I have to run it every 15
minutes; that is why I am looking at Spark Streaming.
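A hedged sketch of what the streaming version might look like, assuming new
XML files land under an S3 prefix that textFileStream can monitor; the bucket
path is hypothetical and the Cassandra write (for example via the
spark-cassandra-connector) is left as a stub.

JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.minutes(15));
// picks up files that appear under the prefix after the stream starts
JavaDStream<String> xmlLines = ssc.textFileStream("s3n://my-bucket/incoming/");
xmlLines.foreachRDD(rdd -> {
    // parse the XML records here and save them to Cassandra
});
ssc.start();
ssc.awaitTermination();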


Deleting empty rows from hive table through java

2016-01-04 Thread Sateesh Karuturi
Hello...
Can anyone please help me with how to delete empty rows from a Hive table
through Java?
Thanks in advance
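Since classic (non-ACID) Hive has no row-level DELETE, the usual workaround
is to overwrite the table with only the rows worth keeping. A hedged JDBC
sketch; the table name, column name, and connection details are made up:

Class.forName("org.apache.hive.jdbc.HiveDriver");
Connection con = DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default", "hive", "");
Statement stmt = con.createStatement();
// keep only rows whose key column is non-empty
stmt.execute("INSERT OVERWRITE TABLE mytable "
        + "SELECT * FROM mytable WHERE col1 IS NOT NULL AND col1 != ''");
con.close();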


how to fetch all of data from hbase table in spark java

2015-12-18 Thread Sateesh Karuturi
Hello experts... I am new to Spark. Can anyone please explain how to fetch
all of the data from an HBase table in Spark with Java?
Thanks in Advance...
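A minimal sketch of the standard pattern, assuming HBase's TableInputFormat
and the Java API; the table name "mytable" is hypothetical and sc is an
existing JavaSparkContext.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.api.java.JavaPairRDD;

Configuration hconf = HBaseConfiguration.create();
hconf.set(TableInputFormat.INPUT_TABLE, "mytable");  // table to scan
// each element is (row key, Result holding every cell of that row)
JavaPairRDD<ImmutableBytesWritable, Result> rows =
    sc.newAPIHadoopRDD(hconf, TableInputFormat.class,
        ImmutableBytesWritable.class, Result.class);
System.out.println(rows.count());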


flatMap function in Spark

2015-12-08 Thread Sateesh Karuturi
Guys... I am new to Spark.
Can anyone please explain how the flatMap function works, with a small
sample example?
Thanks in advance...
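A hedged example, assuming the Spark 1.x Java API: flatMap applies a function
that returns zero or more output elements per input element, then flattens
all of the results into a single RDD.

JavaRDD<String> lines = sc.parallelize(Arrays.asList("a b", "c d e"));
// each input line yields several words; flatMap concatenates them all
JavaRDD<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")));
System.out.println(words.collect()); // [a, b, c, d, e]

(Here sc is a JavaSparkContext and java.util.Arrays is imported; note that in
Spark 2.x the lambda would have to return an Iterator instead of an Iterable.)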


RDD functions

2015-12-04 Thread Sateesh Karuturi
Hello Spark experts...
I am new to Apache Spark. Can anyone point me to the proper documentation
for learning the RDD functions?
Thanks in advance...


[no subject]

2015-12-04 Thread Sateesh Karuturi
user-sc.1449231970.fbaoamghkloiongfhbbg-sateesh.karuturi9=
gmail@spark.apache.org


Getting error while performing Insert query

2015-09-08 Thread Sateesh Karuturi
hello...,
I am using hive 1.1 and tez 0.7...
Whenever I am trying to INSERT

Getting error while performing Insert query

2015-09-08 Thread Sateesh Karuturi
hello...,
I am using hive 1.1 and tez 0.7. Whenever I am trying to INSERT data into a
hive table using tez via Java I am getting the following error:

*Exception in thread "main" org.apache.hive.service.cli.HiveSQLException:
Error while compiling statement: FAILED: SemanticException [Error 10293]:
Unable to create temp file for insert values Expression of type
TOK_TABLE_OR_COL not supported in insert/values*




Hive on tez error

2015-08-27 Thread Sateesh Karuturi
I am trying to connect to a hive database (execution.engine value changed to
tez) using Java code. In the case of a SELECT query it is working, but in the
case of INSERT I am getting an error. The error looks like:
Error while processing statement: FAILED: Execution Error, return code 1
from org.apache.hadoop.hive.ql.exec.tez.TezTask
Please help me out




getting mismatched input 'ROW' expecting EOF error in hive creation

2015-08-04 Thread Sateesh Karuturi
I want to create a hive table using Java code. My code is:

package com.inndata.services;

import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Connection;
import java.sql.Statement;
import java.sql.DriverManager;

public class HiveCreateTable {
    private static String driverName = "com.facebook.presto.jdbc.PrestoDriver";

    public static void main(String[] args) throws SQLException {
        // Register driver and create driver instance
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        System.out.println("haii");
        Connection con = DriverManager.getConnection(
                "jdbc:presto://192.168.1.118:8023", "hadoop", "cassandra");
        con.setCatalog("hive");
        con.setSchema("log");
        Statement stmt = con.createStatement();
        ResultSet res = stmt.executeQuery("create table access_log (c_ip varchar, "
                + "cs_username varchar, "
                + "cs_computername varchar, "
                + "cs_date varchar, "
                + "cs_code varchar, "
                + "cs_method varchar, "
                + "cs_uri_stem varchar, "
                + "cs_uri_query varchar, "
                + "cs_status_code varchar, "
                + "cs_bytes varchar) "
                + "ROW FORMAT DELIMITED FIELDS TERMINATED BY '\b' LINES TERMINATED BY '\n'");
        System.out.println("Table access_log created.");
        con.close();
    }
}

and I am getting:

Exception in thread "main" java.sql.SQLException: Query failed
(#20150804_152058_4_r8ehs): line 1:214: mismatched input 'ROW'
expecting EOF




Getting error in creating a hive table via java

2015-07-31 Thread Sateesh Karuturi
I would like to create a table in hive using Java, in the following way:

public class HiveCreateTable {
    private static String driverName = "com.facebook.presto.jdbc.PrestoDriver";

    public static void main(String[] args) throws SQLException {
        // Register driver and create driver instance
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        System.out.println("haii");
        Connection con = DriverManager.getConnection(
                "jdbc:presto://192.168.1.119:8082/default", "hadoop", "password");
        con.setCatalog("hive");
        con.setSchema("log");
        Statement stmt = con.createStatement();
        String tableName = "sample";
        ResultSet res = stmt.executeQuery("create table access_log2 "
                + "(cip string, csusername string, cscomputername string)");
        System.out.println("Table employee created.");
        con.close();
    }
}

Exception in thread "main" java.sql.SQLException: Query failed
(#20150731_101653_8_hv68j): Unknown type for column 'cip'


execution error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.teztask error on hive on tez

2015-07-29 Thread Sateesh Karuturi
iam using hive 1.0 and tez 0.7 whenever iam performing insert query its
returns following error:
execution error, return code 1 from
org.apache.hadoop.hive.ql.exec.tez.teztask


hive on tez

2015-06-22 Thread Sateesh Karuturi
i have a small doubt towards tez installation
which version of hive is required for apache tez 0.7.0?


error on hive 1.2.0

2015-06-18 Thread Sateesh Karuturi
iam using *hive 1.2.0* and *hadoop 2.6.0*. whenever iam running hive on my
machine... *select* query works fine but in case of *count(*)* it shows
following *error*:

Diagnostic Messages for this Task: Container launch failed for
container_1434646588807_0001_01_05:
*org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException:
The auxService:mapreduce_shuffle does not exist*
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.mr.MapRedTask MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL Total
MapReduce CPU Time Spent: 0 msec
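For reference, the InvalidAuxServiceException above usually means the
NodeManager is not running the MapReduce shuffle handler. A commonly cited
yarn-site.xml fragment (hedged; verify the property names against your
Hadoop 2.6.0 documentation) is:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>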


error on hive insert query

2015-06-16 Thread Sateesh Karuturi
iam using *hive 1.0.0* and *tez 0.5.2.* when i set
*hive.execution.engine* value
in hive-site.xml to *tez*select query works well... but in case of
*insert* getting
error. the query is :

*insert into table tablename values(intvalue,'string value');*

and the error is :

*FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.tez.Tez Task*


hive on tez error

2015-06-15 Thread Sateesh Karuturi
I am using hive 1.0.0 and tez 0.5.2. Whenever I am trying to open hive I am
getting the following error:
Exception in thread "main" java.lang.RuntimeException: java.io.IOException:
Previous writer likely failed to write
hdfs://localhost:9000/tmp/hive/hadoop/_tez_session_dir/002dad89-59b6-43c9-92f9-1c7b2232b1c4/tez.
Failing because I am unlikely to write too.
at
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:457)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:626)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:570)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: Previous writer likely failed to write
hdfs://localhost:9000/tmp/hive/hadoop/_tez_session_dir/002dad89-59b6-43c9-92f9-1c7b2232b1c4/tez.
Failing because I am unlikely to write too.
at
org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:979)
at
org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:860)
at
org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:803)
at
org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:228)
at
org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:154)
at
org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:122)
at
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:454)
... 8 more


Re: hive on tez error

2015-06-15 Thread Sateesh Karuturi
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp0  0 127.0.0.1:9000  0.0.0.0:*
LISTEN  8030/java
tcp0  0 127.0.0.1:9000  127.0.0.1:40274
ESTABLISHED 8030/java
tcp0  0 127.0.0.1:40442 127.0.0.1:9000
 ESTABLISHED 8323/java
tcp0  0 127.0.0.1:9000  127.0.0.1:40442
ESTABLISHED 8030/java
tcp0  0 127.0.0.1:40441 127.0.0.1:9000
 TIME_WAIT   -
tcp0  0 127.0.0.1:40274 127.0.0.1:9000
 ESTABLISHED 8157/java

On Tue, Jun 16, 2015 at 1:36 AM, Steve Howard stevedhow...@gmail.com
wrote:

 What does netstat -anp | grep 9000 show?

 On Mon, Jun 15, 2015 at 3:47 PM, Sateesh Karuturi 
 sateesh.karutu...@gmail.com wrote:

 iam using hive 1.0.0 and tez 0.5.2.. whenever iam trying to open the hive
 getting following error:
 Exception in thread main java.lang.RuntimeException:
 java.io.IOException: Previous writer likely failed to write
 hdfs://localhost:9000/tmp/hive/hadoop/_tez_session_dir/002dad89-59b6-43c9-92f9-1c7b2232b1c4/tez.
 Failing because I am unlikely to write too.
 at
 org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:457)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:626)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:570)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
 Caused by: java.io.IOException: Previous writer likely failed to write
 hdfs://localhost:9000/tmp/hive/hadoop/_tez_session_dir/002dad89-59b6-43c9-92f9-1c7b2232b1c4/tez.
 Failing because I am unlikely to write too.
 at
 org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:979)
 at
 org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:860)
 at
 org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:803)
 at
 org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:228)
 at
 org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:154)
 at
 org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:122)
 at
 org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:454)
 ... 8 more







Re: hive on tez error

2015-06-10 Thread Sateesh Karuturi
hadoop@localhost:~$ yarn logs -applicationId application_1433913377715_0003
15/06/10 17:40:04 INFO client.RMProxy: Connecting to ResourceManager at /
0.0.0.0:8032
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/apache-tez-0.5.2-src/tez-dist/target/tez/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
/tmp/logs/hadoop/logs/application_1433913377715_0003does not exist.
Log aggregation has not completed or is not enabled.


On Wed, Jun 10, 2015 at 6:49 PM, Jianfeng (Jeff) Zhang 
jzh...@hortonworks.com wrote:

  And seems you run this command from hive. Actually you should run it
 from bash.

  BTW, by default hive log is located in /tmp/{user.name}/hive.log



  Best Regard,
 Jeff Zhang


   From: Jianfeng Zhang jzh...@hortonworks.com
 Reply-To: user@tez.apache.org user@tez.apache.org
 Date: Wednesday, June 10, 2015 at 9:09 PM

 To: user@tez.apache.org user@tez.apache.org
 Subject: Re: hive on tez error

   hadoop_20150610170505_7dbb99e1-1d69-45f0-b1af-87452061bf39 is not yarn
 app log.
 It should be something like this format application_1433831838406_0002.
 Please try to look at the hive log to the app id. If still could not find
 it, then please refer to the Resource Manager UI.



  Best Regard,
 Jeff Zhang


   From: Sateesh Karuturi sateesh.karutu...@gmail.com
 Reply-To: user@tez.apache.org user@tez.apache.org
 Date: Wednesday, June 10, 2015 at 7:51 PM
 To: user@tez.apache.org user@tez.apache.org
 Subject: Re: hive on tez error

   yarn logs -applicationId
 hadoop_20150610170505_7dbb99e1-1d69-45f0-b1af-87452061bf39;
 NoViableAltException(26@[])
 at
 org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1015)
 at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199)
 at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:389)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:303)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1067)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1129)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:201)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:153)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:364)
 at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:712)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:631)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:570)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
 FAILED: ParseException line 1:1 cannot recognize input near 'yarn' 'logs'
 '-'
 hive


 On Wed, Jun 10, 2015 at 4:28 PM, Prakash Ramachandran 
 pramachand...@hortonworks.com wrote:

   You need to replace appId with your applicationId
 Ex. yarn logs -applicationId application_1432137758849_0029

   From: Sateesh Karuturi
 Reply-To: user@tez.apache.org
 Date: Wednesday, June 10, 2015 at 2:40 PM
 To: user@tez.apache.org

 Subject: Re: Re: hive on tez error

   bash: syntax error near unexpected token `newline'


 On Wed, Jun 10, 2015 at 2:38 PM, r7raul1...@163.com r7raul1...@163.com
 wrote:

  sorry, the command is
 yarn logs -applicationId appId

  --
  r7raul1...@163.com


  *From:* r7raul1...@163.com
 *Date:* 2015-06-10 17:06
  *To:* user user@tez.apache.org
 *Subject:* Re: Re: hive on tez error
 check log

  yarn -logs applicationId

  --
  r7raul1...@163.com


  *From:* Sateesh Karuturi sateesh.karutu...@gmail.com
 *Date:* 2015-06-10 17:10
 *To:* user user@tez.apache.org
 *Subject:* Re: Re: hive on tez error
<configuration>
  <property>
    <name>tez.lib.uris</name>
    <value>${fs.defaultFS}/tez,${fs.defaultFS}/tez/lib</value>
  </property>
</configuration>


 On Wed, Jun 10, 2015 at 2:02 PM, r7raul1...@163.com r7raul1...@163.com
 wrote:

  show me tez-site.xml?

  --
  r7raul1...@163.com


  *From:* Sateesh Karuturi sateesh.karutu...@gmail.com
 *Date:* 2015-06-10 16:50
 *To:* user user@tez.apache.org
 *Subject:* Re: hive on tez error
tez 0.5.2

 On Wed, Jun 10, 2015 at 1:49 PM, r7raul1

Re: Re: hive on tez error

2015-06-10 Thread Sateesh Karuturi
It shows this error:
Unrecognized option: -logs
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

On Wed, Jun 10, 2015 at 2:36 PM, r7raul1...@163.com r7raul1...@163.com
wrote:

 check log

 yarn -logs applicationId

 --
 r7raul1...@163.com


 *From:* Sateesh Karuturi sateesh.karutu...@gmail.com
 *Date:* 2015-06-10 17:10
 *To:* user user@tez.apache.org
 *Subject:* Re: Re: hive on tez error
 <configuration>
   <property>
     <name>tez.lib.uris</name>
     <value>${fs.defaultFS}/tez,${fs.defaultFS}/tez/lib</value>
   </property>
 </configuration>


 On Wed, Jun 10, 2015 at 2:02 PM, r7raul1...@163.com r7raul1...@163.com
 wrote:

 show me tez-site.xml?

 --
 r7raul1...@163.com


 *From:* Sateesh Karuturi sateesh.karutu...@gmail.com
 *Date:* 2015-06-10 16:50
 *To:* user user@tez.apache.org
 *Subject:* Re: hive on tez error
 tez 0.5.2

 On Wed, Jun 10, 2015 at 1:49 PM, r7raul1...@163.com r7raul1...@163.com
 wrote:

 tez version??

 --
 r7raul1...@163.com


 *From:* Sateesh Karuturi sateesh.karutu...@gmail.com
 *Date:* 2015-06-10 16:09
 *To:* user user@tez.apache.org
 *Subject:* hive on tez error
 whenever iam perform a simple insert operation on hive
 1.0.0(hive.execution.engine=tez) its shows following error:

 FAILED: Execution Error, return code 1 from
 org.apache.hadoop.hive.ql.exec.tez.TezTask

 when i changed hive.execution.engine to mr its working fine.

 please help me out






hive tez error

2015-06-08 Thread Sateesh Karuturi
getting FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.tez.TezTask
 error... when iam trying to perform insert operation on hive(set
hive.execution.engine=tez).
whenever hive.execution.engine value is set to mr its works fine

please help me out


Re: hive tez error

2015-06-08 Thread Sateesh Karuturi
When I try to check, I am getting this:

hadoop@localhost:~$ yarn logs -applicationId
options parsing failed: Missing argument for option: applicationId
Retrieve logs for completed YARN applications.
usage: yarn logs -applicationId application ID [OPTIONS]

general options are:
 -appOwner Application Owner   AppOwner (assumed to be current user if
 not specified)
 -containerId Container ID ContainerId (must be specified if node
 address is specified)
 -nodeAddress Node Address NodeAddress in the format nodename:port
 (must be specified if container id is
 specified)


On Mon, Jun 8, 2015 at 1:20 PM, Jianfeng (Jeff) Zhang 
jzh...@hortonworks.com wrote:


  Could you check the yarn app logs ?

  By invoking command : “yarn logs -applicationId

  Best Regard,
 Jeff Zhang


   From: Sateesh Karuturi sateesh.karutu...@gmail.com
 Reply-To: user@hive.apache.org user@hive.apache.org
 Date: Monday, June 8, 2015 at 3:45 PM
 To: user@hive.apache.org user@hive.apache.org
 Subject: hive tez error

   getting FAILED: Execution Error, return code 1 from
 org.apache.hadoop.hive.ql.exec.tez.TezTask
  error... when iam trying to perform insert operation on hive(set
 hive.execution.engine=tez).
 whenever hive.execution.engine value is set to mr its works fine

  please help me out



Exception in thread main java.lang.RuntimeException: java.io.IOException: Previous writer likely failed to write hdfs://localhost:8020/tmp/hive/hadoop/_tez_session_dir/39ba9c15-d9ed-4582-a4ce-11ae8b

2015-06-04 Thread Sateesh Karuturi
I am using *hive 1.0.0* and *apache tez 0.5.2*. When I configure hive to use
tez I get an exception.

In *hive-site.xml*, when the *hive.execution.engine* value is mr it works
fine, but if I set it to tez I get this error:


Exception in thread "main" java.lang.RuntimeException: java.io.IOException:
Previous writer likely failed to write
hdfs://localhost:8020/tmp/hive/hadoop/_tez_session_dir/39ba9c15-d9ed-4582-a4ce-11ae8bc586b7/mysql-connector-java-5.1.35-bin.jar.
Failing because I am unlikely to write too.
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:457)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:626)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:570)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: Previous writer likely failed to write
hdfs://localhost:8020/tmp/hive/hadoop/_tez_session_dir/39ba9c15-d9ed-4582-a4ce-11ae8bc586b7/mysql-connector-java-5.1.35-bin.jar.
Failing because I am unlikely to write too.
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:979)
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:860)
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:803)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:228)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:154)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:122)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:454)
... 8 more


create statement in hive 1.0.0.

2015-06-04 Thread Sateesh Karuturi
anyone help me please... how to write insert statement in hive 1.0.0?
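A hedged sketch, assuming the INSERT ... VALUES syntax that Hive supports
from 0.14 onward, executed over JDBC; the table, columns, and connection
details are made up:

Class.forName("org.apache.hive.jdbc.HiveDriver");
Connection con = DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default", "hive", "");
Statement stmt = con.createStatement();
// Hive 0.14+ accepts row-construction inserts like this
stmt.execute("INSERT INTO TABLE employees VALUES (1, 'alice')");
con.close();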


Re: NoSuchMethodError when hive.execution.engine value its tez

2015-06-02 Thread Sateesh Karuturi
I am using *hive 1.0.0* and *apache tez 0.4.1* When I configure hive to use
tez I get an exception.

In *hive-site.xml* when the *hive.execution.engine* value is mr its works
fine. But if I set it to tez I get this error:


Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.tez.mapreduce.hadoop.MRHelpers.updateEnvBasedOnMRAMEnv(Lorg/apache/hadoop/conf/Configuration;Ljava/util/Map;)V
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:169)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:122)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:454)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:626)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:570)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)


On Tue, Jun 2, 2015 at 7:09 PM, Sateesh Karuturi 
sateesh.karutu...@gmail.com wrote:

 I am using *hive 1.0.0* and *apache tez 0.4.1* When I configure hive to
 use tez I get an exception.

 In *hive-site.xml* when the *hive.execution.engine* value is mr its works
 fine. But if I set it to tez I get this error:


 Exception in thread main java.lang.NoSuchMethodError:
 org.apache.tez.mapreduce.hadoop.MRHelpers.updateEnvBasedOnMRAMEnv(Lorg/apache/hadoop/conf/Configuration;Ljava/util/Map;)V

 at
 org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:169)

 at
 org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:122)

 at
 org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:454)

 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:626)

 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:570)

 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

 at java.lang.reflect.Method.invoke(Method.java:606)

 at org.apache.hadoop.util.RunJar.run(RunJar.java:221)

 at org.apache.hadoop.util.RunJar.main(RunJar.java:136)




NoSuchMethodError when hive.execution.engine value its tez

2015-06-02 Thread Sateesh Karuturi
I am using *hive 1.0.0* and *apache tez 0.4.1*. When I configure hive to use
tez I get an exception.

In *hive-site.xml*, when the *hive.execution.engine* value is mr it works
fine, but if I set it to tez I get this error:


Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.tez.mapreduce.hadoop.MRHelpers.updateEnvBasedOnMRAMEnv(Lorg/apache/hadoop/conf/Configuration;Ljava/util/Map;)V
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:169)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:122)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:454)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:626)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:570)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)