Re: Mechanism when doing a select *

2016-03-21 Thread Gopal Vijayaraghavan
> Or does all the data go directly from the datanodes to my client? Not yet. https://issues.apache.org/jira/browse/HIVE-11527 Cheers, Gopal

Re: Error selecting from a Hive ORC table in Spark-sql

2016-03-21 Thread Mich Talebzadeh
Sounds like this happens with an ORC transactional table. When I create that table as ORC but non-transactional, it works! Dr Mich Talebzadeh
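A minimal HiveQL sketch of the two variants being compared; the (id, name) schema and the table names are assumptions, not taken from the thread:

    -- ACID (transactional) ORC table: requires bucketing and the transactional flag
    CREATE TABLE t2_acid (id INT, name STRING)
    CLUSTERED BY (id) INTO 4 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional'='true');

    -- Plain (non-transactional) ORC table, the variant reported to work from Spark SQL
    CREATE TABLE t2_plain (id INT, name STRING)
    STORED AS ORC;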

Re: Error selecting from a Hive ORC table in Spark-sql

2016-03-21 Thread Eugene Koifman
The system thinks t2 is an ACID table but the files on disk don't follow the convention the ACID system would expect. Perhaps Xuefu Zhang would know more on Spark/ACID integration.
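One possible way to check whether Hive has flagged a table as ACID (t2 is the table name from the thread):

    -- Look for transactional=true in the table properties
    SHOW TBLPROPERTIES t2;

    -- Or inspect the Table Parameters section of the full description
    DESCRIBE FORMATTED t2;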

Error selecting from a Hive ORC table in Spark-sql

2016-03-21 Thread Mich Talebzadeh
Hi, do we know the cause of this error when selecting from a Hive ORC table? spark-sql> select * from t2; 16/03/21 16:38:33 ERROR SparkSQLDriver: Failed in [select * from t2] java.lang.RuntimeException: serious problem at

Re: Mechanism when doing a select *

2016-03-21 Thread Mich Talebzadeh
You are correct. It should not. There is nothing to optimise here. 0: jdbc:hive2://rhes564:10010/default> select * from countries; OK INFO : Compiling command(queryId=hduser_20160321162726_7efeecbb-46ee-431f-9095-f67e0602b318): select * from countries INFO : Semantic Analysis Completed INFO

Re: Mechanism when doing a select *

2016-03-21 Thread Tale Firefly
Oh my bad, even with the execution engine set to MR, my query turns into an MR job. I'm going to run more tests with Hive CLI, beeline, and Excel to check whether this behaviour is linked to the ODBC driver. BR. Tale.

Re: Mechanism when doing a select *

2016-03-21 Thread Tale Firefly
Hm, I need to check whether statistics are enabled for this table and up to date. I don't know if I was clear in my previous message, but I am surprised that a job is launched just by doing a select * from my_table. I thought a select * from my_table was not running any MR
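A rough sketch of the setting that usually governs this behaviour, hive.fetch.task.conversion; my_table is the placeholder name from the thread:

    -- With conversion enabled, a plain SELECT * can be served by HiveServer2
    -- as a simple fetch task, without launching a Tez/MR/Spark job
    SET hive.fetch.task.conversion=more;
    SELECT * FROM my_table;

    -- With conversion disabled, even SELECT * is compiled into a full job
    SET hive.fetch.task.conversion=none;
    SELECT * FROM my_table;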

Re: Mechanism when doing a select *

2016-03-21 Thread Mich Talebzadeh
Well, I use Spark as the engine. Now the question is: have you updated statistics on the ORC table? HTH Dr Mich Talebzadeh
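A sketch of the statistics refresh being suggested, assuming the table is called my_table and is unpartitioned:

    -- Basic table-level statistics (row count, raw data size, etc.)
    ANALYZE TABLE my_table COMPUTE STATISTICS;

    -- Column-level statistics, used by the optimiser for better estimates
    ANALYZE TABLE my_table COMPUTE STATISTICS FOR COLUMNS;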

Re: Mechanism when doing a select *

2016-03-21 Thread Tale Firefly
Re. Ty ty for your answer. I'm using Tez as the execution engine for this query, and it launches a job to YARN. Do you know why it launches a job just for a select when I use Tez as the execution engine? BR. Tale

Re: How to work around non-executive /tmp with Hive in Parquet+Snappy compression?

2016-03-21 Thread Tale Firefly
Hey! Are you talking about the HDFS /tmp or the local FS /tmp? For the HDFS one, I think it should be the property hive.exec.scratchdir. For the local one, I think it should be the property hive.exec.local.scratchdir. BR Tale
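These properties normally live in hive-site.xml; shown here as session-level SETs purely as a sketch, with assumed paths:

    -- HDFS scratch space used for intermediate job data
    SET hive.exec.scratchdir=/user/hive/tmp;

    -- Local filesystem scratch space on the node running Hive
    SET hive.exec.local.scratchdir=/var/tmp/hive;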

Re: Mechanism when doing a select *

2016-03-21 Thread Mich Talebzadeh
Hi, your query is a table-level query that covers all rows in the table. Using ODBC you are connecting to HiveServer2, which runs on a given port. Depending on the version of Hive you are running, Hive under the bonnet is most likely using MapReduce as the execution engine. Data has to be
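A quick way to confirm which execution engine a session is actually using (possible values include mr, tez and spark, depending on the build):

    -- Print the current value
    SET hive.execution.engine;

    -- Switch engines for the session, e.g. back to classic MapReduce
    SET hive.execution.engine=mr;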

Mechanism when doing a select *

2016-03-21 Thread Tale Firefly
Hello guys! I'm trying to understand the mechanism for a simple query select * from my_table when using HiveServer2. I'm using the Hortonworks ODBC Driver for HiveServer2. I just do a select * from my_table. my_table is an ORC table based on files divided into blocks located on all my

Re: Column type conversion in Hive

2016-03-21 Thread Edward Capriolo
Explicit conversion is done using cast(x as bigint). You said: as a matter of interest, what is the underlying storage for Integer? This is dictated on disk by the input format; the "temporal in memory format" is dictated by the SerDe. An integer could be stored as "1", "1", as dictated by the
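A trivial sketch of the explicit conversion mentioned; the table and column names are placeholders:

    -- Explicit conversion of column x to BIGINT, regardless of its declared type
    SELECT CAST(x AS BIGINT) FROM some_table;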

Re: Error in Hive on Spark

2016-03-21 Thread Stana
Does anyone have suggestions for setting the property for the hive-exec-2.0.0.jar path in the application? Something like 'hiveConf.set("hive.remote.driver.jar","hdfs://storm0:9000/tmp/spark-assembly-1.4.1-hadoop2.6.0.jar")'.