Re: Group by query

2015-06-03 Thread Bhagwan S. Soni
Ravishankar,

The query below works fine without a GROUP BY:
SELECT
 COUNT(adventurepersoncontacts.contactid)  as contact
 FROM
 default.adventurepersoncontacts

But when you want an aggregated column together with non-aggregated
columns, you have to specify the level of aggregation.
Why don't you include the selected columns in the GROUP BY? I don't think
it will affect your output.
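
For example, a minimal sketch of the failing query from the thread below,
with every non-aggregated column added to the GROUP BY (note the COUNT then
becomes a per-group count rather than a global one):

SELECT
 adventurepersoncontacts.contactid as contactid
 ,adventurepersoncontacts.fullname as fullname
 ,adventurepersoncontacts.age as age
 ,adventurepersoncontacts.emailaddress as emailaddress
 ,adventurepersoncontacts.phoneno as phoneno
 ,adventurepersoncontacts.modifieddate as modifieddate
 ,COUNT(adventurepersoncontacts.contactid) as contact
FROM
 default.adventurepersoncontacts
GROUP BY
 adventurepersoncontacts.contactid
 ,adventurepersoncontacts.fullname
 ,adventurepersoncontacts.age
 ,adventurepersoncontacts.emailaddress
 ,adventurepersoncontacts.phoneno
 ,adventurepersoncontacts.modifieddate;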

Thanks,
Bhagwan

On Wed, Jun 3, 2015 at 4:05 PM, Ravisankar Mani  wrote:

> Hi everyone,
>
>    I selected specific columns along with an aggregation and got: "FAILED:
> SemanticException [Error 10002]: Line 2:26 Invalid column reference
> 'contactid'". The exception occurs while executing the following query, but
> it works in SQL Server without a GROUP BY. Kindly refer to the query.
>
>
> SELECT
>   adventurepersoncontacts.contactid as contactid
>  ,adventurepersoncontacts.fullname as fullname
>  ,adventurepersoncontacts.age as age
>  ,adventurepersoncontacts.emailaddress as emailaddress
>  ,adventurepersoncontacts.phoneno as phoneno
>  ,adventurepersoncontacts.modifieddate as modifieddate
> , COUNT(adventurepersoncontacts.contactid)  as contact
>  FROM
>  default.adventurepersoncontacts
>
> Regards,
> Ravisankar M R
>


Re: Null condition hive query

2015-06-03 Thread Bhagwan S. Soni
Why don't you try IF(column_name IS NULL, 0, column_name) AS column_name
(note that Hive's IF takes three arguments)?
After this you can perform your aggregation on top of it.
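
Equivalently, COALESCE(column_name, 0) (or NVL) collapses a null to 0 in one
step. A minimal Hive sketch of the SQL Server query quoted below, assuming
the data has been landed in a single Hive table named vendor (a hypothetical
name):

SELECT COALESCE(SUM(CAST(businessentityid AS DECIMAL(38,6))), 0) AS businessentityid,
       accountnumber AS accountnumber
FROM vendor
GROUP BY accountnumber
ORDER BY accountnumber ASC;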

On Wed, Jun 3, 2015 at 11:16 AM, Ravisankar Mani  wrote:

> Hi Everyone,
>
>   I need to check whether a measure value is null or not. If it is null,
> then replace it with 0; otherwise return the same value.
>
> It is a sample sql query
>
>  SELECT ISNULL(SUM(CAST([Purchasing].[Vendor].[BusinessEntityID] AS
> decimal(38,6))),0)
> AS [BusinessEntityID],
> [Purchasing].[Vendor].[AccountNumber] AS [AccountNumber]
> FROM [Purchasing].[Vendor]
> GROUP BY [Purchasing].[Vendor].[AccountNumber]  ORDER BY 2 ASC
>
> If I give ISNULL(SUM(columnname), 0), it replaces the value with 0 if any
> null is found.
>
> I have checked in Hive: "ISNULL(columnname)" only returns true or
> false, so how do I replace nulls with 0? Could you please help with this query.
>
>
> Regards.
>
> Ravi
>


Re: current_date function in hive

2015-06-02 Thread Bhagwan S. Soni
Use "from_unixtime(unix_timestamp())". This might help you to get what you
want.
You may have to split date and time because this function will returns
TIMESTAMP.
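
For the previous-day filter, a sketch built only from functions available in
older Hive releases (date_sub returns a 'yyyy-MM-dd' string, so it compares
cleanly with to_date):

insert into table lotable select id, lv, cdatetime, color, count(color)
from currenttable lateral view explode(locations) lvtable as lv
where to_date(cdatetime) = date_sub(from_unixtime(unix_timestamp()), 1)
group by cdatetime, color, lv, id;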

On Tue, Jun 2, 2015 at 7:49 PM, Ayazur Rehman 
wrote:

> Hi everyone,
>
> I am trying to schedule a Hive query using Oozie, to perform aggregation
> on a table over a particular day's data and save the results in another
> table every 24 hours.
>
> the schema of my table is something like (tablename - currenttable)
> id  string
> cdatetime   timestamp
> average int
> locations   array
> color   string
>
> And currently the query that I perform manually every day is something like
>
> insert into table lotable select id, lv, cdatetime, color, count(color)
> from currenttable lateral view explode(locations) lvtable as lv where
> to_date(cdatetime)='2015-06-01' group by cdatetime, color, lv, id;
>
> So, in order to automate the process I want to use a date function that
> would let hive aggregate on the data of the previous day.
> I tried using current_date function but I can't get the syntax right. I
> get the following error
>   FAILED: SemanticException [Error 10011]: Line 1:47 Invalid
> function 'current_date'
>
> Could you please help me with the syntax.
>
>
> --
> Thanking You,
> Ayaz
>
>


Re: cast column float

2015-05-27 Thread Bhagwan S. Soni
Could you also provide a sample dataset for these two columns?
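
One thing worth checking in the meantime: Hive parses an unsuffixed literal
like 7.1578474 as a DOUBLE, so equality against a FLOAT column can fail on
precision alone. A hedged workaround sketch using a tolerance instead of
exact equality:

select count(*) from u
where abs(xlong_u - 7.1578474) < 0.00001
  and abs(xlat_u - 55.192524) < 0.00001;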

On Wed, May 27, 2015 at 7:17 PM, patcharee 
wrote:

> Hi,
>
> I queried a table based on value of two float columns
>
> select count(*) from u where xlong_u = 7.1578474 and xlat_u = 55.192524;
> select count(*) from u where xlong_u = cast(7.1578474 as float) and xlat_u
> = cast(55.192524 as float);
>
> Both queries returned 0 records, even though there are some records matching
> the condition. What could be wrong? I am using Hive 0.14.
>
> BR,
> Patcharee
>


Re: Re: upgrade of Hadoop cluster from 5.1.2 to 5.2.1.

2015-05-24 Thread Bhagwan S. Soni
Are you a member of the group "g_hdp_storeops"?

On Mon, May 25, 2015 at 6:21 AM, r7raul1...@163.com 
wrote:

> config hdfs acl
>
> http://zh.hortonworks.com/blog/hdfs-acls-fine-grained-permissions-hdfs-files-hadoop/
>
> --
> r7raul1...@163.com
>
>
> *From:* Anupam sinha 
> *Date:* 2015-05-21 12:44
> *To:* user 
> *Subject:* Re: upgrade of Hadoop cluster from 5.1.2 to 5.2.1.
>
> Hello everyone,
>
>
> I am a member of a nested group which has SELECT privileges.
>
>
> I am still not able to access the Hive StoreOps database.
>
>
> Please advise on this.
>
>
> Thank you.
>
>
>
>
>
>
> On Thu, May 21, 2015 at 9:30 AM, Anupam sinha  wrote:
>
>> I have upgraded the Hadoop cluster from 5.1.2 to 5.2.1.
>> Now I am unable to read the Hive tables;
>> before the upgrade I was able to access the Hive StoreOps database.
>>
>> Please suggest any changes I need to make.
>>
>>
>> Here is the output i receive,
>>
>> java.io.IOException: org.apache.hadoop.security.AccessControlException:
>> Permission denied: user=dmadjar, access=EXECUTE,
>> inode="/smith/storeops":srv-hdp-storeops-d:g_hdp_storeops:drwxrwx---
>> at
>> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:255)
>> at
>> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:236)
>> at
>> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:178)
>> at
>> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6250)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3942)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:811)
>> at
>> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:502)
>> at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:815)
>> at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>> at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
>>
>>
>>
>> Thank you,
>>
>
>


Re: Output of Hive

2015-05-16 Thread Bhagwan S. Soni
Does your Hive table have data? Please verify that first.
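
A quick sanity check against the table from the query below (a minimal
sketch):

select count(*) from records;
select count(*) from records where quality in (0, 1, 4, 5, 9);

If the first count is zero the table itself is empty; if only the second is
zero, the WHERE clause is filtering everything out.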

On Sat, May 16, 2015 at 8:00 PM, Daniel Haviv <
daniel.ha...@veracity-group.com> wrote:

> It seems like your query returns no results; try using COUNT to confirm.
>
> Daniel
>
> On 16 May 2015, at 14:40, Anand Murali  wrote:
>
> Dear All:
>
> I am new to Hive, so pardon my ignorance. I have the following query but do
> not see any output. I wondered whether it might be in HDFS and checked there,
> but did not find it. Can somebody advise?
>
> hive> select year, MAX(Temperature) from records where temperature <> 
> and (quality = 0 or quality = 1 or quality = 4 or quality = 5 or quality =
> 9)
> > group by year
> > ;
> Query ID = anand_vihar_20150516170505_9b23d8ba-19d7-4fa7-b972-4f199e3bf56a
> Total jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks not specified. Estimated from input data size: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=<number>
> Job running in-process (local Hadoop)
> 2015-05-16 17:05:11,504 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_local927727978_0003
> MapReduce Jobs Launched:
> Stage-Stage-1:  HDFS Read: 5329140 HDFS Write: 0 SUCCESS
> Total MapReduce CPU Time Spent: 0 msec
> OK
> Time taken: 1.258 seconds
>
> Thanks
>
> Anand Murali
>
>


hive job not making progress due to Number of reduce tasks is set to 0 since there's no reduce operator

2015-05-14 Thread Bhagwan S. Soni
Hi Hive Users,

I'm using Cloudera distribution and Hive's 13th version on my cluster.

I came across a problem where a job makes no progress after writing
the log line "Number of reduce tasks is set to 0 since there's no reduce
operator".

Below is the log. Could you help me understand what kind of issue this is?
It is not a code issue, because if I re-run the same job it completes
successfully.

Logging initialized using configuration in
jar:file:/opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/jars/hive-common-0.13.1-cdh5.2.1.jar!/hive-log4j.properties
Total jobs = 5
Launching Job 1 out of 5
Launching Job 2 out of 5
Number of reduce tasks not specified. Defaulting to jobconf value of: 10
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
Number of reduce tasks not specified. Defaulting to jobconf value of: 10
  set mapreduce.job.reduces=<number>
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1431159077692_1399, Tracking URL =
http://xyz.com:8088/proxy/application_1431159077692_1399/
Starting Job = job_1431159077692_1398, Tracking URL =
http://xyz.com:8088/proxy/application_1431159077692_1398/
Kill Command =
/opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/lib/hadoop/bin/hadoop job
-kill job_1431159077692_1399
Kill Command =
/opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/lib/hadoop/bin/hadoop job
-kill job_1431159077692_1398
Hadoop job information for Stage-12: number of mappers: 5; number of
reducers: 10
Hadoop job information for Stage-1: number of mappers: 5; number of
reducers: 10
2015-05-12 19:59:12,298 Stage-1 map = 0%,  reduce = 0%
2015-05-12 19:59:12,298 Stage-12 map = 0%,  reduce = 0%
2015-05-12 19:59:20,832 Stage-1 map = 20%,  reduce = 0%, Cumulative CPU 2.5
sec
2015-05-12 19:59:20,832 Stage-12 map = 80%,  reduce = 0%, Cumulative CPU
8.63 sec
2015-05-12 19:59:21,905 Stage-1 map = 60%,  reduce = 0%, Cumulative CPU
7.06 sec
2015-05-12 19:59:22,968 Stage-1 map = 80%,  reduce = 0%, Cumulative CPU
9.34 sec
2015-05-12 19:59:24,031 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
11.46 sec
2015-05-12 19:59:26,265 Stage-12 map = 100%,  reduce = 0%, Cumulative CPU
10.92 sec
2015-05-12 19:59:32,665 Stage-12 map = 100%,  reduce = 30%, Cumulative CPU
24.51 sec
2015-05-12 19:59:33,726 Stage-12 map = 100%,  reduce = 100%, Cumulative CPU
57.61 sec
2015-05-12 19:59:35,021 Stage-1 map = 100%,  reduce = 30%, Cumulative CPU
20.99 sec
MapReduce Total cumulative CPU time: 57 seconds 610 msec
Ended Job = job_1431159077692_1399
2015-05-12 19:59:36,084 Stage-1 map = 100%,  reduce = 80%, Cumulative CPU
39.24 sec
2015-05-12 19:59:37,146 Stage-1 map = 100%,  reduce = 90%, Cumulative CPU
42.37 sec
2015-05-12 19:59:38,203 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
45.97 sec
MapReduce Total cumulative CPU time: 45 seconds 970 msec
Ended Job = job_1431159077692_1398
2015-05-12 19:59:45,180 WARN  [main] conf.Configuration
(Configuration.java:loadProperty(2510)) -
file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an
attempt to override final parameter: hadoop.ssl.require.client.cert;
Ignoring.
2015-05-12 19:59:45,193 WARN  [main] conf.Configuration
(Configuration.java:loadProperty(2510)) -
file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an
attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2015-05-12 19:59:45,196 WARN  [main] conf.Configuration
(Configuration.java:loadProperty(2510)) -
file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an
attempt to override final parameter: hadoop.ssl.client.conf;  Ignoring.
2015-05-12 19:59:45,201 WARN  [main] conf.Configuration
(Configuration.java:loadProperty(2510)) -
file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an
attempt to override final parameter: hadoop.ssl.keystores.factory.class;
Ignoring.
2015-05-12 19:59:45,210 WARN  [main] conf.Configuration
(Configuration.java:loadProperty(2510)) -
file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an
attempt to override final parameter: hadoop.ssl.server.conf;  Ignoring.
2015-05-12 19:59:45,258 WARN  [main] conf.Configuration
(Configuration.java:loadProperty(2510)) -
file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an
attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2015-05-12 19:59:45,792 WARN  [main] conf.HiveConf
(HiveConf.java:in

Re: FAILED: LockException [Error 10280]: Error communicating with the metastore

2015-05-07 Thread Bhagwan S. Soni
Set the property below at the Hive prompt:
set hive.support.concurrency=false;
and then try running your query; it should work. Let me know if it doesn't.

On Thu, May 7, 2015 at 8:29 PM, Grant Overby (groverby) 
wrote:

>  My environment has HDP 2.2 installed without hive. Hive 1.1 is installed
> independently of HDP. This is a new setup.
>
>  I can get a hive cli prompt, but when I run ‘show databases;’ I get
> ‘FAILED: LockException [Error 10280]: Error communicating with the
> metastore’. The metastore is running. If I stop the metastore, then the cli
> crashes before giving a prompt, so I believe that cli to metastore
> communication is happening to some extent.
>
>  The metastore is using MySQL. hive.in.test is set to true to cause hive
> to create the mysql schema.
>
>  Thoughts on how to address this?
>
>  These stack traces repeat in the hive.log:
>
>   2015-05-07 10:39:25,202 ERROR [pool-3-thread-25]:
> server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred
> during processing of message.
>
> java.lang.RuntimeException: Unable to set up transaction database for
> testing: Can't call rollback when autocommit=true
>
> at
> org.apache.hadoop.hive.metastore.txn.TxnHandler.checkQFileTestHack(TxnHandler.java:1147)
>
> at
> org.apache.hadoop.hive.metastore.txn.TxnHandler.(TxnHandler.java:117)
>
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTxnHandler(HiveMetaStore.java:568)
>
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_open_txns(HiveMetaStore.java:5382)
>
> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:606)
>
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
>
> at com.sun.proxy.$Proxy1.get_open_txns(Unknown Source)
>
> at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_open_txns.getResult(ThriftHiveMetastore.java:11227)
>
> at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_open_txns.getResult(ThriftHiveMetastore.java:11212)
>
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>
> at
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
>
> at
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
>
> at java.security.AccessController.doPrivileged(Native Method)
>
> at javax.security.auth.Subject.doAs(Subject.java:415)
>
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>
> at
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
>
> at
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
>
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>
> at java.lang.Thread.run(Thread.java:745)
>
> 2015-05-07 10:39:25,240 ERROR [main]: ql.Driver
> (SessionState.java:printError(861)) - FAILED: LockException [Error 10280]:
> Error communicating with the metastore
>
> org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with
> the metastore
>
> at
> org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getValidTxns(DbTxnManager.java:300)
>
> at org.apache.hadoop.hive.ql.Driver.recordValidTxns(Driver.java:927)
>
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:403)
>
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:307)
>
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1112)
>
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1160)
>
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
>
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1039)
>
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
>
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
>
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
>
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:754)
>
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
>
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:606)
>
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
> Caused by: org.apache.thrift.transport.TTransportException
>
> at
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>
> at org.apache.thrift.transport.TTransport.readAll(TTransport.jav

Re: Hive : plan serialization format option

2015-05-05 Thread Bhagwan S. Soni
Please find attached error log for the same.

On Tue, May 5, 2015 at 11:36 PM, Jason Dere  wrote:

>  Looks like you are running into
> https://issues.apache.org/jira/browse/HIVE-8321, fixed in Hive-0.14.
> You might be stuck having to use Kryo; what are the issues you are having
> with Kryo?
>
>
>  Thanks,
> Jason
>
>  On May 5, 2015, at 4:28 AM, Bhagwan S. Soni 
> wrote:
>
>  Bottom of the log:
>
> at java.beans.Encoder.writeObject(Encoder.java:74)
>
> at java.beans.XMLEncoder.writeObject(XMLEncoder.java:327)
>
> at java.beans.Encoder.writeExpression(Encoder.java:330)
>
> at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:454)
>
> at
> java.beans.DefaultPersistenceDelegate.doProperty(DefaultPersistenceDelegate.java:194)
>
> at
> java.beans.DefaultPersistenceDelegate.initBean(DefaultPersistenceDelegate.java:256)
>
> ... 98 more
>
> Caused by: java.lang.NullPointerException
>
> at java.lang.StringBuilder.(StringBuilder.java:109)
>
> at
> org.apache.hadoop.hive.serde2.typeinfo.BaseCharTypeInfo.getQualifiedName(BaseCharTypeInfo.java:49)
>
> at
> org.apache.hadoop.hive.serde2.typeinfo.BaseCharTypeInfo.getQualifiedName(BaseCharTypeInfo.java:45)
>
> at
> org.apache.hadoop.hive.serde2.typeinfo.VarcharTypeInfo.getTypeName(VarcharTypeInfo.java:37)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:606)
>
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
>
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:606)
>
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
>
> at java.beans.Statement.invokeInternal(Statement.java:292)
>
> at java.beans.Statement.access$000(Statement.java:58)
>
> at java.beans.Statement$2.run(Statement.java:185)
>
> at java.security.AccessController.doPrivileged(Native Method)
>
> at java.beans.Statement.invoke(Statement.java:182)
>
> at java.beans.Expression.getValue(Expression.java:153)
>
> at
> java.beans.DefaultPersistenceDelegate.doProperty(DefaultPersistenceDelegate.java:193)
>
> at
> java.beans.DefaultPersistenceDelegate.initBean(DefaultPersistenceDelegate.java:256)
>
> ... 111 more
>
> Job Submission failed with exception
> 'java.lang.RuntimeException(java.lang.RuntimeException: Cannot serialize
> object)'
>
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
>
> On Tue, May 5, 2015 at 3:10 PM, Jason Dere  wrote:
>
>> kryo/javaXML are the only available options. What are the errors you see
>> with each setting?
>>
>>
>>  On May 1, 2015, at 9:41 AM, Bhagwan S. Soni 
>> wrote:
>>
>>   Hi Hive Users,
>>
>>  I'm using Cloudera's Hive 0.13, which by default uses the Kryo
>> plan serialization format:
>>
>>  <property>
>>    <name>hive.plan.serialization.format</name>
>>    <value>kryo</value>
>>  </property>
>>
>>  As I'm facing issues with Kryo, can anyone help me identify the other
>> available options for the Hive plan serialization format?
>>
>>  I know of one option, javaXML, but in my case it is not working.
>>
>>
>>
>>
>>
>>
>
>
2015-04-21 07:30:36,717 WARN  [main] conf.HiveConf (HiveConf.java:initialize(1491)) - DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/jars/hive-common-0.13.1-cdh5.2.1.jar!/hive-log4j.properties
HJOBNAME=mdhdy001
LASTLOADDATE=17000101
hiveconf:LASTLOADDATE=17000101
RUNNING_MODE=
Total jobs = 5
Launching Job 1 out of 5
Launching Job 2 out of 5
Number of reduce tasks not specified. Defaulting to jobconf value of: 10
In order to change the average load for a reducer (in bytes):
Number of reduce tasks not specified. Defaulting to jobconf value of: 10
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to change the average load for a reducer (in bytes):
In order to limit the maximum number of reducers:
  se

Re: downloading RDBMS table data to Hive with Sqoop import

2015-05-05 Thread Bhagwan S. Soni
You have to write a separate HQL job to handle updates and deletes; you
cannot do this directly from Sqoop.
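
One common pattern is to Sqoop the changed rows into a staging table and
reconcile with an INSERT OVERWRITE. A minimal sketch, with hypothetical
tables base and staging keyed on id (staging rows win; deletes would still
need a tombstone flag):

-- keep the staging row when one exists, otherwise keep the base row
INSERT OVERWRITE TABLE base
SELECT
  COALESCE(s.id, b.id)     AS id,
  COALESCE(s.col1, b.col1) AS col1
FROM base b
FULL OUTER JOIN staging s ON b.id = s.id;

One caveat of this sketch: COALESCE falls back to the base value when a
staging column is legitimately NULL.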

On Wed, May 6, 2015 at 12:02 AM, Divakar Reddy 
wrote:

> As per my knowledge Sqoop doesn't support updates and deletes.
>
> We are handling like:
>
> 1) Drop the particular data from the partitioned table (by the partition
> column) and load it again with a condition in Sqoop, like --query "select *
> from xyz where date = '2015-04-02'"
>
> Thanks,
> Divakar
>
> On Tue, May 5, 2015 at 10:41 AM, Ashok Kumar  wrote:
>
>> Hi gurus,
>>
>> I can use Sqoop import to get RDBMS data (say, Oracle) into Hive first and
>> then use incremental append for new rows with a PK and last value.
>>
>> However, how do you account for updates and deletes with Sqoop without a
>> full load of the table from RDBMS to Hive?
>>
>> Thanks
>>
>
>


Re: Hive : plan serialization format option

2015-05-05 Thread Bhagwan S. Soni
Bottom of the log:

at java.beans.Encoder.writeObject(Encoder.java:74)

at java.beans.XMLEncoder.writeObject(XMLEncoder.java:327)

at java.beans.Encoder.writeExpression(Encoder.java:330)

at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:454)

at
java.beans.DefaultPersistenceDelegate.doProperty(DefaultPersistenceDelegate.java:194)

at
java.beans.DefaultPersistenceDelegate.initBean(DefaultPersistenceDelegate.java:256)

... 98 more

Caused by: java.lang.NullPointerException

at java.lang.StringBuilder.(StringBuilder.java:109)

at
org.apache.hadoop.hive.serde2.typeinfo.BaseCharTypeInfo.getQualifiedName(BaseCharTypeInfo.java:49)

at
org.apache.hadoop.hive.serde2.typeinfo.BaseCharTypeInfo.getQualifiedName(BaseCharTypeInfo.java:45)

at
org.apache.hadoop.hive.serde2.typeinfo.VarcharTypeInfo.getTypeName(VarcharTypeInfo.java:37)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)

at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)

at java.beans.Statement.invokeInternal(Statement.java:292)

at java.beans.Statement.access$000(Statement.java:58)

at java.beans.Statement$2.run(Statement.java:185)

at java.security.AccessController.doPrivileged(Native Method)

at java.beans.Statement.invoke(Statement.java:182)

at java.beans.Expression.getValue(Expression.java:153)

at
java.beans.DefaultPersistenceDelegate.doProperty(DefaultPersistenceDelegate.java:193)

at
java.beans.DefaultPersistenceDelegate.initBean(DefaultPersistenceDelegate.java:256)

... 111 more

Job Submission failed with exception
'java.lang.RuntimeException(java.lang.RuntimeException: Cannot serialize
object)'

FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.mr.MapRedTask

On Tue, May 5, 2015 at 3:10 PM, Jason Dere  wrote:

>  kryo/javaXML are the only available options. What are the errors you see
> with each setting?
>
>
>  On May 1, 2015, at 9:41 AM, Bhagwan S. Soni 
> wrote:
>
>   Hi Hive Users,
>
>  I'm using Cloudera's Hive 0.13, which by default uses the Kryo
> plan serialization format:
>
>  <property>
>    <name>hive.plan.serialization.format</name>
>    <value>kryo</value>
>  </property>
>
>  As I'm facing issues with Kryo, can anyone help me identify the other
> available options for the Hive plan serialization format?
>
>  I know of one option, javaXML, but in my case it is not working.
>
>
>
>
>
>


Re: comparing timestamp columns in Hive

2015-05-03 Thread Bhagwan S. Soni
I tried it, and it seems to work for me.

In your case t and tmp are two tables, but what is tmp.object_id?

Can you provide sample data for t and tmp?
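
For what it's worth, a hedged alternative to the bigint cast from the thread
below is comparing the canonical string forms (a sketch on the same tables):

select t.object_id, t.op_type, t.op_time, tmp.maxDDLTime
from t join tmp on t.object_id = tmp.object_id
where cast(t.op_time as string) = cast(tmp.maxDDLTime as string);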

On Mon, May 4, 2015 at 2:53 AM, Mich Talebzadeh  wrote:

> Hi,
>
>
>
> Just wanted to raise this one.
>
>
>
> It sounds like equating two timestamp columns does not work in Hive. In
> the following, both t.op_time and tmp.maxDDLTime are defined
> as timestamp. When I do the following
>
>
>
> select t.object_id, t.op_type, t.op_time, tmp.maxDDLTime
>
> from t, tmp
>
> where t.object_id = tmp.object_id and t.op_time = tmp.maxDDLTime;
>
>
>
> it returns zero rows and does not work!
>
>
>
> However, when I cast timestamp columns to bigint it works
>
>
>
> select t.object_id, t.op_type, t.op_time, cast(t.op_time as bigint),
> tmp.maxDDLTime, cast(tmp.maxDDLTime as bigint)
>
> from t, tmp
>
> where t.object_id = tmp.object_id and cast(t.op_time as bigint) =
> cast(tmp.maxDDLTime as bigint);
>
>
>
>
> +-------------+-----------+-----------------------+------------+-----------------------+------------+
> | t.object_id | t.op_type | t.op_time             | _c3        | tmp.maxddltime        | _c5        |
> +-------------+-----------+-----------------------+------------+-----------------------+------------+
> | 3644834     | 2         | 2015-05-01 12:42:51.0 | 1430480571 | 2015-05-01 12:42:51.0 | 1430480571 |
> | 3636987     | 2         | 2015-05-01 12:42:51.0 | 1430480571 | 2015-05-01 12:42:51.0 | 1430480571 |
> +-------------+-----------+-----------------------+------------+-----------------------+------------+
>
>
>
> Is this expected? In other words, to equate timestamp columns do we need to
> cast them to bigint or numeric?
>
>
>
> Thanks,
>
>
>
> Mich Talebzadeh
>
>
>
> http://talebzadehmich.wordpress.com
>
>


Hive : plan serialization format option

2015-05-01 Thread Bhagwan S. Soni
Hi Hive Users,

I'm using Cloudera's Hive 0.13, which by default uses the Kryo plan
serialization format:

<property>
  <name>hive.plan.serialization.format</name>
  <value>kryo</value>
</property>

As I'm facing issues with Kryo, can anyone help me identify the other
available options for the Hive plan serialization format?

I know one option, javaXML, but in my case it is not working.
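
For reference, switching the format for a session is a one-liner (a sketch;
the property can also be set in hive-site.xml, and kryo/javaXML are the only
values I know of):

set hive.plan.serialization.format=javaXML;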


Re: parquet table

2015-05-01 Thread Bhagwan S. Soni
Please specify the partition as well when loading data into a partitioned
table.
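
For the table in the thread below, that would look something like this (a
sketch; the partition values 2009 and 6 are guesses based on the file name):

LOAD DATA LOCAL INPATH '/tmp/weblogs/20090603-access.log.gz'
INTO TABLE raw PARTITION (fiscal_year = 2009, fiscal_period = 6);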

On Fri, May 1, 2015 at 8:22 PM, Sean Busbey  wrote:

> -user@hadoop to bcc
>
> Kumar,
>
> I'm copying your question over to the Apache Hive user list (
> user@hive.apache.org). Please keep your questions about using Hive there.
> The Hadoop user list (u...@hadoop.apache.org) is just for that project.
>
> On Fri, May 1, 2015 at 9:32 AM, Asit Parija 
> wrote:
>
>> Hi Kumar ,
>>   You can remove the STORED AS TEXTFILE part and then try that out; by
>> default it should be able to read the .gz files (if they are
>> comma-delimited CSV files).
>>
>>
>> Thanks
>> Asit
>>
>> On Fri, May 1, 2015 at 10:55 AM, Kumar Jayapal 
>> wrote:
>>
>>> Hello Nitin,
>>>
>>> Didn't understand what you mean. Are you telling me to set
>>> COMPRESSION_CODEC=gzip?
>>>
>>> thanks
>>> Jay
>>>
>>> On Thu, Apr 30, 2015 at 10:02 PM, Nitin Pawar 
>>> wrote:
>>>
 You loaded a gz file into a table stored as TEXTFILE;
 either define the compression format or uncompress the file and load it.

 On Fri, May 1, 2015 at 9:17 AM, Kumar Jayapal 
 wrote:

> Created table  CREATE TABLE raw (line STRING) PARTITIONED BY
> (FISCAL_YEAR  smallint, FISCAL_PERIOD smallint)
> STORED AS TEXTFILE;
>
> and loaded it with data.
>
> LOAD DATA LOCAL INPATH '/tmp/weblogs/20090603-access.log.gz' INTO
> TABLE raw;
>
> I have to load it into a parquet table
>
> When I run select * from raw, it shows all null values.
>
>
> NULL  NULL  NULL  NULL  NULL  NULL  NULL  NULL
> NULL  NULL  NULL  NULL  NULL  NULL  NULL  NULL
> NULL  NULL  NULL  NULL  NULL  NULL  NULL  NULL
> NULL  NULL  NULL  NULL  NULL  NULL  NULL  NULL
> Why is it not showing the actual data in the file? Will it show once I
> load it into the parquet table?
>
> Please let me know if I am doing anything wrong.
>
>
> Thanks
> jay
>
>


 --
 Nitin Pawar

>>>
>>>
>>
>
>
> --
> Sean
>


Hive: Kryo Exception

2015-04-29 Thread Bhagwan S. Soni
Hi Hive Users,

I'm executing one of my HQL queries, which has join, union, and insert
overwrite operations, and it works fine if I run it just once.
If I execute the same job a second time, I face this issue.
Can someone help me identify in which scenario we get this exception?
Please find the complete log in the attachment.

Error: java.lang.RuntimeException:
org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered
unregistered class ID: 107
Serialization trace:
rowSchema (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.UnionOperator)
childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
at
org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:364)
at
org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:275)
at
org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:254)
at
org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:440)
at
org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:433)
at
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:587)
at
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.(MapTask.java:169)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException:
Encountered unregistered class ID: 107
Serialization trace:
rowSchema (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.UnionOperator)
childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
2015-04-21 07:30:36,717 WARN  [main] conf.HiveConf (HiveConf.java:initialize(1491)) - DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/jars/hive-common-0.13.1-cdh5.2.1.jar!/hive-log4j.properties
HJOBNAME=mdhdy001
LASTLOADDATE=17000101
hiveconf:LASTLOADDATE=17000101
RUNNING_MODE=
Total jobs = 5
Launching Job 1 out of 5
Launching Job 2 out of 5
Number of reduce tasks not specified. Defaulting to jobconf value of: 10
In order to change the average load for a reducer (in bytes):
Number of reduce tasks not specified. Defaulting to jobconf value of: 10
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to change the average load for a reducer (in bytes):
In order to limit the maximum number of reducers:
  set hive.exec.reducers.bytes.per.reducer=<number>
  set hive.exec.reducers.max=<number>
In order to limit the maximum number of reducers:
In order to set a constant number of reducers:
  set hive.exec.reducers.max=<number>
  set mapreduce.job.reduces=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1429549389861_0673, Tracking URL = 
http://dkhc2603.dcsg.com:8088/proxy/application_1429549389861_0673/
Kill Command = 
/opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/lib/hadoop/bin/hadoop job  
-kill job_1429549389861_0673
Starting Job = job_1429549389861_0674, Tracking URL = 
http://dkhc2603.dcsg.com:8088/proxy/application_1429549389861_0674/
Kill Command = 
/opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/lib/hadoop/bin/hadoop job  
-kill job_1429549389861_0674
Hadoop job information for Stage-1: number of mappers: 5; number of reducers: 10
2015-04-21 07:31:00,430 Stage-1 map = 0%,  reduce = 0%
Hadoop job information for Stage-13: number of mappers: 5; number of reducers: 
10
2015-04-21 07:31:02,687 Stage-13 map = 0%,  reduce = 0%
2015-04-21 07:31:09,462 Stage-1 map = 20%,  reduce = 0%, Cumulative CPU 2.23 sec
2015-04-21 07:31:10,539 Stage-1 map = 60%,  reduce = 0%, Cumulative CPU 8.38 sec
2015-04-21 07:31:11,420 Stage-13 map = 20%,  reduce = 0%, Cumulative 

Re: create statement is not working

2015-04-24 Thread Bhagwan S. Soni
In my case it just appeared as a hang.
I understood the problem; now I want to know how I can avoid it.
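
One way to see whether a lock is what makes the CREATE appear hung is to
check from a second session while it is stuck (a sketch; this requires the
lock manager to be enabled, i.e. hive.support.concurrency=true):

SHOW LOCKS;
SHOW LOCKS aggregated_rspns EXTENDED;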

On Sat, Apr 25, 2015 at 3:24 AM, Eugene Koifman 
wrote:

>  No, I was suggesting a possible explanation for why Create Table may be
> blocked.
> that was the issue in HIVE-10242, but there were no exceptions – it just
> appeared as a hang.
>
>   From: Mich Talebzadeh 
> Reply-To: "user@hive.apache.org" 
> Date: Friday, April 24, 2015 at 2:49 PM
>
> To: "user@hive.apache.org" 
> Subject: RE: create statement is not working
>
>   Hi Eugene,
>
>
>
> Is this with regard to the following thread of mine?
>
>
>
> org.apache.hadoop.hive.ql.lockmgr.LockException: No record of lock could
> be found, may have timed out
>
>
>
> Thanks
>
>
>
> Mich Talebzadeh
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *From:* Eugene Koifman [mailto:ekoif...@hortonworks.com
> ]
> *Sent:* 24 April 2015 22:39
> *To:* user@hive.apache.org
> *Subject:* Re: create statement is not working
>
>
>
> Do you have hive.support.concurrency=true and long running insert
> overwrite statements running concurrently?
>
> If so, you may be hitting something like
> https://issues.apache.org/jira/browse/HIVE-10242
>
>
>
> *From: *Mich Talebzadeh 
> *Reply-To: *"user@hive.apache.org" 
> *Date: *Friday, April 24, 2015 at 1:20 PM
> *To: *"user@hive.apache.org" 
> *Subject: *RE: create statement is not working
>
>
>
> That is a valid point
>
>
>
> I just created a database called "table" in Hive, which IMO does not make
> sense or should not be allowed:
>
>
>
> hive> create database table;
>
> OK
>
> Time taken: 0.488 seconds
>
>
>
> Now try the same in Sybase, given that "table" is a keyword!
>
>
>
> 1> create database table on datadev01_HDD = 100 log on logdev01_HDD = 50
>
> 2> go
>
> Msg 156, Level 15, State 2:
>
> Server 'SYB_157', Line 1:
>
> Incorrect syntax near the keyword 'table'.
>
>
>
>
>
>
>
> Mich Talebzadeh
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *From:* Bhagwan S. Soni [mailto:bhgwnsson...@gmail.com
> ]
> *Sent:* 24 April 2015 21:05
> *To:* user@hive.apache.org
> *Subject:* Re: create statement is not working
>
>
>
> This is the same thing I'm trying to explain: there is nothing wrong with
> the CREATE statement. The issue is related to infrastructure, which I'm
> trying to resolve, and I need help if someone has already faced this issue.
>
> As crea

Re: create statement is not working

2015-04-24 Thread Bhagwan S. Soni
This is the same thing I'm trying to explain: there is nothing wrong with the
CREATE statement. The issue is related to infrastructure, which I'm trying
to resolve, and I need help if someone has already faced this issue.

As the CREATE statement does not display any log, I'm not able to trace it.

On Sat, Apr 25, 2015 at 1:08 AM, Mich Talebzadeh 
wrote:

> Hm
>
>
>
> It looks OK. However, I personally would not call a database "process". It
> could be a reserved word somewhere, or an application could clash with that
> word (outside of Hive).
>
>
>
>
>
> hive> create database process;
>
> OK
>
> Time taken: 0.415 seconds
>
> hive> CREATE EXTERNAL TABLE IF NOT EXISTS PROCESS.aggregated_rspns
>
> > (
>
> > id int,
>
> > dt string,
>
> > hour string,
>
> > rspns_count bigint,
>
> > highest_rspns_count bigint
>
> > )
>
> > ROW FORMAT DELIMITED
>
> > FIELDS TERMINATED BY '\001'
>
> > LOCATION '/xyz/pqr/aggregated_rspns';
>
> OK
>
> Time taken: 0.292 seconds
>
> hive> desc PROCESS.aggregated_rspns;
>
> OK
>
> id  int
>
> dt  string
>
> hourstring
>
> rspns_count bigint
>
> highest_rspns_count bigint
>
> Time taken: 0.12 seconds, Fetched: 5 row(s)
>
> hive> quit;
>
> hduser@rhes564::/home/hduser/dba/bin> hdfs dfs -ls /xyz/pqr/
>
> 15/04/24 20:43:30 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
>
> Found 1 items
>
> drwxr-xr-x   - hduser supergroup  0 2015-04-24 20:42
> /xyz/pqr/aggregated_rspns
>
>
>
> HTH
>
>
>
>
>
> Mich Talebzadeh
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *From:* Bhagwan S. Soni [mailto:bhgwnsson...@gmail.com]
> *Sent:* 24 April 2015 17:26
> *To:* user@hive.apache.org
> *Subject:* create statement is not working
>
>
>
> Hi Hive Users,
>
> I'm using Cloudera's Hive 0.13.
>
> I'm facing an issue when running any CREATE statement. Other
> operations like DML, DROP, and ALTER are working fine. Below is the sample
> statement which I'm trying to run:
>
> CREATE EXTERNAL TABLE IF NOT EXISTS PROCESS.aggregated_rspns
> (
> id int,
> dt string,
> hour string,
> rspns_count bigint,
> highest_rspns_count bigint
> )
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '\001'
> LOCATION '/xyz/pqr/aggregated_rspns';
>
> Could someone help me resolve this issue?
>
> Please let me know if any further information is required.
>


Re: create statement is not working

2015-04-24 Thread Bhagwan S. Soni
To be more specific, it happens occasionally.

On Fri, Apr 24, 2015 at 10:59 PM, Bhagwan S. Soni 
wrote:

> There is no syntax error; everything required for this query is in place.
> When I execute this statement, it makes no progress. Sometimes it takes a
> very long time to complete, and sometimes it just hangs for an undefined
> time.
>
> On Fri, Apr 24, 2015 at 10:10 PM, gabriel balan 
> wrote:
>
>>  Hi
>>
>> What's the error message?
>>
>> if you get "FAILED: SemanticException [Error 10072]: Database does not
>> exist: PROCESS"
>> then run
>>
>> create schema if not exists process;
>>
>> After that the DDL is accepted just fine on my hive-0.13.1-cdh5.3.0.
>>
>> hth
>> GB
>>
>>
>> On 4/24/2015 12:25 PM, Bhagwan S. Soni wrote:
>>
>>   Hi Hive Users,
>>
>>
>>  I'm using Cloudera's Hive 0.13.
>>  I'm facing an issue when running any CREATE statement. Other
>> operations like DML, DROP, and ALTER are working fine. Below is the sample
>> statement which I'm trying to run:
>>
>> CREATE EXTERNAL TABLE IF NOT EXISTS PROCESS.aggregated_rspns
>> (
>> id int,
>> dt string,
>> hour string,
>> rspns_count bigint,
>> highest_rspns_count bigint
>> )
>> ROW FORMAT DELIMITED
>> FIELDS TERMINATED BY '\001'
>> LOCATION '/xyz/pqr/aggregated_rspns';
>>
>>  Could someone help me resolve this issue?
>>  Please let me know if any further information is required.
>>
>>
>> --
>> The statements and opinions expressed here are my own and do not necessarily 
>> represent those of Oracle Corporation.
>>
>>
>


Re: create statement is not working

2015-04-24 Thread Bhagwan S. Soni
There is no syntax error; everything required for this query is in place.
When I execute this statement, it makes no progress. Sometimes it takes a
very long time to complete, and sometimes it just hangs for an undefined
time.

On Fri, Apr 24, 2015 at 10:10 PM, gabriel balan 
wrote:

>  Hi
>
> What's the error message?
>
> if you get "FAILED: SemanticException [Error 10072]: Database does not
> exist: PROCESS"
> then run
>
> create schema if not exists process;
>
> After that the DDL is accepted just fine on my hive-0.13.1-cdh5.3.0.
>
> hth
> GB
>
>
> On 4/24/2015 12:25 PM, Bhagwan S. Soni wrote:
>
>   Hi Hive Users,
>
>
>  I'm using Cloudera's Hive 0.13.
>  I'm facing an issue when running any CREATE statement. Other
> operations like DML, DROP, and ALTER are working fine. Below is the sample
> statement which I'm trying to run:
>
> CREATE EXTERNAL TABLE IF NOT EXISTS PROCESS.aggregated_rspns
> (
> id int,
> dt string,
> hour string,
> rspns_count bigint,
> highest_rspns_count bigint
> )
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '\001'
> LOCATION '/xyz/pqr/aggregated_rspns';
>
>  Could someone help me resolve this issue?
>  Please let me know if any further information is required.
>
>
> --
> The statements and opinions expressed here are my own and do not necessarily 
> represent those of Oracle Corporation.
>
>


create statement is not working

2015-04-24 Thread Bhagwan S. Soni
Hi Hive Users,


I'm using Cloudera's Hive 0.13.
I'm facing an issue when running any CREATE statement. Other
operations like DML, DROP, and ALTER are working fine. Below is the sample
statement which I'm trying to run:

CREATE EXTERNAL TABLE IF NOT EXISTS PROCESS.aggregated_rspns
(
id int,
dt string,
hour string,
rspns_count bigint,
highest_rspns_count bigint
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
LOCATION '/xyz/pqr/aggregated_rspns';

Could someone help me resolve this issue?
Please let me know if any further information is required.