[jira] [Created] (HIVE-15840) Webhcat test TestPig_5 failing with Pig on Tez at check for percent complete of job

2017-02-07 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-15840:
-

 Summary: Webhcat test TestPig_5 failing with Pig on Tez at check 
for percent complete of job
 Key: HIVE-15840
 URL: https://issues.apache.org/jira/browse/HIVE-15840
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai


TestPig_5 fails the percent-complete check when the job runs as Pig on Tez:
check_job_percent_complete failed. got percentComplete , expected 100% complete

Test commands:
curl -d user.name=daijy -d arg=-p -d arg=INPDIR=/tmp/templeton_test_data \
     -d arg=-p -d arg=OUTDIR=/tmp/output -d file=loadstore.pig \
     -X POST http://localhost:50111/templeton/v1/pig
curl http://localhost:50111/templeton/v1/jobs/job_1486502484681_0003?user.name=daijy

This is similar to HIVE-9351, which fixes Hive on Tez.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HIVE-15935) ACL is not set in ATS data

2017-02-15 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-15935:
-

 Summary: ACL is not set in ATS data
 Key: HIVE-15935
 URL: https://issues.apache.org/jira/browse/HIVE-15935
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai


When publishing ATS info, Hive does not set an ACL, which makes Hive ATS entries 
visible to all users. Tez ATS entries, on the other hand, use the Tez DAG ACL, 
which limits both the view and modify ACLs to the end user only. We should make 
them consistent. In this Jira, I am going to limit the ACLs to the end user for 
both Tez ATS and Hive ATS, and also provide the configs "hive.view.acls" and 
"hive.modify.acls" for users who need to override this.





[jira] [Created] (HIVE-15936) ConcurrentModificationException in ATSHook

2017-02-15 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-15936:
-

 Summary: ConcurrentModificationException in ATSHook
 Key: HIVE-15936
 URL: https://issues.apache.org/jira/browse/HIVE-15936
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai
 Attachments: HIVE-15936.1.patch

See ATSHook error:

{noformat}
java.util.ConcurrentModificationException
    at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437) ~[?:1.8.0_112]
    at java.util.HashMap$EntryIterator.next(HashMap.java:1471) ~[?:1.8.0_112]
    at java.util.HashMap$EntryIterator.next(HashMap.java:1469) ~[?:1.8.0_112]
    at java.util.AbstractCollection.toArray(AbstractCollection.java:196) ~[?:1.8.0_112]
    at com.google.common.collect.ImmutableMap.copyOf(ImmutableMap.java:290) ~[guava-14.0.1.jar:?]
    at org.apache.hadoop.hive.ql.log.PerfLogger.getEndTimes(PerfLogger.java:219) ~[hive-common-2.1.0.2.6.0.0-457.jar:2.1.0.2.6.0.0-457]
    at org.apache.hadoop.hive.ql.hooks.ATSHook.createPostHookEvent(ATSHook.java:347) ~[hive-exec-2.1.0.2.6.0.0-457.jar:2.1.0.2.6.0.0-457]
    at org.apache.hadoop.hive.ql.hooks.ATSHook$2.run(ATSHook.java:206) [hive-exec-2.1.0.2.6.0.0-457.jar:2.1.0.2.6.0.0-457]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_112]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{noformat}

According to [~jdere], the ATSHook currently accesses the PerfLogger on a 
separate thread, which means the main query thread can write to the PerfLogger 
at the same time.
The ATSHook should read the PerfLogger on the main query thread, before it 
hands execution off to the ATS logger thread.
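The proposed fix can be sketched in isolation. This is a minimal illustration of the snapshot-before-submit pattern, not the actual ATSHook code; the class and field names are invented:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Snapshot mutable shared state on the calling thread, then hand the copy to
// the background logger thread, so the main query thread can keep mutating
// the original map without risking ConcurrentModificationException.
public class SnapshotBeforeSubmit {
    static String loggedEntry;  // what the "ATS logger" thread recorded

    public static void main(String[] args) throws Exception {
        Map<String, Long> perfTimes = new HashMap<>();
        perfTimes.put("compile", 120L);

        // Copy while still on the caller's thread; iterating the live HashMap
        // on the logger thread is what triggered the exception above.
        Map<String, Long> snapshot = new HashMap<>(perfTimes);

        ExecutorService atsLogger = Executors.newSingleThreadExecutor();
        atsLogger.submit(() -> { loggedEntry = snapshot.toString(); }).get();
        atsLogger.shutdown();

        perfTimes.put("execute", 900L);  // main thread keeps writing; snapshot is unaffected
        System.out.println(loggedEntry); // {compile=120}
    }
}
```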





[jira] [Created] (HIVE-16305) Additional Datanucleus ClassLoaderResolverImpl leaks causing HS2 OOM

2017-03-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-16305:
-

 Summary: Additional Datanucleus ClassLoaderResolverImpl leaks 
causing HS2 OOM
 Key: HIVE-16305
 URL: https://issues.apache.org/jira/browse/HIVE-16305
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Daniel Dai
Assignee: Daniel Dai


This is a follow-up to HIVE-16160. We see additional ClassLoaderResolverImpl 
leaks even with that patch.





[jira] [Created] (HIVE-16323) HS2 JDOPersistenceManagerFactory.pmCache leaks after HIVE-14204

2017-03-28 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-16323:
-

 Summary: HS2 JDOPersistenceManagerFactory.pmCache leaks after 
HIVE-14204
 Key: HIVE-16323
 URL: https://issues.apache.org/jira/browse/HIVE-16323
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Daniel Dai
Assignee: Daniel Dai


Hive.loadDynamicPartitions creates threads, each with a new embedded rawstore, 
but never closes them, so we leak one PersistenceManager per such thread.
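The fix pattern is the usual one for per-thread resources; a minimal sketch with invented names (`RawStore` here is a stand-in counter, not the real metastore interface):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Each worker thread that opens its own store must release it in a finally
// block (try-with-resources below); otherwise every such thread leaks one
// PersistenceManager-like handle for the life of the process.
public class ThreadLocalStoreCleanup {
    static final AtomicInteger openStores = new AtomicInteger();

    static class RawStore implements AutoCloseable {
        RawStore() { openStores.incrementAndGet(); }
        @Override public void close() { openStores.decrementAndGet(); }
        void loadPartition() { /* pretend work */ }
    }

    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(() -> {
            try (RawStore store = new RawStore()) {  // closed even on failure
                store.loadPartition();
            }
        });
        worker.start();
        worker.join();
        System.out.println("open stores: " + openStores.get()); // open stores: 0
    }
}
```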





[jira] [Created] (HIVE-16520) Cache hive metadata in metastore

2017-04-24 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-16520:
-

 Summary: Cache hive metadata in metastore
 Key: HIVE-16520
 URL: https://issues.apache.org/jira/browse/HIVE-16520
 Project: Hive
  Issue Type: New Feature
  Components: Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


During the Hive 2 benchmark, we found that Hive metastore operations take a lot 
of time and thus slow down Hive compilation. In some extreme cases, they take 
much longer than the actual query run time. In particular, the latency of a 
cloud db is very high, and 90% of total query runtime is spent waiting for 
metastore SQL database operations. Based on this observation, metastore 
operation performance would be greatly enhanced by an in-memory structure that 
caches the database query results.
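The idea can be sketched as a minimal read-through cache. This is illustrative only; Hive's actual CachedStore also handles invalidation and periodic refresh, which this sketch omits, and the names here are invented:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Memoize expensive metastore lookups in memory so repeated compilations of
// queries over the same table skip the SQL round trip entirely.
public class MetadataCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> backingDb;  // stands in for the SQL database
    int dbCalls = 0;

    MetadataCache(Function<String, String> backingDb) { this.backingDb = backingDb; }

    String getTable(String name) {
        // First call hits the backing database; later calls are served from memory.
        return cache.computeIfAbsent(name, k -> { dbCalls++; return backingDb.apply(k); });
    }

    public static void main(String[] args) {
        MetadataCache c = new MetadataCache(name -> "schema-of-" + name);
        c.getTable("sales");
        c.getTable("sales");           // served from cache
        System.out.println(c.dbCalls); // 1
    }
}
```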





[jira] [Created] (HIVE-16586) Fix Unit test failures when CachedStore is enabled

2017-05-04 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-16586:
-

 Summary: Fix Unit test failures when CachedStore is enabled
 Key: HIVE-16586
 URL: https://issues.apache.org/jira/browse/HIVE-16586
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


Though we don't plan to turn on CachedStore by default, we want to make sure 
the unit tests pass with CachedStore. I turn CachedStore on in the patch in 
order to run the unit tests with it, but will turn it off when committing.





[jira] [Created] (HIVE-16609) col='__HIVE_DEFAULT_PARTITION__' condition in select statement may produce wrong result

2017-05-08 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-16609:
-

 Summary: col='__HIVE_DEFAULT_PARTITION__' condition in select 
statement may produce wrong result
 Key: HIVE-16609
 URL: https://issues.apache.org/jira/browse/HIVE-16609
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


A variation of drop_partitions_filter4.q produces a wrong result:
{code}
create table ptestfilter (a string, b int) partitioned by (c string, d int);
INSERT OVERWRITE TABLE ptestfilter PARTITION (c,d) select 'Col1', 1, null, null;
INSERT OVERWRITE TABLE ptestfilter PARTITION (c,d) select 'Col2', 2, null, 2;
INSERT OVERWRITE TABLE ptestfilter PARTITION (c,d) select 'Col3', 3, 'Uganda', null;
select * from ptestfilter where c='__HIVE_DEFAULT_PARTITION__' or lower(c)='a';
{code}
The "select" statement does not produce the rows containing 
"__HIVE_DEFAULT_PARTITION__".

Note "select * from ptestfilter where c is null or lower(c)='a';" works fine.

In the query, c is a non-string partition column, and we need another condition 
containing a UDF so that the condition is not recognized by 
PartFilterExprUtil.makeExpressionTree in ObjectStore. HIVE-11208/HIVE-15923 
address a similar issue for drop partition; however, select is not covered.





[jira] [Created] (HIVE-16633) username for ATS data shall always be the uid who submit the job

2017-05-09 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-16633:
-

 Summary: username for ATS data shall always be the uid who submit 
the job
 Key: HIVE-16633
 URL: https://issues.apache.org/jira/browse/HIVE-16633
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai
 Attachments: HIVE-16633.1.patch

When submitting a query via HS2, the username for ATS data becomes the HS2 
process uid when hive.security.authenticator.manager is set to 
org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator. It should always be 
the real user id, to make ATS data more secure and useful.





[jira] [Created] (HIVE-16638) Get rid of magic constant __HIVE_DEFAULT_PARTITION__ in syntax

2017-05-10 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-16638:
-

 Summary: Get rid of magic constant __HIVE_DEFAULT_PARTITION__ in 
syntax
 Key: HIVE-16638
 URL: https://issues.apache.org/jira/browse/HIVE-16638
 Project: Hive
  Issue Type: Improvement
Reporter: Daniel Dai


As per the discussion in HIVE-16609, we'd like to get rid of the magic constant 
__HIVE_DEFAULT_PARTITION__ in the syntax. There are two use cases I currently 
see:
1. alter table t drop partition(p='__HIVE_DEFAULT_PARTITION__');
2. select * from t where p='__HIVE_DEFAULT_PARTITION__';

Currently we rewrite p='__HIVE_DEFAULT_PARTITION__' to "p is null" internally 
for processing. It would be good to promote this to the syntax level and get 
rid of p='__HIVE_DEFAULT_PARTITION__' completely.
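The internal rewrite described above can be sketched as follows. This is a minimal illustration with a hypothetical helper, not Hive's actual planner code, which operates on expression trees rather than strings:

```java
// An equality test against the magic constant is turned into an IS NULL check,
// so the filter that reaches the metastore no longer depends on the sentinel
// string (which mis-compares for non-string partition columns).
public class DefaultPartitionRewrite {
    static final String DEFAULT_PARTITION = "__HIVE_DEFAULT_PARTITION__";

    static String rewrite(String col, String literal) {
        if (DEFAULT_PARTITION.equals(literal)) {
            return col + " IS NULL";
        }
        return col + " = '" + literal + "'";
    }

    public static void main(String[] args) {
        System.out.println(rewrite("c", DEFAULT_PARTITION)); // c IS NULL
        System.out.println(rewrite("c", "Uganda"));          // c = 'Uganda'
    }
}
```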





[jira] [Created] (HIVE-16662) Fix remaining unit test failures when CachedStore is enabled

2017-05-12 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-16662:
-

 Summary: Fix remaining unit test failures when CachedStore is 
enabled
 Key: HIVE-16662
 URL: https://issues.apache.org/jira/browse/HIVE-16662
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


In HIVE-16586, I fixed most of the UT failures for CachedStore. This ticket 
covers the remaining ones, plus regressions when the stats methods in 
CachedStore are enabled.





[jira] [Created] (HIVE-16779) CachedStore refresher leak PersistenceManager resources

2017-05-26 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-16779:
-

 Summary: CachedStore refresher leak PersistenceManager resources
 Key: HIVE-16779
 URL: https://issues.apache.org/jira/browse/HIVE-16779
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


See OOM when running CachedStore. We didn't shut down the rawstore in the refresh thread.





[jira] [Created] (HIVE-16848) NPE during CachedStore refresh

2017-06-07 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-16848:
-

 Summary: NPE during CachedStore refresh
 Key: HIVE-16848
 URL: https://issues.apache.org/jira/browse/HIVE-16848
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


CachedStore refresh only happens once due to an NPE; the 
ScheduledExecutorService cancels subsequent refreshes after the uncaught 
exception:

{code}
java.lang.NullPointerException
    at org.apache.hadoop.hive.metastore.cache.CachedStore$CacheUpdateMasterWork.updateTableColStats(CachedStore.java:458)
    at org.apache.hadoop.hive.metastore.cache.CachedStore$CacheUpdateMasterWork.run(CachedStore.java:348)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
{code}
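The underlying behavior is standard java.util.concurrent semantics: a task scheduled with scheduleAtFixedRate is silently cancelled after an uncaught exception, which is why one NPE stopped all later refreshes. A minimal sketch of the defensive wrapper that keeps the periodic refresh alive:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SafeScheduledRefresh {
    static final AtomicInteger runs = new AtomicInteger();

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        exec.scheduleAtFixedRate(() -> {
            try {
                runs.incrementAndGet();
                throw new NullPointerException("bad cache entry");  // simulated refresh failure
            } catch (Exception e) {
                // Log and continue in real code. Without this catch, the
                // uncaught exception cancels the schedule and run 2 never happens.
            }
        }, 0, 20, TimeUnit.MILLISECONDS);
        Thread.sleep(150);
        exec.shutdownNow();
        System.out.println("refreshes: " + runs.get());  // several, despite the NPE each time
    }
}
```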





[jira] [Created] (HIVE-16871) CachedStore.get_aggr_stats_for has side affect

2017-06-09 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-16871:
-

 Summary: CachedStore.get_aggr_stats_for has side affect
 Key: HIVE-16871
 URL: https://issues.apache.org/jira/browse/HIVE-16871
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


Every get_aggr_stats_for call accumulates the stats and propagates the totals 
into the first partition's stats object. The values keep accumulating, so 
follow-up invocations return wrong results.
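The fix pattern can be sketched without Hive types (illustrative only; the real stats objects are much richer than a bare row count): aggregate into a fresh accumulator instead of folding the totals into the first partition's own object.

```java
import java.util.Arrays;
import java.util.List;

public class AggrStatsCopy {
    // Correct: a fresh accumulator leaves the cached per-partition objects
    // untouched, so repeated calls return the same answer.
    static long[] aggregate(List<long[]> perPartitionRowCounts) {
        long[] total = new long[1];  // NOT perPartitionRowCounts.get(0)
        for (long[] s : perPartitionRowCounts) {
            total[0] += s[0];
        }
        return total;
    }

    public static void main(String[] args) {
        List<long[]> parts = Arrays.asList(new long[]{10}, new long[]{5});
        System.out.println(aggregate(parts)[0]); // 15
        System.out.println(aggregate(parts)[0]); // still 15: inputs were not mutated
    }
}
```

Had `aggregate` used `parts.get(0)` as the accumulator, the second call would have returned 30, which is the side effect this issue describes.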





[jira] [Created] (HIVE-17007) NPE introduced by HIVE-16871

2017-06-30 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-17007:
-

 Summary: NPE introduced by HIVE-16871
 Key: HIVE-17007
 URL: https://issues.apache.org/jira/browse/HIVE-17007
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


Stack:
{code}
2017-06-30T02:39:43,739 ERROR [HiveServer2-Background-Pool: Thread-2873]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(200)) - MetaException(message:java.lang.NullPointerException)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:6066)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_core(HiveMetaStore.java:3993)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_with_environment_context(HiveMetaStore.java:3944)
    at sun.reflect.GeneratedMethodAccessor142.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
    at com.sun.proxy.$Proxy32.alter_table_with_environment_context(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table_with_environmentContext(HiveMetaStoreClient.java:397)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table_with_environmentContext(SessionHiveMetaStoreClient.java:325)
    at sun.reflect.GeneratedMethodAccessor75.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
    at com.sun.proxy.$Proxy33.alter_table_with_environmentContext(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor75.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2306)
    at com.sun.proxy.$Proxy33.alter_table_with_environmentContext(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:624)
    at org.apache.hadoop.hive.ql.exec.DDLTask.alterTable(DDLTask.java:3490)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:383)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1905)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1607)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1354)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1123)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1116)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242)
    at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:334)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:348)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hive.metastore.cache.SharedCache.getCachedTableColStats(SharedCache.java:140)
    at org.apache.hadoop.hive.metastore.cache.CachedStore.getTableColumnStatistics(CachedStore.java:1409)
    at sun.reflect.GeneratedMethodAccessor165.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
    at com.sun.proxy
{code}

[jira] [Created] (HIVE-17208) Repl dump should pass in db/table information to authorization API

2017-07-30 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-17208:
-

 Summary: Repl dump should pass in db/table information to 
authorization API
 Key: HIVE-17208
 URL: https://issues.apache.org/jira/browse/HIVE-17208
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Daniel Dai
Assignee: Daniel Dai


"repl dump" does not provide db/table information, which is necessary for 
authorization replication in Ranger.





[jira] [Created] (HIVE-17254) Skip updating AccessTime of recycled files in ReplChangeManager

2017-08-04 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-17254:
-

 Summary: Skip updating AccessTime of recycled files in 
ReplChangeManager
 Key: HIVE-17254
 URL: https://issues.apache.org/jira/browse/HIVE-17254
 Project: Hive
  Issue Type: Bug
  Components: repl
Reporter: Daniel Dai
Assignee: Daniel Dai


For recycled files, we update both ModifyTime and AccessTime:
fs.setTimes(path, now, now);
On some versions of HDFS, this is not allowed when 
"dfs.namenode.accesstime.precision" is set to 0. Though the issue is solved in 
HDFS-9208, we don't use AccessTime in CM, so updating it can be skipped and we 
don't have to fail in this scenario.
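A sketch of the proposed change, using a stub in place of HDFS. The Hadoop FileSystem.setTimes javadoc documents -1 as "do not change this timestamp", so on a real cluster the fix amounts to calling fs.setTimes(path, now, -1):

```java
// Stub mirroring FileSystem.setTimes semantics: a value of -1 leaves that
// timestamp unchanged. This lets CM touch ModifyTime without ever updating
// AccessTime, which is the operation that fails when accesstime.precision is 0.
public class SkipAccessTime {
    static long mtime = 0, atime = 0;

    static void setTimes(long newMtime, long newAtime) {
        if (newMtime != -1) mtime = newMtime;
        if (newAtime != -1) atime = newAtime;
    }

    public static void main(String[] args) {
        setTimes(5, 5);
        setTimes(9, -1);  // the proposed fix: update ModifyTime only
        System.out.println(mtime + " " + atime); // 9 5
    }
}
```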





[jira] [Created] (HIVE-17366) Constraint replication in bootstrap

2017-08-21 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-17366:
-

 Summary: Constraint replication in bootstrap
 Key: HIVE-17366
 URL: https://issues.apache.org/jira/browse/HIVE-17366
 Project: Hive
  Issue Type: Bug
  Components: repl
Reporter: Daniel Dai
Assignee: Daniel Dai


Incremental constraint replication is tracked in HIVE-15705. This ticket tracks 
bootstrap replication.





[jira] [Created] (HIVE-17421) Clear incorrect stats after replication

2017-08-31 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-17421:
-

 Summary: Clear incorrect stats after replication
 Key: HIVE-17421
 URL: https://issues.apache.org/jira/browse/HIVE-17421
 Project: Hive
  Issue Type: Bug
  Components: repl
Reporter: Daniel Dai
Assignee: Daniel Dai


After replication, some stats summaries are incorrect. If 
hive.compute.query.using.stats is set to true, we will get wrong results on the 
destination side.

This does not happen with bootstrap replication, because the stats summary is 
stored in table properties and is replicated to the destination. In incremental 
replication, however, this doesn't work. When the table is created, the stats 
summary is empty (e.g., numRows=0). Later, when we insert data, the stats 
summary is updated via 
update_table_column_statistics/update_partition_column_statistics, but neither 
event is captured in incremental replication. Thus on the destination side we 
get count(*)=0. The simple solution is to remove the COLUMN_STATS_ACCURATE 
property after incremental replication.
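The proposed fix amounts to dropping one table property. A minimal sketch over a plain parameter map; the property name is from this issue, but the helper and its surroundings are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// After an incremental load, drop the stats-accuracy marker so the destination
// recomputes counts instead of answering count(*) from stale summary stats.
public class ClearStatsMarker {
    static final String COLUMN_STATS_ACCURATE = "COLUMN_STATS_ACCURATE";

    static void clearStaleStats(Map<String, String> tableParams) {
        tableParams.remove(COLUMN_STATS_ACCURATE);
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put(COLUMN_STATS_ACCURATE, "{\"BASIC_STATS\":\"true\"}");
        params.put("numRows", "0");  // the stale summary from table creation
        clearStaleStats(params);
        System.out.println(params.containsKey(COLUMN_STATS_ACCURATE)); // false
    }
}
```

With the marker gone, hive.compute.query.using.stats no longer trusts the replicated numRows=0 and the query falls back to scanning the data.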





[jira] [Created] (HIVE-17497) Constraint import may fail during incremental replication

2017-09-10 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-17497:
-

 Summary: Constraint import may fail during incremental replication
 Key: HIVE-17497
 URL: https://issues.apache.org/jira/browse/HIVE-17497
 Project: Hive
  Issue Type: Bug
  Components: repl
Reporter: Daniel Dai
Assignee: Daniel Dai


During bootstrap repl dump, we may export a constraint twice, once in the 
bootstrap dump and again in the incremental dump. Consider the following 
sequence:
1. Get repl_id, dump the table
2. During the dump, a constraint is added
3. The constraint ends up in both the bootstrap dump and the incremental dump
4. The incremental repl_id is newer, so the constraint is loaded during 
incremental replication
5. Since the constraint was already created by bootstrap replication, we get an 
exception
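One way to make step 5 harmless is to make the constraint load idempotent. A minimal sketch with a hypothetical helper, not Hive's actual repl load code:

```java
import java.util.HashSet;
import java.util.Set;

// A constraint already created by the bootstrap load is skipped instead of
// raising an "already exists" error during the incremental load.
public class IdempotentConstraintLoad {
    static final Set<String> existing = new HashSet<>();  // stands in for the metastore

    static boolean loadConstraint(String name) {
        // Set.add returns false when the element is already present,
        // so the duplicate from the incremental dump becomes a no-op.
        return existing.add(name);
    }

    public static void main(String[] args) {
        boolean first = loadConstraint("pk_orders");   // created by bootstrap load
        boolean second = loadConstraint("pk_orders");  // incremental load: skipped
        System.out.println(first + " " + second);      // true false
    }
}
```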





[jira] [Created] (HIVE-10819) SearchArgumentImpl for Timestamp is broken by HIVE-10286

2015-05-25 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-10819:
-

 Summary: SearchArgumentImpl for Timestamp is broken by HIVE-10286
 Key: HIVE-10819
 URL: https://issues.apache.org/jira/browse/HIVE-10819
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 1.2.1


The workaround for the kryo bug with Timestamp was accidentally removed by 
HIVE-10286. We need to bring it back.





[jira] [Created] (HIVE-10950) Unit test against HBase Metastore

2015-06-05 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-10950:
-

 Summary: Unit test against HBase Metastore
 Key: HIVE-10950
 URL: https://issues.apache.org/jira/browse/HIVE-10950
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: hbase-metastore-branch


We need to run the entire Hive UT suite against the HBase Metastore and make 
sure all tests pass.





[jira] [Created] (HIVE-10951) Describe a non-partitioned table table

2015-06-05 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-10951:
-

 Summary: Describe a non-partitioned table table 
 Key: HIVE-10951
 URL: https://issues.apache.org/jira/browse/HIVE-10951
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Daniel Dai
 Fix For: hbase-metastore-branch








[jira] [Created] (HIVE-10952) Describe a non-partitioned table fail

2015-06-05 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-10952:
-

 Summary: Describe a non-partitioned table fail
 Key: HIVE-10952
 URL: https://issues.apache.org/jira/browse/HIVE-10952
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Daniel Dai
 Fix For: hbase-metastore-branch


This section of alter1.q fails:
create table alter1(a int, b int);
describe extended alter1;

Exception:
{code}
Trying to fetch a non-existent storage descriptor from hash iNVRGkfwwQDGK9oX0fo9XA==

    at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getAttemptTableName(DDLSemanticAnalyzer.java:1765)
    at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getTableName(DDLSemanticAnalyzer.java:1807)
    at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDescribeTable(DDLSemanticAnalyzer.java:1985)
    at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:318)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
    at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1128)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1176)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1065)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1055)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
    at org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1069)
    at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1043)
    at org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:139)
    at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter1(TestCliDriver.java:123)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at junit.framework.TestCase.runTest(TestCase.java:176)
    at junit.framework.TestCase.runBare(TestCase.java:141)
    at junit.framework.TestResult$1.protect(TestResult.java:122)
    at junit.framework.TestResult.runProtected(TestResult.java:142)
    at junit.framework.TestResult.run(TestResult.java:125)
    at junit.framework.TestCase.run(TestCase.java:129)
    at junit.framework.TestSuite.runTest(TestSuite.java:255)
    at junit.framework.TestSuite.run(TestSuite.java:250)
    at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch table alter1. java.lang.RuntimeException: Woh, bad!  Trying to fetch a non-existent storage descriptor from hash iNVRGkfwwQDGK9oX0fo9XA==

    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1121)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1068)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1055)
    at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getAttemptTableName(DDLSemanticAnalyzer.java:1747)
{code}
The partitioned counterpart, alter2.q, passes.





[jira] [Created] (HIVE-10953) Get partial stats instead of complete stats in some queries

2015-06-05 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-10953:
-

 Summary: Get partial stats instead of complete stats in some 
queries
 Key: HIVE-10953
 URL: https://issues.apache.org/jira/browse/HIVE-10953
 Project: Hive
  Issue Type: Sub-task
Reporter: Daniel Dai
Assignee: Vaibhav Gumashta
 Fix For: hbase-metastore-branch


In ppd_constant_where.q, the result differs from the benchmark:

Result:
Statistics: Num rows: 0 Data size: 11624 Basic stats: PARTIAL Column stats: NONE

Benchmark:
Statistics: Num rows: 1000 Data size: 10624 Basic stats: COMPLETE Column stats: NONE

This might cause quite a few failures, so we need to investigate it first.





[jira] [Created] (HIVE-11085) Alter table fail with NPE if schema change

2015-06-23 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11085:
-

 Summary: Alter table fail with NPE if schema change
 Key: HIVE-11085
 URL: https://issues.apache.org/jira/browse/HIVE-11085
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Daniel Dai
 Fix For: hbase-metastore-branch


alter1.q fails. Specifically, the following statements fail:
create table alter1(a int, b int);
add jar itests/test-serde/target/hive-it-test-serde-1.3.0-SNAPSHOT.jar;
alter table alter1 set serde 'org.apache.hadoop.hive.serde2.TestSerDe' with serdeproperties('s1'='9');

Error stack:
{code}
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table. java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:498)
    at org.apache.hadoop.hive.ql.exec.DDLTask.alterTable(DDLTask.java:3418)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:338)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1660)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1419)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1200)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1067)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1057)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
    at org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1116)
    at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1090)
    at org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:146)
    at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter1(TestCliDriver.java:130)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at junit.framework.TestCase.runTest(TestCase.java:176)
    at junit.framework.TestCase.runBare(TestCase.java:141)
    at junit.framework.TestResult$1.protect(TestResult.java:122)
    at junit.framework.TestResult.runProtected(TestResult.java:142)
    at junit.framework.TestResult.run(TestResult.java:125)
    at junit.framework.TestCase.run(TestCase.java:129)
    at junit.framework.TestSuite.runTest(TestSuite.java:255)
    at junit.framework.TestSuite.run(TestSuite.java:250)
    at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: MetaException(message:java.lang.NullPointerException)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:5301)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_core(HiveMetaStore.java:3443)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_with_cascade(HiveMetaStore.java:3395)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:352)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table(SessionHiveMetaStoreClient.java:251)
    at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:496)
    ... 36 more
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hive.metastore.HiveAlterHandler.updateTableColumnStatsForAlterTable(HiveAlterHandler.java:673)
    at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterTable(HiveAlterHandler.java:241)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_core(HiveMetaStore.java:3423)
{code}

If changing the alter statement to:
alter table alter1 set serde 'org.apache.hadoop.hive.ql.io.orc.OrcSerde';

[jira] [Created] (HIVE-11422) Join an ACID table with a non-ACID table fails with MR

2015-07-30 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11422:
-

 Summary: Join an ACID table with a non-ACID table fails with MR
 Key: HIVE-11422
 URL: https://issues.apache.org/jira/browse/HIVE-11422
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Daniel Dai
 Fix For: 1.3.0, 2.0.0


The following script fails in MR mode:
{code}
CREATE TABLE orc_update_table (k1 INT, f1 STRING, op_code STRING) 
CLUSTERED BY (k1) INTO 2 BUCKETS 
STORED AS ORC TBLPROPERTIES("transactional"="true"); 
INSERT INTO TABLE orc_update_table VALUES (1, 'a', 'I');
CREATE TABLE orc_table (k1 INT, f1 STRING) 
CLUSTERED BY (k1) SORTED BY (k1) INTO 2 BUCKETS 
STORED AS ORC; 
INSERT OVERWRITE TABLE orc_table VALUES (1, 'x');
SET hive.execution.engine=mr; 
SET hive.auto.convert.join=false; 
SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
SELECT t1.*, t2.* FROM orc_table t1 
JOIN orc_update_table t2 ON t1.k1=t2.k1 ORDER BY t1.k1;
{code}
Stack:
{code}
Error: java.io.IOException: java.lang.NullPointerException
at 
org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at 
org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:251)
at 
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:701)
at 
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.(MapTask.java:169)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.io.AcidUtils.deserializeDeltas(AcidUtils.java:368)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1211)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1129)
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:249)
... 9 more
{code}

The script passes in the 1.2.0 release, however.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-11438) Join an ACID table with a non-ACID table fails with MR on 1.0.0

2015-08-02 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11438:
-

 Summary: Join an ACID table with a non-ACID table fails with MR on 1.0.0
 Key: HIVE-11438
 URL: https://issues.apache.org/jira/browse/HIVE-11438
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Transactions
Affects Versions: 1.0.0
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 1.0.1


The following script fails in MR mode:
Preparation:
{code}
CREATE TABLE orc_update_table (k1 INT, f1 STRING, op_code STRING) 
CLUSTERED BY (k1) INTO 2 BUCKETS 
STORED AS ORC TBLPROPERTIES("transactional"="true"); 
INSERT INTO TABLE orc_update_table VALUES (1, 'a', 'I');
CREATE TABLE orc_table (k1 INT, f1 STRING) 
CLUSTERED BY (k1) SORTED BY (k1) INTO 2 BUCKETS 
STORED AS ORC; 
INSERT OVERWRITE TABLE orc_table VALUES (1, 'x');
{code}
Then run the following script:
{code}
SET hive.execution.engine=mr; 
SET hive.auto.convert.join=false; 
SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
SELECT t1.*, t2.* FROM orc_table t1 
JOIN orc_update_table t2 ON t1.k1=t2.k1 ORDER BY t1.k1;
{code}
Stack:
{code}
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:265)
at 
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:272)
at 
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:509)
at 
org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)
at 
org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:585)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:580)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:580)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:571)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:429)
at 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1606)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1367)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1179)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1006)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:996)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:247)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Job Submission failed with exception 'java.lang.NullPointerException(null)'
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask
{code}

Note the query is the same as in HIVE-11422, but on 1.0.0 it throws a different
exception.





[jira] [Created] (HIVE-11441) No DDL allowed on table if user accidentally set table location wrong

2015-08-03 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11441:
-

 Summary: No DDL allowed on table if user accidentally set table 
location wrong
 Key: HIVE-11441
 URL: https://issues.apache.org/jira/browse/HIVE-11441
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 1.3.0, 2.0.0


If the user makes a mistake, Hive should either reject it in the first place, or
give the user a chance to correct it.

STEPS TO REPRODUCE:

create table testwrongloc(id int);

alter table testwrongloc set location 
"hdfs://a-valid-hostname/tmp/testwrongloc";

-- at this time, Hive should throw an error, as hdfs://a-valid-hostname is not a
valid path; it needs to be either hdfs://namenode-hostname:8020/ or
hdfs://hdfs-nameservice for HA

alter table testwrongloc set location 
"hdfs://correct-host:8020/tmp/testwrongloc"
or 
drop table testwrongloc;

Upon this, Hive throws an error saying that host 'a-valid-hostname' is not reachable:


{code}
2015-07-30 12:19:43,573 DEBUG [main]: transport.TSaslTransport 
(TSaslTransport.java:readFrame(429)) - CLIENT: reading data length: 293
2015-07-30 12:19:43,720 ERROR [main]: ql.Driver 
(SessionState.java:printError(833)) - FAILED: SemanticException Unable to fetch 
table testloc. java.net.ConnectException: Call From 
hdpsecb02.secb.hwxsup.com/172.25.16.178 to hdpsecb02.secb.hwxsup.com:8020 
failed on connection exception: java.net.ConnectException: Connection refused; 
For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
org.apache.hadoop.hive.ql.parse.SemanticException: Unable to fetch table 
testloc. java.net.ConnectException: Call From 
hdpsecb02.secb.hwxsup.com/172.25.16.178 to hdpsecb02.secb.hwxsup.com:8020 
failed on connection exception: java.net.ConnectException: Connection refused; 
For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.getTable(BaseSemanticAnalyzer.java:1323)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.getTable(BaseSemanticAnalyzer.java:1309)
at 
org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.addInputsOutputsAlterTable(DDLSemanticAnalyzer.java:1387)
at 
org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAlterTableLocation(DDLSemanticAnalyzer.java:1452)
at 
org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:295)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:417)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:305)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1069)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1131)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1006)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:996)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:247)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch 
table testloc. java.net.ConnectException: Call From 
hdpsecb02.secb.hwxsup.com/172.25.16.178 to hdpsecb02.secb.hwxsup.com:8020 
failed on connection exception: java.net.ConnectException: Connection refused; 
For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1072)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1019)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.getTable(BaseSemanticAnalyzer.java:1316)
... 23 more
{code}
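As a rough illustration of the pre-check the reporter asks for, here is a minimal, hypothetical sketch (`looksSuspicious` is not a Hive API): an hdfs:// location whose authority carries no port is flagged before the metadata is updated. A real check would also have to consult the configured HA nameservices, since an HA location legitimately has no port.

```java
import java.net.URI;

public class LocationCheck {
    // Hypothetical pre-check: flag hdfs:// locations whose authority has no
    // port. A real implementation would also accept configured HA
    // nameservices, since "hdfs://hdfs-nameservice/..." is legal without one.
    static boolean looksSuspicious(String location) {
        URI uri = URI.create(location);
        return "hdfs".equals(uri.getScheme()) && uri.getPort() == -1;
    }

    public static void main(String[] args) {
        System.out.println(looksSuspicious("hdfs://a-valid-hostname/tmp/testwrongloc"));  // true
        System.out.println(looksSuspicious("hdfs://correct-host:8020/tmp/testwrongloc")); // false
    }
}
```

Validating at `alter table ... set location` time, rather than at the next DDL, is the point: once the bad URI is stored, even `drop table` needs to reach the unreachable host.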





[jira] [Created] (HIVE-11442) Remove commons-configuration.jar from Hive distribution

2015-08-03 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11442:
-

 Summary: Remove commons-configuration.jar from Hive distribution
 Key: HIVE-11442
 URL: https://issues.apache.org/jira/browse/HIVE-11442
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 1.3.0, 2.0.0


Some customers report version conflicts with the commons-configuration.jar bundled
by Hive. commons-configuration.jar is actually not needed by Hive; it is a
transitive dependency of Hadoop/Accumulo. Users should be able to pick those jars
up from Hadoop at runtime.





[jira] [Created] (HIVE-11621) Fix TestMiniTezCliDriver test failures when HBase Metastore is used

2015-08-21 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11621:
-

 Summary: Fix TestMiniTezCliDriver test failures when HBase 
Metastore is used
 Key: HIVE-11621
 URL: https://issues.apache.org/jira/browse/HIVE-11621
 Project: Hive
  Issue Type: Sub-task
  Components: HBase Metastore
Affects Versions: hbase-metastore-branch
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: hbase-metastore-branch


As a first step, fix the hbase-metastore unit tests under TestMiniTezCliDriver, so
we can test LLAP and hbase-metastore together.





[jira] [Created] (HIVE-11692) Fix UT regressions on hbase-metastore branch

2015-08-30 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11692:
-

 Summary: Fix UT regressions on hbase-metastore branch
 Key: HIVE-11692
 URL: https://issues.apache.org/jira/browse/HIVE-11692
 Project: Hive
  Issue Type: Sub-task
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: hbase-metastore-branch


There are several unit test regressions on hbase-metastore:
TestWebHCatE2e (asm-5.0.jar conflicts with jersey)
TestHBaseImport (leaves some objects behind, causing other tests to fail)
TestMiniHBaseMetastoreCliDriver (the test itself should not exist)
TestCliDriver(dynpart_sort_opt_vectorization.q,dynpart_sort_optimization.q)





[jira] [Created] (HIVE-11694) Exclude hbase-metastore for hadoop-2

2015-08-31 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11694:
-

 Summary: Exclude hbase-metastore for hadoop-2
 Key: HIVE-11694
 URL: https://issues.apache.org/jira/browse/HIVE-11694
 Project: Hive
  Issue Type: Sub-task
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: hbase-metastore-branch


hbase-metastore doesn't compile for hadoop-1, and we have no development plan to
make it work with hadoop-1. Exclude the hbase-metastore related files so hadoop-1
still compiles.





[jira] [Created] (HIVE-11711) Merge hbase-metastore branch to trunk

2015-09-01 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11711:
-

 Summary: Merge hbase-metastore branch to trunk
 Key: HIVE-11711
 URL: https://issues.apache.org/jira/browse/HIVE-11711
 Project: Hive
  Issue Type: Sub-task
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 2.0.0


Major development of hbase-metastore is done and it's time to merge the branch 
back into master.

Currently hbase-metastore is only invoked when running TestMiniTezCliDriver. 
The instructions for setting up hbase-metastore are captured at 
https://cwiki.apache.org/confluence/display/Hive/HBaseMetastoreDevelopmentGuide.





[jira] [Created] (HIVE-11722) HBaseImport should import basic stats and column stats

2015-09-02 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11722:
-

 Summary: HBaseImport should import basic stats and column stats
 Key: HIVE-11722
 URL: https://issues.apache.org/jira/browse/HIVE-11722
 Project: Hive
  Issue Type: Sub-task
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: hbase-metastore-branch








[jira] [Created] (HIVE-11731) Exclude hbase-metastore in itest for hadoop-1

2015-09-03 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11731:
-

 Summary: Exclude hbase-metastore in itest for hadoop-1
 Key: HIVE-11731
 URL: https://issues.apache.org/jira/browse/HIVE-11731
 Project: Hive
  Issue Type: Sub-task
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: hbase-metastore-branch


This is a follow-up of HIVE-11694. We need to further exclude hbase-metastore for
hadoop-1 in itest.





[jira] [Created] (HIVE-11743) HBase Port conflict for MiniHBaseCluster

2015-09-04 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11743:
-

 Summary: HBase Port conflict for MiniHBaseCluster
 Key: HIVE-11743
 URL: https://issues.apache.org/jira/browse/HIVE-11743
 Project: Hive
  Issue Type: Sub-task
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: hbase-metastore-branch


HMaster HTTP port conflict. Presumably a bug in HBaseTestingUtility: it is not
supposed to use the default port for everything.





[jira] [Created] (HIVE-11935) Access HiveMetaStoreClient.currentMetaVars should be synchronized

2015-09-23 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11935:
-

 Summary: Access HiveMetaStoreClient.currentMetaVars should be 
synchronized
 Key: HIVE-11935
 URL: https://issues.apache.org/jira/browse/HIVE-11935
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 2.0.0


We saw intermittent failures with the following stack:
{code}
java.lang.NullPointerException
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.isCompatibleWith(HiveMetaStoreClient.java:287)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
at com.sun.proxy.$Proxy9.isCompatibleWith(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:206)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.createHiveDB(BaseSemanticAnalyzer.java:205)
at 
org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.(DDLSemanticAnalyzer.java:223)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzerFactory.get(SemanticAnalyzerFactory.java:259)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:409)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1116)
at 
org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:110)
at 
org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:181)
at 
org.apache.hive.service.cli.operation.Operation.run(Operation.java:257)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:388)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:375)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
at 
org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
at com.sun.proxy.$Proxy20.executeStatementAsync(Unknown Source)
at 
org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:274)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:486)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.thrift.server.TServlet.doPost(TServlet.java:83)
at 
org.apache.hive.service.cli.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:171)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:565)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:479)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:225)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1031)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:186)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:965)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
at

[jira] [Created] (HIVE-11950) WebHCat status file doesn't show UTF8 character

2015-09-24 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-11950:
-

 Summary: WebHCat status file doesn't show UTF8 character
 Key: HIVE-11950
 URL: https://issues.apache.org/jira/browse/HIVE-11950
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 1.2.1
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 1.3.0, 2.0.0


If we do a select on a UTF-8 table and store the console output into the status
file (enablelog=true), the UTF-8 characters are garbled. The reason is that we
don't specify an encoding when opening the stdout/stderr files in statusdir. This
causes problems especially on Windows, where the default OS encoding is not UTF-8.
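The fix direction can be sketched as follows; this is a minimal, hypothetical example (not the actual WebHCat patch) showing that pinning UTF-8 on the writer keeps the status file intact regardless of the platform default encoding:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class Utf8StatusWriter {
    public static void main(String[] args) throws IOException {
        Path status = Files.createTempFile("stderr", ".txt");
        // new FileWriter(file) would use the platform default encoding
        // (e.g. cp1252 on Windows) and garble non-ASCII output; wrapping the
        // stream in an OutputStreamWriter with an explicit UTF-8 charset
        // avoids that.
        try (Writer w = new OutputStreamWriter(
                new FileOutputStream(status.toFile()), StandardCharsets.UTF_8)) {
            w.write("héllo wörld\n");
        }
        // Read back with the same charset: the characters survive intact.
        String readBack = new String(Files.readAllBytes(status), StandardCharsets.UTF_8);
        System.out.print(readBack);
    }
}
```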





[jira] [Created] (HIVE-12006) Enable Columnar Pushdown for RC/ORC File for HCatLoader

2015-10-01 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-12006:
-

 Summary: Enable Columnar Pushdown for RC/ORC File for HCatLoader
 Key: HIVE-12006
 URL: https://issues.apache.org/jira/browse/HIVE-12006
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Affects Versions: 1.2.1
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 1.3.0, 2.0.0


This was initially enabled by HIVE-5193. However, HIVE-10752 reverted it because
of an issue in the original implementation.

We shall fix the issue and re-enable it.





[jira] [Created] (HIVE-12262) Operation log root cannot be created in some cases

2015-10-25 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-12262:
-

 Summary: Operation log root cannot be created in some cases
 Key: HIVE-12262
 URL: https://issues.apache.org/jira/browse/HIVE-12262
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 1.3.0, 2.0.0


We see an intermittent error in the HiveServer2 log:
{code}
(HiveSessionImpl.java:setOperationLogSessionDir(211)) - Unable to create 
operation log session directory:xx
{code}
Users are not able to retrieve their operation logs through Hue.





[jira] [Created] (HIVE-12279) Testcase to verify session temporary files are removed after HIVE-11768

2015-10-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-12279:
-

 Summary: Testcase to verify session temporary files are removed 
after HIVE-11768
 Key: HIVE-12279
 URL: https://issues.apache.org/jira/browse/HIVE-12279
 Project: Hive
  Issue Type: Test
  Components: HiveServer2, Test
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 2.0.0


We need to make sure HS2 session temporary files are removed after the session ends.





[jira] [Created] (HIVE-12282) beeline - update command printing in verbose mode

2015-10-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-12282:
-

 Summary: beeline - update command printing in verbose mode
 Key: HIVE-12282
 URL: https://issues.apache.org/jira/browse/HIVE-12282
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 2.0.0


In verbose mode, beeline prints the password passed on the command line to STDERR.
This is not a good security practice.

The issue is in the BeeLine.java code:
{code}
if (url != null) {
  String com = "!connect "
  + url + " "
  + (user == null || user.length() == 0 ? "''" : user) + " "
  + (pass == null || pass.length() == 0 ? "''" : pass) + " "
  + (driver == null ? "" : driver);
  debug("issuing: " + com);
  dispatch(com);
}

{code}
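One possible fix is to keep the real password in the dispatched command while masking it in the debug line. This is a hedged sketch, not the actual HIVE-12282 patch; `buildConnect` is a hypothetical stand-in for the string-building code above:

```java
public class ConnectLogMasking {
    // Build the !connect command; when forLog is true, replace the password
    // with a fixed mask so the debug output never contains the real secret.
    static String buildConnect(String url, String user, String pass, boolean forLog) {
        String shown = forLog ? "******" : pass;
        return "!connect " + url + " "
                + (user == null || user.isEmpty() ? "''" : user) + " "
                + (shown == null || shown.isEmpty() ? "''" : shown);
    }

    public static void main(String[] args) {
        String url = "jdbc:hive2://localhost:10000";
        // dispatch(buildConnect(url, "daijy", "secret", false));  // real command
        System.out.println("issuing: " + buildConnect(url, "daijy", "secret", true));
    }
}
```

The command passed to `dispatch` is built with `forLog=false`, so connecting still works; only the `debug(...)` line sees the mask.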





[jira] [Created] (HIVE-12327) WebHCat e2e tests TestJob_1 and TestJob_2 fail

2015-11-03 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-12327:
-

 Summary: WebHCat e2e tests TestJob_1 and TestJob_2 fail
 Key: HIVE-12327
 URL: https://issues.apache.org/jira/browse/HIVE-12327
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 1.3.0, 2.0.0


The tests were added in HIVE-7035. Both are negative tests and check that the HTTP
status code is 400. The original patch matched an exception containing a specific
message; however, in later versions of Hadoop the message changed, so the
exception no longer contains it.





[jira] [Created] (HIVE-12583) HS2 ShutdownHookManager holds extra Driver instances

2015-12-03 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-12583:
-

 Summary: HS2 ShutdownHookManager holds extra Driver instances
 Key: HIVE-12583
 URL: https://issues.apache.org/jira/browse/HIVE-12583
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 1.3.0
Reporter: Daniel Dai
Assignee: Daniel Dai


HIVE-12266 adds a shutdown hook for every Driver instance to release the locks the
session holds in case the Driver does not exit cleanly. However, that keeps
references to all Driver instances, and HS2 may run out of memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-13429) Tool to remove dangling scratch dir

2016-04-05 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-13429:
-

 Summary: Tool to remove dangling scratch dir
 Key: HIVE-13429
 URL: https://issues.apache.org/jira/browse/HIVE-13429
 Project: Hive
  Issue Type: Improvement
Reporter: Daniel Dai
Assignee: Daniel Dai


We have seen cases where users leave the scratch dir behind, eventually eating up
HDFS storage. This can happen when a VM restarts and leaves Hive no chance to run
its shutdown hook. It is applicable to both HiveCli and HiveServer2. Here we
provide an external tool to clear dead scratch dirs as needed.

We need a way to identify which scratch dirs are in use. We will rely on an HDFS
write lock for that. Here is how the HDFS write lock works:
1. An HDFS client opens an HDFS file for write and only closes it at shutdown.
2. A cleanup process can try to open the same HDFS file for write. If the client
holding the file is still running, we get an exception; otherwise, we know the
client is dead.
3. If the HDFS client dies without closing the HDFS file, the NameNode reclaims
the lease after 10 minutes, i.e., the HDFS file held by the dead client becomes
writable again after 10 minutes.

So here is how we remove dangling scratch directories in Hive:
1. HiveCli/HiveServer2 opens a lock file with a well-known name in its scratch
directory and only closes it when about to drop the scratch directory.
2. A command line tool, cleardanglingscratchdir, checks every scratch directory
and tries to open the lock file for write. If it does not get an exception, the
owner is dead and we can safely remove the scratch directory.
3. The 10-minute lease window means it is possible that a HiveCli/HiveServer2 is
dead but we still cannot reclaim its scratch directory for another 10 minutes.
This should be tolerable.





[jira] [Created] (HIVE-13476) HS2 ShutdownHookManager holds extra Driver instances in nested compile

2016-04-11 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-13476:
-

 Summary: HS2 ShutdownHookManager holds extra Driver instances in nested compile
 Key: HIVE-13476
 URL: https://issues.apache.org/jira/browse/HIVE-13476
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai


For some SQL statements, Hive does a nested compile. In this case, Hive creates a
Driver instance to do the nested compile but never calls destroy on it. That
leaves the Driver instance in the shutdown hook:
{code}
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:402)
at 
org.apache.hadoop.hive.ql.optimizer.IndexUtils.createRootTask(IndexUtils.java:223)
at 
org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler.getIndexBuilderMapRedTask(CompactIndexHandler.java:151)
at 
org.apache.hadoop.hive.ql.index.TableBasedIndexHandler.getIndexBuilderMapRedTask(TableBasedIndexHandler.java:108)
at 
org.apache.hadoop.hive.ql.index.TableBasedIndexHandler.generateIndexBuildTaskList(TableBasedIndexHandler.java:92)
at 
org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.getIndexBuilderMapRed(DDLSemanticAnalyzer.java:1228)
at 
org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAlterIndexRebuild(DDLSemanticAnalyzer.java:1175)
at 
org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:408)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:464)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:318)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1194)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1188)
at 
org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:110)
at 
org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:181)
at 
org.apache.hive.service.cli.operation.Operation.run(Operation.java:257)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:419)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:406)
at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
at 
org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
at com.sun.proxy.$Proxy20.executeStatementAsync(Unknown Source)
at 
org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:276)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:486)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1317)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1302)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at 
org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-13498) cleardanglingscratchdir does not work if scratchdir is not on defaultFs

2016-04-12 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-13498:
-

 Summary: cleardanglingscratchdir does not work if scratchdir is 
not on defaultFs
 Key: HIVE-13498
 URL: https://issues.apache.org/jira/browse/HIVE-13498
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 1.3.0, 2.1.0


The cleardanglingscratchdir utility needs a fix to make it work if the scratchdir 
is not on the defaultFs, such as on Azure.





[jira] [Created] (HIVE-13513) cleardanglingscratchdir does not work in some version of HDFS

2016-04-13 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-13513:
-

 Summary: cleardanglingscratchdir does not work in some version of 
HDFS
 Key: HIVE-13513
 URL: https://issues.apache.org/jira/browse/HIVE-13513
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 1.3.0, 2.1.0


On some Hadoop versions, we keep getting a "lease recovery" message when we 
check a scratchdir by opening it for append:
{code}
Failed to APPEND_FILE xxx for DFSClient_NONMAPREDUCE_785768631_1 on 10.0.0.18 
because lease recovery is in progress. Try again later.
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2917)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2677)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2984)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2953)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:655)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:421)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2133)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2131)
{code}
and
{code}
16/04/14 04:51:56 ERROR hdfs.DFSClient: Failed to close inode 18963
java.io.IOException: Failed to replace a bad datanode on the existing pipeline 
due to no more good datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[10.0.0.12:30010,DS-b355ac2a-a23a-418a-af9b-4c1b4e26afe8,DISK]],
 
original=[DatanodeInfoWithStorage[10.0.0.12:30010,DS-b355ac2a-a23a-418a-af9b-4c1b4e26afe8,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:951)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1017)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1165)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:470)
{code}

The reason is not clear. However, if we remove the hsync call from SessionState, 
everything works as expected. Attaching a patch to remove the hsync call for now.





[jira] [Created] (HIVE-13514) TestClearDanglingScratchDir fail on branch-1

2016-04-13 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-13514:
-

 Summary: TestClearDanglingScratchDir fail on branch-1
 Key: HIVE-13514
 URL: https://issues.apache.org/jira/browse/HIVE-13514
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai


TestClearDanglingScratchDir fails on branch-1 because branch-1 uses log4j. 
Attaching a patch to make the test pass on both master and branch-1.





[jira] [Created] (HIVE-13551) Make cleardanglingscratchdir work on Windows

2016-04-19 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-13551:
-

 Summary: Make cleardanglingscratchdir work on Windows
 Key: HIVE-13551
 URL: https://issues.apache.org/jira/browse/HIVE-13551
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai
 Attachments: HIVE-13551.1.patch

We see a couple of issues when running cleardanglingscratchdir on Windows, 
including:
1. dfs.support.append is set to false on Azure clusters, so an alternative is 
needed when append is disabled
2. fixes for the cmd scripts
3. fixing the UT on Windows





[jira] [Created] (HIVE-13560) Adding Omid as connection manager for HBase Metastore

2016-04-20 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-13560:
-

 Summary: Adding Omid as connection manager for HBase Metastore
 Key: HIVE-13560
 URL: https://issues.apache.org/jira/browse/HIVE-13560
 Project: Hive
  Issue Type: Improvement
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


Adding Omid as a transaction manager to HBase Metastore. 





[jira] [Created] (HIVE-13631) Support index in HBase Metastore

2016-04-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-13631:
-

 Summary: Support index in HBase Metastore
 Key: HIVE-13631
 URL: https://issues.apache.org/jira/browse/HIVE-13631
 Project: Hive
  Issue Type: Improvement
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


Currently all index-related methods in HBaseStore are unimplemented. We need 
to add those missing methods and add index support to the hbaseimport tool.





[jira] [Created] (HIVE-13729) FileSystem$Cache leaks in FileUtils.checkFileAccessWithImpersonation

2016-05-10 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-13729:
-

 Summary: FileSystem$Cache leaks in 
FileUtils.checkFileAccessWithImpersonation
 Key: HIVE-13729
 URL: https://issues.apache.org/jira/browse/HIVE-13729
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Daniel Dai
Assignee: Daniel Dai


FileSystem.closeAllForUGI is not invoked after checkFileAccess. This leaks 
entries in FileSystem$Cache and eventually causes an OOM in HS2.
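The leak pattern can be sketched without Hadoop on the classpath. The map below is a stand-in for FileSystem$Cache (which keeps one FileSystem instance per scheme/authority/UGI key); the method names only mirror the ones in the description and are not Hadoop's actual internals:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for FileSystem$Cache: one cached instance per user, like
// FileSystem.get() caching per (scheme, authority, UGI) key.
class UgiCacheLeakSketch {
    static final Map<String, Object> CACHE = new HashMap<>();

    static Object getFileSystemFor(String user) {
        // Like FileSystem.get(): returns a cached per-user instance.
        return CACHE.computeIfAbsent(user, u -> new Object());
    }

    static void checkFileAccessWithImpersonation(String user) {
        try {
            getFileSystemFor(user); // access check runs as the impersonated user
        } finally {
            // The fix: mirror FileSystem.closeAllForUGI(ugi) so the per-user
            // entry is evicted once the check completes.
            CACHE.remove(user);
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            checkFileAccessWithImpersonation("user" + i);
        }
        // Without the finally block the cache would hold 1000 entries.
        System.out.println("cached entries: " + CACHE.size());
    }
}
```

The same shape applies in HS2: wrap the impersonated access check in try/finally and close the UGI's cached file systems on exit.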





[jira] [Created] (HIVE-13981) Operation.toSQLException eats full exception stack

2016-06-09 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-13981:
-

 Summary: Operation.toSQLException eats full exception stack
 Key: HIVE-13981
 URL: https://issues.apache.org/jira/browse/HIVE-13981
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai


Operation.toSQLException eats half of the exception stack and makes debugging 
hard. For example, we saw this exception:
{code}
org.apache.hive.service.cli.HiveSQLException: Error while compiling 
statement: FAILED: NullPointerException null
at 
org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:336)
at 
org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:113)
at 
org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:182)
at org.apache.hive.service.cli.operation.Operation.run(Operation.java:278)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:421)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:408)
at 
org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:276)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:505)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1317)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1302)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at 
org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:562)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
{code}
The real stack causing the NPE is lost.
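The loss can be reproduced with a minimal sketch; SqlException below is an illustrative stand-in for HiveSQLException, not Hive's actual class. Wrapping without passing the original as the cause drops the frames that identify where the NPE was thrown:

```java
// Wrapping with vs. without a cause: only the second preserves the
// "Caused by:" chain down to the real origin of the failure.
class ToSqlExceptionSketch {
    static class SqlException extends Exception {
        SqlException(String msg) { super(msg); }
        SqlException(String msg, Throwable cause) { super(msg, cause); }
    }

    static SqlException lossyWrap(Throwable t) {
        // What the bug amounts to: only the message text survives.
        return new SqlException("Error while compiling statement: FAILED: " + t);
    }

    static SqlException wrapWithCause(Throwable t) {
        // The fix: keep the original as the cause so printStackTrace
        // shows the full chain, including the NPE's own frames.
        return new SqlException("Error while compiling statement: FAILED: " + t, t);
    }

    public static void main(String[] args) {
        NullPointerException npe = new NullPointerException();
        System.out.println(lossyWrap(npe).getCause());     // null
        System.out.println(wrapWithCause(npe).getCause()); // java.lang.NullPointerException
    }
}
```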





[jira] [Created] (HIVE-14097) Fix TestCliDriver for hbase metastore

2016-06-26 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14097:
-

 Summary: Fix TestCliDriver for hbase metastore
 Key: HIVE-14097
 URL: https://issues.apache.org/jira/browse/HIVE-14097
 Project: Hive
  Issue Type: Bug
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


More than half of the TestCliDriver tests fail with the hbase metastore; we need to fix them.





[jira] [Created] (HIVE-14101) Adding type/event notification/version/constraints to hbase metastore

2016-06-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14101:
-

 Summary: Adding type/event notification/version/constraints to 
hbase metastore
 Key: HIVE-14101
 URL: https://issues.apache.org/jira/browse/HIVE-14101
 Project: Hive
  Issue Type: Improvement
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


Type/event notification/version/constraints support is missing in the hbase 
metastore; we need to add the missing pieces.





[jira] [Created] (HIVE-14103) DateColumnStatsAggregator is missing in hbase metastore

2016-06-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14103:
-

 Summary: DateColumnStatsAggregator is missing in hbase metastore
 Key: HIVE-14103
 URL: https://issues.apache.org/jira/browse/HIVE-14103
 Project: Hive
  Issue Type: Improvement
Reporter: Daniel Dai
Assignee: Daniel Dai


Currently an exception is thrown when getting aggregate stats for a date column.





[jira] [Created] (HIVE-14104) addPartitions with PartitionSpecProxy is not implemented in hbase metastore

2016-06-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14104:
-

 Summary: addPartitions with PartitionSpecProxy is not implemented 
in hbase metastore
 Key: HIVE-14104
 URL: https://issues.apache.org/jira/browse/HIVE-14104
 Project: Hive
  Issue Type: Improvement
Reporter: Daniel Dai
Assignee: Daniel Dai


This seems to be used only by hcat. It needs to be implemented properly in the hbase metastore.





[jira] [Created] (HIVE-14105) listTableNamesByFilter/listPartitionNamesByFilter is missing in hbase metastore

2016-06-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14105:
-

 Summary: listTableNamesByFilter/listPartitionNamesByFilter is 
missing in hbase metastore
 Key: HIVE-14105
 URL: https://issues.apache.org/jira/browse/HIVE-14105
 Project: Hive
  Issue Type: Improvement
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


Both take a JDO filter, which is not relevant to the hbase metastore. We need 
to revisit and find a solution for both.





[jira] [Created] (HIVE-14106) Retry when hbase transaction conflict happen

2016-06-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14106:
-

 Summary: Retry when hbase transaction conflict happen
 Key: HIVE-14106
 URL: https://issues.apache.org/jira/browse/HIVE-14106
 Project: Hive
  Issue Type: Improvement
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


With the HBase transaction manager (Omid), it is possible for a metastore 
operation to abort due to a new cause: a transaction conflict. In this case, a 
concurrent transaction is underway and we need to implement retry logic.
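The retry logic could look like the sketch below, assuming conflicts surface as a dedicated exception type; TxnConflictException here is hypothetical, standing in for whatever Omid actually reports:

```java
import java.util.concurrent.Callable;

// Bounded retries with simple linear backoff around a metastore operation.
class ConflictRetrySketch {
    static class TxnConflictException extends Exception {}

    static <T> T runWithRetry(Callable<T> op, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call(); // begin txn, do metastore work, commit
            } catch (TxnConflictException e) {
                if (attempt >= maxAttempts) {
                    throw e; // give up after the configured number of attempts
                }
                Thread.sleep(100L * attempt); // back off before retrying
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice with a conflict, then succeeds on the third attempt.
        String result = runWithRetry(() -> {
            if (++calls[0] < 3) throw new TxnConflictException();
            return "committed";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The operation passed in must be idempotent up to the commit point, since everything before a conflict is rolled back and replayed.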





[jira] [Created] (HIVE-14107) Complete HBaseStore.setConf

2016-06-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14107:
-

 Summary: Complete HBaseStore.setConf
 Key: HIVE-14107
 URL: https://issues.apache.org/jira/browse/HIVE-14107
 Project: Hive
  Issue Type: Improvement
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


HBaseStore.setConf currently has a bare-bones implementation and is missing 
some features present in ObjectStore.setConf. We need to review and complete it.





[jira] [Created] (HIVE-14108) Add missing objects in hbaseimport

2016-06-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14108:
-

 Summary: Add missing objects in hbaseimport
 Key: HIVE-14108
 URL: https://issues.apache.org/jira/browse/HIVE-14108
 Project: Hive
  Issue Type: Improvement
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


The following objects are not imported with hbaseimport:
privs (table/partition/column)
column stats
type/constraint/version






[jira] [Created] (HIVE-14110) Implement a better ObjectStore in hbase metastore

2016-06-27 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14110:
-

 Summary: Implement a better ObjectStore in hbase metastore
 Key: HIVE-14110
 URL: https://issues.apache.org/jira/browse/HIVE-14110
 Project: Hive
  Issue Type: Improvement
  Components: HBase Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


The ObjectStore in the hbase metastore is very naive; we need to enhance it 
into a decent one.





[jira] [Created] (HIVE-14152) datanucleus.autoStartMechanismMode should set to 'Ignored' to allow rolling downgrade

2016-07-01 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14152:
-

 Summary: datanucleus.autoStartMechanismMode should set to 
'Ignored' to allow rolling downgrade 
 Key: HIVE-14152
 URL: https://issues.apache.org/jira/browse/HIVE-14152
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Daniel Dai
Assignee: Daniel Dai


We see the following issue when downgrading the metastore:
1. Run some queries using the new tables
2. Downgrade the metastore
3. Restarting the metastore complains that the new tables do not exist

In particular, the constraint tables do not exist in branch-1. If we run Hive 2 
and create a constraint, then downgrade the metastore to Hive 1, DataNucleus 
will complain:
{code}
javax.jdo.JDOFatalUserException: Error starting up DataNucleus : a class 
"org.apache.hadoop.hive.metastore.model.MConstraint" was listed as being 
persisted previously in this datastore, yet the class wasnt found. Perhaps it 
is used by a different DataNucleus-enabled application in this datastore, or 
you have changed your class names.
at 
org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:528)
at 
org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
at 
org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
at 
org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at 
javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at 
org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:377)
at 
org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:406)
at 
org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:299)
at 
org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:266)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at 
org.apache.hadoop.hive.metastore.RawStoreProxy.(RawStoreProxy.java:60)
at 
org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:69)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:650)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:628)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:677)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:484)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:77)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:83)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5905)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5900)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6159)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6084)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{code}

Apparently DataNucleus caches a record of the new table and retries to 
reinstantiate it later. This breaks downgrading, so we shall disable this 
behavior.

We need to set "datanucleus.autoStartMechanismMode" to "Ignored" to disable the 
check, since this situation becomes the norm when downgrading.
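For reference, the change maps to a hive-site.xml fragment like the following; the property name comes from the description above, while the assumption that Hive forwards datanucleus.* properties to DataNucleus unchanged is not shown here:

```xml
<!-- Disable DataNucleus's auto-start class check so a downgraded metastore
     does not abort on classes (e.g. MConstraint) that a newer Hive persisted. -->
<property>
  <name>datanucleus.autoStartMechanismMode</name>
  <value>Ignored</value>
</property>
```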





[jira] [Created] (HIVE-14293) PerfLogger.openScopes should be transient

2016-07-19 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14293:
-

 Summary: PerfLogger.openScopes should be transient
 Key: HIVE-14293
 URL: https://issues.apache.org/jira/browse/HIVE-14293
 Project: Hive
  Issue Type: Bug
Affects Versions: 2.1.0
Reporter: Daniel Dai
Assignee: Daniel Dai
 Attachments: HIVE-14293.1.patch

See the following exception when running Hive e2e tests:
{code}
0: jdbc:hive2://nat-r6-ojss-hsihs2-1.openstac> SELECT s.name, s2.age, s.gpa, 
v.registration, v2.contributions FROM student s INNER JOIN voter v ON (s.name = 
v.name) INNER JOIN student s2 ON (s2.age = v.age and s.name = s2.name) INNER 
JOIN voter v2 ON (v2.name = s2.name and v2.age = s2.age) WHERE v2.age = s.age 
ORDER BY s.name, s2.age, s.gpa, v.registration, v2.contributions;
INFO  : Compiling 
command(queryId=hive_20160717224915_3a52719f-539f-4f82-a9cd-0c0af4e09ef8): 
SELECT s.name, s2.age, s.gpa, v.registration, v2.contributions FROM student s 
INNER JOIN voter v ON (s.name = v.name) INNER JOIN student s2 ON (s2.age = 
v.age and s.name = s2.name) INNER JOIN voter v2 ON (v2.name = s2.name and 
v2.age = s2.age) WHERE v2.age = s.age ORDER BY s.name, s2.age, s.gpa, 
v.registration, v2.contributions
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:s.name, 
type:string, comment:null), FieldSchema(name:s2.age, type:int, comment:null), 
FieldSchema(name:s.gpa, type:double, comment:null), 
FieldSchema(name:v.registration, type:string, comment:null), 
FieldSchema(name:v2.contributions, type:float, comment:null)], properties:null)
INFO  : Completed compiling 
command(queryId=hive_20160717224915_3a52719f-539f-4f82-a9cd-0c0af4e09ef8); Time 
taken: 1.165 seconds
INFO  : Executing 
command(queryId=hive_20160717224915_3a52719f-539f-4f82-a9cd-0c0af4e09ef8): 
SELECT s.name, s2.age, s.gpa, v.registration, v2.contributions FROM student s 
INNER JOIN voter v ON (s.name = v.name) INNER JOIN student s2 ON (s2.age = 
v.age and s.name = s2.name) INNER JOIN voter v2 ON (v2.name = s2.name and 
v2.age = s2.age) WHERE v2.age = s.age ORDER BY s.name, s2.age, s.gpa, 
v.registration, v2.contributions
INFO  : Query ID = hive_20160717224915_3a52719f-539f-4f82-a9cd-0c0af4e09ef8
INFO  : Total jobs = 1
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-1:MAPRED] in serial mode
INFO  : Session is already open
INFO  : Dag name: SELECT s.name, s2.age, sv2.contributions(Stage-1)
ERROR : Failed to execute tez graph.
java.lang.RuntimeException: Error caching map.xml: 
org.apache.hive.com.esotericsoftware.kryo.KryoException: 
java.util.ConcurrentModificationException
Serialization trace:
classes (sun.misc.Launcher$AppClassLoader)
classloader (java.security.ProtectionDomain)
context (java.security.AccessControlContext)
acc (org.apache.hadoop.hive.ql.exec.UDFClassLoader)
classLoader (org.apache.hadoop.hive.conf.HiveConf)
conf (org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics)
metrics 
(org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics$CodahaleMetricsScope)
openScopes (org.apache.hadoop.hive.ql.log.PerfLogger)
perfLogger (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
at 
org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:582) 
~[hive-exec-2.1.0.2.5.0.0-1009.jar:2.1.0.2.5.0.0-1009]
at 
org.apache.hadoop.hive.ql.exec.Utilities.setMapWork(Utilities.java:516) 
~[hive-exec-2.1.0.2.5.0.0-1009.jar:2.1.0.2.5.0.0-1009]
at 
org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:601) 
~[hive-exec-2.1.0.2.5.0.0-1009.jar:2.1.0.2.5.0.0-1009]
at 
org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:1147) 
~[hive-exec-2.1.0.2.5.0.0-1009.jar:2.1.0.2.5.0.0-1009]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.build(TezTask.java:390) 
~[hive-exec-2.1.0.2.5.0.0-1009.jar:2.1.0.2.5.0.0-1009]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:164) 
[hive-exec-2.1.0.2.5.0.0-1009.jar:2.1.0.2.5.0.0-1009]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
[hive-exec-2.1.0.2.5.0.0-1009.jar:2.1.0.2.5.0.0-1009]
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
[hive-exec-2.1.0.2.5.0.0-1009.jar:2.1.0.2.5.0.0-1009]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1865) 
[hive-exec-2.1.0.2.5.0.0-1009.jar:2.1.0.2.5.0.0-1009]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1569) 
[hive-exec-2.1.0.2.5.0.0-1
{code}
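The serialization trace above shows openScopes dragging non-serializable runtime state (metrics, conf, classloaders) into the serialized plan. Why marking the field transient fixes it can be sketched with plain Java serialization standing in for Kryo; the names mirror the trace but the classes are simplified stand-ins:

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

class TransientFieldSketch {
    static class PerfLoggerLike implements Serializable {
        // transient: skipped during serialization, so the runtime-only scope
        // map (and everything reachable from it) is never walked.
        transient Map<String, Object> openScopes = new HashMap<>();
        String name = "perfLogger";
    }

    public static void main(String[] args) throws Exception {
        PerfLoggerLike p = new PerfLoggerLike();
        p.openScopes.put("compile", new Object()); // non-serializable value

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(p); // succeeds: the transient field is skipped
        oos.flush();

        PerfLoggerLike copy = (PerfLoggerLike) new ObjectInputStream(
            new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(copy.openScopes); // null after deserialization
    }
}
```

Without the transient keyword, writeObject would fail on the non-serializable map value, analogous to Kryo tripping over the live scope map mid-query.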

[jira] [Created] (HIVE-14399) Fix test flakiness of org.apache.hive.hcatalog.listener.TestDbNotificationListener.cleanupNotifs

2016-08-01 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14399:
-

 Summary: Fix test flakiness of 
org.apache.hive.hcatalog.listener.TestDbNotificationListener.cleanupNotifs
 Key: HIVE-14399
 URL: https://issues.apache.org/jira/browse/HIVE-14399
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai


We get intermittent test failures in TestDbNotificationListener.cleanupNotifs. 
We shall make it stable.





[jira] [Created] (HIVE-14690) Query fail when hive.exec.parallel=true, with conflicting session dir

2016-09-01 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14690:
-

 Summary: Query fail when hive.exec.parallel=true, with conflicting 
session dir
 Key: HIVE-14690
 URL: https://issues.apache.org/jira/browse/HIVE-14690
 Project: Hive
  Issue Type: Bug
Affects Versions: 2.1.0, 1.3.0
Reporter: Daniel Dai
Assignee: Daniel Dai


This happens when hive.scratchdir.lock=true. Error message:
{code}
/hive/scratch/343hdirdp/cab907fc-5e1d-4d69-aa72-d7b442495c7a/inuse.info (inode 
19537): File does not exist. [Lease.  Holder: 
DFSClient_NONMAPREDUCE_1572639975_1, pendingcreates: 2]
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3430)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3235)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3073)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3033)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2133)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1668)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2131)

at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:535)
at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
{code}





[jira] [Created] (HIVE-14968) Fix compilation failure on branch-1

2016-10-14 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-14968:
-

 Summary: Fix compilation failure on branch-1
 Key: HIVE-14968
 URL: https://issues.apache.org/jira/browse/HIVE-14968
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 1.3.0


The branch-1 compilation fails due to:
HIVE-14436: Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException 
Error: , expected at the end of 'decimal(9'" after enabling 
hive.optimize.skewjoin and with MR engine
HIVE-14483: java.lang.ArrayIndexOutOfBoundsException 
org.apache.orc.impl.TreeReaderFactory.commonReadByteArrays

The 1.2 branch is fine.





[jira] [Created] (HIVE-15049) Fix unit test failures on branch-1

2016-10-24 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-15049:
-

 Summary: Fix unit test failures on branch-1
 Key: HIVE-15049
 URL: https://issues.apache.org/jira/browse/HIVE-15049
 Project: Hive
  Issue Type: Bug
  Components: Test
Affects Versions: 1.3.0
Reporter: Daniel Dai


While working on HIVE-14968, I noticed there are 36 test failures and quite a 
few other tests did not produce a TEST-*.xml file. At least some of the 
failures are valid. Here is one of the stacks:
{code}
java.lang.Exception: java.lang.RuntimeException: Error in configuring object
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.RuntimeException: Error in configuring object
at 
org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at 
org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:409)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
... 10 more
Caused by: java.lang.RuntimeException: Reduce operator initialization failed
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.configure(ExecReducer.java:157)
... 14 more
Caused by: java.lang.RuntimeException: Cannot find ExprNodeEvaluator for the 
exprNodeDesc = null
at 
org.apache.hadoop.hive.ql.exec.ExprNodeEvaluatorFactory.get(ExprNodeEvaluatorFactory.java:57)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:272)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:363)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:482)
at 
org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:439)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.configure(ExecReducer.java:150)
... 14 more
{code}





[jira] [Created] (HIVE-15068) Run ClearDanglingScratchDir periodically inside HS2

2016-10-26 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-15068:
-

 Summary: Run ClearDanglingScratchDir periodically inside HS2
 Key: HIVE-15068
 URL: https://issues.apache.org/jira/browse/HIVE-15068
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Daniel Dai
Assignee: Daniel Dai


In HIVE-13429, we introduced a tool which clears dangling scratch directories. In 
this ticket, we want to invoke the tool automatically on a Hive cluster. 
Options are:
1. A cron job, which would involve manual cron job setup
2. As a metastore thread. However, it is possible we will run the metastore without hdfs 
in the future (e.g., managing s3 files). ClearDanglingScratchDir needs support 
which only exists in hdfs, so it won't work in that scenario
3. As an HS2 thread. The downside is that if no HS2 is running, the tool will not run 
automatically. But we expect HS2 to be a required component down the road

Here I choose approach 3 in the implementation.
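A minimal sketch of approach 3 — a periodic background thread inside HS2 driving the cleanup — could look like the following. The class name, the stand-in clearDanglingScratchDirs() body, and the short fixed interval are assumptions for illustration only, not Hive's actual implementation (the real thread would read its interval from HiveConf and call the HIVE-13429 tool):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ScratchDirCleanupSketch {
    static final AtomicInteger runs = new AtomicInteger();

    // Stand-in for the real cleanup: in Hive this would scan the HDFS scratch
    // root and remove directories whose owning session no longer exists.
    static void clearDanglingScratchDirs() {
        runs.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run once at startup, then repeat on a fixed interval (1 second here
        // for demonstration; a real deployment would use a much longer period).
        scheduler.scheduleAtFixedRate(ScratchDirCleanupSketch::clearDanglingScratchDirs,
                0, 1, TimeUnit.SECONDS);
        Thread.sleep(2500);
        scheduler.shutdown();
        System.out.println("cleanup ran " + runs.get() + " times");
    }
}
```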





[jira] [Created] (HIVE-15322) Skipping "hbase mapredcp" in hive script for certain services

2016-11-30 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-15322:
-

 Summary: Skipping "hbase mapredcp" in hive script for certain 
services
 Key: HIVE-15322
 URL: https://issues.apache.org/jira/browse/HIVE-15322
 Project: Hive
  Issue Type: Improvement
Reporter: Daniel Dai
Assignee: Daniel Dai


"hbase mapredcp" is intended to append the hbase classpath to hive. However, the 
command can take some time when the system is heavily loaded; in some extreme 
cases, we saw ~20s of delay due to it. For certain commands, such as "schemaTool", 
the hbase classpath is certainly unnecessary, and we can safely skip invoking it.





[jira] [Commented] (HIVE-4578) Changes to Pig's test harness broke HCat e2e tests

2013-05-20 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662406#comment-13662406
 ] 

Daniel Dai commented on HIVE-4578:
--

+1

> Changes to Pig's test harness broke HCat e2e tests
> --
>
> Key: HIVE-4578
> URL: https://issues.apache.org/jira/browse/HIVE-4578
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.12.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Fix For: 0.12.0
>
> Attachments: HIVE-4578.patch
>
>
> HCatalog externs the test harness from Pig.  Pig recently made some changes 
> to the test harness to work better across Unix and Windows.  These changes 
> require new OS specific files.  HCatalog will also need these files in order 
> to work with the test harness.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4586) [HCatalog] WebHCat should return 400 error for undefined resource

2013-05-21 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-4586:


 Summary: [HCatalog] WebHCat should return 400 error for undefined 
resource
 Key: HIVE-4586
 URL: https://issues.apache.org/jira/browse/HIVE-4586
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai






[jira] [Commented] (HIVE-4586) [HCatalog] WebHCat should return 400 error for undefined resource

2013-05-21 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663611#comment-13663611
 ] 

Daniel Dai commented on HIVE-4586:
--

If a user requests a resource which does not exist, WebHCat returns a 500 error.
For example, http://localhost:50111/templeton/v1/ddl/databae/abc?user.name=hcat 
(misspelling "database").

This should be 400 instead.

> [HCatalog] WebHCat should return 400 error for undefined resource
> -
>
> Key: HIVE-4586
> URL: https://issues.apache.org/jira/browse/HIVE-4586
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>




[jira] [Updated] (HIVE-4586) [HCatalog] WebHCat should return 400 error for undefined resource

2013-05-21 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-4586:
-

Attachment: HIVE-4586-1.patch

Here is what WebHCat does after the patch:
1. If the resource name is misspelled, return 400
2. If the database schema name is wrong, return 404
3. For internal errors (such as HCatException), return 500

> [HCatalog] WebHCat should return 400 error for undefined resource
> -
>
> Key: HIVE-4586
> URL: https://issues.apache.org/jira/browse/HIVE-4586
> Project: Hive
>  Issue Type: Bug
>    Reporter: Daniel Dai
> Attachments: HIVE-4586-1.patch
>
>




[jira] [Updated] (HIVE-4586) [HCatalog] WebHCat should return 400 error for undefined resource

2013-05-21 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-4586:
-

Attachment: (was: HIVE-4586-1.patch)

> [HCatalog] WebHCat should return 400 error for undefined resource
> -
>
> Key: HIVE-4586
> URL: https://issues.apache.org/jira/browse/HIVE-4586
> Project: Hive
>  Issue Type: Bug
>    Reporter: Daniel Dai
> Attachments: HIVE-4586-1.patch
>
>




[jira] [Updated] (HIVE-4586) [HCatalog] WebHCat should return 400 error for undefined resource

2013-05-21 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-4586:
-

Attachment: HIVE-4586-1.patch

> [HCatalog] WebHCat should return 400 error for undefined resource
> -
>
> Key: HIVE-4586
> URL: https://issues.apache.org/jira/browse/HIVE-4586
> Project: Hive
>  Issue Type: Bug
>    Reporter: Daniel Dai
> Attachments: HIVE-4586-1.patch
>
>




[jira] [Commented] (HIVE-4586) [HCatalog] WebHCat should return 400 error for undefined resource

2013-05-21 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663625#comment-13663625
 ] 

Daniel Dai commented on HIVE-4586:
--

Change case 1 to 404 as well.
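With case 1 changed to 404, the resulting status mapping — undefined resources and unknown database names to 404, internal errors such as HCatException to 500 — can be sketched as a pure function. The enum and method names below are hypothetical, not WebHCat's actual classes:

```java
// Illustrative sketch of the final mapping in this issue; not WebHCat code.
public class WebHcatStatusSketch {
    enum ErrorKind { UNDEFINED_RESOURCE, UNKNOWN_DATABASE, INTERNAL }

    static int toHttpStatus(ErrorKind kind) {
        switch (kind) {
            case UNDEFINED_RESOURCE:  // e.g. /ddl/databae/... (misspelled resource name)
            case UNKNOWN_DATABASE:    // schema name does not exist
                return 404;
            default:
                return 500;           // internal errors such as HCatException
        }
    }

    public static void main(String[] args) {
        System.out.println(toHttpStatus(ErrorKind.UNDEFINED_RESOURCE)); // 404
    }
}
```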

> [HCatalog] WebHCat should return 400 error for undefined resource
> -
>
> Key: HIVE-4586
> URL: https://issues.apache.org/jira/browse/HIVE-4586
> Project: Hive
>  Issue Type: Bug
>    Reporter: Daniel Dai
> Attachments: HIVE-4586-1.patch
>
>




[jira] [Updated] (HIVE-4586) [HCatalog] WebHCat should return 404 error for undefined resource

2013-05-21 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-4586:
-

Summary: [HCatalog] WebHCat should return 404 error for undefined resource  
(was: [HCatalog] WebHCat should return 400 error for undefined resource)

> [HCatalog] WebHCat should return 404 error for undefined resource
> -
>
> Key: HIVE-4586
> URL: https://issues.apache.org/jira/browse/HIVE-4586
> Project: Hive
>  Issue Type: Bug
>    Reporter: Daniel Dai
> Attachments: HIVE-4586-1.patch
>
>




[jira] [Updated] (HIVE-4531) [WebHCat] Collecting task logs to hdfs

2013-05-22 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-4531:
-

Attachment: samplestatusdirwithlist.tar.gz
HIVE-4531-5.patch

Adding a list file to the logs. Attached samplestatusdirwithlist.tar.gz for a 
sample status directory. Here is a sample list file (list.txt):

job: job_201305221327_0068(name=PigLatin:73.pig,status=SUCCEEDED)
  
attempt:attempt_201305221327_0068_m_00_0(type=map,status=completed,starttime=22-May-2013
 17:10:26,endtime=22-May-2013 17:10:32)
  
attempt:attempt_201305221327_0068_m_02_0(type=setup,status=completed,starttime=22-May-2013
 17:10:17,endtime=22-May-2013 17:10:26)
  
attempt:attempt_201305221327_0068_m_01_0(type=cleanup,status=completed,starttime=22-May-2013
 17:10:32,endtime=22-May-2013 17:10:38)

job: job_201305221327_0069(name=PigLatin:73.pig,status=SUCCEEDED)
  
attempt:attempt_201305221327_0069_m_00_0(type=map,status=completed,starttime=22-May-2013
 17:10:53,endtime=22-May-2013 17:10:59)
  
attempt:attempt_201305221327_0069_r_00_0(type=reduce,status=completed,starttime=22-May-2013
 17:10:59,endtime=22-May-2013 17:11:11)
  
attempt:attempt_201305221327_0069_m_02_0(type=setup,status=completed,starttime=22-May-2013
 17:10:44,endtime=22-May-2013 17:10:53)
  
attempt:attempt_201305221327_0069_m_01_0(type=cleanup,status=completed,starttime=22-May-2013
 17:11:11,endtime=22-May-2013 17:11:17)

job: job_201305221327_0070(name=PigLatin:73.pig,status=SUCCEEDED)
  
attempt:attempt_201305221327_0070_m_00_0(type=map,status=completed,starttime=22-May-2013
 17:11:32,endtime=22-May-2013 17:11:38)
  
attempt:attempt_201305221327_0070_r_00_0(type=reduce,status=completed,starttime=22-May-2013
 17:11:38,endtime=22-May-2013 17:11:50)
  
attempt:attempt_201305221327_0070_m_02_0(type=setup,status=completed,starttime=22-May-2013
 17:11:23,endtime=22-May-2013 17:11:32)
  
attempt:attempt_201305221327_0070_m_01_0(type=cleanup,status=completed,starttime=22-May-2013
 17:11:50,endtime=22-May-2013 17:11:56)

job: job_201305221327_0071(name=PigLatin:73.pig,status=FAILED)
  
attempt:attempt_201305221327_0071_m_00_0(type=map,status=completed,starttime=22-May-2013
 17:12:11,endtime=22-May-2013 17:12:17)
  
attempt:attempt_201305221327_0071_m_01_0(type=map,status=completed,starttime=22-May-2013
 17:12:17,endtime=22-May-2013 17:12:23)
  
attempt:attempt_201305221327_0071_m_03_0(type=setup,status=completed,starttime=22-May-2013
 17:12:02,endtime=22-May-2013 17:12:11)
  
attempt:attempt_201305221327_0071_m_02_0(type=cleanup,status=completed,starttime=22-May-2013
 17:13:11,endtime=22-May-2013 17:13:17)
  
attempt:attempt_201305221327_0071_r_00_0(type=reduce,status=failed,starttime=22-May-2013
 17:12:17,endtime=22-May-2013 17:12:29)
  
attempt:attempt_201305221327_0071_r_00_1(type=reduce,status=failed,starttime=22-May-2013
 17:12:35,endtime=22-May-2013 17:12:33)
  
attempt:attempt_201305221327_0071_r_00_2(type=reduce,status=failed,starttime=22-May-2013
 17:12:47,endtime=22-May-2013 17:12:43)
  
attempt:attempt_201305221327_0071_r_00_3(type=reduce,status=failed,starttime=22-May-2013
 17:12:59,endtime=22-May-2013 17:12:46)


> [WebHCat] Collecting task logs to hdfs
> --
>
> Key: HIVE-4531
> URL: https://issues.apache.org/jira/browse/HIVE-4531
> Project: Hive
>  Issue Type: New Feature
>  Components: HCatalog
>        Reporter: Daniel Dai
> Attachments: HIVE-4531-1.patch, HIVE-4531-2.patch, HIVE-4531-3.patch, 
> HIVE-4531-4.patch, HIVE-4531-5.patch, samplestatusdirwithlist.tar.gz
>
>
> It would be nice we collect task logs after job finish. This is similar to 
> what Amazon EMR does.



[jira] [Commented] (HIVE-2670) A cluster test utility for Hive

2013-06-01 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672204#comment-13672204
 ] 

Daniel Dai commented on HIVE-2670:
--

Checkin_2 fails on my Mac. Changing hive_nightly.conf:41 from:
  'sortArgs' => ['-t', '   ', '+1', '-2'],
to
  'sortArgs' => ['-t', '', '-k', '2,2n'],
solves the problem.

Otherwise +1.

> A cluster test utility for Hive
> ---
>
> Key: HIVE-2670
> URL: https://issues.apache.org/jira/browse/HIVE-2670
> Project: Hive
>  Issue Type: New Feature
>  Components: Testing Infrastructure
>Reporter: Alan Gates
>Assignee: Johnny Zhang
> Attachments: harness.tar, HIVE-2670_5.patch, 
> hive_cluster_test_2.patch, hive_cluster_test_3.patch, 
> hive_cluster_test_4.patch, hive_cluster_test.patch
>
>
> Hive has an extensive set of unit tests, but it does not have an 
> infrastructure for testing in a cluster environment.  Pig and HCatalog have 
> been using a test harness for cluster testing for some time.  We have written 
> Hive drivers and tests to run in this harness.



[jira] [Created] (HIVE-4677) [HCatalog] WebHCat e2e tests fail on Hadoop 2

2013-06-06 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-4677:


 Summary: [HCatalog] WebHCat e2e tests fail on Hadoop 2
 Key: HIVE-4677
 URL: https://issues.apache.org/jira/browse/HIVE-4677
 Project: Hive
  Issue Type: Bug
Reporter: Daniel Dai
 Attachments: HIVE-4677-1.patch

curl 
http://hor5n26.gq1.ygridcore.net:50111/templeton/v1/queue/job_1370377838831_0012?user.name=hrt_qa
{"error":"Does not contain a valid host:port authority: local"}

Here is the detailed stacktrace from the server:
{code}
WARN  | 04 Jun 2013 22:21:52,204 | org.apache.hadoop.conf.Configuration | 
mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
ERROR | 04 Jun 2013 22:21:52,204 | 
org.apache.hcatalog.templeton.CatchallExceptionMapper | Does not contain a 
valid host:port authority: local
java.lang.IllegalArgumentException: Does not contain a valid host:port 
authority: local
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
at 
org.apache.hcatalog.templeton.TempletonDelegator.getAddress(TempletonDelegator.java:41)
at 
org.apache.hcatalog.templeton.StatusDelegator.run(StatusDelegator.java:47)
at org.apache.hcatalog.templeton.Server.showQueueId(Server.java:688)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)

{code}
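The failure above comes from handing the local-mode value "local" (which mapreduce.jobtracker.address takes when no real job tracker is configured) to a host:port parser. A hedged sketch of the kind of guard a fix might add — detecting "local" before parsing — is below; the class and method names are illustrative, not the actual patch:

```java
// Illustrative sketch only: "local" is not a host:port authority, so it must
// be handled before any host:port parsing is attempted.
public class JobTrackerAddressSketch {
    static boolean isLocalMode(String jobTrackerAddr) {
        return jobTrackerAddr == null || "local".equals(jobTrackerAddr.trim());
    }

    // Parse "host:port" only when not in local mode.
    static String[] parseHostPort(String addr) {
        if (isLocalMode(addr)) {
            throw new IllegalStateException("local mode: no job tracker host:port to contact");
        }
        int colon = addr.lastIndexOf(':');
        if (colon <= 0 || colon == addr.length() - 1) {
            throw new IllegalArgumentException(
                    "Does not contain a valid host:port authority: " + addr);
        }
        return new String[] { addr.substring(0, colon), addr.substring(colon + 1) };
    }

    public static void main(String[] args) {
        System.out.println(isLocalMode("local"));            // true
        System.out.println(parseHostPort("node1:8032")[1]);  // 8032
    }
}
```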



[jira] [Updated] (HIVE-4677) [HCatalog] WebHCat e2e tests fail on Hadoop 2

2013-06-06 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-4677:
-

Attachment: HIVE-4677-1.patch

> [HCatalog] WebHCat e2e tests fail on Hadoop 2
> -
>
> Key: HIVE-4677
> URL: https://issues.apache.org/jira/browse/HIVE-4677
> Project: Hive
>  Issue Type: Bug
>    Reporter: Daniel Dai
> Attachments: HIVE-4677-1.patch
>
>
> curl 
> http://hor5n26.gq1.ygridcore.net:50111/templeton/v1/queue/job_1370377838831_0012?user.name=hrt_qa
> {"error":"Does not contain a valid host:port authority: local"}
> Here is the detailed stacktrace from the server:
> {code}
> WARN  | 04 Jun 2013 22:21:52,204 | org.apache.hadoop.conf.Configuration | 
> mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
> ERROR | 04 Jun 2013 22:21:52,204 | 
> org.apache.hcatalog.templeton.CatchallExceptionMapper | Does not contain a 
> valid host:port authority: local
> java.lang.IllegalArgumentException: Does not contain a valid host:port 
> authority: local
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
> at 
> org.apache.hcatalog.templeton.TempletonDelegator.getAddress(TempletonDelegator.java:41)
> at 
> org.apache.hcatalog.templeton.StatusDelegator.run(StatusDelegator.java:47)
> at org.apache.hcatalog.templeton.Server.showQueueId(Server.java:688)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> 
> {code}



[jira] [Commented] (HIVE-4784) ant testreport doesn't include any HCatalog tests

2013-06-28 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695584#comment-13695584
 ] 

Daniel Dai commented on HIVE-4784:
--

+1

> ant testreport doesn't include any HCatalog tests
> -
>
> Key: HIVE-4784
> URL: https://issues.apache.org/jira/browse/HIVE-4784
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.11.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.12.0
>
> Attachments: HIVE-4784.patch
>
>
> run 
> 1. ant test -Dmodule=hcatalog
> 2. ant testreport
> no .html file is generated.  In particular, Apache builds don't show anything 
> about HCat test (so it's not even obvious if it's running them)



[jira] [Commented] (HIVE-4784) ant testreport doesn't include any HCatalog tests

2013-06-28 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695597#comment-13695597
 ] 

Daniel Dai commented on HIVE-4784:
--

Patch committed to trunk.

> ant testreport doesn't include any HCatalog tests
> -
>
> Key: HIVE-4784
> URL: https://issues.apache.org/jira/browse/HIVE-4784
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.11.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.12.0
>
> Attachments: HIVE-4784.patch
>
>
> run 
> 1. ant test -Dmodule=hcatalog
> 2. ant testreport
> no .html file is generated.  In particular, Apache builds don't show anything 
> about HCat test (so it's not even obvious if it's running them)



[jira] [Commented] (HIVE-4591) Making changes to webhcat-site.xml have no effect

2013-06-28 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695599#comment-13695599
 ] 

Daniel Dai commented on HIVE-4591:
--

Patch committed to trunk.

> Making changes to webhcat-site.xml have no effect
> -
>
> Key: HIVE-4591
> URL: https://issues.apache.org/jira/browse/HIVE-4591
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.11.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.12.0
>
> Attachments: HIVE-4591.patch
>
>   Original Estimate: 24h
>  Time Spent: 4h
>  Remaining Estimate: 20h
>
> Looks like WebHCat configuration is read as follows:
> Configuration: core-default.xml, core-site.xml, mapred-default.xml, 
> mapred-site.xml, 
> jar:file:/Users/ekoifman/dev/hive/build/dist/hcatalog/share/webhcat/svr/webhcat-0.12.0-SNAPSHOT.jar!/webhcat-default.xml
> creating 
> /Users/ekoifman/dev/hive/build/dist/hcatalog/etc/webhcat/webhcat-site.xml and 
> setting templeton.exec.timeout has no effect as can be seen in ExecServiceImpl
> Probably the webhcat_server.sh script is missing something



[jira] [Updated] (HIVE-4591) Making changes to webhcat-site.xml have no effect

2013-06-28 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-4591:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Making changes to webhcat-site.xml have no effect
> -
>
> Key: HIVE-4591
> URL: https://issues.apache.org/jira/browse/HIVE-4591
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.11.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.12.0
>
> Attachments: HIVE-4591.patch
>
>   Original Estimate: 24h
>  Time Spent: 4h
>  Remaining Estimate: 20h
>
> Looks like WebHCat configuration is read as follows:
> Configuration: core-default.xml, core-site.xml, mapred-default.xml, 
> mapred-site.xml, 
> jar:file:/Users/ekoifman/dev/hive/build/dist/hcatalog/share/webhcat/svr/webhcat-0.12.0-SNAPSHOT.jar!/webhcat-default.xml
> creating 
> /Users/ekoifman/dev/hive/build/dist/hcatalog/etc/webhcat/webhcat-site.xml and 
> setting templeton.exec.timeout has no effect as can be seen in ExecServiceImpl
> Probably the webhcat_server.sh script is missing something



[jira] [Updated] (HIVE-4784) ant testreport doesn't include any HCatalog tests

2013-06-28 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-4784:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> ant testreport doesn't include any HCatalog tests
> -
>
> Key: HIVE-4784
> URL: https://issues.apache.org/jira/browse/HIVE-4784
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.11.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.12.0
>
> Attachments: HIVE-4784.patch
>
>
> run 
> 1. ant test -Dmodule=hcatalog
> 2. ant testreport
> no .html file is generated.  In particular, Apache builds don't show anything 
> about HCat test (so it's not even obvious if it's running them)



[jira] [Updated] (HIVE-4820) webhcat_config.sh should set default values for HIVE_HOME and HCAT_PREFIX that work with default build tree structure

2013-07-16 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-4820:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks Eugene!

> webhcat_config.sh should set default values for HIVE_HOME and HCAT_PREFIX 
> that work with default build tree structure
> -
>
> Key: HIVE-4820
> URL: https://issues.apache.org/jira/browse/HIVE-4820
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.12.0
>
> Attachments: hive4820.2.patch, HIVE4820.patch
>
>
> Currently they are expected to be set by the user which makes development 
> inconvenient.
> It makes writing unit tests for WebHcat more difficult as well.



[jira] [Updated] (HIVE-4893) [WebHCat] HTTP 500 errors should be mapped to 400 for bad request

2013-07-19 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-4893:
-

Attachment: HIVE-4893-1.patch

> [WebHCat] HTTP 500 errors should be mapped to 400 for bad request
> -
>
> Key: HIVE-4893
> URL: https://issues.apache.org/jira/browse/HIVE-4893
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>        Reporter: Daniel Dai
>    Assignee: Daniel Dai
> Fix For: 0.12.0
>
> Attachments: HIVE-4893-1.patch
>
>




[jira] [Created] (HIVE-4893) [WebHCat] HTTP 500 errors should be mapped to 400 for bad request

2013-07-19 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-4893:


 Summary: [WebHCat] HTTP 500 errors should be mapped to 400 for bad 
request
 Key: HIVE-4893
 URL: https://issues.apache.org/jira/browse/HIVE-4893
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0
 Attachments: HIVE-4893-1.patch





[jira] [Commented] (HIVE-4677) [HCatalog] WebHCat e2e tests fail on Hadoop 2

2013-07-22 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13716047#comment-13716047
 ] 

Daniel Dai commented on HIVE-4677:
--

Fixed, thanks.

> [HCatalog] WebHCat e2e tests fail on Hadoop 2
> -
>
> Key: HIVE-4677
> URL: https://issues.apache.org/jira/browse/HIVE-4677
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>    Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.12.0
>
> Attachments: HIVE-4677-1.patch
>
>
> curl 
> http://hor5n26.gq1.ygridcore.net:50111/templeton/v1/queue/job_1370377838831_0012?user.name=hrt_qa
> {"error":"Does not contain a valid host:port authority: local"}
> Here is the detailed stacktrace from the server:
> {code}
> WARN  | 04 Jun 2013 22:21:52,204 | org.apache.hadoop.conf.Configuration | 
> mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
> ERROR | 04 Jun 2013 22:21:52,204 | 
> org.apache.hcatalog.templeton.CatchallExceptionMapper | Does not contain a 
> valid host:port authority: local
> java.lang.IllegalArgumentException: Does not contain a valid host:port 
> authority: local
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
> at 
> org.apache.hcatalog.templeton.TempletonDelegator.getAddress(TempletonDelegator.java:41)
> at 
> org.apache.hcatalog.templeton.StatusDelegator.run(StatusDelegator.java:47)
> at org.apache.hcatalog.templeton.Server.showQueueId(Server.java:688)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> 
> {code}



[jira] [Created] (HIVE-5012) [HCatalog] Make HCatalog work on Windows

2013-08-06 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-5012:


 Summary: [HCatalog] Make HCatalog work on Windows
 Key: HIVE-5012
 URL: https://issues.apache.org/jira/browse/HIVE-5012
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0


This is an umbrella Jira equivalent to HCATALOG-511. The individual Jiras are 
updated to work with the current trunk.



[jira] [Created] (HIVE-5013) [HCatalog] Create hcat.py, hcat_server.py to make HCatalog work on Windows

2013-08-06 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-5013:


 Summary: [HCatalog] Create hcat.py, hcat_server.py to make 
HCatalog work on Windows
 Key: HIVE-5013
 URL: https://issues.apache.org/jira/browse/HIVE-5013
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0






[jira] [Created] (HIVE-5014) [HCatalog] Fix HCatalog build issue on Windows

2013-08-06 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-5014:


 Summary: [HCatalog] Fix HCatalog build issue on Windows
 Key: HIVE-5014
 URL: https://issues.apache.org/jira/browse/HIVE-5014
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0






[jira] [Updated] (HIVE-5014) [HCatalog] Fix HCatalog build issue on Windows

2013-08-06 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-5014:
-

Issue Type: Sub-task  (was: Bug)
Parent: HIVE-5012

> [HCatalog] Fix HCatalog build issue on Windows
> --
>
> Key: HIVE-5014
> URL: https://issues.apache.org/jira/browse/HIVE-5014
> Project: Hive
>  Issue Type: Sub-task
>  Components: HCatalog
>        Reporter: Daniel Dai
>    Assignee: Daniel Dai
> Fix For: 0.12.0
>
>




[jira] [Created] (HIVE-5015) [HCatalog] Fix HCatalog unit tests on Windows

2013-08-06 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-5015:


 Summary: [HCatalog] Fix HCatalog unit tests on Windows
 Key: HIVE-5015
 URL: https://issues.apache.org/jira/browse/HIVE-5015
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0






[jira] [Updated] (HIVE-5013) [HCatalog] Create hcat.py, hcat_server.py to make HCatalog work on Windows

2013-08-06 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-5013:
-

Attachment: HIVE-5013-1.patch

> [HCatalog] Create hcat.py, hcat_server.py to make HCatalog work on Windows
> --
>
> Key: HIVE-5013
> URL: https://issues.apache.org/jira/browse/HIVE-5013
> Project: Hive
>  Issue Type: Sub-task
>  Components: HCatalog
>        Reporter: Daniel Dai
>    Assignee: Daniel Dai
> Fix For: 0.12.0
>
> Attachments: HIVE-5013-1.patch
>
>




[jira] [Updated] (HIVE-5014) [HCatalog] Fix HCatalog build issue on Windows

2013-08-06 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-5014:
-

Attachment: HIVE-5014-1.patch

> [HCatalog] Fix HCatalog build issue on Windows
> --
>
> Key: HIVE-5014
> URL: https://issues.apache.org/jira/browse/HIVE-5014
> Project: Hive
>  Issue Type: Sub-task
>  Components: HCatalog
>        Reporter: Daniel Dai
>    Assignee: Daniel Dai
> Fix For: 0.12.0
>
> Attachments: HIVE-5014-1.patch
>
>



