Build failed in Hudson: Hive-trunk-h0.20 #491

2011-01-16 Thread Apache Hudson Server
See 

--
[...truncated 24584 lines...]
[junit] PREHOOK: Output: 
file:/tmp/hudson/hive_2011-01-16_11-04-17_648_6366922926584381024/-mr-1
[junit] POSTHOOK: query: create view testHiveJdbcDriverView comment 'Simple 
view' as select * from testHiveJdbcDriverTable
[junit] POSTHOOK: type: CREATEVIEW
[junit] POSTHOOK: Output: default@testHiveJdbcDriverView
[junit] POSTHOOK: Output: 
file:/tmp/hudson/hive_2011-01-16_11-04-17_648_6366922926584381024/-mr-1
[junit] OK
[junit] PREHOOK: query: select c1, c2, c3, c4, c5 as a, c6, c7, c8, c9, 
c10, c11, c12, c1*2, sentences(null, null, null) as b from testDataTypeTable 
limit 1
[junit] PREHOOK: type: QUERY
[junit] PREHOOK: Input: default@testdatatypetable@dt=20090619
[junit] PREHOOK: Output: 
file:/tmp/hudson/hive_2011-01-16_11-04-17_675_4693918222610335762/-mr-1
[junit] Total MapReduce jobs = 1
[junit] Launching Job 1 out of 1
[junit] Number of reduce tasks is set to 0 since there's no reduce operator
[junit] Job running in-process (local Hadoop)
[junit] 2011-01-16 11:04:20,500 null map = 100%,  reduce = 0%
[junit] Ended Job = job_local_0001
[junit] POSTHOOK: query: select c1, c2, c3, c4, c5 as a, c6, c7, c8, c9, 
c10, c11, c12, c1*2, sentences(null, null, null) as b from testDataTypeTable 
limit 1
[junit] POSTHOOK: type: QUERY
[junit] POSTHOOK: Input: default@testdatatypetable@dt=20090619
[junit] POSTHOOK: Output: 
file:/tmp/hudson/hive_2011-01-16_11-04-17_675_4693918222610335762/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testHiveJdbcDriverTable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivejdbcdrivertable
[junit] PREHOOK: Output: default@testhivejdbcdrivertable
[junit] POSTHOOK: query: drop table testHiveJdbcDriverTable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivejdbcdrivertable
[junit] POSTHOOK: Output: default@testhivejdbcdrivertable
[junit] OK
[junit] PREHOOK: query: drop table testHiveJdbcDriverPartitionedTable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivejdbcdriverpartitionedtable
[junit] PREHOOK: Output: default@testhivejdbcdriverpartitionedtable
[junit] POSTHOOK: query: drop table testHiveJdbcDriverPartitionedTable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivejdbcdriverpartitionedtable
[junit] POSTHOOK: Output: default@testhivejdbcdriverpartitionedtable
[junit] OK
[junit] PREHOOK: query: drop table testDataTypeTable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testdatatypetable
[junit] PREHOOK: Output: default@testdatatypetable
[junit] POSTHOOK: query: drop table testDataTypeTable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testdatatypetable
[junit] POSTHOOK: Output: default@testdatatypetable
[junit] OK
[junit] Hive history 
file=
[junit] Hive history 
file=
[junit] PREHOOK: query: drop table testHiveJdbcDriverTable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testHiveJdbcDriverTable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testHiveJdbcDriverTable (key int 
comment 'the key', value string) comment 'Simple table'
[junit] PREHOOK: type: CREATETABLE
[junit] POSTHOOK: query: create table testHiveJdbcDriverTable (key int 
comment 'the key', value string) comment 'Simple table'
[junit] POSTHOOK: type: CREATETABLE
[junit] POSTHOOK: Output: default@testHiveJdbcDriverTable
[junit] OK
[junit] PREHOOK: query: load data local inpath 
'
 into table testHiveJdbcDriverTable
[junit] PREHOOK: type: LOAD
[junit] Copying data from 

[junit] Loading data to table testhivejdbcdrivertable
[junit] POSTHOOK: query: load data local inpath 
'
 into table testHiveJdbcDriverTable
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@testhivejdbcdrivertable
[junit] OK
[junit] PREHOOK: query: drop table testHiveJdbcDriverPartitionedTable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testHiveJdbcDriverPartitionedTable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testHiveJdbcDriverPartitionedTable 

[jira] Commented: (HIVE-1579) showJobFailDebugInfo fails job if tasktracker does not respond

2011-01-16 Thread Richard Williamson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12982375#action_12982375
 ] 

Richard Williamson commented on HIVE-1579:
--

Same issue running Hive 0.6 against CDH3B2... I encountered it after re-running a 
failed query (the original error was a GC out-of-memory, so I increased memory for 
the second run). I have re-run it multiple times with the same error, restarted the 
tasktracker on the node that keeps reporting the error, and restarted mapred across 
the cluster. The behavior is that the query hangs for multiple hours without making 
progress and then fails with this error (the same node is listed both times in the 
error log).

> showJobFailDebugInfo fails job if tasktracker does not respond
> --
>
> Key: HIVE-1579
> URL: https://issues.apache.org/jira/browse/HIVE-1579
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Joydeep Sen Sarma
>Assignee: Paul Yang
>
> here's the stack trace:
> java.lang.RuntimeException: Error while reading from task log url
>   at 
> org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:130)
>   at 
> org.apache.hadoop.hive.ql.exec.ExecDriver.showJobFailDebugInfo(ExecDriver.java:844)
>   at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:624)
>   at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:120)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:108)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:609)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:478)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:356)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:140)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:199)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:316)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: java.io.FileNotFoundException: 
> http://hadoop0062.snc3.facebook.com.:50060/tasklog?taskid=attempt_201008191557_26566\
> _m_01_3&all=true
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1239)
>   at java.net.URL.openStream(URL.java:1009)
>   at 
> org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:120)
>   ... 16 more
> Ended Job = job_201008191557_26566 with exception 
> 'java.lang.RuntimeException(Error while reading from task log url)'
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MapRedTask
> this failed a multi hour script.
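The failure mode in the stack trace above is that a purely diagnostic step (fetching the task log over HTTP) propagates its exception and sinks the whole job. A minimal sketch of the defensive pattern the issue asks for, in Python for brevity (the function name and URL shape are illustrative, not Hive's actual code):

```python
from urllib.request import urlopen

def fetch_task_log(url, timeout=5):
    # Best-effort fetch of a tasktracker log page. Diagnostics should
    # degrade gracefully: if the tracker is gone or unreachable, return
    # None instead of raising, so the caller can skip the debug info
    # rather than fail the job. URLError is an OSError subclass, so one
    # except clause covers DNS failures, refused connections, and timeouts.
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except OSError:
        return None
```
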

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1731) Improve miscellaneous error messages

2011-01-16 Thread Adam Kramer (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12982419#action_12982419
 ] 

Adam Kramer commented on HIVE-1731:
---

FAILED: Error in semantic analysis: AS clause has an invalid number of aliases

...this should provide the line number and column number on which the invalid 
AS clause begins and ends. Subqueries mean there could be more than one.
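A concrete shape for the improved message Adam is asking for, sketched in Python (the helper name and exact format are hypothetical, not Hive's):

```python
def alias_count_error(expr_count, alias_count, line, col):
    # Hypothetical improved message for HIVE-1731: same text as today,
    # plus the counts and the position where the AS clause starts, so the
    # offending clause can be found even when subqueries contain several.
    return ("FAILED: Error in semantic analysis: AS clause has an invalid "
            f"number of aliases (expected {expr_count}, got {alias_count}) "
            f"at line {line}, column {col}")
```
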

> Improve miscellaneous error messages
> 
>
> Key: HIVE-1731
> URL: https://issues.apache.org/jira/browse/HIVE-1731
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: John Sichi
> Fix For: 0.7.0
>
>
> This is a place for accumulating error message improvements so that we can 
> update a bunch in batch.




Storage Handler using JDBC

2011-01-16 Thread Vijay
The storage handler mechanism seems like an excellent way to support
mixing hive with a traditional database using a generic JDBC storage
handler. While that may not always be the best thing to do, is there
any work targeted at this integration? Are there any issues or
problems preventing such an integration?

Thanks,
Vijay


[jira] Updated: (HIVE-1917) CTAS (create-table-as-select) throws exception when showing results

2011-01-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-1917:
-

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

Committed. Thanks Ning

> CTAS (create-table-as-select) throws exception when showing results
> ---
>
> Key: HIVE-1917
> URL: https://issues.apache.org/jira/browse/HIVE-1917
> Project: Hive
>  Issue Type: Bug
>Reporter: Ning Zhang
>Assignee: Ning Zhang
> Attachments: HIVE-1917.patch
>
>
> CTAS throws an exception in CliDriver when showing results at the end of a 
> query. CTAS should not show results because it is not a 'select' query or 
> 'desc'/explain etc. It should be the same as create table/view/index and 
> insert overwrite statements. 




[jira] Updated: (HIVE-1915) authorization on database level is broken.

2011-01-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-1915:
-

Status: Open  (was: Patch Available)

> authorization on database level is broken.
> --
>
> Key: HIVE-1915
> URL: https://issues.apache.org/jira/browse/HIVE-1915
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Security
>Reporter: He Yongqiang
>Assignee: He Yongqiang
> Attachments: HIVE-1915.1.patch
>
>
> CREATE DATABASE IF NOT EXISTS test_db COMMENT 'Hive test database';
> SHOW DATABASES;
> grant `drop` on DATABASE test_db to user hive_test_user;
> grant `select` on DATABASE test_db to user hive_test_user;
> show grant user hive_test_user on DATABASE test_db;
> DROP DATABASE IF EXISTS test_db;
> will fail.




[jira] Commented: (HIVE-1915) authorization on database level is broken.

2011-01-16 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12982463#action_12982463
 ] 

Namit Jain commented on HIVE-1915:
--

All tables/partitions are dropped in QTestUtil at the end of a unit test.

Similarly, you should also drop all databases/roles in QTestUtil - I was getting
some intermittent errors and traced the root cause to that missing cleanup.
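QTestUtil itself is Java; the cleanup Namit describes can be sketched generically (Python, illustrative names, with `execute` standing in for whatever runs a HiveQL statement):

```python
def cleanup_test_state(execute, databases=(), roles=()):
    # Mirror the existing table/partition cleanup: unconditionally drop
    # every database and role a test may have created, so no state leaks
    # into the next test and causes intermittent failures.
    for db in databases:
        execute(f"DROP DATABASE IF EXISTS {db} CASCADE")
    for role in roles:
        execute(f"DROP ROLE {role}")
```
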





[jira] Commented: (HIVE-1579) showJobFailDebugInfo fails job if tasktracker does not respond

2011-01-16 Thread Richard Williamson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12982475#action_12982475
 ] 

Richard Williamson commented on HIVE-1579:
--

I may have found the root cause of this error: when increasing the memory for my 
failed runs, I left off the leading dash, as follows:
set mapred.child.java.opts=Xmx1100M;
After correcting it to:
set mapred.child.java.opts=-Xmx1100M;
it ran without errors.
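The mistake is easy to make because neither Hive nor Hadoop validates the option string; a one-line sanity check of the kind that would have caught it (illustrative, not part of either project):

```python
def looks_like_jvm_opt(opt):
    # JVM flags start with '-'. A value like 'Xmx1100M' (dash missing) is
    # passed to the child JVM anyway, where it is not parsed as a flag, so
    # the heap setting never takes effect and the original out-of-memory
    # error keeps recurring.
    return opt.startswith("-")
```
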





Re: Storage Handler using JDBC

2011-01-16 Thread John Sichi
On Jan 15, 2011, at 12:36 PM, Vijay wrote:

> The storage handler mechanism seems like an excellent way to support
> mixing hive with a traditional database using a generic JDBC storage
> handler. While that may not always be the best thing to do, is there
> any work targeted at this integration? Are there any issues or
> problems preventing such an integration?


Hi Vijay,

This request is being tracked, but I'm not aware of anyone actively working on 
it at the moment:

https://issues.apache.org/jira/browse/HIVE-1555

JVS



Build failed in Hudson: Hive-trunk-h0.20 #492

2011-01-16 Thread Apache Hudson Server
See 

Changes:

[namit] HIVE-1917 CTAS (create-table-as-select) throws exception when showing
results (Ning Zhang via namit)

--
[...truncated 24582 lines...]
[junit] PREHOOK: Output: 
file:/tmp/hudson/hive_2011-01-16_23-23-12_353_4609221742616710084/-mr-1
[junit] POSTHOOK: query: create view testHiveJdbcDriverView comment 'Simple 
view' as select * from testHiveJdbcDriverTable
[junit] POSTHOOK: type: CREATEVIEW
[junit] POSTHOOK: Output: default@testHiveJdbcDriverView
[junit] POSTHOOK: Output: 
file:/tmp/hudson/hive_2011-01-16_23-23-12_353_4609221742616710084/-mr-1
[junit] OK
[junit] PREHOOK: query: select c1, c2, c3, c4, c5 as a, c6, c7, c8, c9, 
c10, c11, c12, c1*2, sentences(null, null, null) as b from testDataTypeTable 
limit 1
[junit] PREHOOK: type: QUERY
[junit] PREHOOK: Input: default@testdatatypetable@dt=20090619
[junit] PREHOOK: Output: 
file:/tmp/hudson/hive_2011-01-16_23-23-12_381_5138612469464623136/-mr-1
[junit] Total MapReduce jobs = 1
[junit] Launching Job 1 out of 1
[junit] Number of reduce tasks is set to 0 since there's no reduce operator
[junit] Job running in-process (local Hadoop)
[junit] 2011-01-16 23:23:15,204 null map = 100%,  reduce = 0%
[junit] Ended Job = job_local_0001
[junit] POSTHOOK: query: select c1, c2, c3, c4, c5 as a, c6, c7, c8, c9, 
c10, c11, c12, c1*2, sentences(null, null, null) as b from testDataTypeTable 
limit 1
[junit] POSTHOOK: type: QUERY
[junit] POSTHOOK: Input: default@testdatatypetable@dt=20090619
[junit] POSTHOOK: Output: 
file:/tmp/hudson/hive_2011-01-16_23-23-12_381_5138612469464623136/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testHiveJdbcDriverTable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivejdbcdrivertable
[junit] PREHOOK: Output: default@testhivejdbcdrivertable
[junit] POSTHOOK: query: drop table testHiveJdbcDriverTable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivejdbcdrivertable
[junit] POSTHOOK: Output: default@testhivejdbcdrivertable
[junit] OK
[junit] PREHOOK: query: drop table testHiveJdbcDriverPartitionedTable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivejdbcdriverpartitionedtable
[junit] PREHOOK: Output: default@testhivejdbcdriverpartitionedtable
[junit] POSTHOOK: query: drop table testHiveJdbcDriverPartitionedTable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivejdbcdriverpartitionedtable
[junit] POSTHOOK: Output: default@testhivejdbcdriverpartitionedtable
[junit] OK
[junit] PREHOOK: query: drop table testDataTypeTable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testdatatypetable
[junit] PREHOOK: Output: default@testdatatypetable
[junit] POSTHOOK: query: drop table testDataTypeTable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testdatatypetable
[junit] POSTHOOK: Output: default@testdatatypetable
[junit] OK
[junit] Hive history 
file=
[junit] Hive history 
file=
[junit] PREHOOK: query: drop table testHiveJdbcDriverTable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testHiveJdbcDriverTable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testHiveJdbcDriverTable (key int 
comment 'the key', value string) comment 'Simple table'
[junit] PREHOOK: type: CREATETABLE
[junit] POSTHOOK: query: create table testHiveJdbcDriverTable (key int 
comment 'the key', value string) comment 'Simple table'
[junit] POSTHOOK: type: CREATETABLE
[junit] POSTHOOK: Output: default@testHiveJdbcDriverTable
[junit] OK
[junit] PREHOOK: query: load data local inpath 
'
 into table testHiveJdbcDriverTable
[junit] PREHOOK: type: LOAD
[junit] Copying data from 

[junit] Loading data to table testhivejdbcdrivertable
[junit] POSTHOOK: query: load data local inpath 
'
 into table testHiveJdbcDriverTable
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@testhivejdbcdrivertable
[junit] OK
[junit] PREHOOK: query: drop table testHiveJdbcDriverPartitionedTable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testHiveJdbcDriverPartitionedTable