[jira] [Commented] (HIVE-5681) Validation doesn't catch SMBMapJoin

2013-10-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808780#comment-13808780
 ] 

Hive QA commented on HIVE-5681:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610940/HIVE-5681.1.patch

{color:green}SUCCESS:{color} +1 4516 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/8/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/8/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> Validation doesn't catch SMBMapJoin
> ---
>
> Key: HIVE-5681
> URL: https://issues.apache.org/jira/browse/HIVE-5681
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5681.1.patch, HIVE-5681.1.patch
>
>
> SMBMapJoin is currently not supported, but validation doesn't catch it 
> because it has the same OperatorType. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5668) path normalization in MapOperator is expensive

2013-10-29 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808775#comment-13808775
 ] 

Gunther Hagleitner commented on HIVE-5668:
--

[~thejas]. I think you're supposed to say: Committed to trunk. Thanks Thejas! 
:-P

> path normalization in MapOperator is expensive
> --
>
> Key: HIVE-5668
> URL: https://issues.apache.org/jira/browse/HIVE-5668
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.13.0
>
> Attachments: HIVE-5668.1.patch
>
>
> The conversion of paths in MapWork.getPathToAliases is happening multiple 
> times in MapOperator.cleanUpInputFileChangedOp. Caching the results of the 
> conversion can improve the performance of Hive map tasks.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5691) Intermediate columns are incorrectly initialized for partitioned tables.

2013-10-29 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808695#comment-13808695
 ] 

Gunther Hagleitner commented on HIVE-5691:
--

LGTM +1

> Intermediate columns are incorrectly initialized for partitioned tables.
> 
>
> Key: HIVE-5691
> URL: https://issues.apache.org/jira/browse/HIVE-5691
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5691.1.patch, HIVE-5691.2.patch
>
>
> Intermediate columns are incorrectly initialized for partitioned tables. The 
> same tablescan operator can be used for multiple partitions, but the 
> vectorizer doesn't initialize for all partition paths.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5691) Intermediate columns are incorrectly initialized for partitioned tables.

2013-10-29 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5691:
---

Attachment: HIVE-5691.2.patch

Patch with a test.

> Intermediate columns are incorrectly initialized for partitioned tables.
> 
>
> Key: HIVE-5691
> URL: https://issues.apache.org/jira/browse/HIVE-5691
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5691.1.patch, HIVE-5691.2.patch
>
>
> Intermediate columns are incorrectly initialized for partitioned tables. The 
> same tablescan operator can be used for multiple partitions, but the 
> vectorizer doesn't initialize for all partition paths.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5691) Intermediate columns are incorrectly initialized for partitioned tables.

2013-10-29 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5691:
---

Status: Patch Available  (was: Open)

> Intermediate columns are incorrectly initialized for partitioned tables.
> 
>
> Key: HIVE-5691
> URL: https://issues.apache.org/jira/browse/HIVE-5691
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5691.1.patch, HIVE-5691.2.patch
>
>
> Intermediate columns are incorrectly initialized for partitioned tables. The 
> same tablescan operator can be used for multiple partitions, but the 
> vectorizer doesn't initialize for all partition paths.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-29 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808693#comment-13808693
 ] 

Brock Noland commented on HIVE-5610:


Yeah the only thing I noted is that you are on OS X 10.8.5 and I am on 10.7.5.

> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5610.1-for-commit.patch, 
> HIVE-5610.1-for-reading.patch, HIVE-5610.1-maven.patch, 
> HIVE-5610.2-for-commit.patch, HIVE-5610.2-for-reading.patch, 
> HIVE-5610.2-maven.patch, HIVE-5610.4-for-commit.patch, 
> HIVE-5610.4-for-reading.patch, HIVE-5610.4-maven.patch, 
> HIVE-5610.5-for-commit.patch, HIVE-5610.5-for-reading.patch, 
> HIVE-5610.5-maven.patch
>
>
> With HIVE-5566 complete, we are ready to merge the maven branch to trunk. The 
> following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e.) tar creation to pom
> The merge process will be as follows:
> 1) Disable the precommit build
> 2) Apply patch
> 3) Commit result
> {noformat}
> svn status
> svn add 
> ..
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (patch)"
> {noformat}
> 4) Modify maven-rollforward.sh to use svn mv not mv:
> {noformat}
> perl -i -pe 's@^  mv @  svn mv @g' maven-rollforward.sh
> {noformat}
> 5) Execute maven-rollforward.sh and commit result 
> {noformat}
> bash ./maven-rollforward.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (maven rollforward)"
> {noformat}
> 6) Modify maven-delete-ant.sh to use svn rm as opposed to rm:
> {noformat}
> perl -i -pe 's@^  rm -rf @  svn rm @g' maven-delete-ant.sh
> {noformat}
> 7) Execute maven-delete-ant.sh and commit result
> {noformat}
> bash ./maven-delete-ant.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (delete ant)"
> {noformat}
> 8) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> 9) Enable the precommit build
> h3. Notes:
> h4. On this jira I will upload three patches:
> {noformat}
> HIVE-5610.${VERSION}-for-reading.patch
> HIVE-5610.${VERSION}-for-commit.patch
> HIVE-5610.${VERSION}-maven.patch
> {noformat}
> * for-reading has no qfile updates, so it's easier to read
> * for-commit has the qfile updates and is for commit
> * maven is the patch in a "rollforward" state for testing purposes
> h4. To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any tests that have cyclical dependencies or require that the 
> packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-29 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808692#comment-13808692
 ] 

Thejas M Nair commented on HIVE-5610:
-

I tried rm -rf ~/.m2/ and upgrading the java version to 1.6.0_65. That hasn't 
helped either.
[~ashutoshc] or somebody else, can you try building on your mac? Next thing, I 
will probably try upgrading my mac OS!
The maven error message is not very useful in this case (I probably need to 
find the right way to get the errors out!)



> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5610.1-for-commit.patch, 
> HIVE-5610.1-for-reading.patch, HIVE-5610.1-maven.patch, 
> HIVE-5610.2-for-commit.patch, HIVE-5610.2-for-reading.patch, 
> HIVE-5610.2-maven.patch, HIVE-5610.4-for-commit.patch, 
> HIVE-5610.4-for-reading.patch, HIVE-5610.4-maven.patch, 
> HIVE-5610.5-for-commit.patch, HIVE-5610.5-for-reading.patch, 
> HIVE-5610.5-maven.patch
>
>
> With HIVE-5566 complete, we are ready to merge the maven branch to trunk. The 
> following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e.) tar creation to pom
> The merge process will be as follows:
> 1) Disable the precommit build
> 2) Apply patch
> 3) Commit result
> {noformat}
> svn status
> svn add 
> ..
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (patch)"
> {noformat}
> 4) Modify maven-rollforward.sh to use svn mv not mv:
> {noformat}
> perl -i -pe 's@^  mv @  svn mv @g' maven-rollforward.sh
> {noformat}
> 5) Execute maven-rollforward.sh and commit result 
> {noformat}
> bash ./maven-rollforward.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (maven rollforward)"
> {noformat}
> 6) Modify maven-delete-ant.sh to use svn rm as opposed to rm:
> {noformat}
> perl -i -pe 's@^  rm -rf @  svn rm @g' maven-delete-ant.sh
> {noformat}
> 7) Execute maven-delete-ant.sh and commit result
> {noformat}
> bash ./maven-delete-ant.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (delete ant)"
> {noformat}
> 8) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> 9) Enable the precommit build
> h3. Notes:
> h4. On this jira I will upload three patches:
> {noformat}
> HIVE-5610.${VERSION}-for-reading.patch
> HIVE-5610.${VERSION}-for-commit.patch
> HIVE-5610.${VERSION}-maven.patch
> {noformat}
> * for-reading has no qfile updates, so it's easier to read
> * for-commit has the qfile updates and is for commit
> * maven is the patch in a "rollforward" state for testing purposes
> h4. To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any tests that have cyclical dependencies or require that the 
> packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5690) Support subquery for single sourced multi query

2013-10-29 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808691#comment-13808691
 ] 

Navis commented on HIVE-5690:
-

Adding a new token would require changes in many of the gold files. I'll update 
them tomorrow.

> Support subquery for single sourced multi query
> ---
>
> Key: HIVE-5690
> URL: https://issues.apache.org/jira/browse/HIVE-5690
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: D13791.1.patch
>
>
> A single sourced multi (insert) query is very useful for various ETL 
> processes, but it does not allow subqueries to be included. For example, 
> {noformat}
> explain from src 
> insert overwrite table x1 select * from (select distinct key,value) b order 
> by key
> insert overwrite table x2 select * from (select distinct key,value) c order 
> by value;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5690) Support subquery for single sourced multi query

2013-10-29 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-5690:
--

Attachment: D13791.1.patch

navis requested code review of "HIVE-5690 [jira] Support subquery for single 
sourced multi query".

Reviewers: JIRA

A single sourced multi (insert) query is very useful for various ETL processes, 
but it does not allow subqueries to be included. For example,

explain from src
insert overwrite table x1 select * from (select distinct key,value) b order by 
key
insert overwrite table x2 select * from (select distinct key,value) c order by 
value;

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D13791

AFFECTED FILES
  build.properties
  ql/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
  ql/src/java/org/apache/hadoop/hive/ql/parse/QB.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/test/queries/clientpositive/multi_insert_subquery.q
  ql/src/test/results/clientpositive/multi_insert_subquery.q.out

MANAGE HERALD RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/41733/

To: JIRA, navis


> Support subquery for single sourced multi query
> ---
>
> Key: HIVE-5690
> URL: https://issues.apache.org/jira/browse/HIVE-5690
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: D13791.1.patch
>
>
> A single sourced multi (insert) query is very useful for various ETL 
> processes, but it does not allow subqueries to be included. For example, 
> {noformat}
> explain from src 
> insert overwrite table x1 select * from (select distinct key,value) b order 
> by key
> insert overwrite table x2 select * from (select distinct key,value) c order 
> by value;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5589) perflogger output is hard to associate with queries

2013-10-29 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808682#comment-13808682
 ] 

Gunther Hagleitner commented on HIVE-5589:
--

Test failure is unrelated. LGTM +1

[~sershe] can you add a link to the jira you created?

> perflogger output is hard to associate with queries
> ---
>
> Key: HIVE-5589
> URL: https://issues.apache.org/jira/browse/HIVE-5589
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Attachments: HIVE-5589.01.patch, HIVE-5589.02.patch
>
>
> It would be nice to dump the query somewhere in output.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5691) Intermediate columns are incorrectly initialized for partitioned tables.

2013-10-29 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5691:
---

Attachment: HIVE-5691.1.patch

Patch fixes the issue. I will upload another patch with a test.

> Intermediate columns are incorrectly initialized for partitioned tables.
> 
>
> Key: HIVE-5691
> URL: https://issues.apache.org/jira/browse/HIVE-5691
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5691.1.patch
>
>
> Intermediate columns are incorrectly initialized for partitioned tables. The 
> same tablescan operator can be used for multiple partitions, but the 
> vectorizer doesn't initialize for all partition paths.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5691) Intermediate columns are incorrectly initialized for partitioned tables.

2013-10-29 Thread Jitendra Nath Pandey (JIRA)
Jitendra Nath Pandey created HIVE-5691:
--

 Summary: Intermediate columns are incorrectly initialized for 
partitioned tables.
 Key: HIVE-5691
 URL: https://issues.apache.org/jira/browse/HIVE-5691
 Project: Hive
  Issue Type: Sub-task
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey


Intermediate columns are incorrectly initialized for partitioned tables. The 
same tablescan operator can be used for multiple partitions, but the vectorizer 
doesn't initialize for all partition paths.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5690) Support subquery for single sourced multi query

2013-10-29 Thread Navis (JIRA)
Navis created HIVE-5690:
---

 Summary: Support subquery for single sourced multi query
 Key: HIVE-5690
 URL: https://issues.apache.org/jira/browse/HIVE-5690
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor


A single sourced multi (insert) query is very useful for various ETL processes, 
but it does not allow subqueries to be included. For example, 
{noformat}
explain from src 
insert overwrite table x1 select * from (select distinct key,value) b order by 
key
insert overwrite table x2 select * from (select distinct key,value) c order by 
value;
{noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5688) TestCliDriver compilation fails on tez branch.

2013-10-29 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-5688:
-

Attachment: HIVE-5688.1.patch

> TestCliDriver compilation fails on tez branch.
> --
>
> Key: HIVE-5688
> URL: https://issues.apache.org/jira/browse/HIVE-5688
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-5688.1.patch
>
>
> On the tez branch, the test cli driver tests fail to compile after HIVE-5543.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3190) allow INTEGER as a type name in a column/cast expression (per ISO-SQL 2011)

2013-10-29 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808676#comment-13808676
 ] 

Jason Dere commented on HIVE-3190:
--

How about the following aliases:

INTEGER = INT
BLOB/BINARY LARGE OBJECT = BINARY
CLOB/CHARACTER LARGE OBJECT = STRING
NUMERIC = DECIMAL
REAL = FLOAT
DOUBLE PRECISION = DOUBLE
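
As a quick sketch (not from a patch; assuming the aliases map one-to-one onto 
the existing types), these would make cast expressions like the following legal:
{noformat}
-- hypothetical queries, valid only once the aliases are added to the grammar
select cast('10' as integer) from src;
select cast(key as numeric) from src;
select cast(value as character large object) from src;
select cast(0.5 as double precision) from src;
{noformat}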

> allow INTEGER as a type name in a column/cast expression (per ISO-SQL 2011)
> ---
>
> Key: HIVE-3190
> URL: https://issues.apache.org/jira/browse/HIVE-3190
> Project: Hive
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 0.8.0
>Reporter: N Campbell
>
> Just extend the parser to allow INTEGER instead of making folks use INT:
> select cast('10' as integer) from cert.tversion tversion
> FAILED: Parse Error: line 1:20 cannot recognize input near 'integer' ')' 
> 'from' in primitive type specification



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-29 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808674#comment-13808674
 ] 

Brock Noland commented on HIVE-5610:


bq. It is working for me on linux (RHEL 6, java 1.6.0_31, maven 3.1.1), but not 
on mac.

Ok, good to know it's something environmental.

I would try clearing $HOME/.m2/repository/org/apache/hive in case the ant-built 
version is conflicting, as we had to move some classes around.

Mac details:

{noformat}
$ mvn -version
Apache Maven 3.0.3 (r1075438; 2011-02-28 11:31:09-0600)
Maven home: /usr/share/maven
Java version: 1.6.0_65, vendor: Apple Inc.
Java home: /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
Default locale: en_US, platform encoding: MacRoman
OS name: "mac os x", version: "10.7.5", arch: "x86_64", family: "mac"
{noformat} 


> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5610.1-for-commit.patch, 
> HIVE-5610.1-for-reading.patch, HIVE-5610.1-maven.patch, 
> HIVE-5610.2-for-commit.patch, HIVE-5610.2-for-reading.patch, 
> HIVE-5610.2-maven.patch, HIVE-5610.4-for-commit.patch, 
> HIVE-5610.4-for-reading.patch, HIVE-5610.4-maven.patch, 
> HIVE-5610.5-for-commit.patch, HIVE-5610.5-for-reading.patch, 
> HIVE-5610.5-maven.patch
>
>
> With HIVE-5566 complete, we are ready to merge the maven branch to trunk. The 
> following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e.) tar creation to pom
> The merge process will be as follows:
> 1) Disable the precommit build
> 2) Apply patch
> 3) Commit result
> {noformat}
> svn status
> svn add 
> ..
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (patch)"
> {noformat}
> 4) Modify maven-rollforward.sh to use svn mv not mv:
> {noformat}
> perl -i -pe 's@^  mv @  svn mv @g' maven-rollforward.sh
> {noformat}
> 5) Execute maven-rollforward.sh and commit result 
> {noformat}
> bash ./maven-rollforward.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (maven rollforward)"
> {noformat}
> 6) Modify maven-delete-ant.sh to use svn rm as opposed to rm:
> {noformat}
> perl -i -pe 's@^  rm -rf @  svn rm @g' maven-delete-ant.sh
> {noformat}
> 7) Execute maven-delete-ant.sh and commit result
> {noformat}
> bash ./maven-delete-ant.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (delete ant)"
> {noformat}
> 8) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> 9) Enable the precommit build
> h3. Notes:
> h4. On this jira I will upload three patches:
> {noformat}
> HIVE-5610.${VERSION}-for-reading.patch
> HIVE-5610.${VERSION}-for-commit.patch
> HIVE-5610.${VERSION}-maven.patch
> {noformat}
> * for-reading has no qfile updates, so it's easier to read
> * for-commit has the qfile updates and is for commit
> * maven is the patch in a "rollforward" state for testing purposes
> h4. To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any tests that have cyclical dependencies or require that the 
> packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-29 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808670#comment-13808670
 ] 

Thejas M Nair commented on HIVE-5610:
-

I tried v5 of the patch on trunk. It is working for me on linux (RHEL 6, java 
1.6.0_31, maven 3.1.1), but not on mac. I also tried upgrading mvn on mac from 
3.0.4 to 3.1.1, and it didn't help. Looks like something specific to the 
versions on my mac. Brock, can you share the details of the versions, including 
the mac version?

On my mac -
{code}
[hive_git18:44]$ mvn -v
Apache Maven 3.0.4 (r1232337; 2012-01-17 00:44:56-0800)
Maven home: /usr/share/maven
Java version: 1.6.0_51, vendor: Apple Inc.
Java home: /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
Default locale: en_US, platform encoding: MacRoman
OS name: "mac os x", version: "10.8.5", arch: "x86_64", family: "mac"
{code}

> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5610.1-for-commit.patch, 
> HIVE-5610.1-for-reading.patch, HIVE-5610.1-maven.patch, 
> HIVE-5610.2-for-commit.patch, HIVE-5610.2-for-reading.patch, 
> HIVE-5610.2-maven.patch, HIVE-5610.4-for-commit.patch, 
> HIVE-5610.4-for-reading.patch, HIVE-5610.4-maven.patch, 
> HIVE-5610.5-for-commit.patch, HIVE-5610.5-for-reading.patch, 
> HIVE-5610.5-maven.patch
>
>
> With HIVE-5566 complete, we are ready to merge the maven branch to trunk. The 
> following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e.) tar creation to pom
> The merge process will be as follows:
> 1) Disable the precommit build
> 2) Apply patch
> 3) Commit result
> {noformat}
> svn status
> svn add 
> ..
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (patch)"
> {noformat}
> 4) Modify maven-rollforward.sh to use svn mv not mv:
> {noformat}
> perl -i -pe 's@^  mv @  svn mv @g' maven-rollforward.sh
> {noformat}
> 5) Execute maven-rollforward.sh and commit result 
> {noformat}
> bash ./maven-rollforward.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (maven rollforward)"
> {noformat}
> 6) Modify maven-delete-ant.sh to use svn rm as opposed to rm:
> {noformat}
> perl -i -pe 's@^  rm -rf @  svn rm @g' maven-delete-ant.sh
> {noformat}
> 7) Execute maven-delete-ant.sh and commit result
> {noformat}
> bash ./maven-delete-ant.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (delete ant)"
> {noformat}
> 8) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> 9) Enable the precommit build
> h3. Notes:
> h4. On this jira I will upload three patches:
> {noformat}
> HIVE-5610.${VERSION}-for-reading.patch
> HIVE-5610.${VERSION}-for-commit.patch
> HIVE-5610.${VERSION}-maven.patch
> {noformat}
> * for-reading has no qfile updates, so it's easier to read
> * for-commit has the qfile updates and is for commit
> * maven is the patch in a "rollforward" state for testing purposes
> h4. To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any tests that have cyclical dependencies or require that the 
> packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-29 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808672#comment-13808672
 ] 

Thejas M Nair commented on HIVE-5610:
-

With mvn 3.1.1 on mac -

{code}
Apache Maven 3.1.1 (0728685237757ffbf44136acec0402957f723d9a; 2013-09-17 
08:22:22-0700)
Maven home: /Users/thejas/bin/apache-maven-3.1.1
Java version: 1.6.0_51, vendor: Apple Inc.
Java home: /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
Default locale: en_US, platform encoding: MacRoman
OS name: "mac os x", version: "10.8.5", arch: "x86_64", family: "mac"
{code}

> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5610.1-for-commit.patch, 
> HIVE-5610.1-for-reading.patch, HIVE-5610.1-maven.patch, 
> HIVE-5610.2-for-commit.patch, HIVE-5610.2-for-reading.patch, 
> HIVE-5610.2-maven.patch, HIVE-5610.4-for-commit.patch, 
> HIVE-5610.4-for-reading.patch, HIVE-5610.4-maven.patch, 
> HIVE-5610.5-for-commit.patch, HIVE-5610.5-for-reading.patch, 
> HIVE-5610.5-maven.patch
>
>
> With HIVE-5566 complete, we are ready to merge the maven branch to trunk. The 
> following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e.) tar creation to pom
> The merge process will be as follows:
> 1) Disable the precommit build
> 2) Apply patch
> 3) Commit result
> {noformat}
> svn status
> svn add 
> ..
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (patch)"
> {noformat}
> 4) Modify maven-rollforward.sh to use svn mv not mv:
> {noformat}
> perl -i -pe 's@^  mv @  svn mv @g' maven-rollforward.sh
> {noformat}
> 5) Execute maven-rollforward.sh and commit result 
> {noformat}
> bash ./maven-rollforward.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (maven rollforward)"
> {noformat}
> 6) Modify maven-delete-ant.sh to use svn rm as opposed to rm:
> {noformat}
> perl -i -pe 's@^  rm -rf @  svn rm @g' maven-delete-ant.sh
> {noformat}
> 7) Execute maven-delete-ant.sh and commit result
> {noformat}
> bash ./maven-delete-ant.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (delete ant)"
> {noformat}
> 8) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> 9) Enable the precommit build
> h3. Notes:
> h4. On this jira I will upload three patches:
> {noformat}
> HIVE-5610.${VERSION}-for-reading.patch
> HIVE-5610.${VERSION}-for-commit.patch
> HIVE-5610.${VERSION}-maven.patch
> {noformat}
> * for-reading has no qfile updates, so it's easier to read
> * for-commit has the qfile updates and is for commit
> * maven is the patch in a "rollforward" state for testing purposes
> h4. To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any tests that have cyclical dependencies or require that the 
> packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-29 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808662#comment-13808662
 ] 

Brock Noland commented on HIVE-5610:


As a side note, I have tested v5 + trunk on Mac with Java 6, and on RHEL 6 with 
both Java 6 and 7, all using maven 3.

> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5610.1-for-commit.patch, 
> HIVE-5610.1-for-reading.patch, HIVE-5610.1-maven.patch, 
> HIVE-5610.2-for-commit.patch, HIVE-5610.2-for-reading.patch, 
> HIVE-5610.2-maven.patch, HIVE-5610.4-for-commit.patch, 
> HIVE-5610.4-for-reading.patch, HIVE-5610.4-maven.patch, 
> HIVE-5610.5-for-commit.patch, HIVE-5610.5-for-reading.patch, 
> HIVE-5610.5-maven.patch
>
>
> With HIVE-5566 complete, we are ready to merge the maven branch to trunk. The 
> following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e.) tar creation to pom
> The merge process will be as follows:
> 1) Disable the precommit build
> 2) Apply patch
> 3) Commit result
> {noformat}
> svn status
> svn add 
> ..
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (patch)"
> {noformat}
> 4) Modify maven-rollforward.sh to use svn mv not mv:
> {noformat}
> perl -i -pe 's@^  mv @  svn mv @g' maven-rollforward.sh
> {noformat}
> 5) Execute maven-rollforward.sh and commit result 
> {noformat}
> bash ./maven-rollforward.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (maven rollforward)"
> {noformat}
> 6) Modify maven-delete-ant.sh to use svn rm as opposed to rm:
> {noformat}
> perl -i -pe 's@^  rm -rf @  svn rm @g' maven-delete-ant.sh
> {noformat}
> 7) Execute maven-delete-ant.sh and commit result
> {noformat}
> bash ./maven-delete-ant.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (delete ant)"
> {noformat}
> 8) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> 9) Enable the precommit build
> h3. Notes:
> h4. On this jira I will upload three patches:
> {noformat}
> HIVE-5610.${VERSION}-for-reading.patch
> HIVE-5610.${VERSION}-for-commit.patch
> HIVE-5610.${VERSION}-maven.patch
> {noformat}
> * for-reading has no qfile updates, so it's easier to read
> * for-commit has the qfile updates and is for commit
> * maven is the patch in a "rollforward" state for testing purposes
> h4. To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any tests that have cyclical dependencies or require that the 
> packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5354) Decimal precision/scale support in ORC file

2013-10-29 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808660#comment-13808660
 ] 

Xuefu Zhang commented on HIVE-5354:
---

The test failures seem to have nothing to do with the patch here.

> Decimal precision/scale support in ORC file
> ---
>
> Key: HIVE-5354
> URL: https://issues.apache.org/jira/browse/HIVE-5354
> Project: Hive
>  Issue Type: Task
>  Components: Serializers/Deserializers
>Affects Versions: 0.10.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5354.1.patch, HIVE-5354.2.patch, HIVE-5354.3.patch, 
> HIVE-5354.4.patch, HIVE-5354.patch
>
>
> A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-29 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808657#comment-13808657
 ] 

Brock Noland commented on HIVE-5610:


Thejas, the maven branch is a little out of date since we decided to apply this 
as a patch.

Can you try v5 of the patch on trunk?


> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5610.1-for-commit.patch, 
> HIVE-5610.1-for-reading.patch, HIVE-5610.1-maven.patch, 
> HIVE-5610.2-for-commit.patch, HIVE-5610.2-for-reading.patch, 
> HIVE-5610.2-maven.patch, HIVE-5610.4-for-commit.patch, 
> HIVE-5610.4-for-reading.patch, HIVE-5610.4-maven.patch, 
> HIVE-5610.5-for-commit.patch, HIVE-5610.5-for-reading.patch, 
> HIVE-5610.5-maven.patch
>
>
> With HIVE-5566 complete, we are ready to merge the maven branch to trunk. The 
> following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e.) tar creation to pom
> The merge process will be as follows:
> 1) Disable the precommit build
> 2) Apply patch
> 3) Commit result
> {noformat}
> svn status
> svn add 
> ..
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (patch)"
> {noformat}
> 4) Modify maven-rollforward.sh to use svn mv not mv:
> {noformat}
> perl -i -pe 's@^  mv @  svn mv @g' maven-rollforward.sh
> {noformat}
> 5) Execute maven-rollforward.sh and commit result 
> {noformat}
> bash ./maven-rollforward.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (maven rollforward)"
> {noformat}
> 6) Modify maven-delete-ant.sh to use svn rm as opposed to rm:
> {noformat}
> perl -i -pe 's@^  rm -rf @  svn rm @g' maven-delete-ant.sh
> {noformat}
> 7) Execute maven-delete-ant.sh and commit result
> {noformat}
> bash ./maven-delete-ant.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (delete ant)"
> {noformat}
> 8) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> 9) Enable the precommit build
> h3. Notes:
> h4. On this jira I will upload three patches:
> {noformat}
> HIVE-5610.${VERSION}-for-reading.patch
> HIVE-5610.${VERSION}-for-commit.patch
> HIVE-5610.${VERSION}-maven.patch
> {noformat}
> * for-reading has no qfile updates, so it's easier to read
> * for-commit has the qfile updates and is for commit
> * maven is the patch in a "rollforward" state for testing purposes
> h4. To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any tests that have cyclical dependencies or require that the 
> packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-29 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808651#comment-13808651
 ] 

Thejas M Nair commented on HIVE-5610:
-

I am having some trouble getting the build on the maven branch to work. Please 
let me know if I am missing some step -

 ./maven-rollforward.sh
mvn clean package -DskipTests -e -X

I added -e -X to the above commands, as suggested by the maven error message, 
to get more error information. It fails with - 
{code}
[INFO] Hive .. SUCCESS [0.695s]
[INFO] Hive Ant Utilities  SUCCESS [1.678s]
[INFO] Hive Shims Common . SUCCESS [0.631s]
[INFO] Hive Shims 0.20 ... SUCCESS [0.447s]
[INFO] Hive Shims Secure Common .. SUCCESS [0.624s]
[INFO] Hive Shims 0.20S .. SUCCESS [0.309s]
[INFO] Hive Shims 0.23 ... SUCCESS [0.691s]
[INFO] Hive Shims  SUCCESS [0.764s]
[INFO] Hive Common ... SUCCESS [1.774s]
[INFO] Hive Serde  SUCCESS [3.191s]
[INFO] Hive Metastore  SUCCESS [21.136s]
[INFO] Hive TestUtils  SUCCESS [0.109s]
[INFO] Hive Query Language ... FAILURE [7.506s]
[INFO] Hive Service .. SKIPPED
[INFO] Hive JDBC . SKIPPED
[INFO] Hive Beeline .. SKIPPED
[INFO] Hive CLI .. SKIPPED
[INFO] Hive Contrib .. SKIPPED
[INFO] Hive HBase Handler  SKIPPED
[INFO] Hive HCatalog . SKIPPED
[INFO] Hive HCatalog Core  SKIPPED
[INFO] Hive HCatalog Pig Adapter . SKIPPED
[INFO] Hive HCatalog Server Extensions ... SKIPPED
[INFO] Hive HCatalog Webhcat Java Client . SKIPPED
[INFO] Hive HCatalog Webhcat . SKIPPED
[INFO] Hive HCatalog HBase Storage Handler ... SKIPPED
[INFO] Hive HWI .. SKIPPED
[INFO] Hive ODBC . SKIPPED
[INFO] Hive Packaging  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 40.050s
[INFO] Finished at: Tue Oct 29 18:24:12 PDT 2013
[INFO] Final Memory: 36M/123M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hive-exec: Compilation failure
[ERROR] An unknown compilation problem occurred
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hive-exec: Compilation failure
An unknown compilation problem occurred

at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launche

[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-29 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808648#comment-13808648
 ] 

Carl Steinbach commented on HIVE-5610:
--

+1

> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5610.1-for-commit.patch, 
> HIVE-5610.1-for-reading.patch, HIVE-5610.1-maven.patch, 
> HIVE-5610.2-for-commit.patch, HIVE-5610.2-for-reading.patch, 
> HIVE-5610.2-maven.patch, HIVE-5610.4-for-commit.patch, 
> HIVE-5610.4-for-reading.patch, HIVE-5610.4-maven.patch, 
> HIVE-5610.5-for-commit.patch, HIVE-5610.5-for-reading.patch, 
> HIVE-5610.5-maven.patch
>
>
> With HIVE-5566 complete, we are ready to merge the maven branch to trunk. The 
> following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e.) tar creation to pom
> The merge process will be as follows:
> 1) Disable the precommit build
> 2) Apply patch
> 3) Commit result
> {noformat}
> svn status
> svn add 
> ..
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (patch)"
> {noformat}
> 4) Modify maven-rollforward.sh to use svn mv not mv:
> {noformat}
> perl -i -pe 's@^  mv @  svn mv @g' maven-rollforward.sh
> {noformat}
> 5) Execute maven-rollforward.sh and commit result 
> {noformat}
> bash ./maven-rollforward.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (maven rollforward)"
> {noformat}
> 6) Modify maven-delete-ant.sh to use svn rm as opposed to rm:
> {noformat}
> perl -i -pe 's@^  rm -rf @  svn rm @g' maven-delete-ant.sh
> {noformat}
> 7) Execute maven-delete-ant.sh and commit result
> {noformat}
> bash ./maven-delete-ant.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (delete ant)"
> {noformat}
> 8) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> 9) Enable the precommit build
> h3. Notes:
> h4. On this jira I will upload three patches:
> {noformat}
> HIVE-5610.${VERSION}-for-reading.patch
> HIVE-5610.${VERSION}-for-commit.patch
> HIVE-5610.${VERSION}-maven.patch
> {noformat}
> * for-reading has no qfile updates, so it's easier to read
> * for-commit has the qfile updates and is for commit
> * maven is the patch in a "rollforward" state for testing purposes
> h4. To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any tests that have cyclical dependencies or require that the 
> packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5354) Decimal precision/scale support in ORC file

2013-10-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808644#comment-13808644
 ] 

Hive QA commented on HIVE-5354:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610888/HIVE-5354.4.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4515 tests executed
*Failed tests:*
{noformat}
org.apache.hcatalog.listener.TestNotificationListener.testAMQListener
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/7/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/7/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

> Decimal precision/scale support in ORC file
> ---
>
> Key: HIVE-5354
> URL: https://issues.apache.org/jira/browse/HIVE-5354
> Project: Hive
>  Issue Type: Task
>  Components: Serializers/Deserializers
>Affects Versions: 0.10.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5354.1.patch, HIVE-5354.2.patch, HIVE-5354.3.patch, 
> HIVE-5354.4.patch, HIVE-5354.patch
>
>
> A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HIVE-5689) Add some simple MRR tests

2013-10-29 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-5689.
--

Resolution: Fixed

Committed to branch.

> Add some simple MRR tests
> -
>
> Key: HIVE-5689
> URL: https://issues.apache.org/jira/browse/HIVE-5689
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: tez-branch
>
> Attachments: HIVE-5689.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5685) partition column type validation doesn't work in some cases

2013-10-29 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808631#comment-13808631
 ] 

Ashutosh Chauhan commented on HIVE-5685:


+1

> partition column type validation doesn't work in some cases
> ---
>
> Key: HIVE-5685
> URL: https://issues.apache.org/jira/browse/HIVE-5685
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Vikram Dixit K
> Attachments: HIVE-5685.1.patch
>
>
> It seems like it works if there's more than one partition column, and doesn't 
> work if there's just one. At least that's the case that I found. The 
> situation for different types is the same.
> {noformat}
> hive> create table zzz(c string) partitioned by (i int);
> OK
> Time taken: 0.41 seconds
> hive> alter table zzz add partition (i='foo');
> OK
> Time taken: 0.185 seconds
> hive> create table (c string) partitioned by (i int,j int); 
> OK
> Time taken: 0.085 seconds
> hive> alter table  add partition (i='foo',j=5);
> FAILED: SemanticException [Error 10248]: Cannot add partition column i of 
> type string as it cannot be converted to type int
> hive> alter table  add partition (i=5,j='foo');
> FAILED: SemanticException [Error 10248]: Cannot add partition column j of 
> type string as it cannot be converted to type int
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5679) add date support to metastore JDO/SQL

2013-10-29 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808629#comment-13808629
 ] 

Sergey Shelukhin commented on HIVE-5679:


[~ashutoshc] what do you think?

> add date support to metastore JDO/SQL
> -
>
> Key: HIVE-5679
> URL: https://issues.apache.org/jira/browse/HIVE-5679
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> Metastore supports strings and integral types in filters.
> It could also support dates.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5679) add date support to metastore JDO/SQL

2013-10-29 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808628#comment-13808628
 ] 

Sergey Shelukhin commented on HIVE-5679:


Discussed a little bit here... it looks like support will have to be added to 
Filter.g.
I had an idea to change the code that converts a hive expression to a 
filter-g expression in the metastore to bypass the string stage and just go 
directly from one tree to another; that way no one would need to bother with 
Filter.g for new features. But that would mean this feature (and others on top) 
won't work for getPartitionsByFilter, so not for HCat or Pig or whoever uses 
that one.
So Filter.g will have support for "date 'foo'" and cast syntax.
From that, or by looking at the column type in the case of a string literal 
(the same way the hive parser does), the Metastore would figure out which 
compares are to be done by date. Both mysql and postgres support the 
"date 'string'" syntax, so that can be used. But, given how we store dates in 
the table, we can just use string compares on the metastore side. That will 
also make it usable by JDO; otherwise JDO pushdown cannot be added. That way 
the only problem is to make sure the date actually gets to the metastore. We 
might not even need to validate it, because if it's an invalid literal the 
parser would catch it.
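
To make the proposal concrete, a hedged sketch of the two equivalent filter 
forms described above (the partition column name dt is hypothetical):
{noformat}
-- standard date-literal syntax, as in mysql and postgres, proposed for Filter.g
dt = date '2013-10-29'
-- equivalent plain string compare on the metastore side, relying on how
-- dates are stored in the table
dt = '2013-10-29'
{noformat}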



> add date support to metastore JDO/SQL
> -
>
> Key: HIVE-5679
> URL: https://issues.apache.org/jira/browse/HIVE-5679
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> Metastore supports strings and integral types in filters.
> It could also support dates.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5689) Add some simple MRR tests

2013-10-29 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-5689:
-

Attachment: HIVE-5689.1.patch

> Add some simple MRR tests
> -
>
> Key: HIVE-5689
> URL: https://issues.apache.org/jira/browse/HIVE-5689
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: tez-branch
>
> Attachments: HIVE-5689.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5689) Add some simple MRR tests

2013-10-29 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-5689:


 Summary: Add some simple MRR tests
 Key: HIVE-5689
 URL: https://issues.apache.org/jira/browse/HIVE-5689
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: tez-branch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5688) TestCliDriver compilation fails on tez branch.

2013-10-29 Thread Vikram Dixit K (JIRA)
Vikram Dixit K created HIVE-5688:


 Summary: TestCliDriver compilation fails on tez branch.
 Key: HIVE-5688
 URL: https://issues.apache.org/jira/browse/HIVE-5688
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: tez-branch
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K


On the tez branch, the test cli driver tests fail to compile after HIVE-5543.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5436) Hive's casting behavior needs to be consistent

2013-10-29 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808625#comment-13808625
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-5436:
-

[~xuefu] Reasons why I thought of fixing the consistency first:
1. I wanted to see how the intermediate results are handled in the case of 
numericals. For example, for tinyint, (1228+1228)/20 will lead to an in-range 
result, whereas the intermediate result 1228+1228 will not be a tinyint. This 
scenario will be very common with exponential notation.
2. HIVE-5382 will need a baseline to compare the string cast results with 
non-string cast results. My plan was to use test cases like
select cast('-1.5e2' as int) - cast(-1.5e2 as int) from tmp
and verify that the result is always 0. This will ensure consistency for casts 
from string to numericals (and will expose any existing bug that is fixed in 
the future for only one of the cast types, since the non-string cast and string 
cast are handled separately).
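
Formatted, the consistency check described in point 2 (tmp being a placeholder 
table) would be:
{noformat}
-- both casts should agree, so the difference should always be 0
select cast('-1.5e2' as int) - cast(-1.5e2 as int) from tmp;
{noformat}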

> Hive's casting behavior needs to be consistent
> --
>
> Key: HIVE-5436
> URL: https://issues.apache.org/jira/browse/HIVE-5436
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>Priority: Critical
>
> Hive's casting behavior is inconsistent, and the behavior of casting from one 
> type to another is undocumented as of now when the cast value is out of range. 
> For example, casting out-of-range values from one type to another can result 
> in incorrect results.
> Eg: 
> 1. select cast('1000'  as tinyint) from t1;
> NULL
> 2. select 1000Y from t1;
> FAILED: SemanticException [Error 10029]: Line 1:7 Invalid numerical constant 
> '1000Y'
> 3.  select cast(1000 as tinyint) from t1;
> -24
> 4.select cast(1.1e3-1000/0 as tinyint) from t1;
> 0
> 5. select cast(10/0 as tinyint) from pw18; 
> -1
> The hive user can accidentally try to typecast an out-of-range value. For 
> example, in examples 4 and 5 above, even though the final result is NaN, Hive 
> can typecast to a random result. Either we should document that the end user 
> should take care of overflow, underflow, division by 0, etc. by 
> himself/herself, or we should return NULLs when the final result is out of 
> range.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4523) round() function with specified decimal places not consistent with mysql

2013-10-29 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-4523:
--

Attachment: HIVE-4523.2.patch

Patch #2 is rebased on the latest trunk. It still needs additional test cases.
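
For reference, candidate test queries (hypothetical, mirroring the mysql 
behaviour quoted in the description) might look like:
{noformat}
select round(150.000, 2) from temp limit 1;  -- mysql returns 150.00
select round(150, 2) from temp limit 1;      -- mysql returns 150
{noformat}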

> round() function with specified decimal places not consistent with mysql 
> -
>
> Key: HIVE-4523
> URL: https://issues.apache.org/jira/browse/HIVE-4523
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Affects Versions: 0.7.1
>Reporter: Fred Desing
>Assignee: Xuefu Zhang
>Priority: Minor
> Attachments: HIVE-4523.1.patch, HIVE-4523.2.patch, HIVE-4523.patch
>
>
> // hive
> hive> select round(150.000, 2) from temp limit 1;
> 150.0
> hive> select round(150, 2) from temp limit 1;
> 150.0
> // mysql
> mysql> select round(150.000, 2) from DUAL limit 1;
> round(150.000, 2)
> 150.00
> mysql> select round(150, 2) from DUAL limit 1;
> round(150, 2)
> 150
> http://dev.mysql.com/doc/refman/5.1/en/mathematical-functions.html#function_round



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5685) partition column type validation doesn't work in some cases

2013-10-29 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808603#comment-13808603
 ] 

Sergey Shelukhin commented on HIVE-5685:


+1

> partition column type validation doesn't work in some cases
> ---
>
> Key: HIVE-5685
> URL: https://issues.apache.org/jira/browse/HIVE-5685
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Vikram Dixit K
> Attachments: HIVE-5685.1.patch
>
>
> It seems like it works if there's more than one partition column, and doesn't 
> work if there's just one. At least that's the case that I found. The 
> situation for different types is the same.
> {noformat}
> hive> create table zzz(c string) partitioned by (i int);
> OK
> Time taken: 0.41 seconds
> hive> alter table zzz add partition (i='foo');
> OK
> Time taken: 0.185 seconds
> hive> create table (c string) partitioned by (i int,j int); 
> OK
> Time taken: 0.085 seconds
> hive> alter table  add partition (i='foo',j=5);
> FAILED: SemanticException [Error 10248]: Cannot add partition column i of 
> type string as it cannot be converted to type int
> hive> alter table  add partition (i=5,j='foo');
> FAILED: SemanticException [Error 10248]: Cannot add partition column j of 
> type string as it cannot be converted to type int
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5685) partition column type validation doesn't work in some cases

2013-10-29 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-5685:
-

Attachment: HIVE-5685.1.patch

> partition column type validation doesn't work in some cases
> ---
>
> Key: HIVE-5685
> URL: https://issues.apache.org/jira/browse/HIVE-5685
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Vikram Dixit K
> Attachments: HIVE-5685.1.patch
>
>
> It seems like it works if there's more than one partition column, and doesn't 
> work if there's just one. At least that's the case that I found. The 
> situation for different types is the same.
> {noformat}
> hive> create table zzz(c string) partitioned by (i int);
> OK
> Time taken: 0.41 seconds
> hive> alter table zzz add partition (i='foo');
> OK
> Time taken: 0.185 seconds
> hive> create table (c string) partitioned by (i int,j int); 
> OK
> Time taken: 0.085 seconds
> hive> alter table  add partition (i='foo',j=5);
> FAILED: SemanticException [Error 10248]: Cannot add partition column i of 
> type string as it cannot be converted to type int
> hive> alter table  add partition (i=5,j='foo');
> FAILED: SemanticException [Error 10248]: Cannot add partition column j of 
> type string as it cannot be converted to type int
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5685) partition column type validation doesn't work in some cases

2013-10-29 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-5685:
-

Status: Patch Available  (was: Open)

> partition column type validation doesn't work in some cases
> ---
>
> Key: HIVE-5685
> URL: https://issues.apache.org/jira/browse/HIVE-5685
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Vikram Dixit K
> Attachments: HIVE-5685.1.patch
>
>
> It seems like it works if there's more than one partition column, and doesn't 
> work if there's just one. At least that's the case that I found. The 
> situation for different types is the same.
> {noformat}
> hive> create table zzz(c string) partitioned by (i int);
> OK
> Time taken: 0.41 seconds
> hive> alter table zzz add partition (i='foo');
> OK
> Time taken: 0.185 seconds
> hive> create table (c string) partitioned by (i int,j int); 
> OK
> Time taken: 0.085 seconds
> hive> alter table  add partition (i='foo',j=5);
> FAILED: SemanticException [Error 10248]: Cannot add partition column i of 
> type string as it cannot be converted to type int
> hive> alter table  add partition (i=5,j='foo');
> FAILED: SemanticException [Error 10248]: Cannot add partition column j of 
> type string as it cannot be converted to type int
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Review Request 15069: HIVE-5685: partition column type validation doesn't work in some cases

2013-10-29 Thread Vikram Dixit Kumaraswamy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15069/
---

Review request for hive.


Bugs: HIVE-5685
https://issues.apache.org/jira/browse/HIVE-5685


Repository: hive-git


Description
---

HIVE-5685: partition column type validation doesn't work in some cases


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 46d1fac 
  ql/src/test/queries/clientnegative/illegal_partition_type3.q PRE-CREATION 
  ql/src/test/results/clientnegative/illegal_partition_type3.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/15069/diff/


Testing
---


Thanks,

Vikram Dixit Kumaraswamy



[jira] [Commented] (HIVE-5685) partition column type validation doesn't work in some cases

2013-10-29 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808592#comment-13808592
 ] 

Vikram Dixit K commented on HIVE-5685:
--

https://reviews.apache.org/r/15069/

> partition column type validation doesn't work in some cases
> ---
>
> Key: HIVE-5685
> URL: https://issues.apache.org/jira/browse/HIVE-5685
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Vikram Dixit K
> Attachments: HIVE-5685.1.patch
>
>
> It seems like it works if there's more than one partition column, and doesn't 
> work if there's just one. At least that's the case that I found. The 
> situation for different types is the same.
> {noformat}
> hive> create table zzz(c string) partitioned by (i int);
> OK
> Time taken: 0.41 seconds
> hive> alter table zzz add partition (i='foo');
> OK
> Time taken: 0.185 seconds
> hive> create table (c string) partitioned by (i int,j int); 
> OK
> Time taken: 0.085 seconds
> hive> alter table  add partition (i='foo',j=5);
> FAILED: SemanticException [Error 10248]: Cannot add partition column i of 
> type string as it cannot be converted to type int
> hive> alter table  add partition (i=5,j='foo');
> FAILED: SemanticException [Error 10248]: Cannot add partition column j of 
> type string as it cannot be converted to type int
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5564) Need to accommodate table decimal columns that were defined prior to HIVE-3976

2013-10-29 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5564:
--

Status: Patch Available  (was: Open)

> Need to accommodate table decimal columns that were defined prior to HIVE-3976
> -
>
> Key: HIVE-5564
> URL: https://issues.apache.org/jira/browse/HIVE-5564
> Project: Hive
>  Issue Type: Task
>  Components: Types
>Affects Versions: 0.13.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5564.1.patch, HIVE-5564.patch
>
>
> With HIVE-3976, decimal columns are stored with precision/scale, such as 
> decimal(17,5), as the type name. However, such columns defined in Hive prior 
> to HIVE-3976 have the plain type name "decimal". Those columns need to continue 
> to work with a precision/scale of (10,0), per the functional doc. With the 
> patch in HIVE-3976, we may get the following error message in such a case:
> {code}
> 0: jdbc:hive2://localhost:1> desc dec;
> Error: Error while processing statement: FAILED: RuntimeException Decimal 
> type is specified without length: decimal:int (state=42000,code=4)
> {code}
> This issue will be addressed in this JIRA as a follow-up task.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5564) Need to accommodate table decimal columns that were defined prior to HIVE-3976

2013-10-29 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5564:
--

Attachment: HIVE-5564.1.patch

Patch #1 rebased with latest trunk.
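
As a minimal sketch of the scenario being fixed (the table and column names 
here are hypothetical; the legacy table is assumed to have been created before 
HIVE-3976 with a bare "decimal" column):
{code}
-- Legacy column, created before HIVE-3976:
create table dec_legacy (d decimal);
-- After this patch, describing or reading the column should treat it as
-- decimal(10,0) instead of failing with "Decimal type is specified
-- without length":
desc dec_legacy;
{code}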

> Need to accommodate table decimal columns that were defined prior to HIVE-3976
> -
>
> Key: HIVE-5564
> URL: https://issues.apache.org/jira/browse/HIVE-5564
> Project: Hive
>  Issue Type: Task
>  Components: Types
>Affects Versions: 0.13.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5564.1.patch, HIVE-5564.patch
>
>
> With HIVE-3976, decimal columns are stored with precision/scale, such as 
> decimal(17,5), as the type name. However, such columns defined in Hive prior 
> to HIVE-3976 have the plain type name "decimal". Those columns need to continue 
> to work with a precision/scale of (10,0), per the functional doc. With the 
> patch in HIVE-3976, we may get the following error message in such a case:
> {code}
> 0: jdbc:hive2://localhost:1> desc dec;
> Error: Error while processing statement: FAILED: RuntimeException Decimal 
> type is specified without length: decimal:int (state=42000,code=4)
> {code}
> This issue will be addressed in this JIRA as a follow-up task.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HIVE-5543) Running the mini tez cluster for tez unit tests

2013-10-29 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-5543.
--

Resolution: Fixed

> Running the mini tez cluster for tez unit tests
> ---
>
> Key: HIVE-5543
> URL: https://issues.apache.org/jira/browse/HIVE-5543
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-5543.1.patch, HIVE-5543.2.patch, HIVE-5543.3.patch
>
>
> In order to simulate the tez execution in hive tests, we need to work with 
> MiniTezCluster. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5543) Running the mini tez cluster for tez unit tests

2013-10-29 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808578#comment-13808578
 ] 

Gunther Hagleitner commented on HIVE-5543:
--

Committed to branch. Thanks Vikram!

> Running the mini tez cluster for tez unit tests
> ---
>
> Key: HIVE-5543
> URL: https://issues.apache.org/jira/browse/HIVE-5543
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-5543.1.patch, HIVE-5543.2.patch, HIVE-5543.3.patch
>
>
> In order to simulate the tez execution in hive tests, we need to work with 
> MiniTezCluster. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5191) Add char data type

2013-10-29 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808576#comment-13808576
 ] 

Xuefu Zhang commented on HIVE-5191:
---

Hi [~jdere], I have some comments on rb on your latest patch. After I played 
with your patch, I felt that neither characterLength nor maxLength is 
necessary. The reason maxLength is not needed is that you will always have the 
type info, because the grammar prevents one from specifying a char type without 
maxLength. This is different from HiveDecimal, where a UDF can specify a return 
type without precision/scale, for which some default values have to be assumed. 
Please let me know what you think. Thanks.
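
To illustrate the point, a sketch based on my reading of the patch's grammar 
(table names are hypothetical):
{code}
-- The length is always part of the type, so no default maxLength is needed:
create table t1 (c char(10));  -- accepted; the type info carries the length
create table t2 (c char);      -- rejected by the grammar
{code}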

> Add char data type
> --
>
> Key: HIVE-5191
> URL: https://issues.apache.org/jira/browse/HIVE-5191
> Project: Hive
>  Issue Type: New Feature
>  Components: Types
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-5191.1.patch, HIVE-5191.2.patch
>
>
> Separate task for char type, since HIVE-4844 only adds varchar



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5681) Validation doesn't catch SMBMapJoin

2013-10-29 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5681:
---

Status: Patch Available  (was: Open)

> Validation doesn't catch SMBMapJoin
> ---
>
> Key: HIVE-5681
> URL: https://issues.apache.org/jira/browse/HIVE-5681
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5681.1.patch, HIVE-5681.1.patch
>
>
> SMBMapJoin is currently not supported, but validation doesn't catch it 
> because it has the same OperatorType.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5681) Validation doesn't catch SMBMapJoin

2013-10-29 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5681:
---

Attachment: HIVE-5681.1.patch

Uploading the same patch again to trigger the build.

> Validation doesn't catch SMBMapJoin
> ---
>
> Key: HIVE-5681
> URL: https://issues.apache.org/jira/browse/HIVE-5681
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5681.1.patch, HIVE-5681.1.patch
>
>
> SMBMapJoin is currently not supported, but validation doesn't catch it 
> because it has the same OperatorType.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5681) Validation doesn't catch SMBMapJoin

2013-10-29 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5681:
---

Status: Open  (was: Patch Available)

> Validation doesn't catch SMBMapJoin
> ---
>
> Key: HIVE-5681
> URL: https://issues.apache.org/jira/browse/HIVE-5681
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5681.1.patch
>
>
> SMBMapJoin is currently not supported, but validation doesn't catch it 
> because it has the same OperatorType.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5436) Hive's casting behavior needs to be consistent

2013-10-29 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808563#comment-13808563
 ] 

Xuefu Zhang commented on HIVE-5436:
---

[~hsubramaniyan] I haven't started working on HIVE-5660. However, I expect that 
the work will be mostly on numeric UDFs, such as UDFOPPlus. As part of 
HIVE-3976 and its child task HIVE-5356, these UDFs will be rewritten. Thus, 
I'm afraid that any work done on those UDFs now would be thrown away. HIVE-5356 
is in progress.

Could you please explain why you need this for HIVE-5382? I expect HIVE-5660 
will be in 0.13. Let me know if you need it before that, and we can see how to 
coordinate.

> Hive's casting behavior needs to be consistent
> --
>
> Key: HIVE-5436
> URL: https://issues.apache.org/jira/browse/HIVE-5436
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>Priority: Critical
>
> Hive's casting behavior is inconsistent, and the behavior of casting from one 
> type to another is undocumented as of now when the cast value is out of range. 
> For example, casting out-of-range values from one type to another can result 
> in incorrect results.
> E.g.: 
> 1. select cast('1000' as tinyint) from t1;
> NULL
> 2. select 1000Y from t1;
> FAILED: SemanticException [Error 10029]: Line 1:7 Invalid numerical constant 
> '1000Y'
> 3. select cast(1000 as tinyint) from t1;
> -24
> 4. select cast(1.1e3-1000/0 as tinyint) from t1;
> 0
> 5. select cast(10/0 as tinyint) from pw18;
> -1
> The Hive user can accidentally try to typecast an out-of-range value. For 
> example, in cases 4 and 5, even though the final result is NaN, Hive can 
> typecast to an arbitrary result. Either we should document that the end user 
> should take care of overflow, underflow, division by zero, etc. 
> himself/herself, or we should return NULLs when the final result is out of 
> range.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5626) enable metastore direct SQL for drop/similar queries

2013-10-29 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808558#comment-13808558
 ] 

Sergey Shelukhin commented on HIVE-5626:


https://reviews.apache.org/r/15067/

> enable metastore direct SQL for drop/similar queries
> 
>
> Key: HIVE-5626
> URL: https://issues.apache.org/jira/browse/HIVE-5626
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Minor
> Attachments: HIVE-5626.01.patch, HIVE-5626.patch
>
>
> Metastore direct SQL is currently disabled for any queries running inside an 
> external transaction (i.e. all modification queries, like dropping stuff).
> This was done to keep direct SQL strictly a performance optimization when 
> using Postgres, which, unlike other RDBMSes, fails the transaction on any 
> syntax error; if direct SQL breaks there, there's no way to fall back, so it 
> is disabled for these cases.
> It is not as important because drop commands are rare, but we might want to 
> address that, either via some config setting or by making it work on 
> non-Postgres DBs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5626) enable metastore direct SQL for drop/similar queries

2013-10-29 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5626:
---

Attachment: HIVE-5626.01.patch

Fix the issue where transactions are rolled back when they should not be. 
Refactor all of this common code into a helper class.
It could be much cleaner if there were first-class functions in Java, so I 
could pass them to it...

> enable metastore direct SQL for drop/similar queries
> 
>
> Key: HIVE-5626
> URL: https://issues.apache.org/jira/browse/HIVE-5626
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Minor
> Attachments: HIVE-5626.01.patch, HIVE-5626.patch
>
>
> Metastore direct SQL is currently disabled for any queries running inside an 
> external transaction (i.e. all modification queries, like dropping stuff).
> This was done to keep direct SQL strictly a performance optimization when 
> using Postgres, which, unlike other RDBMSes, fails the transaction on any 
> syntax error; if direct SQL breaks there, there's no way to fall back, so it 
> is disabled for these cases.
> It is not as important because drop commands are rare, but we might want to 
> address that, either via some config setting or by making it work on 
> non-Postgres DBs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5436) Hive's casting behavior needs to be consistent

2013-10-29 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808550#comment-13808550
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-5436:
-

Thanks [~xuefuz] for the update. Can you please let me know if HIVE-5660 is in 
progress? Otherwise, I will look at it. This is required before I make further 
changes to HIVE-5382 (to cover all the edge cases).

> Hive's casting behavior needs to be consistent
> --
>
> Key: HIVE-5436
> URL: https://issues.apache.org/jira/browse/HIVE-5436
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>Priority: Critical
>
> Hive's casting behavior is inconsistent, and the behavior of casting from one 
> type to another is undocumented as of now when the cast value is out of range. 
> For example, casting out-of-range values from one type to another can result 
> in incorrect results.
> E.g.: 
> 1. select cast('1000' as tinyint) from t1;
> NULL
> 2. select 1000Y from t1;
> FAILED: SemanticException [Error 10029]: Line 1:7 Invalid numerical constant 
> '1000Y'
> 3. select cast(1000 as tinyint) from t1;
> -24
> 4. select cast(1.1e3-1000/0 as tinyint) from t1;
> 0
> 5. select cast(10/0 as tinyint) from pw18;
> -1
> The Hive user can accidentally try to typecast an out-of-range value. For 
> example, in cases 4 and 5, even though the final result is NaN, Hive can 
> typecast to an arbitrary result. Either we should document that the end user 
> should take care of overflow, underflow, division by zero, etc. 
> himself/herself, or we should return NULLs when the final result is out of 
> range.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4196) Support for Streaming Partitions in Hive

2013-10-29 Thread Roshan Naik (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808548#comment-13808548
 ] 

Roshan Naik commented on HIVE-4196:
---

Moving the streaming work to a new JIRA, HIVE-5687, since it will be based on a 
different design.

> Support for Streaming Partitions in Hive
> 
>
> Key: HIVE-4196
> URL: https://issues.apache.org/jira/browse/HIVE-4196
> Project: Hive
>  Issue Type: New Feature
>  Components: Database/Schema, HCatalog
>Affects Versions: 0.10.1
>Reporter: Roshan Naik
>Assignee: Roshan Naik
> Attachments: HCatalogStreamingIngestFunctionalSpecificationandDesign- 
> apr 29- patch1.docx, HCatalogStreamingIngestFunctionalSpecificationandDesign- 
> apr 29- patch1.pdf, HIVE-4196.v1.patch
>
>
> Motivation: Allow Hive users to immediately query data streaming in through 
> clients such as Flume.
> Currently Hive partitions must be created after all the data for the 
> partition is available. Thereafter, data in the partitions is considered 
> immutable. 
> This proposal introduces the notion of a streaming partition into which new 
> files can be committed periodically and made available for queries before the 
> partition is closed and converted into a standard partition.
> The admin enables a streaming partition on a table using DDL, providing the 
> following pieces of information:
> - Name of the partition in the table on which streaming is enabled
> - Frequency at which the streaming partition should be closed and converted 
> into a standard partition.
> Tables with a streaming partition enabled will be partitioned by one and only 
> one column. It is assumed that this column will contain a timestamp.
> Closing the current streaming partition converts it into a standard 
> partition. Based on the specified frequency, the current streaming partition 
> is closed and a new one created for future writes. This is referred to as 
> 'rolling the partition'.
> A streaming partition's life cycle is as follows:
>  - A new streaming partition is instantiated for writes.
>  - Streaming clients request (via webhcat) an HDFS file name into which 
> they can write a chunk of records for a specific table.
>  - Streaming clients write a chunk (via webhdfs) to that file and commit 
> it (via webhcat). Committing merely indicates that the chunk has been written 
> completely and is ready to serve queries.
>  - When the partition is rolled, all committed chunks are swept into a single 
> directory and a standard partition pointing to that directory is created. The 
> streaming partition is closed and a new streaming partition is created. Rolling 
> the partition is atomic; streaming clients are agnostic of partition rolling.
>  - Hive queries will be able to query the partition that is currently open 
> for streaming. Only committed chunks will be visible. Read consistency will 
> be ensured so that repeated reads of the same partition will be idempotent 
> for the lifespan of the query.
> Partition rolling requires an active agent/thread running to check when it is 
> time to roll and to trigger the roll. This could be achieved either by using 
> an external agent such as Oozie (preferably) or an internal agent.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HIVE-5687) Streaming support in Hive

2013-10-29 Thread Roshan Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roshan Naik reassigned HIVE-5687:
-

Assignee: Roshan Naik

> Streaming support in Hive
> -
>
> Key: HIVE-5687
> URL: https://issues.apache.org/jira/browse/HIVE-5687
> Project: Hive
>  Issue Type: Bug
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>
> Implement support for Streaming data into HIVE.
> - Provide a client streaming API 
> - Transaction support: Clients should be able to periodically commit a batch 
> of records atomically
> - Immediate visibility: Records should be immediately visible to queries on 
> commit
> - Should not overload HDFS with too many small files
> Use Cases:
>  - Streaming logs into HIVE via Flume
>  - Streaming results of computations from Storm



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5687) Streaming support in Hive

2013-10-29 Thread Roshan Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roshan Naik updated HIVE-5687:
--

Description: 
Implement support for Streaming data into HIVE.
- Provide a client streaming API 
- Transaction support: Clients should be able to periodically commit a batch of 
records atomically
- Immediate visibility: Records should be immediately visible to queries on 
commit
- Should not overload HDFS with too many small files

Use Cases:
 - Streaming logs into HIVE via Flume
 - Streaming results of computations from Storm

  was:
Implement support for Streaming data into HIVE.
- Provide a client streaming API 
- Transaction support: Clients should be able to periodically commit a batch of 
records atomically
- Immediate visibility: Records should be immediately visible to queries on 
commit
- Should not overload HDFS with too many small files

Use Cases:
 - Streaming logs into HIVE via Flume
 - Streaming results of computational from Storm


> Streaming support in Hive
> -
>
> Key: HIVE-5687
> URL: https://issues.apache.org/jira/browse/HIVE-5687
> Project: Hive
>  Issue Type: Bug
>Reporter: Roshan Naik
>
> Implement support for Streaming data into HIVE.
> - Provide a client streaming API 
> - Transaction support: Clients should be able to periodically commit a batch 
> of records atomically
> - Immediate visibility: Records should be immediately visible to queries on 
> commit
> - Should not overload HDFS with too many small files
> Use Cases:
>  - Streaming logs into HIVE via Flume
>  - Streaming results of computations from Storm



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5687) Streaming support in Hive

2013-10-29 Thread Roshan Naik (JIRA)
Roshan Naik created HIVE-5687:
-

 Summary: Streaming support in Hive
 Key: HIVE-5687
 URL: https://issues.apache.org/jira/browse/HIVE-5687
 Project: Hive
  Issue Type: Bug
Reporter: Roshan Naik


Implement support for Streaming data into HIVE.
- Provide a client streaming API 
- Transaction support: Clients should be able to periodically commit a batch of 
records atomically
- Immediate visibility: Records should be immediately visible to queries on 
commit
- Should not overload HDFS with too many small files

Use Cases:
 - Streaming logs into HIVE via Flume
 - Streaming results of computational from Storm



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5686) partition column type validation doesn't quite work for dates

2013-10-29 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5686:
---

Description: 
Another interesting issue...
{noformat}
hive> create table z(c string) partitioned by (i date,j date);
OK
Time taken: 0.099 seconds
hive> alter table z add partition (i='2012-01-01', j='foo');  
FAILED: SemanticException [Error 10248]: Cannot add partition column j of type 
string as it cannot be converted to type date
hive> alter table z add partition (i='2012-01-01', j=date 'foo');
OK
Time taken: 0.119 seconds
{noformat}

The fake date is caught in normal queries:
{noformat}
hive> select * from z where j == date 'foo';
FAILED: SemanticException Unable to convert date literal string to date value.
{noformat}

  was:
Another interesting issue...
{noformat}
hive> create table z(c string) partitioned by (i date,j date);
OK
Time taken: 0.099 seconds
hive> alter table z add partition (i='2012-01-01', j='foo');  
FAILED: SemanticException [Error 10248]: Cannot add partition column j of type 
string as it cannot be converted to type date
hive> alter table z add partition (i='2012-01-01', j=date 'foo');
OK
Time taken: 0.119 seconds
{noformat}


> partition column type validation doesn't quite work for dates
> -
>
> Key: HIVE-5686
> URL: https://issues.apache.org/jira/browse/HIVE-5686
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Vikram Dixit K
>
> Another interesting issue...
> {noformat}
> hive> create table z(c string) partitioned by (i date,j date);
> OK
> Time taken: 0.099 seconds
> hive> alter table z add partition (i='2012-01-01', j='foo');  
> FAILED: SemanticException [Error 10248]: Cannot add partition column j of 
> type string as it cannot be converted to type date
> hive> alter table z add partition (i='2012-01-01', j=date 'foo');
> OK
> Time taken: 0.119 seconds
> {noformat}
> The fake date is caught in normal queries:
> {noformat}
> hive> select * from z where j == date 'foo';
> FAILED: SemanticException Unable to convert date literal string to date value.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5519) Use paging mechanism for templeton get requests.

2013-10-29 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5519:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch committed to trunk.
Thanks for the contribution Hari!


> Use paging mechanism for templeton get requests.
> 
>
> Key: HIVE-5519
> URL: https://issues.apache.org/jira/browse/HIVE-5519
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 0.13.0
>
> Attachments: HIVE-5519.1.patch.txt, HIVE-5519.2.patch.txt, 
> HIVE-5519.3.patch.txt
>
>
> Issuing a command to retrieve the jobs field using
> "https://mwinkledemo.azurehdinsight.net:563/templeton/v1/queue/?user.name=admin&fields=*"
>  --user u:p
> will result in a timeout on Windows machines. The issue happens because of the 
> amount of data that needs to be fetched. The proposal is to use a paging-based 
> encoding scheme so that we flush the contents regularly and the client does 
> not time out.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5436) Hive's casting behavior needs to be consistent

2013-10-29 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808538#comment-13808538
 ] 

Xuefu Zhang commented on HIVE-5436:
---

As an FYI: 
#1 and #3 will be addressed in a global context in HIVE-5660.
#4 and #5 should be addressed in a global context in HIVE-5655.

> Hive's casting behavior needs to be consistent
> --
>
> Key: HIVE-5436
> URL: https://issues.apache.org/jira/browse/HIVE-5436
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>Priority: Critical
>
> Hive's casting behavior is inconsistent, and the behavior of casting from one 
> type to another is undocumented as of now when the cast value is out of range. 
> For example, casting out-of-range values from one type to another can result 
> in incorrect results.
> E.g.: 
> 1. select cast('1000' as tinyint) from t1;
> NULL
> 2. select 1000Y from t1;
> FAILED: SemanticException [Error 10029]: Line 1:7 Invalid numerical constant 
> '1000Y'
> 3. select cast(1000 as tinyint) from t1;
> -24
> 4. select cast(1.1e3-1000/0 as tinyint) from t1;
> 0
> 5. select cast(10/0 as tinyint) from pw18;
> -1
> The Hive user can accidentally try to typecast an out-of-range value. For 
> example, in cases 4 and 5, even though the final result is NaN, Hive can 
> typecast to an arbitrary result. Either we should document that the end user 
> should take care of overflow, underflow, division by zero, etc. 
> himself/herself, or we should return NULLs when the final result is out of 
> range.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5686) partition column type validation doesn't quite work for dates

2013-10-29 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-5686:
--

 Summary: partition column type validation doesn't quite work for 
dates
 Key: HIVE-5686
 URL: https://issues.apache.org/jira/browse/HIVE-5686
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Vikram Dixit K


Another interesting issue...
{noformat}
hive> create table z(c string) partitioned by (i date,j date);
OK
Time taken: 0.099 seconds
hive> alter table z add partition (i='2012-01-01', j='foo');  
FAILED: SemanticException [Error 10248]: Cannot add partition column j of type 
string as it cannot be converted to type date
hive> alter table z add partition (i='2012-01-01', j=date 'foo');
OK
Time taken: 0.119 seconds
{noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5685) partition column type validation doesn't work in some cases

2013-10-29 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-5685:
--

 Summary: partition column type validation doesn't work in some 
cases
 Key: HIVE-5685
 URL: https://issues.apache.org/jira/browse/HIVE-5685
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Vikram Dixit K


It seems like it works if there's more than one partition column, and doesn't 
work if there's just one. At least that's the case that I found. The situation 
for different types is the same.

{noformat}
hive> create table zzz(c string) partitioned by (i int);
OK
Time taken: 0.41 seconds
hive> alter table zzz add partition (i='foo');
OK
Time taken: 0.185 seconds
hive> create table (c string) partitioned by (i int,j int); 
OK
Time taken: 0.085 seconds
hive> alter table  add partition (i='foo',j=5);
FAILED: SemanticException [Error 10248]: Cannot add partition column i of type 
string as it cannot be converted to type int
hive> alter table  add partition (i=5,j='foo');
FAILED: SemanticException [Error 10248]: Cannot add partition column j of type 
string as it cannot be converted to type int
{noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5672) Insert with custom separator not supported for non-local directory

2013-10-29 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5672:
--

Summary: Insert with custom separator not supported for non-local directory 
 (was: Insert with custom separator not supported for local directory)

> Insert with custom separator not supported for non-local directory
> --
>
> Key: HIVE-5672
> URL: https://issues.apache.org/jira/browse/HIVE-5672
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Romain Rigaux
>Assignee: Xuefu Zhang
>
> https://issues.apache.org/jira/browse/HIVE-3682 is great, but non-local 
> directories don't seem to be supported:
> {code}
> insert overwrite directory '/tmp/test-02'
> row format delimited
> FIELDS TERMINATED BY ':'
> select description FROM sample_07
> {code}
> {code}
> Error while compiling statement: FAILED: ParseException line 2:0 cannot 
> recognize input near 'row' 'format' 'delimited' in select clause
> {code}
> This works (with 'local'):
> {code}
> insert overwrite local directory '/tmp/test-02'
> row format delimited
> FIELDS TERMINATED BY ':'
> select code, description FROM sample_07
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HIVE-4196) Support for Streaming Partitions in Hive

2013-10-29 Thread Roshan Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roshan Naik resolved HIVE-4196.
---

Resolution: Won't Fix

In view of HIVE-5317, which brings insert/update/delete support to Hive, 
introducing streaming partitions is no longer necessary. Streaming support can 
be provided with far less complexity by leveraging HIVE-5317.

> Support for Streaming Partitions in Hive
> 
>
> Key: HIVE-4196
> URL: https://issues.apache.org/jira/browse/HIVE-4196
> Project: Hive
>  Issue Type: New Feature
>  Components: Database/Schema, HCatalog
>Affects Versions: 0.10.1
>Reporter: Roshan Naik
>Assignee: Roshan Naik
> Attachments: HCatalogStreamingIngestFunctionalSpecificationandDesign- 
> apr 29- patch1.docx, HCatalogStreamingIngestFunctionalSpecificationandDesign- 
> apr 29- patch1.pdf, HIVE-4196.v1.patch
>
>
> Motivation: Allow Hive users to immediately query data streaming in through 
> clients such as Flume.
> Currently Hive partitions must be created after all the data for the 
> partition is available. Thereafter, data in the partitions is considered 
> immutable. 
> This proposal introduces the notion of a streaming partition into which new 
> files can be committed periodically and made available for queries before the 
> partition is closed and converted into a standard partition.
> The admin enables a streaming partition on a table using DDL, providing the 
> following pieces of information:
> - Name of the partition in the table on which streaming is enabled
> - Frequency at which the streaming partition should be closed and converted 
> into a standard partition.
> Tables with a streaming partition enabled will be partitioned by one and only 
> one column. It is assumed that this column will contain a timestamp.
> Closing the current streaming partition converts it into a standard 
> partition. Based on the specified frequency, the current streaming partition 
> is closed and a new one created for future writes. This is referred to as 
> 'rolling the partition'.
> A streaming partition's life cycle is as follows:
>  - A new streaming partition is instantiated for writes.
>  - Streaming clients request (via webhcat) an HDFS file name into which 
> they can write a chunk of records for a specific table.
>  - Streaming clients write a chunk (via webhdfs) to that file and commit 
> it (via webhcat). Committing merely indicates that the chunk has been written 
> completely and is ready to serve queries.
>  - When the partition is rolled, all committed chunks are swept into a single 
> directory and a standard partition pointing to that directory is created. The 
> streaming partition is closed and a new streaming partition is created. Rolling 
> the partition is atomic; streaming clients are agnostic of partition rolling.
>  - Hive queries will be able to query the partition that is currently open 
> for streaming. Only committed chunks will be visible. Read consistency will 
> be ensured so that repeated reads of the same partition will be idempotent 
> for the lifespan of the query.
> Partition rolling requires an active agent/thread running to check when it is 
> time to roll and to trigger the roll. This could be achieved either by using 
> an external agent such as Oozie (preferably) or an internal agent.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HIVE-5042) Allow MiniMr tests to be run on MiniTezCluster

2013-10-29 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-5042.
--

Resolution: Duplicate

> Allow MiniMr tests to be run on MiniTezCluster
> --
>
> Key: HIVE-5042
> URL: https://issues.apache.org/jira/browse/HIVE-5042
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>
> Tez has a MiniTezCluster component to run tests. Works similar to 
> MiniMR/MiniYarn cluster. We need to enable the mini mr tests for tez.
> NO PRECOMMIT TESTS (this is wip for the tez branch)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5547) webhcat pig job submission should ship hive tar if -usehcatalog is specified

2013-10-29 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808523#comment-13808523
 ] 

Thejas M Nair commented on HIVE-5547:
-

Added comments on reviewboard.


> webhcat pig job submission should ship hive tar if -usehcatalog is specified
> 
>
> Key: HIVE-5547
> URL: https://issues.apache.org/jira/browse/HIVE-5547
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-5547.2.patch, HIVE-5547.patch
>
>
> Currently, when a Pig job is submitted through WebHCat and the Pig script 
> uses HCatalog, Hive needs to be installed on the node in the 
> cluster which ends up executing the job. For large clusters this is a 
> manageability issue, so we should use DistributedCache to ship the Hive tar 
> file to the target node as part of job submission.
> TestPig_11 in hcatalog/src/test/e2e/templeton/tests/jobsubmission.conf has 
> the test case for this



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HIVE-5672) Insert with custom separator not supported for local directory

2013-10-29 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang reassigned HIVE-5672:
-

Assignee: Xuefu Zhang

> Insert with custom separator not supported for local directory
> --
>
> Key: HIVE-5672
> URL: https://issues.apache.org/jira/browse/HIVE-5672
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Romain Rigaux
>Assignee: Xuefu Zhang
>
> https://issues.apache.org/jira/browse/HIVE-3682 is great, but non-local 
> directories don't seem to be supported:
> {code}
> insert overwrite directory '/tmp/test-02'
> row format delimited
> FIELDS TERMINATED BY ':'
> select description FROM sample_07
> {code}
> {code}
> Error while compiling statement: FAILED: ParseException line 2:0 cannot 
> recognize input near 'row' 'format' 'delimited' in select clause
> {code}
> This works (with 'local'):
> {code}
> insert overwrite local directory '/tmp/test-02'
> row format delimited
> FIELDS TERMINATED BY ':'
> select code, description FROM sample_07
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HIVE-5436) Hive's casting behavior needs to be consistent

2013-10-29 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan reassigned HIVE-5436:
---

Assignee: Hari Sankar Sivarama Subramaniyan

> Hive's casting behavior needs to be consistent
> --
>
> Key: HIVE-5436
> URL: https://issues.apache.org/jira/browse/HIVE-5436
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>Priority: Critical
>
> Hive's casting behavior is inconsistent, and the behavior of casting from one 
> type to another is undocumented as of now when the cast value is out of range. 
> For example, casting out-of-range values from one type to another can result 
> in incorrect results.
> E.g.: 
> 1. select cast('1000' as tinyint) from t1;
> NULL
> 2. select 1000Y from t1;
> FAILED: SemanticException [Error 10029]: Line 1:7 Invalid numerical constant 
> '1000Y'
> 3. select cast(1000 as tinyint) from t1;
> -24
> 4. select cast(1.1e3-1000/0 as tinyint) from t1;
> 0
> 5. select cast(10/0 as tinyint) from pw18;
> -1
> The Hive user can accidentally try to typecast an out-of-range value. For 
> example, in cases 4 and 5, even though the final result is NaN, Hive can 
> typecast to an arbitrary result. Either we should document that the end user 
> should take care of overflow, underflow, division by zero, etc. 
> himself/herself, or we should return NULLs when the final result is out of 
> range.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5672) Insert with custom separator not supported for local directory

2013-10-29 Thread Romain Rigaux (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808494#comment-13808494
 ] 

Romain Rigaux commented on HIVE-5672:
-

Feel free to take it, thanks!

> Insert with custom separator not supported for local directory
> --
>
> Key: HIVE-5672
> URL: https://issues.apache.org/jira/browse/HIVE-5672
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Romain Rigaux
>
> https://issues.apache.org/jira/browse/HIVE-3682 is great, but non-local 
> directories don't seem to be supported:
> {code}
> insert overwrite directory '/tmp/test-02'
> row format delimited
> FIELDS TERMINATED BY ':'
> select description FROM sample_07
> {code}
> {code}
> Error while compiling statement: FAILED: ParseException line 2:0 cannot 
> recognize input near 'row' 'format' 'delimited' in select clause
> {code}
> This works (with 'local'):
> {code}
> insert overwrite local directory '/tmp/test-02'
> row format delimited
> FIELDS TERMINATED BY ':'
> select code, description FROM sample_07
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5547) webhcat pig job submission should ship hive tar if -usehcatalog is specified

2013-10-29 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808488#comment-13808488
 ] 

Eugene Koifman commented on HIVE-5547:
--

https://reviews.facebook.net/D13779

> webhcat pig job submission should ship hive tar if -usehcatalog is specified
> 
>
> Key: HIVE-5547
> URL: https://issues.apache.org/jira/browse/HIVE-5547
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-5547.2.patch, HIVE-5547.patch
>
>
> Currently, when a Pig job is submitted through WebHCat and the Pig script 
> uses HCatalog, Hive needs to be installed on the node in the 
> cluster which ends up executing the job. For large clusters this is a 
> manageability issue, so we should use DistributedCache to ship the Hive tar 
> file to the target node as part of job submission.
> TestPig_11 in hcatalog/src/test/e2e/templeton/tests/jobsubmission.conf has 
> the test case for this



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5617) Add webhcat e2e tests using 1. jobs (GET) 2. jobs/:jobid (GET) 3. jobs/:jobid (DELETE) apis

2013-10-29 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-5617:


Fix Version/s: 0.13.0
   Status: Patch Available  (was: Open)

Replacing deprecated webhcat APIs with the ones specified in 
https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference

> Add webhcat e2e tests using 1. jobs (GET) 2. jobs/:jobid (GET) 3. jobs/:jobid 
> (DELETE) apis
> ---
>
> Key: HIVE-5617
> URL: https://issues.apache.org/jira/browse/HIVE-5617
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 0.13.0
>
> Attachments: HIVE-5617.1.patch.txt
>
>
> The current e2e test driver module (TestDriverCurl.pm) uses the API that is 
> deprecated in Hive 0.12. Use the jobs API introduced in Hive 0.12 for killing 
> a job, getting the status of a job, etc. The reference is 
> https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5617) Add webhcat e2e tests using 1. jobs (GET) 2. jobs/:jobid (GET) 3. jobs/:jobid (DELETE) apis

2013-10-29 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-5617:


Attachment: HIVE-5617.1.patch.txt

> Add webhcat e2e tests using 1. jobs (GET) 2. jobs/:jobid (GET) 3. jobs/:jobid 
> (DELETE) apis
> ---
>
> Key: HIVE-5617
> URL: https://issues.apache.org/jira/browse/HIVE-5617
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 0.13.0
>
> Attachments: HIVE-5617.1.patch.txt
>
>
> The current e2e test driver module (TestDriverCurl.pm) uses the API that is 
> deprecated in Hive 0.12. Use the jobs API introduced in Hive 0.12 for killing 
> a job, getting the status of a job, etc. The reference is 
> https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Review Request 15055: HIVE-5557: Push down qualifying Where clause predicates as join conditions

2013-10-29 Thread Harish Butani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15055/
---

(Updated Oct. 29, 2013, 9:20 p.m.)


Review request for hive, Ashutosh Chauhan and Vikram Dixit Kumaraswamy.


Bugs: hive-5557
https://issues.apache.org/jira/browse/hive-5557


Repository: hive-git


Description
---

Step 2 of HIVE-.
Depends on HIVE-5556


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java cf0c895 
  ql/src/test/queries/clientpositive/join_cond_pushdown_3.q PRE-CREATION 
  ql/src/test/queries/clientpositive/join_cond_pushdown_4.q PRE-CREATION 
  ql/src/test/results/clientpositive/join_cond_pushdown_3.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/join_cond_pushdown_4.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/15055/diff/


Testing
---

ran all join tests
added 2 new tests join_cond_pushdown3.q, join_cond_pushdown4.q


Thanks,

Harish Butani



Re: Review Request 14953: Pushdown join conditions

2013-10-29 Thread Harish Butani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14953/
---

(Updated Oct. 29, 2013, 9:19 p.m.)


Review request for hive, Ashutosh Chauhan and Vikram Dixit Kumaraswamy.


Bugs: hive-5556
https://issues.apache.org/jira/browse/hive-5556


Repository: hive-git


Description
---

Step 1 to support Alternate Join Syntax: HIVE-

This patch also contains fixes to merging of QBJoinTrees


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java 9c8cac1 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java cf0c895 
  ql/src/test/org/apache/hadoop/hive/ql/parse/TestQBJoinTreeApplyPredicate.java 
PRE-CREATION 
  ql/src/test/queries/clientpositive/join_cond_pushdown_1.q PRE-CREATION 
  ql/src/test/queries/clientpositive/join_cond_pushdown_2.q PRE-CREATION 
  ql/src/test/results/clientpositive/auto_sortmerge_join_12.q.out 865627b 
  ql/src/test/results/clientpositive/join_cond_pushdown_1.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/join_cond_pushdown_2.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/14953/diff/


Testing
---

ran all join .q files
added join_cond_pushdown_1.q, join_cond_pushdown_2.q .q tests
added TestQBJoinTreeApplyPredicate unit test to test pushdown functionality


Thanks,

Harish Butani



Review Request 15055: HIVE-5557: Push down qualifying Where clause predicates as join conditions

2013-10-29 Thread Harish Butani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15055/
---

Review request for hive and Ashutosh Chauhan.


Bugs: hive-5557
https://issues.apache.org/jira/browse/hive-5557


Repository: hive-git


Description
---

Step 2 of HIVE-.
Depends on HIVE-5556


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java cf0c895 
  ql/src/test/queries/clientpositive/join_cond_pushdown_3.q PRE-CREATION 
  ql/src/test/queries/clientpositive/join_cond_pushdown_4.q PRE-CREATION 
  ql/src/test/results/clientpositive/join_cond_pushdown_3.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/join_cond_pushdown_4.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/15055/diff/


Testing
---

ran all join tests
added 2 new tests join_cond_pushdown3.q, join_cond_pushdown4.q


Thanks,

Harish Butani



Re: Should we turn off the apache jenkins/huson builds?

2013-10-29 Thread Brock Noland
FYI, since there was no feedback, I removed the "Update the relevant JIRA"
step from the non-PTest builds.

From here on out, the PTest builds execute on the BigTop Jenkins, which makes
them unable to update the JIRA.

Bigtop Jenkins: http://bigtop01.cloudera.org:8080


On Wed, Oct 16, 2013 at 1:17 PM, Brock Noland  wrote:

> I'd be +1 for turning off the "integration" notice.  Ideally before we
> turn off the non-ptest builds we finish:
>
> https://issues.apache.org/jira/browse/HIVE-4941
>
> On Sun, Oct 13, 2013 at 2:55 PM, Edward Capriolo 
> wrote:
> > They seem very unreliable at this point. It seems they almost never pass
> > FAILURE: Integrated in Hive-trunk-hadoop2 #498 (See [
> > https://builds.apache.org/job/Hive-trunk-hadoop2/498/])
> > HIVE-5252 - Add ql syntax for inline java code creation (Edward Capriolo
> > via Brock Noland) (brock:
> > http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531549)
> > * /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
> > * /hive/trunk/ivy/libraries.properties
> > * /hive/trunk/ql/ivy.xml
> > * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/CommandProcessorFactory.java
> > * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/CompileProcessor.java
> > * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/HiveCommand.java
> > * /hive/trunk/ql/src/test/queries/clientnegative/compile_processor.q
> > * /hive/trunk/ql/src/test/queries/clientpositive/compile_processor.q
> > * /hive/trunk/ql/src/test/results/clientnegative/compile_processor.q.out
> > * /hive/trunk/ql/src/test/results/clientpositive/compile_processor.q.out
> >
> > It is also very annoying that they post back to the ticket what almost 
> > surely is a false negative test result.
>
>
>
> --
> Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
>



-- 
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org


[jira] [Updated] (HIVE-4388) HBase tests fail against Hadoop 2

2013-10-29 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-4388:
---

Status: Open  (was: Patch Available)

> HBase tests fail against Hadoop 2
> -
>
> Key: HIVE-4388
> URL: https://issues.apache.org/jira/browse/HIVE-4388
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Gunther Hagleitner
>Assignee: Brock Noland
> Attachments: HIVE-4388.10.patch, HIVE-4388.11.patch, 
> HIVE-4388.12.patch, HIVE-4388.13.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388.patch, HIVE-4388-wip.txt
>
>
> Currently we're building by default against 0.92. When you run against hadoop 
> 2 (-Dhadoop.mr.rev=23) builds fail because of: HBASE-5963.
> HIVE-3861 upgrades the version of hbase used. This will get you past the 
> problem in HBASE-5963 (which was fixed in 0.94.1) but fails with: HBASE-6396.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4388) HBase tests fail against Hadoop 2

2013-10-29 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-4388:
---

Attachment: HIVE-4388.13.patch

One more attempt. :)

> HBase tests fail against Hadoop 2
> -
>
> Key: HIVE-4388
> URL: https://issues.apache.org/jira/browse/HIVE-4388
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Gunther Hagleitner
>Assignee: Brock Noland
> Attachments: HIVE-4388.10.patch, HIVE-4388.11.patch, 
> HIVE-4388.12.patch, HIVE-4388.13.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388.patch, HIVE-4388-wip.txt
>
>
> Currently we're building by default against 0.92. When you run against hadoop 
> 2 (-Dhadoop.mr.rev=23) builds fail because of: HBASE-5963.
> HIVE-3861 upgrades the version of hbase used. This will get you past the 
> problem in HBASE-5963 (which was fixed in 0.94.1) but fails with: HBASE-6396.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4388) HBase tests fail against Hadoop 2

2013-10-29 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-4388:
---

Status: Patch Available  (was: Open)

> HBase tests fail against Hadoop 2
> -
>
> Key: HIVE-4388
> URL: https://issues.apache.org/jira/browse/HIVE-4388
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Gunther Hagleitner
>Assignee: Brock Noland
> Attachments: HIVE-4388.10.patch, HIVE-4388.11.patch, 
> HIVE-4388.12.patch, HIVE-4388.13.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388.patch, HIVE-4388-wip.txt
>
>
> Currently we're building by default against 0.92. When you run against hadoop 
> 2 (-Dhadoop.mr.rev=23) builds fail because of: HBASE-5963.
> HIVE-3861 upgrades the version of hbase used. This will get you past the 
> problem in HBASE-5963 (which was fixed in 0.94.1) but fails with: HBASE-6396.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5519) Use paging mechanism for templeton get requests.

2013-10-29 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-5519:


Fix Version/s: 0.13.0

> Use paging mechanism for templeton get requests.
> 
>
> Key: HIVE-5519
> URL: https://issues.apache.org/jira/browse/HIVE-5519
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 0.13.0
>
> Attachments: HIVE-5519.1.patch.txt, HIVE-5519.2.patch.txt, 
> HIVE-5519.3.patch.txt
>
>
> Issuing a command to retrieve the jobs field using
> "https://mwinkledemo.azurehdinsight.net:563/templeton/v1/queue/?user.name=admin&fields=*"
>  --user u:p
> will result in a timeout on Windows machines. The issue happens because of 
> the amount of data that needs to be fetched. The proposal is to use a 
> paging-based encoding scheme so that we flush the contents regularly and the 
> client does not time out.
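
A minimal sketch of the paging idea (not the actual WebHCat change; the class and page size below are hypothetical):

{code}
import java.io.PrintWriter;
import java.util.List;

public class PagedWriter {
  private static final int PAGE_SIZE = 100;  // hypothetical page size

  // Write results one page at a time, flushing after each page so the
  // client sees steady progress instead of one huge buffered response.
  public static void writeJobs(PrintWriter out, List<String> jobs) {
    for (int i = 0; i < jobs.size(); i += PAGE_SIZE) {
      int end = Math.min(i + PAGE_SIZE, jobs.size());
      for (String job : jobs.subList(i, end)) {
        out.println(job);  // emit one record of the current page
      }
      out.flush();         // push the page out so the client does not time out
    }
  }
}
{code}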



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5547) webhcat pig job submission should ship hive tar if -usehcatalog is specified

2013-10-29 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808434#comment-13808434
 ] 

Thejas M Nair commented on HIVE-5547:
-

Eugene, can you please include a reviewboard link?


> webhcat pig job submission should ship hive tar if -usehcatalog is specified
> 
>
> Key: HIVE-5547
> URL: https://issues.apache.org/jira/browse/HIVE-5547
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-5547.2.patch, HIVE-5547.patch
>
>
> Currently, when a Pig job is submitted through WebHCat and the Pig script 
> uses HCatalog, Hive must be installed on the node in the cluster which ends 
> up executing the job. For large clusters this is a manageability issue, so we 
> should use DistributedCache to ship the Hive tar file to the target node as 
> part of job submission.
> TestPig_11 in hcatalog/src/test/e2e/templeton/tests/jobsubmission.conf has 
> the test case for this.
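
A hedged sketch of the DistributedCache approach described above (the archive path and link name are hypothetical; this is not the patch itself):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;

public class ShipHiveTar {
  public static void configure(Configuration conf) throws Exception {
    // Ship the tar to each task node; the framework unpacks archives there.
    DistributedCache.addCacheArchive(
        new URI("hdfs:///apps/webhcat/hive.tar.gz#hive"), conf);
    DistributedCache.createSymlink(conf);  // expose it under the "hive" link
  }
}
{code}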



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5519) Use paging mechanism for templeton get requests.

2013-10-29 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808432#comment-13808432
 ] 

Thejas M Nair commented on HIVE-5519:
-

Looks good. Nice work on the javadoc and comments! +1.
Can you also please update the wiki page once this is in (mark it as a feature 
that will be available in the next release, 0.13)?


> Use paging mechanism for templeton get requests.
> 
>
> Key: HIVE-5519
> URL: https://issues.apache.org/jira/browse/HIVE-5519
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-5519.1.patch.txt, HIVE-5519.2.patch.txt, 
> HIVE-5519.3.patch.txt
>
>
> Issuing a command to retrieve the jobs field using
> "https://mwinkledemo.azurehdinsight.net:563/templeton/v1/queue/?user.name=admin&fields=*"
>  --user u:p
> will result in a timeout on Windows machines. The issue happens because of 
> the amount of data that needs to be fetched. The proposal is to use a 
> paging-based encoding scheme so that we flush the contents regularly and the 
> client does not time out.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-29 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808411#comment-13808411
 ] 

Brock Noland commented on HIVE-5610:


Nice, all tests pass!  Note that this patch could get stale very quickly.

bq.  we can compare in order to verify that we aren't dropping any tests?

As shown above, 4529 tests execute with maven while 
[4512|https://issues.apache.org/jira/browse/HIVE-5602?focusedCommentId=13807940&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13807940]
 execute under ant.

> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5610.1-for-commit.patch, 
> HIVE-5610.1-for-reading.patch, HIVE-5610.1-maven.patch, 
> HIVE-5610.2-for-commit.patch, HIVE-5610.2-for-reading.patch, 
> HIVE-5610.2-maven.patch, HIVE-5610.4-for-commit.patch, 
> HIVE-5610.4-for-reading.patch, HIVE-5610.4-maven.patch, 
> HIVE-5610.5-for-commit.patch, HIVE-5610.5-for-reading.patch, 
> HIVE-5610.5-maven.patch
>
>
> With HIVE-5566  complete we are ready to merge the maven branch to trunk. The 
> following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e. tar) creation to pom
> The merge process will be as follows:
> 1) Disable the precommit build
> 2) Apply patch
> 3) Commit result
> {noformat}
> svn status
> svn add 
> ..
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (patch)"
> {noformat}
> 4) Modify maven-rollforward.sh to use svn mv not mv:
> {noformat}
> perl -i -pe 's@^  mv @  svn mv @g' maven-rollforward.sh
> {noformat}
> 5) Execute maven-rollforward.sh and commit result 
> {noformat}
> bash ./maven-rollforward.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (maven rollforward)"
> {noformat}
> 6) Modify maven-delete-ant.sh to use svn rm as opposed to rm:
> {noformat}
> perl -i -pe 's@^  rm -rf @  svn rm @g' maven-delete-ant.sh
> {noformat}
> 7) Execute maven-delete-ant.sh and commit result
> {noformat}
> bash ./maven-delete-ant.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (delete ant)"
> {noformat}
> 8) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> 9) Enable the precommit build
> h3. Notes:
> h4. On this jira I will upload three patches:
> {noformat}
> HIVE-5610.${VERSION}-for-reading.patch
> HIVE-5610.${VERSION}-for-commit.patch
> HIVE-5610.${VERSION}-maven.patch
> {noformat}
> * for-reading has no qfile updates so it's easier to read
> * for-commit has the qfile updates and is for commit
> * maven is the patch in a "rollforward" state for testing purposes
> h4. To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any tests that have cyclical dependencies or require that the 
> packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808406#comment-13808406
 ] 

Hive QA commented on HIVE-5610:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610886/HIVE-5610.5-maven.patch

{color:green}SUCCESS:{color} +1 4529 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/6/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/6/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5610.1-for-commit.patch, 
> HIVE-5610.1-for-reading.patch, HIVE-5610.1-maven.patch, 
> HIVE-5610.2-for-commit.patch, HIVE-5610.2-for-reading.patch, 
> HIVE-5610.2-maven.patch, HIVE-5610.4-for-commit.patch, 
> HIVE-5610.4-for-reading.patch, HIVE-5610.4-maven.patch, 
> HIVE-5610.5-for-commit.patch, HIVE-5610.5-for-reading.patch, 
> HIVE-5610.5-maven.patch
>
>
> With HIVE-5566  complete we are ready to merge the maven branch to trunk. The 
> following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e. tar) creation to pom
> The merge process will be as follows:
> 1) Disable the precommit build
> 2) Apply patch
> 3) Commit result
> {noformat}
> svn status
> svn add 
> ..
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (patch)"
> {noformat}
> 4) Modify maven-rollforward.sh to use svn mv not mv:
> {noformat}
> perl -i -pe 's@^  mv @  svn mv @g' maven-rollforward.sh
> {noformat}
> 5) Execute maven-rollforward.sh and commit result 
> {noformat}
> bash ./maven-rollforward.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (maven rollforward)"
> {noformat}
> 6) Modify maven-delete-ant.sh to use svn rm as opposed to rm:
> {noformat}
> perl -i -pe 's@^  rm -rf @  svn rm @g' maven-delete-ant.sh
> {noformat}
> 7) Execute maven-delete-ant.sh and commit result
> {noformat}
> bash ./maven-delete-ant.sh
> svn status
> ...
> svn commit -m "HIVE-5610 - Merge maven branch into trunk (delete ant)"
> {noformat}
> 8) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> 9) Enable the precommit build
> h3. Notes:
> h4. On this jira I will upload three patches:
> {noformat}
> HIVE-5610.${VERSION}-for-reading.patch
> HIVE-5610.${VERSION}-for-commit.patch
> HIVE-5610.${VERSION}-maven.patch
> {noformat}
> * for-reading has no qfile updates so it's easier to read
> * for-commit has the qfile updates and is for commit
> * maven is the patch in a "rollforward" state for testing purposes
> h4. To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any tests that have cyclical dependencies or require that the 
> packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5670) annoying ZK exceptions are annoying

2013-10-29 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808386#comment-13808386
 ] 

Brock Noland commented on HIVE-5670:


+1

> annoying ZK exceptions are annoying
> ---
>
> Key: HIVE-5670
> URL: https://issues.apache.org/jira/browse/HIVE-5670
> Project: Hive
>  Issue Type: Task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Attachments: HIVE-5670.patch
>
>
> when I run tests locally (or on a cluster, IIRC) there are a bunch of ZK-related 
> exceptions in the Hive log, such as
> {noformat}
> 2013-10-28 09:50:50,851 ERROR zookeeper.ClientCnxn 
> (ClientCnxn.java:processEvent(523)) - Error while calling watcher 
> java.lang.NullPointerException
>at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
>at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
>
>2013-10-28 09:51:05,747 DEBUG server.NIOServerCnxn 
> (NIOServerCnxn.java:closeSock(1024)) - ignoring exception during input 
> shutdown
> java.net.SocketException: Socket is not connected
>at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:633)
>at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>at 
> org.apache.zookeeper.server.NIOServerCnxn.closeSock(NIOServerCnxn.java:1020)
>at org.apache.zookeeper.server.NIOServerCnxn.close(NIOServerCnxn.java:977)
>at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:347)
>at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
>at java.lang.Thread.run(Thread.java:680)
> {noformat}
> They are annoying when you look for actual problems in logs.
> Those on DEBUG level should be silenced via log levels for ZK classes by 
> default. Not sure what to do with ERROR level one(s?), I'd need to look if 
> they can be silenced/logged as DEBUG on hive side, or maybe file a bug for 
> ZK...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5670) annoying ZK exceptions are annoying

2013-10-29 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808384#comment-13808384
 ] 

Ashutosh Chauhan commented on HIVE-5670:


+1

> annoying ZK exceptions are annoying
> ---
>
> Key: HIVE-5670
> URL: https://issues.apache.org/jira/browse/HIVE-5670
> Project: Hive
>  Issue Type: Task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Attachments: HIVE-5670.patch
>
>
> when I run tests locally (or on a cluster, IIRC) there are a bunch of ZK-related 
> exceptions in the Hive log, such as
> {noformat}
> 2013-10-28 09:50:50,851 ERROR zookeeper.ClientCnxn 
> (ClientCnxn.java:processEvent(523)) - Error while calling watcher 
> java.lang.NullPointerException
>at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
>at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
>
>2013-10-28 09:51:05,747 DEBUG server.NIOServerCnxn 
> (NIOServerCnxn.java:closeSock(1024)) - ignoring exception during input 
> shutdown
> java.net.SocketException: Socket is not connected
>at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:633)
>at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>at 
> org.apache.zookeeper.server.NIOServerCnxn.closeSock(NIOServerCnxn.java:1020)
>at org.apache.zookeeper.server.NIOServerCnxn.close(NIOServerCnxn.java:977)
>at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:347)
>at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
>at java.lang.Thread.run(Thread.java:680)
> {noformat}
> They are annoying when you look for actual problems in logs.
> Those on DEBUG level should be silenced via log levels for ZK classes by 
> default. Not sure what to do with ERROR level one(s?), I'd need to look if 
> they can be silenced/logged as DEBUG on hive side, or maybe file a bug for 
> ZK...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5670) annoying ZK exceptions are annoying

2013-10-29 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5670:
---

Attachment: HIVE-5670.patch

Fix the first one by supplying a non-null watcher.
Fix the second one via log configuration.

There's a ZK SASL exception still logged (without a stack trace) at WARN, but 
I guess it's too dangerous to suppress.
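
The log-configuration half would look roughly like the following log4j entries (the exact class list is a guess, not taken from the patch):

{noformat}
# Drop DEBUG-level ZooKeeper connection noise by raising the threshold.
log4j.logger.org.apache.zookeeper.server.NIOServerCnxn=WARN
log4j.logger.org.apache.zookeeper.server.NIOServerCnxnFactory=WARN
{noformat}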

> annoying ZK exceptions are annoying
> ---
>
> Key: HIVE-5670
> URL: https://issues.apache.org/jira/browse/HIVE-5670
> Project: Hive
>  Issue Type: Task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Attachments: HIVE-5670.patch
>
>
> when I run tests locally (or on a cluster, IIRC) there are a bunch of ZK-related 
> exceptions in the Hive log, such as
> {noformat}
> 2013-10-28 09:50:50,851 ERROR zookeeper.ClientCnxn 
> (ClientCnxn.java:processEvent(523)) - Error while calling watcher 
> java.lang.NullPointerException
>at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
>at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
>
>2013-10-28 09:51:05,747 DEBUG server.NIOServerCnxn 
> (NIOServerCnxn.java:closeSock(1024)) - ignoring exception during input 
> shutdown
> java.net.SocketException: Socket is not connected
>at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:633)
>at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>at 
> org.apache.zookeeper.server.NIOServerCnxn.closeSock(NIOServerCnxn.java:1020)
>at org.apache.zookeeper.server.NIOServerCnxn.close(NIOServerCnxn.java:977)
>at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:347)
>at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
>at java.lang.Thread.run(Thread.java:680)
> {noformat}
> They are annoying when you look for actual problems in logs.
> Those on DEBUG level should be silenced via log levels for ZK classes by 
> default. Not sure what to do with ERROR level one(s?), I'd need to look if 
> they can be silenced/logged as DEBUG on hive side, or maybe file a bug for 
> ZK...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5670) annoying ZK exceptions are annoying

2013-10-29 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5670:
---

Status: Patch Available  (was: Open)

> annoying ZK exceptions are annoying
> ---
>
> Key: HIVE-5670
> URL: https://issues.apache.org/jira/browse/HIVE-5670
> Project: Hive
>  Issue Type: Task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Attachments: HIVE-5670.patch
>
>
> when I run tests locally (or on a cluster, IIRC) there are a bunch of ZK-related 
> exceptions in the Hive log, such as
> {noformat}
> 2013-10-28 09:50:50,851 ERROR zookeeper.ClientCnxn 
> (ClientCnxn.java:processEvent(523)) - Error while calling watcher 
> java.lang.NullPointerException
>at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
>at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
>
>2013-10-28 09:51:05,747 DEBUG server.NIOServerCnxn 
> (NIOServerCnxn.java:closeSock(1024)) - ignoring exception during input 
> shutdown
> java.net.SocketException: Socket is not connected
>at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:633)
>at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>at 
> org.apache.zookeeper.server.NIOServerCnxn.closeSock(NIOServerCnxn.java:1020)
>at org.apache.zookeeper.server.NIOServerCnxn.close(NIOServerCnxn.java:977)
>at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:347)
>at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
>at java.lang.Thread.run(Thread.java:680)
> {noformat}
> They are annoying when you look for actual problems in logs.
> Those on DEBUG level should be silenced via log levels for ZK classes by 
> default. Not sure what to do with ERROR level one(s?), I'd need to look if 
> they can be silenced/logged as DEBUG on hive side, or maybe file a bug for 
> ZK...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5683) JDBC support for char

2013-10-29 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-5683:
-

Component/s: Types
 JDBC

> JDBC support for char
> -
>
> Key: HIVE-5683
> URL: https://issues.apache.org/jira/browse/HIVE-5683
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC, Types
>Reporter: Jason Dere
>Assignee: Jason Dere
>
> Support char type in JDBC, including char length in result set metadata.
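
A hedged sketch of what the feature would enable on the client side (table and column are hypothetical; the connection URL assumes a local HiveServer2):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class CharMetadataDemo {
  public static void main(String[] args) throws Exception {
    Connection conn =
        DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT c FROM char_demo");
    ResultSetMetaData md = rs.getMetaData();
    // With char support, expect the type name plus the declared length,
    // e.g. CHAR(10) for a CHAR(10) column.
    System.out.println(md.getColumnTypeName(1) + "(" + md.getPrecision(1) + ")");
  }
}
{code}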



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5684) Serde support for char

2013-10-29 Thread Jason Dere (JIRA)
Jason Dere created HIVE-5684:


 Summary: Serde support for char
 Key: HIVE-5684
 URL: https://issues.apache.org/jira/browse/HIVE-5684
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers, Types
Reporter: Jason Dere
Assignee: Jason Dere


Update some of the SerDes with char support.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5683) JDBC support for char

2013-10-29 Thread Jason Dere (JIRA)
Jason Dere created HIVE-5683:


 Summary: JDBC support for char
 Key: HIVE-5683
 URL: https://issues.apache.org/jira/browse/HIVE-5683
 Project: Hive
  Issue Type: Bug
Reporter: Jason Dere
Assignee: Jason Dere


Support char type in JDBC, including char length in result set metadata.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5672) Insert with custom separator not supported for local directory

2013-10-29 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808371#comment-13808371
 ] 

Xuefu Zhang commented on HIVE-5672:
---

Hi [~romainr], do you plan to work on this? If not, I can give it a try. 
Thanks. 

> Insert with custom separator not supported for local directory
> --
>
> Key: HIVE-5672
> URL: https://issues.apache.org/jira/browse/HIVE-5672
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Romain Rigaux
>
> https://issues.apache.org/jira/browse/HIVE-3682 is great, but non-local 
> directories don't seem to be supported:
> {code}
> insert overwrite directory '/tmp/test-02'
> row format delimited
> FIELDS TERMINATED BY ':'
> select description FROM sample_07
> {code}
> {code}
> Error while compiling statement: FAILED: ParseException line 2:0 cannot 
> recognize input near 'row' 'format' 'delimited' in select clause
> {code}
> This works (with 'local'):
> {code}
> insert overwrite local directory '/tmp/test-02'
> row format delimited
> FIELDS TERMINATED BY ':'
> select code, description FROM sample_07
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5191) Add char data type

2013-10-29 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-5191:
-

Attachment: HIVE-5191.2.patch

Patch v2, changes based on Xuefu's comments
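
For readers following along, a hedged sketch of the DDL this feature targets, assuming syntax parallel to the varchar type from HIVE-4844 (table name hypothetical):

{code}
-- CHAR(n) values are fixed-length, blank-padded to n characters.
CREATE TABLE char_demo (c CHAR(10));
{code}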

> Add char data type
> --
>
> Key: HIVE-5191
> URL: https://issues.apache.org/jira/browse/HIVE-5191
> Project: Hive
>  Issue Type: New Feature
>  Components: Types
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-5191.1.patch, HIVE-5191.2.patch
>
>
> Separate task for char type, since HIVE-4844 only adds varchar



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5668) path normalization in MapOperator is expensive

2013-10-29 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5668:


   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk.
Thanks for the review Gunther!


> path normalization in MapOperator is expensive
> --
>
> Key: HIVE-5668
> URL: https://issues.apache.org/jira/browse/HIVE-5668
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.13.0
>
> Attachments: HIVE-5668.1.patch
>
>
> The conversion of paths in MapWork.getPathToAliases is happening multiple 
> times in MapOperator.cleanUpInputFileChangedOp. Caching the results of 
> conversion can improve the performance of hive map tasks.
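
A hedged sketch of the caching idea (class and method names are hypothetical, not taken from the patch):

{code}
import java.util.HashMap;
import java.util.Map;

public class PathNormalizationCache {
  private final Map<String, String> cache = new HashMap<String, String>();

  // Normalize each raw path once; later lookups reuse the cached result
  // instead of repeating the Path/URI conversion on every input file change.
  public String normalize(String rawPath) {
    String normalized = cache.get(rawPath);
    if (normalized == null) {
      normalized = new org.apache.hadoop.fs.Path(rawPath).toUri().getPath();
      cache.put(rawPath, normalized);
    }
    return normalized;
  }
}
{code}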



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5354) Decimal precision/scale support in ORC file

2013-10-29 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808360#comment-13808360
 ] 

Brock Noland commented on HIVE-5354:


My fault. I'll resubmit. I am moving the builds over to the bigtop jenkins.

> Decimal precision/scale support in ORC file
> ---
>
> Key: HIVE-5354
> URL: https://issues.apache.org/jira/browse/HIVE-5354
> Project: Hive
>  Issue Type: Task
>  Components: Serializers/Deserializers
>Affects Versions: 0.10.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5354.1.patch, HIVE-5354.2.patch, HIVE-5354.3.patch, 
> HIVE-5354.4.patch, HIVE-5354.patch
>
>
> A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5602) Micro optimize select operator

2013-10-29 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808331#comment-13808331
 ] 

Edward Capriolo commented on HIVE-5602:
---

Thanks for looking.

> Micro optimize select operator
> --
>
> Key: HIVE-5602
> URL: https://issues.apache.org/jira/browse/HIVE-5602
> Project: Hive
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-5602.2.patch.txt, HIVE-5602.patch.1.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5613) Subquery support: disallow nesting of SubQueries

2013-10-29 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808310#comment-13808310
 ] 

Ashutosh Chauhan commented on HIVE-5613:


+1
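
For context, a hedged example of the nesting this change disallows (tables are hypothetical): a SubQuery predicate inside another SubQuery predicate.

{code}
SELECT *
FROM t1
WHERE t1.key IN (SELECT key FROM t2
                 WHERE t2.key IN (SELECT key FROM t3));
{code}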

> Subquery support: disallow nesting of SubQueries
> 
>
> Key: HIVE-5613
> URL: https://issues.apache.org/jira/browse/HIVE-5613
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-5613.1.patch, HIVE-5613.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5499) can not show chinese comments

2013-10-29 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808309#comment-13808309
 ] 

Xuefu Zhang commented on HIVE-5499:
---

[~lvxin_1986] Just curious. The JIRA is marked as "fixed", so the patches here 
are committed already?

> can not show chinese comments
> -
>
> Key: HIVE-5499
> URL: https://issues.apache.org/jira/browse/HIVE-5499
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Query Processor
>Affects Versions: 0.11.0
> Environment: hadoop-cdh3u6
>Reporter: alex.lv
> Attachments: HIVE-5499-Column-Comment.patch, 
> HIVE-5949-Table-Commemt.patch
>
>
> desc formatted tablename1
> cannot show Chinese comments; the result is garbled text



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HIVE-3844) Unix timestamps don't seem to be read correctly from HDFS as Timestamp column

2013-10-29 Thread Mark Grover (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Grover reassigned HIVE-3844:
-

Assignee: Venki Korukanti  (was: Mark Grover)

Venki,
I am not. Assigned it to you.

> Unix timestamps don't seem to be read correctly from HDFS as Timestamp column
> -
>
> Key: HIVE-3844
> URL: https://issues.apache.org/jira/browse/HIVE-3844
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 0.8.0
>Reporter: Mark Grover
>Assignee: Venki Korukanti
>
> Serega Shepak pointed out that something like
> {code}
> select cast(date_occurrence as timestamp) from xvlr_data limit 10
> {code}
> where date_occurrence has BIGINT type (a timestamp in milliseconds) works. But 
> it doesn't work if the column's declared type is TIMESTAMP. The data in the 
> date_occurrence column is a unix timestamp in millis.
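
A hedged side-by-side of the two cases (the CREATE TABLE is hypothetical, reusing the column name from the report):

{code}
-- works: store the epoch value as BIGINT and cast at query time
SELECT CAST(date_occurrence AS timestamp) FROM xvlr_data LIMIT 10;
-- reported broken: declare the column TIMESTAMP and load epoch millis into it
CREATE TABLE events (date_occurrence TIMESTAMP);
{code}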



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3844) Unix timestamps don't seem to be read correctly from HDFS as Timestamp column

2013-10-29 Thread Venki Korukanti (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808293#comment-13808293
 ] 

Venki Korukanti commented on HIVE-3844:
---

[~mgrover] I am wondering whether you are working on this issue. If not, I 
would be happy to work on it.

> Unix timestamps don't seem to be read correctly from HDFS as Timestamp column
> -
>
> Key: HIVE-3844
> URL: https://issues.apache.org/jira/browse/HIVE-3844
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 0.8.0
>Reporter: Mark Grover
>Assignee: Mark Grover
>
> Serega Shepak pointed out that something like
> {code}
> select cast(date_occurrence as timestamp) from xvlr_data limit 10
> {code}
> where date_occurrence has BIGINT type (a timestamp in milliseconds) works. But 
> it doesn't work if the column's declared type is TIMESTAMP. The data in the 
> date_occurrence column is a unix timestamp in millis.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5613) Subquery support: disallow nesting of SubQueries

2013-10-29 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-5613:


Status: Patch Available  (was: Open)

> Subquery support: disallow nesting of SubQueries
> 
>
> Key: HIVE-5613
> URL: https://issues.apache.org/jira/browse/HIVE-5613
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-5613.1.patch, HIVE-5613.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5664) Drop cascade database fails when the db has any tables with indexes

2013-10-29 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti updated HIVE-5664:
--

Status: Patch Available  (was: Open)

> Drop cascade database fails when the db has any tables with indexes
> ---
>
> Key: HIVE-5664
> URL: https://issues.apache.org/jira/browse/HIVE-5664
> Project: Hive
>  Issue Type: Bug
>  Components: Indexing, Metastore
>Affects Versions: 0.12.0, 0.11.0, 0.10.0
>Reporter: Venki Korukanti
>Assignee: Venki Korukanti
> Fix For: 0.13.0
>
> Attachments: HIVE-5664.1.patch.txt
>
>
> {code}
> CREATE DATABASE db2; 
> USE db2; 
> CREATE TABLE tab1 (id int, name string); 
> CREATE INDEX idx1 ON TABLE tab1(id) as 'COMPACT' with DEFERRED REBUILD IN 
> TABLE tab1_indx; 
> DROP DATABASE db2 CASCADE;
> {code}
> Last DDL fails with the following error:
> {code}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. Database does not exist: db2
> Hive.log has following exception
> 2013-10-27 20:46:16,629 ERROR exec.DDLTask (DDLTask.java:execute(434)) - 
> org.apache.hadoop.hive.ql.metadata.HiveException: Database does not exist: db2
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3473)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:231)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1441)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1219)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1047)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> Caused by: NoSuchObjectException(message:db2.tab1_indx table not found)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1376)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
> at com.sun.proxy.$Proxy7.get_table(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:890)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:660)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:652)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:546)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
> at com.sun.proxy.$Proxy8.dropDatabase(Unknown Source)
> at org.apache.hadoop.hive.ql.metadata.Hive.dropDatabase(Hive.java:284)
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3470)
> ... 18 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5664) Drop cascade database fails when the db has any tables with indexes

2013-10-29 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti updated HIVE-5664:
--

Attachment: HIVE-5664.1.patch.txt

> Drop cascade database fails when the db has any tables with indexes
> ---
>
> Key: HIVE-5664
> URL: https://issues.apache.org/jira/browse/HIVE-5664
> Project: Hive
>  Issue Type: Bug
>  Components: Indexing, Metastore
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>Reporter: Venki Korukanti
>Assignee: Venki Korukanti
> Fix For: 0.13.0
>
> Attachments: HIVE-5664.1.patch.txt
>
>
> {code}
> CREATE DATABASE db2; 
> USE db2; 
> CREATE TABLE tab1 (id int, name string); 
> CREATE INDEX idx1 ON TABLE tab1(id) as 'COMPACT' with DEFERRED REBUILD IN 
> TABLE tab1_indx; 
> DROP DATABASE db2 CASCADE;
> {code}
> Last DDL fails with the following error:
> {code}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. Database does not exist: db2
> Hive.log has following exception
> 2013-10-27 20:46:16,629 ERROR exec.DDLTask (DDLTask.java:execute(434)) - 
> org.apache.hadoop.hive.ql.metadata.HiveException: Database does not exist: db2
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3473)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:231)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1441)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1219)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1047)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> Caused by: NoSuchObjectException(message:db2.tab1_indx table not found)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1376)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
> at com.sun.proxy.$Proxy7.get_table(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:890)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:660)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:652)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:546)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
> at com.sun.proxy.$Proxy8.dropDatabase(Unknown Source)
> at org.apache.hadoop.hive.ql.metadata.Hive.dropDatabase(Hive.java:284)
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3470)
> ... 18 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)

