[jira] Updated: (HIVE-1630) bug in NO_DROP

2010-09-10 Thread Siying Dong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siying Dong updated HIVE-1630:
--

Attachment: HIVE-1630.2.patch

> bug in NO_DROP
> --
>
> Key: HIVE-1630
> URL: https://issues.apache.org/jira/browse/HIVE-1630
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Siying Dong
> Fix For: 0.7.0
>
> Attachments: HIVE-1630.2.patch
>
>
> If the table is marked NO_DROP, we should still be able to drop old 
> partitions.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1630) bug in NO_DROP

2010-09-10 Thread Siying Dong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siying Dong updated HIVE-1630:
--

Attachment: (was: HIVE-1630.1.patch)

> bug in NO_DROP
> --
>
> Key: HIVE-1630
> URL: https://issues.apache.org/jira/browse/HIVE-1630
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Siying Dong
> Fix For: 0.7.0
>
>
> If the table is marked NO_DROP, we should still be able to drop old 
> partitions.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1630) bug in NO_DROP

2010-09-10 Thread Siying Dong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siying Dong updated HIVE-1630:
--

Attachment: HIVE-1630.1.patch

A table marked NO_DROP no longer blocks dropping its partitions.

> bug in NO_DROP
> --
>
> Key: HIVE-1630
> URL: https://issues.apache.org/jira/browse/HIVE-1630
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Siying Dong
> Fix For: 0.7.0
>
>
> If the table is marked NO_DROP, we should still be able to drop old 
> partitions.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



RE: Help with setting up Eclipse for Hive

2010-09-10 Thread Steven Wong
Adding 
-Dtest.warehouse.dir="${workspace_loc:trunk}/build/ql/test/data/warehouse" to 
the VM arguments in the run configuration solves the problem.


From: Steven Wong
Sent: Thursday, August 19, 2010 10:44 AM
To: hive-dev@hadoop.apache.org
Subject: Help with setting up Eclipse for Hive

I followed http://wiki.apache.org/hadoop/Hive/GettingStarted/EclipseSetup and, 
when I tried to run the unit tests, Eclipse reported the error "Variable 
references non-existent resource : ${workspace_loc:trunk}".

I think the reason may be that the project was in /somepath/hive/trunk but the 
workspace was NOT in /somepath/hive; when I started over with the workspace in 
/somepath/hive, the error went away. If this is indeed the reason, the wiki 
should be clarified.

Then I ran into:

java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path 
in absolute URI: file:$%7Btest.warehouse.dir%7D

Please suggest how I can fix it. Code is r984947.

FWIW, "ant test" on the command line ran successfully except for 2 failed test 
cases.

Thanks.
Steven


PS: Here's the complete console log:

10/08/18 16:58:52 INFO metastore.HiveMetaStore: 0: Opening raw store with 
implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
10/08/18 16:58:52 INFO metastore.ObjectStore: ObjectStore, initialize called
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.ui.ide" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.ui.views" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.jface.text" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.ui.workbench.texteditor" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.ui.editors" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.ui" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.core.expressions" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.core.resources" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.debug.core" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.debug.ui" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.jdt.core" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.jdt.ui" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.core.runtime" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.jdt.launching" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.jdt.debug.ui" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.jdt.junit.runtime" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.compare" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.ltk.core.refactoring" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.core.variables" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.ltk.ui.refactoring" but it cannot be resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.equinox.simpleconfigurator.manipulator" but it cannot be 
resolved.
10/08/18 16:58:52 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.junit" 
requires "org.eclipse.equinox.frameworkadmin" but it cannot be resolved.
10/08/18 16:58:52 INFO DataNucleus.Persistence: Property 
datanucleus.cache.level2 unknown - will be ignored
10/08/18 16:58:52 INFO DataNucleus.Persistence: Property 
javax.jdo.option.NonTransactionalRead unknown - will be ignored
10/08/18 16:58:52 INFO DataNucleus.Persistence: = Persistence 
Configuration ===
10/08/18 16:58:52 INFO DataNucleus.Persistence: DataNucleus Persistence Factory 
- Vendor: "DataNucleus"  Version: "2.0.3"
10/08/18 16:58:52 INFO DataNucleus.Persistence: DataNucleus Persistence Factory 
initialised for datastore 
URL="jdbc:derby:;da

[jira] Created: (HIVE-1631) JDBC driver returns wrong precision, scale, or column size for some data types

2010-09-10 Thread Steven Wong (JIRA)
JDBC driver returns wrong precision, scale, or column size for some data types
--

 Key: HIVE-1631
 URL: https://issues.apache.org/jira/browse/HIVE-1631
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Drivers
Affects Versions: 0.7.0
Reporter: Steven Wong
Priority: Minor


For some data types, these methods return values that do not conform to the 
JDBC spec:

org.apache.hadoop.hive.jdbc.HiveResultSetMetaData.getPrecision(int)
org.apache.hadoop.hive.jdbc.HiveResultSetMetaData.getScale(int)
org.apache.hadoop.hive.jdbc.HiveResultSetMetaData.getColumnDisplaySize(int)
org.apache.hadoop.hive.jdbc.JdbcColumn.getColumnSize()
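
For reference, a minimal JDBC client sketch that prints the values these methods 
return; the driver class name and connection URL are assumptions for a local 
HiveServer, and "some_table" is a placeholder table name:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class InspectHiveColumnMetadata {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection conn =
            DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM some_table LIMIT 1");
        ResultSetMetaData md = rs.getMetaData();
        for (int i = 1; i <= md.getColumnCount(); i++) {
            // The JDBC spec constrains these values per SQL type.
            System.out.printf("%s: precision=%d scale=%d displaySize=%d%n",
                md.getColumnName(i), md.getPrecision(i),
                md.getScale(i), md.getColumnDisplaySize(i));
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}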


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1378) Return value for map, array, and struct needs to return a string

2010-09-10 Thread Steven Wong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Wong updated HIVE-1378:
--

Summary: Return value for map, array, and struct needs to return a 
string   (was: Return value for map, array, or UDF (that returns map/array) 
needs to return a string )
Description: In order to be able to select/display any data from JDBC Hive 
driver, return value for map, array, and struct needs to return a string  (was: 
In order to be able to select/display any data from JDBC Hive driver, return 
value for map, array, or UDF (that returns map/array) needs to return a string)

> Return value for map, array, and struct needs to return a string 
> -
>
> Key: HIVE-1378
> URL: https://issues.apache.org/jira/browse/HIVE-1378
> Project: Hadoop Hive
>  Issue Type: Improvement
>  Components: Drivers
>Reporter: Jerome Boulon
>Assignee: Steven Wong
>
> In order to be able to select/display any data from JDBC Hive driver, return 
> value for map, array, and struct needs to return a string

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1630) bug in NO_DROP

2010-09-10 Thread Siying Dong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12908241#action_12908241
 ] 

Siying Dong commented on HIVE-1630:
---

I think it was a mistake when I designed the semantics of NO_DROP: NO_DROP at 
the table level should not block us from dropping partitions. I'll fix that.

> bug in NO_DROP
> --
>
> Key: HIVE-1630
> URL: https://issues.apache.org/jira/browse/HIVE-1630
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Siying Dong
> Fix For: 0.7.0
>
>
> If the table is marked NO_DROP, we should still be able to drop old 
> partitions.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HIVE-1630) bug in NO_DROP

2010-09-10 Thread Namit Jain (JIRA)
bug in NO_DROP
--

 Key: HIVE-1630
 URL: https://issues.apache.org/jira/browse/HIVE-1630
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Siying Dong
 Fix For: 0.7.0


If the table is marked NO_DROP, we should still be able to drop old partitions.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1629) Patch to fix hashCode method in DoubleWritable class

2010-09-10 Thread Vaibhav Aggarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12908210#action_12908210
 ] 

Vaibhav Aggarwal commented on HIVE-1629:


Hi

doubleToLongBits converts the double value into the IEEE 754 floating-point 
"double format" bit layout. Furthermore, the XOR operation prevents returning 0 
for values less than 2^32.

This is the hashCode function used by the standard Java implementation.

I was noticing an unexpected delay in one of the operations involving double 
data types. After some debugging, I realized that the HashMap puts and gets were 
extremely slow. That pointed me to the hashCode implementation in 
DoubleWritable, which turned out to be the cause of the slow HashMap operations.

That is why I propose using the standard Java implementation of hashCode for the 
double type.
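
A minimal sketch of what this amounts to (it matches the two-line snippet quoted 
elsewhere in this thread); the field name "value" is assumed for illustration:

public final class DoubleHashSketch {
    private DoubleHashSketch() {}

    // Same mixing as java.lang.Double.hashCode(): fold the high 32 bits of the
    // IEEE 754 bit pattern into the low 32 bits.
    public static int hash(double value) {
        long v = Double.doubleToLongBits(value);
        return (int) (v ^ (v >>> 32));
    }
}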

Thanks
Vaibhav

> Patch to fix hashCode method in DoubleWritable class
> 
>
> Key: HIVE-1629
> URL: https://issues.apache.org/jira/browse/HIVE-1629
> Project: Hadoop Hive
>  Issue Type: Bug
>Reporter: Vaibhav Aggarwal
> Attachments: HIVE-1629.patch
>
>
> A patch to fix the hashCode() method of the DoubleWritable class in Hive.
> It prevents a HashMap keyed on DoubleWritable from behaving like a LinkedList.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1226) support filter pushdown against non-native tables

2010-09-10 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-1226:
-

Attachment: HIVE-1226.2.patch

Almost there... but I just realized that I haven't included the predicate 
analysis in getSplits, and we need it there (e.g. in the HBase case, to avoid 
generating splits for regions which can't possibly contain the key). So one more 
patch is coming after this, and then it's ready for review.


> support filter pushdown against non-native tables
> -
>
> Key: HIVE-1226
> URL: https://issues.apache.org/jira/browse/HIVE-1226
> Project: Hadoop Hive
>  Issue Type: Improvement
>  Components: HBase Handler, Query Processor
>Affects Versions: 0.6.0
>Reporter: John Sichi
>Assignee: John Sichi
> Fix For: 0.7.0
>
> Attachments: HIVE-1226.1.patch, HIVE-1226.2.patch
>
>
> For example, HBase's scan object can take filters.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1629) Patch to fix hashCode method in DoubleWritable class

2010-09-10 Thread Ning Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12908194#action_12908194
 ] 

Ning Zhang commented on HIVE-1629:
--

+long v = Double.doubleToLongBits(value);
+return (int) (v ^ (v >>> 32));

won't this return 0 for all long values less than 2^32?

Searching the web, it seems the following 64-bit to 32-bit hash is a good one:

http://www.cris.com/~ttwang/tech/inthash.htm
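
For illustration, a Java transcription of a 64-bit-to-32-bit mixing hash in the 
style of that page (Thomas Wang's integer hashing article); the exact shift and 
multiply constants here are from memory and should be checked against the link 
before use:

public final class LongTo32Hash {
    private LongTo32Hash() {}

    public static int hash6432shift(long key) {
        key = (~key) + (key << 18); // key = (key << 18) - key - 1
        key = key ^ (key >>> 31);
        key = key * 21;             // key = (key + (key << 2)) + (key << 4)
        key = key ^ (key >>> 11);
        key = key + (key << 6);
        key = key ^ (key >>> 22);
        return (int) key;
    }
}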

> Patch to fix hashCode method in DoubleWritable class
> 
>
> Key: HIVE-1629
> URL: https://issues.apache.org/jira/browse/HIVE-1629
> Project: Hadoop Hive
>  Issue Type: Bug
>Reporter: Vaibhav Aggarwal
> Attachments: HIVE-1629.patch
>
>
> A patch to fix the hashCode() method of the DoubleWritable class in Hive.
> It prevents a HashMap keyed on DoubleWritable from behaving like a LinkedList.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1629) Patch to fix hashCode method in DoubleWritable class

2010-09-10 Thread Vaibhav Aggarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Aggarwal updated HIVE-1629:
---

Attachment: HIVE-1629.patch

> Patch to fix hashCode method in DoubleWritable class
> 
>
> Key: HIVE-1629
> URL: https://issues.apache.org/jira/browse/HIVE-1629
> Project: Hadoop Hive
>  Issue Type: Bug
>Reporter: Vaibhav Aggarwal
> Attachments: HIVE-1629.patch
>
>
> A patch to fix the hashCode() method of the DoubleWritable class in Hive.
> It prevents a HashMap keyed on DoubleWritable from behaving like a LinkedList.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HIVE-1629) Patch to fix hashCode method in DoubleWritable class

2010-09-10 Thread Vaibhav Aggarwal (JIRA)
Patch to fix hashCode method in DoubleWritable class


 Key: HIVE-1629
 URL: https://issues.apache.org/jira/browse/HIVE-1629
 Project: Hadoop Hive
  Issue Type: Bug
Reporter: Vaibhav Aggarwal


A patch to fix the hashCode() method of the DoubleWritable class in Hive.
It prevents a HashMap keyed on DoubleWritable from behaving like a LinkedList.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1628) Fix Base64TextInputFormat to be compatible with commons codec 1.4

2010-09-10 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HIVE-1628:
--

Status: Patch Available  (was: Open)

> Fix Base64TextInputFormat to be compatible with commons codec 1.4
> -
>
> Key: HIVE-1628
> URL: https://issues.apache.org/jira/browse/HIVE-1628
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Contrib
>Affects Versions: 0.6.0, 0.7.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hive-1628-0.5.txt, hive-1628-0.5.txt, hive-1628.txt, 
> hive-1628.txt
>
>
> Commons-codec 1.4 made an incompatible change to the Base64 class that made 
> line-wrapping default (boo!). This breaks the Base64TextInputFormat in 
> contrib. This patch adds some simple reflection to use the new constructor 
> that uses the old behavior.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1628) Fix Base64TextInputFormat to be compatible with commons codec 1.4

2010-09-10 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HIVE-1628:
--

Attachment: hive-1628.txt
hive-1628-0.5.txt

Here are the correct patches.

> Fix Base64TextInputFormat to be compatible with commons codec 1.4
> -
>
> Key: HIVE-1628
> URL: https://issues.apache.org/jira/browse/HIVE-1628
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Contrib
>Affects Versions: 0.6.0, 0.7.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hive-1628-0.5.txt, hive-1628-0.5.txt, hive-1628.txt, 
> hive-1628.txt
>
>
> Commons-codec 1.4 made an incompatible change to the Base64 class that made 
> line-wrapping default (boo!). This breaks the Base64TextInputFormat in 
> contrib. This patch adds some simple reflection to use the new constructor 
> that uses the old behavior.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1628) Fix Base64TextInputFormat to be compatible with commons codec 1.4

2010-09-10 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HIVE-1628:
--

Status: Open  (was: Patch Available)

Oops, I just noticed I posted the wrong patch! Sorry, one sec...

> Fix Base64TextInputFormat to be compatible with commons codec 1.4
> -
>
> Key: HIVE-1628
> URL: https://issues.apache.org/jira/browse/HIVE-1628
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Contrib
>Affects Versions: 0.6.0, 0.7.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hive-1628-0.5.txt, hive-1628.txt
>
>
> Commons-codec 1.4 made an incompatible change to the Base64 class that made 
> line-wrapping default (boo!). This breaks the Base64TextInputFormat in 
> contrib. This patch adds some simple reflection to use the new constructor 
> that uses the old behavior.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1628) Fix Base64TextInputFormat to be compatible with commons codec 1.4

2010-09-10 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HIVE-1628:
--

Attachment: hive-1628.txt
hive-1628-0.5.txt

> Fix Base64TextInputFormat to be compatible with commons codec 1.4
> -
>
> Key: HIVE-1628
> URL: https://issues.apache.org/jira/browse/HIVE-1628
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Contrib
>Affects Versions: 0.6.0, 0.7.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hive-1628-0.5.txt, hive-1628.txt
>
>
> Commons-codec 1.4 made an incompatible change to the Base64 class that made 
> line-wrapping default (boo!). This breaks the Base64TextInputFormat in 
> contrib. This patch adds some simple reflection to use the new constructor 
> that uses the old behavior.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1628) Fix Base64TextInputFormat to be compatible with commons codec 1.4

2010-09-10 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HIVE-1628:
--

Status: Patch Available  (was: Open)

> Fix Base64TextInputFormat to be compatible with commons codec 1.4
> -
>
> Key: HIVE-1628
> URL: https://issues.apache.org/jira/browse/HIVE-1628
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Contrib
>Affects Versions: 0.6.0, 0.7.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hive-1628-0.5.txt, hive-1628.txt
>
>
> Commons-codec 1.4 made an incompatible change to the Base64 class that made 
> line-wrapping default (boo!). This breaks the Base64TextInputFormat in 
> contrib. This patch adds some simple reflection to use the new constructor 
> that uses the old behavior.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HIVE-1628) Fix Base64TextInputFormat to be compatible with commons codec 1.4

2010-09-10 Thread Todd Lipcon (JIRA)
Fix Base64TextInputFormat to be compatible with commons codec 1.4
-

 Key: HIVE-1628
 URL: https://issues.apache.org/jira/browse/HIVE-1628
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Contrib
Affects Versions: 0.6.0, 0.7.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon


Commons-codec 1.4 made an incompatible change to the Base64 class that made 
line-wrapping default (boo!). This breaks the Base64TextInputFormat in contrib. 
This patch adds some simple reflection to use the new constructor that uses the 
old behavior.
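
A sketch of the reflection idea described here, assuming commons-codec 1.4's 
Base64(int lineLength) constructor (lineLength = 0 disables wrapping) with a 
fallback to the no-arg constructor on older releases; this illustrates the 
approach, not necessarily the exact code in the patch:

import org.apache.commons.codec.binary.Base64;

public final class Base64Compat {
    private Base64Compat() {}

    public static Base64 newNonWrappingBase64() {
        try {
            // commons-codec 1.4+: explicitly disable chunking/line wrapping.
            return Base64.class.getConstructor(int.class).newInstance(0);
        } catch (NoSuchMethodException e) {
            // commons-codec 1.3 and earlier: the default already does not wrap.
            return new Base64();
        } catch (Exception e) {
            throw new RuntimeException("Failed to construct Base64 codec", e);
        }
    }
}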

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HIVE-1627) Hive Join returns incorrect results if the join is (bigint = string)

2010-09-10 Thread Abhinav Gupta (JIRA)
Hive Join returns incorrect results if the join is (bigint = string)


 Key: HIVE-1627
 URL: https://issues.apache.org/jira/browse/HIVE-1627
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.5.0
Reporter: Abhinav Gupta


I was running a query joining a bigint column with a string column.

The result was incorrect because only the first 16 bytes seemed to be compared; 
the values were longer than 16 bytes when represented in base 10.

The problem was fixed once I changed the join to (bigint = cast(string as 
bigint)).

Is the bug caused by type conversion on the join keys?
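
If, as suspected above, both join keys are implicitly converted to double before 
comparison, distinct 19-digit bigint values can compare equal, since a double 
carries only about 15-17 significant decimal digits. A small Java illustration 
of that precision loss (the Hive conversion rule itself is the reporter's 
suspicion, not confirmed here):

public class DoublePrecisionLoss {
    public static void main(String[] args) {
        long a = 1234567890123456789L;
        long b = 1234567890123456788L;                          // differs in the last digit
        double da = (double) a;                                 // bigint side of the join
        double db = Double.parseDouble("1234567890123456788");  // string side of the join
        System.out.println(a == b);   // false: the bigint values differ
        System.out.println(da == db); // true: both round to the same double
    }
}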


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1611) Add alternative search-provider to Hive site

2010-09-10 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12907975#action_12907975
 ] 

Otis Gospodnetic commented on HIVE-1611:


Look at the search box at http://avro.apache.org/ (top-right corner) to see 
what this patch does.

Should we assign this to a committer now, since Alex is done with the patch?

Doug Cutting reviewed and committed the big change via AVRO-626 that made it 
possible for this patch to be literally a 1-line change:

Index: author/src/documentation/skinconf.xml
===
--- author/src/documentation/skinconf.xml   (revision 770021)
+++ author/src/documentation/skinconf.xml   (revision )
@@ -30,7 +30,7 @@
 In other words google will search the @domain for the query string.
 
   -->
-  
+  
 
   
   true  


> Add alternative search-provider to Hive site
> 
>
> Key: HIVE-1611
> URL: https://issues.apache.org/jira/browse/HIVE-1611
> Project: Hadoop Hive
>  Issue Type: Improvement
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Attachments: HIVE-1611.patch
>
>
> Use search-hadoop.com service to make available search in Hive sources, MLs, 
> wiki, etc.
> This was initially proposed on user mailing list. The search service was 
> already added in site's skin (common for all Hadoop related projects) before 
> so this issue is about enabling it for Hive. The ultimate goal is to use it 
> at all Hadoop's sub-projects' sites.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.