Hi, according to the description of the reflect UDF, you are trying to call
java.util.UUID.hashCode(uidString), which doesn't exist as a method in either
Java 6 or 7 (UUID.hashCode() takes no arguments).
http://docs.oracle.com/javase/7/docs/api/java/util/UUID.html#hashCode()
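reflect() resolves the Java method from the argument types you pass, so a
zero-argument method can be matched while hashCode(String) cannot. A rough
illustration (some_table is just a placeholder; hash() is Hive's built-in and
does not produce the same value as UUID.hashCode()):

  -- works: randomUUID() is static and takes no arguments
  SELECT reflect("java.util.UUID", "randomUUID") FROM some_table LIMIT 1;

  -- fails: java.util.UUID has no hashCode(String) method to match
  -- SELECT reflect("java.util.UUID", "hashCode", uid_str) FROM some_table;

  -- if a deterministic int per uid string is enough, the built-in hash() is an option
  SELECT hash(uid_str) AS my_uid FROM some_table;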
Thanks
Szehon
On Wed, Apr 2, 2014 at 2:13 PM,
I was able to resolve the issue by setting "hive.optimize.index.filter" to
true.
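For reference, it can be set per session before the query runs (table and
column names below are only placeholders):

  SET hive.optimize.index.filter=true;
  SELECT col_a, col_b FROM my_orc_table WHERE col_c = 42;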
In the Hadoop logs:
syslog:2014-04-03 05:44:51,204 INFO
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat: included column ids =
3,8,13
syslog:2014-04-03 05:44:51,204 INFO
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat:
Hi Abhay,
What is the DDL for your "test" table?
2014-04-02 22:36 GMT-07:00 Abhay Bansal :
> I am new to Hive, apologies for asking such a basic question.
>
> The following exercise was done with Hive 0.12 and Hadoop 0.20.203
>
> I created an ORC file from Java, and pushed it into a table with the s
I am new to Hive, apologies for asking such a basic question.
The following exercise was done with Hive 0.12 and Hadoop 0.20.203
I created an ORC file from Java, and pushed it into a table with the same
schema. I checked the conf property hive.optimize.ppd=true,
which should ideally use the ppd optimisat
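Roughly, the setup looks like this (table layout and location are illustrative,
not the actual DDL; the ORC file itself was written from Java beforehand):

  SET hive.optimize.ppd=true;

  CREATE EXTERNAL TABLE test (
    id INT,
    name STRING,
    ts BIGINT
  )
  STORED AS ORC
  LOCATION '/user/abhay/orc_data';

  SELECT id, name FROM test WHERE id = 100;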
Hi guys,
I am trying to use the reflect UDF for a UUID method and am getting an
exception. I believe this function should be available in Java 1.6.0_31, which
the system is running.
select reflect("java.util.UUID", "hashCode", uid_str) my_uid,
...
My suspicion is that this is because the Hive column I
6113:
Caused by:
com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException:
Duplicate entry 'default' for key 'UNIQUE_DATABASE'
It looks to me like the problem is on your side.
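In the MySQL metastore schema, UNIQUE_DATABASE is the unique key on DBS(NAME),
so the error means a second row for the 'default' database was being inserted.
One way to check (run against the metastore database; 'metastore' is just a
placeholder name):

  USE metastore;
  SELECT DB_ID, NAME, DB_LOCATION_URI FROM DBS WHERE NAME = 'default';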
JV
On Tue, Apr 1, 2014 at 2:55 PM, Lior Schachter wrote:
> Hi all,
>
> We are randomly getting 2 types o
Makes perfect sense, thanks Petter!
On Wed, Apr 2, 2014 at 2:15 AM, Petter von Dolwitz (Hem) <
petter.von.dolw...@gmail.com> wrote:
> Hi David,
>
> you can implement a custom InputFormat (extends
> org.apache.hadoop.mapred.FileInputFormat) accompanied by a custom
> RecordReader (implements org.a
Hi David,
you can implement a custom InputFormat (extends
org.apache.hadoop.mapred.FileInputFormat) accompanied by a custom
RecordReader (implements org.apache.hadoop.mapred.RecordReader). The
RecordReader will be used to read your documents and from there you can
decide which units you will retur
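Once the classes are on Hive's classpath they can be wired to a table in the
DDL. A sketch, with made-up jar, class and table names:

  ADD JAR /path/to/custom-inputformat.jar;

  CREATE EXTERNAL TABLE documents (doc STRING)
  STORED AS INPUTFORMAT 'com.example.DocumentInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
  LOCATION '/data/documents';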