Re: hive failure after HDP 2.3 upgrade

2015-11-19 Thread Brian Jeltema
cripts have run. > > On Nov 19, 2015 11:39 AM, "Brian Jeltema" <bdjelt...@gmail.com> wrote: > Following up, I turned on logging in the MySQL server to capture the failing > query. The query being logged by MySQL is > > SELEC

hive failure after HDP 2.3 upgrade

2015-11-19 Thread Brian Jeltema
Originally posted in the Ambari users group, but probably more appropriate here: I’ve done a rolling upgrade to HDP 2.3 and everything appears to be working now except for Hive. The HiveServer2 process is shown as ‘Started’, but it’s really broken, as is the Hive Metastore. HiveServer2 is not

Re: hive failure after HDP 2.3 upgrade

2015-11-19 Thread Brian Jeltema
in the ESCAPE clause should be doubled. How can I fix this? Brian > On Nov 19, 2015, at 7:28 AM, Brian Jeltema <bdjelt...@gmail.com> wrote: > > Originally posted in the Ambari users group, but probably more appropriate > here: > > I’ve done a rolling upgrade to HDP 2.

EXPORTing multiple partitions

2015-06-25 Thread Brian Jeltema
Using Hive .13, I would like to export multiple partitions of a table, something conceptually like: EXPORT TABLE foo PARTITION (id=1,2,3) to ‘path’ Is there any way to accomplish this? Brian
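As of Hive 0.13, EXPORT appears to accept only a single partition spec per statement, so a common workaround is to issue one EXPORT per partition. A hypothetical helper (the table, column, and path names are illustrative, not from the thread) that generates the per-partition statements:

```java
import java.util.ArrayList;
import java.util.List;

public class ExportStatements {
    // Build one EXPORT statement per partition value, since EXPORT
    // takes only a single partition spec per statement.
    static List<String> build(String table, String col, int[] ids, String basePath) {
        List<String> stmts = new ArrayList<>();
        for (int id : ids) {
            stmts.add(String.format(
                "EXPORT TABLE %s PARTITION (%s=%d) TO '%s/%s=%d';",
                table, col, id, basePath, col, id));
        }
        return stmts;
    }

    public static void main(String[] args) {
        // Emits three EXPORT statements, one per partition id.
        for (String s : build("foo", "id", new int[]{1, 2, 3}, "/tmp/foo_export")) {
            System.out.println(s);
        }
    }
}
```

The generated statements can then be run in a single `hive -f` script.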

Re: EXPORTing multiple partitions

2015-06-25 Thread Brian Jeltema
, Brian Jeltema brian.jelt...@digitalenvoy.net wrote: Using Hive .13, I would like to export multiple partitions of a table, something conceptually like: EXPORT TABLE foo PARTITION (id=1,2,3) to ‘path’ Is there any way to accomplish this? Brian

writing to bucketed table in MR job

2015-01-13 Thread Brian Jeltema
I have a table that I would like to define to be bucketed, but I also need to write to new partitions using HCatOutputFormat (or similar) from an MR job. I’m getting an unsupported operation error when I try to do that. Is there some way to make this work? I suppose I could write to a temporary

UPDATE implementation

2014-12-03 Thread Brian Jeltema
I’m anticipating using UPDATE statements in Hive 0.14. In my use case, I may need to perform 30 or so updates at a time. Will each UPDATE result in an MR job doing a full partition scan? Brian

silent mode isn't silent

2014-08-27 Thread Brian Jeltema
Hive 0.13, I execute a query in silent mode, persisting the output as: hive -S -f query.hql > /tmp/output.txt but I’m getting logging output in the output file, such as: 2014-08-27 14:53:02,741 [main] WARN org.apache.hadoop.conf.Configuration -

Re: UDF with dependent JARs

2014-08-03 Thread Brian Jeltema
;^) Regards, Sankar S On Sat, Aug 2, 2014 at 5:17 PM, Brian Jeltema brian.jelt...@digitalenvoy.net wrote: I've written a small UDF and placed it in a JAR (a.jar). The UDF has a dependency on a class in another JAR (b.jar). in Hive, I do: add jar a.jar; add jar b.jar; create

UDF with dependent JARs

2014-08-02 Thread Brian Jeltema
I've written a small UDF and placed it in a JAR (a.jar). The UDF has a dependency on a class in another JAR (b.jar). in Hive, I do: add jar a.jar; add jar b.jar; create temporary function .; but when I execute the UDF, the dependency in b.jar is not found (NoClassDefFoundError). If

HCat and non-string partition types

2014-07-23 Thread Brian Jeltema
I have some Hive tables that are partitioned by an int field. When I tried to do a Sqoop import using Sqoop's HCatalog support, it failed complaining that HCatalog only supports string partitions. However, I’ve used HCatalog in MapReduce jobs with int partitions successfully. The docs that I’ve

Re: DECIMAL precision is too small

2014-06-29 Thread Brian Jeltema
applicable, we could include it in the documentation.) -- Lefty On Sat, Jun 28, 2014 at 10:08 AM, Brian Jeltema brian.jelt...@digitalenvoy.net wrote: Hive doesn’t support a BigDecimal data type, as far as I know. It supports a Decimal type that is based on BigDecimal, but the precision

Re: DECIMAL precision is too small

2014-06-29 Thread Brian Jeltema
Right, but in my case the numbers are never negative. On Jun 29, 2014, at 9:52 AM, Edward Capriolo edlinuxg...@gmail.com wrote: That does not work if your sorting negative numbers btw. As you would have to - pad and reverse negative numbers. On Sun, Jun 29, 2014 at 6:35 AM, Brian Jeltema
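The caveat about negative numbers can be demonstrated with plain Java: zero-padding a number to a fixed width makes lexicographic order agree with numeric order for non-negative values, but it breaks for negatives unless the digits are also complemented (a sketch, not code from the thread):

```java
import java.util.Arrays;

public class PaddedSort {
    // Zero-pad to a fixed width so lexicographic order matches
    // numeric order -- valid only for non-negative values.
    static String pad(long v) {
        return String.format("%020d", v);
    }

    public static void main(String[] args) {
        String[] s = { pad(100), pad(9), pad(25) };
        Arrays.sort(s);  // plain String (lexicographic) sort
        // Sorts as 9, 25, 100 -- matches numeric order.
        System.out.println(Arrays.toString(s));

        // With negatives the trick fails: pad(-3) sorts before pad(-5)
        // lexicographically, but -3 > -5 numerically.
        System.out.println(pad(-3).compareTo(pad(-5)) < 0);  // true
    }
}
```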

Re: DECIMAL precision is too small

2014-06-28 Thread Brian Jeltema
ghosh sumi...@yahoo.com wrote: Did you try BigDecimal? It is the same datatype as Java BigDecimal. On Thursday, 26 June 2014 8:34 AM, Brian Jeltema brian.jelt...@digitalenvoy.net wrote: Sorry, I meant 128 bit On Jun 26, 2014, at 11:31 AM, Brian Jeltema brian.jelt...@digitalenvoy.net

DECIMAL precision is too small

2014-06-26 Thread Brian Jeltema
I need to represent an unsigned 64-bit value as a Hive DECIMAL. The current precision maximum is 38, which isn’t large enough to represent the high-end of this value. Is there an alternative? Brian

Re: DECIMAL precision is too small

2014-06-26 Thread Brian Jeltema
Sorry, I meant 128 bit On Jun 26, 2014, at 11:31 AM, Brian Jeltema brian.jelt...@digitalenvoy.net wrote: I need to represent an unsigned 64-bit value as a Hive DECIMAL. The current precision maximum is 38, which isn’t large enough to represent the high-end of this value
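A quick check with plain Java shows why DECIMAL's 38-digit precision falls exactly one digit short for unsigned 128-bit values, while unsigned 64-bit values fit comfortably:

```java
import java.math.BigInteger;

public class DecimalPrecision {
    public static void main(String[] args) {
        // Largest unsigned 64-bit value: 20 decimal digits,
        // which fits within Hive's DECIMAL(38).
        BigInteger u64 = BigInteger.ONE.shiftLeft(64).subtract(BigInteger.ONE);
        System.out.println(u64 + " -> " + u64.toString().length() + " digits");

        // Largest unsigned 128-bit value: 39 decimal digits,
        // one more than DECIMAL's maximum precision of 38.
        BigInteger u128 = BigInteger.ONE.shiftLeft(128).subtract(BigInteger.ONE);
        System.out.println(u128 + " -> " + u128.toString().length() + " digits");
    }
}
```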

Re: hive/hbase integration

2014-06-25 Thread Brian Jeltema
on your install environment. Also replace $HBASE_HOME with the full path of your hbase install. -Deepesh On Mon, Jun 23, 2014 at 9:14 AM, Brian Jeltema brian.jelt...@digitalenvoy.net wrote: I’m running Hive 0.12 on Hadoop V2 (Ambari installation) and have been trying to use HBase

hive/hbase integration

2014-06-23 Thread Brian Jeltema
I’m running Hive 0.12 on Hadoop V2 (Ambari installation) and have been trying to use HBase integration. Hive generated Map/Reduce jobs are failing with: Error: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.TableSplit this is discussed in several discussion threads, but

Re: HCatalog access from a Java app

2014-06-16 Thread Brian Jeltema
or you will get input splits and read the records on mappers??? The code will be different (somewhat)... let me know... On Fri, Jun 13, 2014 at 8:25 AM, Brian Jeltema brian.jelt...@digitalenvoy.net wrote: Version 0.12.0. I’d like

Re: HCatalog access from a Java app

2014-06-16 Thread Brian Jeltema
the reader: HCatReader hcatReader = DataTransferFactory.getHCatReader(inputSplit, config); Iterator<HCatRecord> records = hcatReader.read(); b) Iterate over the records for that reader On Mon, Jun 16, 2014 at 9:57 AM, Brian Jeltema brian.jelt...@digitalenvoy.net wrote: regarding

HCatalog access from a Java app

2014-06-13 Thread Brian Jeltema
I’m experimenting with HCatalog, and would like to be able to access tables and their schema from a Java application (not Hive/Pig/MapReduce). However, the API seems to be hidden, which leads me to believe that this is not a supported use case. Is HCatalog use limited to one of the

Re: HCatalog access from a Java app

2014-06-13 Thread Brian Jeltema
and will be removed in Hive 0.14.0. I can provide you with the code sample if you tell me what you are trying to do and what version of Hive you are using. On Fri, Jun 13, 2014 at 7:33 AM, Brian Jeltema brian.jelt...@digitalenvoy.net wrote: I’m experimenting with HCatalog, and would like to be able

Re: HCatalog access from a Java app

2014-06-13 Thread Brian Jeltema
Doing this, with the appropriate substitutions for my table, jarClass, etc: 2. To get the table schema... I assume that you are after HCat schema import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapreduce.InputSplit; import org.apache.hadoop.mapreduce.Job; import