scripts have run.
>
> On Nov 19, 2015 11:39 AM, "Brian Jeltema" <bdjelt...@gmail.com
> <mailto:bdjelt...@gmail.com>> wrote:
> Following up, I turned on logging in the MySQL server to capture the failing
> query. The query being logged by MySQL is
>
> SELEC
Originally posted in the Ambari users group, but probably more appropriate here:
I’ve done a rolling upgrade to HDP 2.3 and everything appears to be working now
except for Hive. The HiveServer2 process is shown as ‘Started’, but it’s really
broken, as is the Hive Metastore. HiveServer2 is not working; it appears that
the escape character in the ESCAPE clause should be doubled. How can I fix this?
Brian
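If the failing metastore query really is a backslash/ESCAPE issue, one thing worth checking (an assumption on my part, not something stated in the thread) is whether the MySQL server's sql_mode changes backslash handling:

```sql
-- Sketch: NO_BACKSLASH_ESCAPES changes how MySQL treats '\' in string
-- literals, which can break clients that double (or don't double) backslashes.
SELECT @@GLOBAL.sql_mode;
-- If NO_BACKSLASH_ESCAPES appears in the result, it is a likely suspect.
```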
> On Nov 19, 2015, at 7:28 AM, Brian Jeltema <bdjelt...@gmail.com> wrote:
>
Using Hive 0.13, I would like to export multiple partitions of a table,
something conceptually like:
EXPORT TABLE foo PARTITION (id=1,2,3) TO ‘path’
Is there any way to accomplish this?
Brian
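As far as I know, EXPORT takes a single partition spec, so one workaround is simply to issue one EXPORT per partition (a sketch; the paths and partition values are placeholders):

```sql
-- Sketch: one EXPORT statement per partition, each to its own target path.
EXPORT TABLE foo PARTITION (id=1) TO '/tmp/foo_export/1';
EXPORT TABLE foo PARTITION (id=2) TO '/tmp/foo_export/2';
EXPORT TABLE foo PARTITION (id=3) TO '/tmp/foo_export/3';
```

Each export directory can then be re-imported independently with IMPORT.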
I have a table that I would like to define as bucketed, but I also need to
write to new partitions using HCatOutputFormat (or similar) from an MR job.
I’m getting an unsupported-operation error when I try to do that. Is there
some way to make this work?
I suppose I could write to a temporary
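A temporary staging table along those lines might look like this sketch (all table and column names are placeholders, and exact behavior depends on the Hive version):

```sql
-- Sketch: let the MR job write to a plain (non-bucketed) staging table via
-- HCatOutputFormat, then have Hive rewrite it into the bucketed table so the
-- bucketing is done by Hive itself.
SET hive.enforce.bucketing = true;
INSERT OVERWRITE TABLE bucketed_foo PARTITION (ds='2014-01-01')
SELECT key, value FROM staging_foo;
```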
I’m anticipating using UPDATE statements in Hive 0.14.
In my use case, I may need to perform 30 or so updates at a time. Will each
UPDATE
result in an MR job doing a full partition scan?
Brian
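I don't know what the planner does in 0.14 exactly, but one common way to cut the number of jobs (a sketch; table, column, and key names are assumptions) is to fold many single-row updates into one statement:

```sql
-- Sketch: one UPDATE with a CASE instead of 30 separate statements.
-- Requires an ACID (transactional, bucketed ORC) table in Hive 0.14.
UPDATE foo
SET val = CASE id
            WHEN 1 THEN 'a'
            WHEN 2 THEN 'b'
            ELSE val
          END
WHERE id IN (1, 2);
```

One statement means one job, at the cost of a larger predicate.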
Using Hive 0.13, I execute a query in silent mode, persisting the output as:
hive -S -f query.hql > /tmp/output.txt
but I’m getting logging output in the output file, such as:
2014-08-27 14:53:02,741 [main] WARN org.apache.hadoop.conf.Configuration -
;^)
Regards,
Sankar S
On Sat, Aug 2, 2014 at 5:17 PM, Brian Jeltema
brian.jelt...@digitalenvoy.net wrote:
I've written a small UDF and placed it in a JAR (a.jar).
The UDF has a dependency on a class in another JAR (b.jar).
in Hive, I do:
add jar a.jar;
add jar b.jar;
create temporary function .;
but when I execute the UDF, the dependency in b.jar is not found
(NoClassDefFoundError).
If
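For reference, a full session registering both JARs typically looks like this sketch (the paths, function name, and class name are placeholders, not from the original mail):

```sql
-- Sketch: both the UDF jar and its dependency jar must be registered
-- in the session before the function is invoked.
ADD JAR /path/to/a.jar;
ADD JAR /path/to/b.jar;
CREATE TEMPORARY FUNCTION my_udf AS 'com.example.MyUDF';
SELECT my_udf(col) FROM t LIMIT 1;
```

If the NoClassDefFoundError persists, bundling the dependency into a single shaded (uber) JAR is another common workaround.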
I have some Hive tables that are partitioned by an int field. When I tried to
do a Sqoop import using Sqoops HCatalog
support, it failed complaining that HCatalog only supports string partitions.
However, I’ve used HCatalog in
MapReduce jobs with int partitions successfully. The docs that I’ve

(If applicable, we could include it in the documentation.)
-- Lefty
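For what it's worth, Sqoop's HCatalog integration has historically required string partition keys, so one workaround (a sketch; table and column names are placeholders) is to declare the partition column as STRING on the table used for the import and cast back when querying:

```sql
-- Sketch: give the Sqoop-targeted HCatalog table a STRING partition key,
-- casting back to INT at query time if needed.
CREATE TABLE imported_foo (col1 INT, col2 STRING)
PARTITIONED BY (part_id STRING);

SELECT col1, CAST(part_id AS INT) AS part_id FROM imported_foo;
```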
On Sat, Jun 28, 2014 at 10:08 AM, Brian Jeltema
brian.jelt...@digitalenvoy.net wrote:
Hive doesn’t support a BigDecimal data type, as far as I know. It supports a
Decimal type that
is based on BigDecimal, but the precision
Right, but in my case the numbers are never negative.
On Jun 29, 2014, at 9:52 AM, Edward Capriolo edlinuxg...@gmail.com wrote:
That does not work if you're sorting negative numbers, btw, as you would have
to pad and reverse negative numbers.
On Sun, Jun 29, 2014 at 6:35 AM, Brian Jeltema
ghosh sumi...@yahoo.com wrote:
Did you try BigDecimal? It is the same datatype as Java BigDecimal.
On Thursday, 26 June 2014 8:34 AM, Brian Jeltema
brian.jelt...@digitalenvoy.net wrote:
Sorry, I meant 128 bit
On Jun 26, 2014, at 11:31 AM, Brian Jeltema brian.jelt...@digitalenvoy.net
I need to represent an unsigned 64-bit value as a Hive DECIMAL. The current
precision maximum is 38,
which isn’t large enough to represent the high-end of this value. Is there an
alternative?
Brian
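Tying the padding idea above to the 128-bit question: since DECIMAL precision tops out at 38 digits and an unsigned 128-bit value needs 39, one workaround (a sketch; table and column names are placeholders) is to store the value as a zero-padded string, which sorts correctly here because these values are never negative:

```sql
-- Sketch: zero-pad to 39 digits (the width of 2^128 - 1) so that
-- lexicographic string order matches numeric order for non-negative values.
CREATE TABLE big_vals (v STRING);
INSERT INTO TABLE big_vals SELECT lpad(raw_value, 39, '0') FROM src;
SELECT * FROM big_vals ORDER BY v;
```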
on your install environment. Also replace $HBASE_HOME
with the full path of your hbase install.
-Deepesh
On Mon, Jun 23, 2014 at 9:14 AM, Brian Jeltema
brian.jelt...@digitalenvoy.net wrote:
I’m running Hive 0.12 on Hadoop V2 (Ambari installation) and have been trying
to use HBase integration. Hive generated Map/Reduce jobs
are failing with:
Error: java.lang.ClassNotFoundException:
org.apache.hadoop.hbase.mapreduce.TableSplit
this is discussed in several discussion threads, but
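The usual fix in those threads, and what the "$HBASE_HOME" advice above points at, is to put the HBase JARs on Hive's auxiliary classpath. A sketch of a hive-site.xml entry, with jar paths that are assumptions about the install:

```xml
<!-- Sketch: make HBase classes (e.g. TableSplit) visible to Hive MR tasks.
     The jar paths are assumptions; substitute your actual $HBASE_HOME/lib. -->
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/lib/hbase/lib/hbase-client.jar,file:///usr/lib/hbase/lib/hbase-server.jar,file:///usr/lib/hbase/lib/hbase-protocol.jar</value>
</property>
```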
or you will get input splits and read the records on mappers???
The code will be different (somewhat)... let me know...
On Fri, Jun 13, 2014 at 8:25 AM, Brian Jeltema
brian.jelt...@digitalenvoy.net wrote:
Version 0.12.0.
I’d like
the reader:
HCatReader hcatReader = DataTransferFactory.getHCatReader(inputSplit, config);
Iterator<HCatRecord> records = hcatReader.read();
b) Iterate over the records for that reader
On Mon, Jun 16, 2014 at 9:57 AM, Brian Jeltema
brian.jelt...@digitalenvoy.net wrote:
regarding
I’m experimenting with HCatalog, and would like to be able to access tables and
their schema
from a Java application (not Hive/Pig/MapReduce). However, the API seems to be
hidden, which
leads me to believe that this is not a supported use case. Is HCatalog
use limited to
one of the
and
will be removed in Hive 0.14.0. I can provide you with the code sample if you
tell me what you are trying to do and what version of Hive you are using.
On Fri, Jun 13, 2014 at 7:33 AM, Brian Jeltema
brian.jelt...@digitalenvoy.net wrote:
I’m experimenting with HCatalog, and would like to be able
Doing this, with the appropriate substitutions for my table, jarClass, etc:
2. To get the table schema... I assume that you are after HCat schema
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import