:50111/templeton/v1/ddl/database/default?user.name=%3cmyname%3e
So your URL suggests that you have a database called testtable?
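A minimal sketch of exercising that resource with curl (the hostname, port and user name here are placeholders — substitute your own):

```
curl -s 'http://localhost:50111/templeton/v1/ddl/database/default?user.name=myname'
```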
I really hope that this helps.
Regards,
Peter Marron
Senior Developer
Trillium Software, A Harte Hanks Company
Theale Court, 1st Floor, 11-13 High Street
Theale
RG7 5AH
From https://cwiki.apache.org/confluence/display/Hive/Home
Hive is not designed for OLTP workloads and does not offer real-time queries
or row-level updates.
As far as I am aware UPDATE isn't even in the Hive DML.
Z
Peter Marron
Senior Developer
Trillium Software, A Harte Hanks Company
Theale
Hi,
Not sure if it is relevant to your problem but I'm just checking
that you know about
hive.optimize.index.filter.compact.minsize
It's set to 5 GB by default, and if the estimated query size is
less than this then the index won't be used.
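A sketch of lowering that threshold for a session (the value is in bytes; 1048576 here is just an illustrative choice):

```sql
-- Default is 5368709120 bytes (5 GB); below this estimated input
-- size the compact index is skipped.
SET hive.optimize.index.filter.compact.minsize=1048576;
```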
HTH.
Regards
Peter Marron
Senior Developer, Research
work will I have to re-build my metastore?
Any recommendations?
Peter Marron
Office: +44 (0) 118-940-7609
peter.mar...@trilliumsoftware.com
Theale Court First Floor, 11-13 High Street, Theale, RG7 5AH, UK
line option to the tool.
-- Lefty
On Tue, Jan 28, 2014 at 2:39 AM, Peter Marron
peter.mar...@trilliumsoftware.com
wrote:
Hi,
So I can see from http://hive.apache.org/downloads.html
that I can download versions 11 and 12 and they will work with
Hadoop
cluster.
Is there anyone who can throw any light on my problems? Or suggest
any way forward?
All feedback welcome.
Z
Peter Marron
Office: +44 (0) 118-940-7609
peter.mar...@trilliumsoftware.com
Theale Court First Floor, 11-13 High Street, Theale, RG7 5AH
about it.)
Regards,
Peter Marron
Senior Developer, Research Development
Office: +44 (0) 118-940-7609
peter.mar...@trilliumsoftware.com
Theale Court First Floor, 11-13 High Street, Theale, RG7 5AH, UK
Hi,
I am using Hive 0.11.0 over Hadoop 1.0.4.
Recently I have started investigating the use of Templeton and I have managed
to get most of the services working. Specifically I can access resources like these:
http://hpcluster1:50111/templeton/v1/version
'com.trilliumsoftware.profiling.LookupInputFormat' OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
}
}
Hope that helps someone, it certainly would have helped me.
Z
From: Peter Marron [mailto:peter.mar...@trilliumsoftware.com]
Sent: 29 July 2013 08:47
To: user@hive.apache.org
Subject: Templeton create table
Hi,
(I'm a little bit behind in reading the lists, so apologies if this is a
duplicate question.)
I am running Templeton v1 (?) and HCatalog 0.5.0 with hive 0.11.0 over Hadoop
1.0.4.
I can use something like this:
curl -s -X PUT -HContent-type:application/json -d @createtable.json
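For anyone searching the archives later, a sketch of what such a createtable.json payload can look like (the table columns and storage format below are made up for illustration; the field names follow the Templeton v1 DDL resource):

```json
{
  "comment": "example table",
  "columns": [
    { "name": "id",    "type": "bigint" },
    { "name": "price", "type": "float" }
  ],
  "format": { "storedAs": "textfile" }
}
```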
AM, Peter Marron
peter.mar...@trilliumsoftware.com
wrote:
Hi Owen,
I’m curious about this advice about partitioning. Is there some fundamental
reason why Hive
is slow when the number of partitions is 10,000 rather than 1,000?
The precise numbers don't
Sorry, just caught up with the last couple of days' email and I feel that this
question
has already been answered fairly comprehensively. Apologies.
Z
From: Peter Marron [mailto:peter.mar...@trilliumsoftware.com]
Sent: 04 July 2013 08:37
To: user@hive.apache.org
Subject: RE: Partition
-
From: Navis류승우 [mailto:navis@nexr.com]
Sent: 02 July 2013 08:50
To: user@hive.apache.org
Subject: Re: Override COUNT() function
MetadataOnlyOptimizer rewrites a GROUP BY on partition columns into a simple
TableScan over a one-row dummy.
I think similar things can be done with stats.
2013/6/28 Peter
Hi Owen,
I’m curious about this advice about partitioning. Is there some fundamental
reason why Hive
is slow when the number of partitions is 10,000 rather than 1,000? And the
improvements
that you mention are they going to be in version 12? Is there a JIRA raised so
that I can track them?
not in a position to suggest anything.
On Thu, Jun 27, 2013 at 3:14 AM, Peter Marron
peter.mar...@trilliumsoftware.com
wrote:
Hi,
If you're suggesting that I use something like
SELECT * FROM data WHERE MyUdf(data.BLOCK__OFFSET__INSIDE__FILE);
rather than
SELECT
Hi,
I feel sure that someone has asked for this before, but here goes...
In the case where I have the query
SELECT COUNT(*) FROM table;
There are many cases where I can determine the count immediately.
(For example if I have run something like:
ANALYZE TABLE tablename
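For context, the sort of statistics-gathering statement I mean is something like this (the table name is a placeholder):

```sql
-- After this, the row count is recorded in the metastore,
-- so COUNT(*) could in principle be answered without a scan.
ANALYZE TABLE tablename COMPUTE STATISTICS;
```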
Given that I started the original thread it seems appropriate that I should
point out that I also have a bought and paid for (personal) digital copy.
It's a good book.
Peter Marron
Trillium Software UK Limited
Tel : +44 (0) 118 940 7609
Fax : +44 (0) 118 940 7699
E: peter.mar
Hi,
Using Hive 0.10.0 over Hadoop 1.0.4.
I guess that I know that this is a long shot.
Is there any way to access the context from inside a UDF?
Specifically I want to get hold of the value of the virtual
column BLOCK__OFFSET__INSIDE__FILE from inside a
UDF that I'm implementing. Of course I can
(or equivalent).
Congratulations on version 0.11.0.
Z
aka
Peter Marron
Trillium Software UK Limited
Tel : +44 (0) 118 940 7609
Fax : +44 (0) 118 940 7699
E: peter.mar...@trilliumsoftware.com
is not calling my getSplits? And why this only seems to happen if a
Map/Reduce is required? And, most importantly, what do I have to
do to get it to work the way that I expect?
Any help or comments would be welcome.
Peter Marron
Trillium Software UK Limited
Tel : +44 (0) 118 940 7609
Fax : +44 (0) 118 940
)
... 7 more
Error: GC overhead limit exceeded
Al
If this e-mail shouldn't be here and should only be on
a cloudera mailing list, please re-direct me.
Thanks in advance.
Peter Marron
Trillium Software UK Limited
Tel : +44 (0) 118 940 7609
Fax
Hi Nitin,
Can I set these parameters through the CDH management interface?
If not then what file do they need to be set in to make sure that CDH
picks them up?
Peter Marron
Trillium Software UK Limited
Tel : +44 (0) 118 940 7609
Fax : +44 (0) 118 940 7699
E: peter.mar
-10.4.2.0.jar) into the
Hadoop directory, where I assume that the reducer would be able to find it.
However I get exactly the same problem as before.
Is there some particular place that I should put the derby.jar to make this
problem go away? Is there anything else that I can try?
Peter Marron
From
to Map/Reduce
errors?
Regards,
Peter Marron
From: Dean Wampler [mailto:dean.wamp...@thinkbiganalytics.com]
Sent: 02 November 2012 14:03
To: user@hive.apache.org
Subject: Re: Creating Indexes
Oh, I saw this line in your Hive output and just assumed you were running in a
cluster:
Hadoop job
: Unable to alter index.
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask
So what have I done wrong, and what am I to do to get this index to build
successfully?
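For reference, the usual compact-index sequence in HiveQL looks something like this (the index, table and column names are placeholders):

```sql
CREATE INDEX idx ON TABLE tab (indexed_col)
  AS 'COMPACT' WITH DEFERRED REBUILD;
-- The rebuild is the step that runs a Map/Reduce job and
-- populates the index table.
ALTER INDEX idx ON tab REBUILD;
```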
Any help appreciated.
Peter Marron
From: Peter Marron [mailto:peter.mar...@trilliumsoftware.com]
Sent
. However this didn't seem to help either.
Maybe this is the wrong list for this question
and I should post to
common-u...@hadoop.apache.org?
Any help appreciated.
Peter Marron
2012-10-25 15:55:27,429 INFO org.apache.hadoop.mapred.ReduceTask: In-memory
merge
will then notice speed up for a query of the form,
select count(*) from tab where indexed_col = some_val
Thanks,
Shreepadma
On Tue, Oct 23, 2012 at 5:44 AM, Peter Marron
peter.mar...@trilliumsoftware.com
wrote:
Hi,
I'm very much a Hive newbie but I've
appreciated.
Peter Marron.