I don't know the details about Amazon S3, but I believe they have
support for their customers.
Would you contact them and let us know the details?
Zheng
On Thu, Jan 21, 2010 at 2:00 PM, ankit bhatnagar wrote:
> Thank you, guys.
> I have a question: if I use the Mahout project's XmlInputFormat
I am currently using Hadoop 0.20.1. When I compiled with ant I passed
-Dhadoop.version="0.20.0".
I will try to get you that output, but my connection is rough right now and
I need to do a little port forwarding. I will get it to you when I can.
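If the mismatch matters, I assume the fix is just a rebuild against the
running release, something like (guessing the ant target; same flag as before):

ant clean package -Dhadoop.version="0.20.1"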
On Thu, Jan 21, 2010 at 5:04 PM, Ning Zhang wrote:
Which Hadoop version are you using (you can see it by running 'hadoop
version')? Have you specified hadoop.version when compiling Hive? Do they
match?
Also, can you post the log file found through the Tracking URL when you launch
the job (in your example http://master:50030/jobdetails.jsp...)?
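For example (paths are guesses for your setup):

hadoop version                  # prints the running release, e.g. 0.20.1
ls $HIVE_HOME/lib/hive*.jar     # the jars you built; the releases should match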
Thank you, guys.
I have a question: if I use the Mahout project's XmlInputFormat, then when
running on Amazon S3, how would Hive know about this format?
I saw that when Amazon executes jobs, it first installs Hive.
I hope this makes sense?
Ankit
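On a cluster I managed myself I would register it roughly like this (the jar
path and class name below are just my guesses), but I don't know whether
Amazon's install picks it up the same way:

hive> add jar /home/hadoop/xmlinputformat.jar;
hive> create table xml_data (line string)
    > stored as inputformat 'org.apache.mahout.classifier.bayes.XmlInputFormat'
    > outputformat 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';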
I did not see any Hive jar files within the Hadoop lib directory. I compiled
Hive using ant, following the basic README.
On Thu, Jan 21, 2010 at 4:17 PM, Ning Zhang wrote:
It may be because you have 2 copies of Hive's jar files (e.g., one in Hadoop's
lib directory and one in Hive's lib directory) and they are from different
releases. If you have compiled Hive trunk yourself, please make sure there are
no Hive jar files in Hadoop's lib directory.
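For example, a quick check (HADOOP_HOME being wherever your Hadoop lives):

ls $HADOOP_HOME/lib/hive*.jar     # should list nothing
rm $HADOOP_HOME/lib/hive*.jar     # remove any stray copies, then retry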
Here is what I got from the Hive log:
2010-01-21 20:49:13,291 ERROR DataNucleus.Plugin
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
"org.eclipse.core.resources" but it cannot be resolved.
2010-01-21 20:49:13,291 ERROR DataNucleus.Plugin
(Log4JLogger.java:error(115)) - Bund
Has anyone seen this error? Any help is appreciated, thanks:
hive> select * from apachelog where host ="64.62.191.114";
Total MapReduce jobs = 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201001211744_0002, Tracking URL =
http://master:50030/jobdetails.j
BTW, JIRA HIVE-1047 subsumes HIVE-1039 in trunk. So if you are using the
0.5.0 branch, HIVE-1039 is already there. If you are using 0.4 or an earlier
release, you can apply either HIVE-1039 or HIVE-1047. Both of them are very
simple changes.
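For 0.4, applying the patch should be roughly (with the patch file taken from
the JIRA attachments):

cd hive-0.4.0
patch -p0 < HIVE-1039.patch
ant clean package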
On Jan 21, 2010, at 9:56 AM, Namit Jain wrote:
Which version are you using?
The bug mentioned was fixed by:
https://issues.apache.org/jira/browse/HIVE-1039
Thanks,
-namit
-----Original Message-----
From: Min Zhou [mailto:coderp...@gmail.com]
Sent: Thursday, January 21, 2010 12:40 AM
To: hive-user@hadoop.apache.org
Subject: Re: hive multip
Ah, you're right.
I assumed that Hive would not let the operation go through unless the user
had proper privileges in both the warehouse and the metastore, but trying it
out just now confirmed you're absolutely right.
Thanks for the heads-up.
/ Oscar
On Thu, Jan 21, 2010 at 1:22 AM, Amr Awadallah wrote:
You might want to take a look through the Hadoop Wiki site and browse its
various tutorials. You can also follow Cloudera's excellent tutorials if you
download their virtual machine:
http://www.cloudera.com/hadoop-training-virtual-machine
On Thu, Jan 21, 2010 at 9:45 AM, ankit bhatnagar wrote:
Hi Zheng,
Thanks for the info.
One question though, and it could be a bad one:
where does Hadoop sit?
Ankit
Though that will protect the files, it won't protect the schema (e.g.,
read-only users will still be able to drop the table).
-- amr
On 1/20/2010 9:29 PM, Oscar Gothberg wrote:
Thanks!
I realized this could also be managed, albeit still clumsily, through
user/group read/write permissions.
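To make both halves concrete (the table name and warehouse path below are
assumptions based on the earlier query):

hadoop fs -chmod -R o-w /user/hive/warehouse/apachelog   # protects the data files
hive> drop table apachelog;                              # still succeeds

So the filesystem side can be locked down, but the metastore side cannot.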
This looks like a bug in Hive. See below:
hive> set hive.merge.mapfiles=true;
hive> explain from netflix insert overwrite table t1 select movie_id
insert overwrite table t2 select user_id;
OK
ABSTRACT SYNTAX TREE:
(TOK_QUERY (TOK_FROM (TOK_TABREF netflix)) (TOK_INSERT
(TOK_DESTINATION (TOK_TAB t1))
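A possible workaround (my guess, not a verified fix) is to disable the
map-file merge for this query:

hive> set hive.merge.mapfiles=false;
hive> from netflix
    > insert overwrite table t1 select movie_id
    > insert overwrite table t2 select user_id;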