Hi Harsh, I've pasted the NameNode's log below. The problem occurs occasionally.
Thanks a lot!
2011-08-22 14:41:05,939 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=tpic,users,dialout ip=/172.28.1.29 cmd=delete src=/walter/send_albums/110822_143455/_temporary dst=null perm=null
To go some way toward answering my own question: this comment from Alejandro is very helpful.
https://issues.apache.org/jira/browse/HDFS-2277?focusedCommentId=13089177&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13089177
-- Forwarded message --
From:
Nicholas,
Thanks. In order to move forward with the integration of Hoop I need
consensus on:
* https://issues.apache.org/jira/browse/HDFS-2178?focusedCommentId=13089106&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13089106
* https://issues.apache.org/jira/browse/
+1
I believe HDFS-2178 is very close to being committed. Great work Alejandro!
Nicholas
From: Alejandro Abdelnur
To: common-user@hadoop.apache.org; hdfs-...@hadoop.apache.org
Sent: Monday, August 22, 2011 2:16 PM
Subject: Hoop into 0.23 release
Hadoop developers,
Arun will be cutting a branch for Hadoop 0.23 as soon as trunk has a
successful build.
I'd like Hoop (https://issues.apache.org/jira/browse/HDFS-2178) to be part
of 0.23 (Nicholas already looked at the code).
In addition, the Jersey utils in Hoop will be handy for
https://iss
When I download the Pig 0.8.1 tarball I don't find any junit class files, just
a license file (which probably doesn't need to be there). If you build it, it
will pull those via Ivy, but they are not in the tarball.
AFAIK it will work with any JUnit 4.x, but 4.5 is what we use in our testing.
I meant tasks running on the Task Trackers.
Harsh J.'s answer is what I needed. This makes sense now.
On Mon, Aug 22, 2011 at 11:06 AM, John Armstrong wrote:
> On Mon, 22 Aug 2011 11:01:23 -0700, "W.P. McNeill"
> wrote:
> > If it is, what is the proper way to make MyJar.jar available to both th
If you are asking how to make those classes available at run time, you can
either use the -libjars option (which ships the jars via the distributed cache) or you can just shade
those classes into your jar using Maven. I have had enough issues in the past
with the classpath being flaky that I prefer the shading method but
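For concreteness, a minimal sketch of the shading approach mentioned above (the plugin version is an assumption, not from this thread):

```xml
<!-- pom.xml excerpt: repackage dependency classes into the job jar so the
     tasks see them without any extra classpath configuration -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>1.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The -libjars alternative is passed at job submit time (e.g. `hadoop jar MyJob.jar MyDriver -libjars MyJar.jar ...`) and only works if the driver parses its arguments via GenericOptionsParser/ToolRunner.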
On Mon, Aug 22, 2011 at 11:31 PM, W.P. McNeill wrote:
> What does HADOOP_CLASSPATH set in $HADOOP/conf/hadoop-env.sh do?
>
> This isn't clear to me from documentation and books, so I did some
> experimenting. Here's the conclusion I came to: the paths in
> HADOOP_CLASSPATH are added to the class p
On Mon, 22 Aug 2011 11:01:23 -0700, "W.P. McNeill" wrote:
> If it is, what is the proper way to make MyJar.jar available to both the
> Job
> Client and the Task Trackers?
Do you mean the task trackers, or the tasks themselves? What process do
you want to be able to run the code in MyJar.jar?
What does HADOOP_CLASSPATH set in $HADOOP/conf/hadoop-env.sh do?
This isn't clear to me from documentation and books, so I did some
experimenting. Here's the conclusion I came to: the paths in
HADOOP_CLASSPATH are added to the class path of the Job Client, but they are
not added to the class path
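To make the experiment concrete, here is a hadoop-env.sh sketch consistent with the behavior described above (the jar path is hypothetical):

```shell
# conf/hadoop-env.sh excerpt: entries in HADOOP_CLASSPATH are added to the
# classpath of JVMs started by bin/hadoop on this machine (e.g. the Job
# Client), but they are NOT shipped to the tasks on the TaskTrackers.
export HADOOP_CLASSPATH=/opt/jars/MyJar.jar:$HADOOP_CLASSPATH
```

Getting the same classes onto the tasks' classpath still requires -libjars or bundling them into the job jar.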
On Aug 21, 2011, at 11:00 PM, steven zhuang wrote:
> thanks Allen, I really wish there wasn't such a version 0.21.0. :)
It is tricky (lots of config work), but you could always run the two
versions in parallel on the same gear, distcp from 0.21 to 0.20.203, then
shut down the 0.21 inst
On Aug 22, 2011, at 3:00 AM, Avi Vaknin wrote:
> I assumed that the 1.7GB RAM will be the bottleneck in my environment that's
> why I am trying to change it now.
>
> I shut down the 4 datanodes with 1.7GB RAM (Amazon EC2 small instance) and
> replaced them with
>
> 2 datanodes with 7.5GB RAM (Am
Thanks for all the help on this issue. It turned out to be a very simple
problem with my 'compareTo' implementation.
The ordering was symmetric but _not_ transitive.
stan
On Tue, Aug 16, 2011 at 4:47 PM, Chris White wrote:
> Can you copy the contents of your parent Writable readField and write
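Since the broken code isn't shown in the thread, here is a hypothetical reconstruction (names and values made up) of one common way a compareTo ends up symmetric but not transitive: a tolerance-based comparison.

```java
// Hypothetical example -- NOT the original poster's code.
public class TransitivityDemo {
    static final class Fuzzy implements Comparable<Fuzzy> {
        final double v;
        Fuzzy(double v) { this.v = v; }

        // BROKEN: treats "close" values as equal. The sign is symmetric
        // (a~b iff b~a), but a~b and b~c do not imply a~c, so the
        // ordering is not transitive and sorting/merging misbehaves.
        @Override public int compareTo(Fuzzy o) {
            double d = v - o.v;
            if (Math.abs(d) < 0.5) return 0;   // "close enough" => equal
            return d < 0 ? -1 : 1;
        }

        // FIXED: a total order on the exact value.
        int compareToFixed(Fuzzy o) { return Double.compare(v, o.v); }
    }

    public static void main(String[] args) {
        Fuzzy a = new Fuzzy(0.0), b = new Fuzzy(0.4), c = new Fuzzy(0.8);
        System.out.println(a.compareTo(b));      // 0: a "equals" b
        System.out.println(b.compareTo(c));      // 0: b "equals" c
        System.out.println(a.compareTo(c));      // -1: yet a < c -- not transitive
        System.out.println(a.compareToFixed(c)); // -1 under the total order
    }
}
```

Hadoop's sort/merge assumes compareTo defines a total order; a non-transitive comparator can produce inconsistent key ordering and group boundaries that look like data corruption.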
I found out why that string was getting wiped.
It happens in the function context.getInputValue(). It is the string
serialized as I wanted, but I was copying it as a new object instance
(std::string s = context.getInputValue()) instead of reserving space and memcpy'ing the
content. Did that and it works like a
Hi.
I'm running a two-node Hadoop cluster (v0.20.203).
I'm trying to do distributed image processing, and now I'm facing an
issue.
The job is a Hadoop Pipes job, with a custom RecordReader, Mapper and Reducer
(standard RecordWriter so far).
The job itself uses C++ and OpenCV as the image processing l
Avi,
You can run with 1.7GB of RAM, but that means you're going to have 1 m/r slot
per node.
With 4 cores, figure 1 core for the DN, 1 core for the TT, and then with hyper-threading 2
threads per core means 4 virtual cores, so you could run with 4 slots per node
(3 mappers, 1 reducer).
So that would be 1
Hi all,
I would like to force locality for tasks in Hadoop using the
FairScheduler, so that tasks are scheduled locally rather than on non-local nodes.
I am using CDH3 and tried setting the property in mapred-site.xml,
"mapred.fairscheduler.locality.delay", to 5000 ms. But it doesn't seem
to work. Also I coul
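For reference, a sketch of the setting described above as it would appear in mapred-site.xml (property name and value are as given in the message; whether it takes effect depends on the fair scheduler version deployed):

```xml
<!-- mapred-site.xml excerpt -->
<property>
  <name>mapred.fairscheduler.locality.delay</name>
  <value>5000</value> <!-- ms to keep a task waiting for a node-local slot -->
</property>
```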
Hey everyone! ;)
Does anyone know how to pass a null or empty -param through to a Pig script?
The following does *not* work:
/pig/bin/pig -param rootPath= MapProfile.pig
/pig/bin/pig -param rootPath='' MapProfile.pig
/pig/bin/pig -param rootPath="" MapProfile.pig
/pig/bin/pig -param rootPath=nul
Sent from HTC
- Reply message -
From: "lulynn_2008"
To: "u...@pig.apache.org",
"common-user@hadoop.apache.org"
Subject: Can pig-0.8.1 work with junit 4.3.1 or 4.8.1 or 4.8.2?
Date: Sunday, August 7, 2011, 23:52
Hello,
I found pig-0.8.1 included junit-4.5 class files.
Could you please give me so
Hi Allen/Michel,
First, thanks a lot for your reply.
I assumed that the 1.7GB RAM will be the bottleneck in my environment that's
why I am trying to change it now.
I shut down the 4 datanodes with 1.7GB RAM (Amazon EC2 small instance) and
replaced them with
2 datanodes with 7.5GB RAM (Amazon