Hi all,
I'm using Hadoop 0.17.2 and trying to set up a 3-node cluster.
When I run applications, the map tasks finish but the reduce tasks just hang,
and I get "Too many fetch failures".
[EMAIL PROTECTED] hadoop-0.17.2.1]# bin/hadoop jar word/word.jar
org.myorg.WordCount input output2
08/10/01 10:56:
Are you seeing HADOOP-2009?
Thanks
Amareshwari
Nathan Marz wrote:
Unfortunately, setting those environment variables did not help my
issue. It appears that the "HADOOP_LZO_LIBRARY" variable is not
defined in either LzoCompressor.c or LzoDecompressor.c. Where is this
variable supposed to be set?
It's not ignored; it returns failure. Further, the point wasn't that
the File API is good, but rather that the File API doesn't provide a
cause for FileSystem to convert into a descriptive exception if the
error originates from there (as it does in many FileSystem
implementations). Finally,
Hi Nathan,
This is defined in build/native//config.h. It is
generated by autoconf during the build, and if it is missing or
incorrect then you probably need to make sure that the LZO libraries and
headers are in your search paths and then do a clean build.
-Colin
It's very interesting that the Java File API doesn't return
exceptions, but that doesn't mean it's a good interface. The fact
that there IS further exceptional information somewhere in the system
but that it is currently ignored is sort of troubling. Perhaps, at
least, we could add an overl
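A minimal sketch of the kind of helper being suggested here, assuming only the
public FileSystem.rename(Path, Path) API; the class and method names are
hypothetical and not part of Hadoop:

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsUtil {
    /**
     * Hypothetical helper: rename that throws instead of returning false.
     * It can only report the paths involved, since FileSystem.rename()
     * does not surface the underlying cause of the failure.
     */
    public static void renameOrThrow(FileSystem fs, Path src, Path dst)
            throws IOException {
        if (!fs.rename(src, dst)) {
            throw new IOException("rename failed: " + src + " -> " + dst);
        }
    }
}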
I am using Nutch for crawling and would like to configure Hadoop to use S3. I
made the appropriate changes to the Hadoop configuration and that appears to
be O.K. However, I *think* the problem I am hitting is that Hadoop now
expects ALL paths to be locations in S3. Below is a typical error I am
s
FileSystem::rename doesn't always have the cause, per
java.io.File::renameTo:
http://java.sun.com/javase/6/docs/api/java/io/File.html#renameTo(java.io.File)
Even if it did, it's not clear to FileSystem that the failure to
rename is fatal/exceptional to the application. -C
On Sep 30, 2008,
Unfortunately, setting those environment variables did not help my
issue. It appears that the "HADOOP_LZO_LIBRARY" variable is not
defined in either LzoCompressor.c or LzoDecompressor.c. Where is this
variable supposed to be set?
On Sep 30, 2008, at 12:33 PM, Colin Evans wrote:
Hi Nathan
On Sep 30, 2008, at 1:37 PM, Bryan Duxbury wrote:
Hey all,
Why is it that FileSystem.rename returns true or false instead of
throwing an exception? It seems incredibly inconvenient to get a
false result and then have to go poring over the namenode logs
looking for the actual error messag
Hey all,
Why is it that FileSystem.rename returns true or false instead of
throwing an exception? It seems incredibly inconvenient to get a
false result and then have to go poring over the namenode logs
looking for the actual error message. I had this case recently where
I'd forgotten to
I think we've identified a bug with the create-image parameter in the ec2
scripts under src/contrib.
This was my workaround:
1) Start a single instance of the Hadoop AMI you want to modify using the
ElasticFox Firefox plugin (or the ec2-tools)
2) Modify the /root/hadoop-init script and change t
Hi Nathan,
You probably need to add the Java headers to your build path as well - I
don't know why the Mac doesn't ship with this as a default setting:
export CPATH="/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home/include"
export CPPFLAGS="-I/System/Library/Frameworks/J
Thanks for the help. I was able to get past my previous issue, but the
native build is still failing. Here is the end of the log output:
[exec] then mv -f ".deps/LzoCompressor.Tpo" ".deps/LzoCompressor.Plo"; else rm -f ".deps/LzoCompressor.Tpo"; exit 1; fi
[exec] mkdir .libs
On Sep 30, 2008, at 11:46 AM, Doug Cutting wrote:
Arun C Murthy wrote:
You need to add libhadoop.so to your java.library.path.
libhadoop.so is available in the corresponding release in the lib/native
directory.
I think he needs to first build libhadoop.so, since he appears to be
runnin
There's a patch to get the native targets to build on Mac OS X:
http://issues.apache.org/jira/browse/HADOOP-3659
You probably will need to monkey with LDFLAGS as well to get it to work,
but we've been able to build the native libs for the Mac without too
much trouble.
Arun C Murthy wrote:
You need to add libhadoop.so to your java.library.path. libhadoop.so
is available in the corresponding release in the lib/native directory.
I think he needs to first build libhadoop.so, since he appears to be
running on OS X and we only provide Linux builds of this in re
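As an aside, a quick way to check whether libhadoop.so is actually being picked
up from java.library.path is the Hadoop NativeCodeLoader class; the little test
program below is hypothetical, not something from this thread:

import org.apache.hadoop.util.NativeCodeLoader;

public class NativeCheck {
    public static void main(String[] args) {
        // Run with -Djava.library.path=<release>/lib/native/<platform>;
        // if libhadoop.so is not found here, native codecs cannot load either.
        System.out.println("java.library.path = "
                + System.getProperty("java.library.path"));
        System.out.println("native hadoop loaded: "
                + NativeCodeLoader.isNativeCodeLoaded());
    }
}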
Nathan,
You need to add libhadoop.so to your java.library.path.
libhadoop.so is available in the corresponding release in the lib/native
directory.
Arun
On Sep 30, 2008, at 11:14 AM, Nathan Marz wrote:
I am trying to use SequenceFiles with LZO compression outside the
context of a MapR
I am trying to use SequenceFiles with LZO compression outside the
context of a MapReduce application. However, when I try to use the LZO
codec, I get the following errors in the log:
08/09/30 11:09:56 DEBUG conf.Configuration: java.io.IOException:
config()
at org.apache.hadoop.conf
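For reference, a minimal sketch of writing an LZO-compressed SequenceFile
outside MapReduce, assuming the native libraries are on java.library.path; the
output path, key/value types, and class name are made up for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.LzoCodec;

public class LzoSeqFileDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.getLocal(conf);
        Path out = new Path("/tmp/demo.seq");   // hypothetical output path

        LzoCodec codec = new LzoCodec();
        codec.setConf(conf);                    // codec needs a Configuration

        SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, out, Text.class, Text.class,
                SequenceFile.CompressionType.BLOCK, codec);
        try {
            writer.append(new Text("key"), new Text("value"));
        } finally {
            writer.close();
        }
    }
}

If the native LZO library is not found, codec construction is where the failure
shows up, which is why the libhadoop.so advice above matters here too.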
On Tue, Sep 30, 2008 at 12:34 AM, Karl Anderson <[EMAIL PROTECTED]> wrote:
> I recommend using streaming instead if you can, much easier to develop and
> debug. It's also nice to not get that "stop doing that, jythonc is going
> away" message each time you compile :) Also check out the recently
>
Hello,
I am slightly confused about the number of reducers executed and the
size of data each receives.
Setup:
I have a setup of 5 task trackers.
In my hadoop-site:
(1) mapred.reduce.tasks = 7
    The default number of reduce tasks per job. Typically set to a prime close to the number of
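For context, a hedged sketch of how a job picks up or overrides that default;
the class name is hypothetical, JobConf.setNumReduceTasks is the relevant call:

import org.apache.hadoop.mapred.JobConf;

public class ReducerCountExample {
    public static JobConf configure() {
        // mapred.reduce.tasks in hadoop-site.xml is only the per-job default;
        // a job can override it explicitly:
        JobConf job = new JobConf();
        job.setNumReduceTasks(7);
        return job;
    }
}

Each reducer then receives whichever keys the partitioner assigns to it, so the
per-reducer data size depends on the key distribution rather than being an even
split across the 7 reduce tasks.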
Hi,
Thanks, it worked. Correct me if I'm wrong, but isn't this a
configuration defect?
E.g. the location of the secondary namenode is in conf/masters, and if I run
start-dfs.sh, the secondary namenode starts on B.
Similarly, given that the JobTracker is specified to run on C,
shouldn't start-all.sh start
> However, HDFS uses HTTP to serve blocks up -that needs to be locked down
> too. Would the signing work there?
I am not familiar with HDFS over HTTP. Could it simply sign the
stream and include the signature at the end of the HTTP message
returned?
On Tue, Sep 30, 2008 at 8:56 AM, Steve Loughr
Jason Rutherglen wrote:
I implemented an RMI protocol using Hadoop IPC and added basic
HMAC signing. It is, I believe, faster than public/private key signing
because it uses a shared secret key and does not require the public key
provisioning that PKI would. Perhaps it would be a baseline way to
sign the
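A minimal sketch of the kind of secret-key signing being described, using only
the standard javax.crypto API; the key handling and class name are illustrative,
not Jason's actual code:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacSign {
    /** Compute an HMAC-SHA1 signature over a message with a shared secret. */
    public static byte[] sign(byte[] secret, byte[] message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        return mac.doFinal(message);
    }
}

The receiver, holding the same secret, recomputes the HMAC over the payload and
compares it with the signature appended at the end, which is roughly what the
HTTP-stream signing idea discussed above would need.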