Hi, all,
The Hadoop Beijing Meeting concluded successfully on Nov 23. Thank you all
for your attention.
Following the agreements reached at this meeting, we have finished
setting up the hadoop-in-china nonprofit website: www.hadooper.cn.
We hope we can form a powerful Hadoop
Hello,
I am currently using hadoop-0.18.0 and am not able to append to files in
DFS. I came across a fix that went into version 0.19.0
(http://issues.apache.org/jira/browse/HADOOP-1700), but I cannot migrate
to version 0.19.0 because it requires JDK 1.6 and I have
to stick with JDK 1.5. Therefore,
Hello all,
I'm using Hadoop 0.19 and just discovered that it has no problems
processing .tgz files that contain text files. I was under the
impression that it wouldn't be able to break a .tgz file up into
multiple maps, but would instead just treat it as one map per .tgz file.
Was this a recent change or
I believe I spoke a little too soon. Looks like Hadoop supports .gz
files, not .tgz. :-)
On Mon, Dec 1, 2008 at 10:46 AM, Ryan LeCompte [EMAIL PROTECTED] wrote:
Hello all,
I'm using Hadoop 0.19 and just discovered that it has no problems
processing .tgz files that contain text files. I was
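For what it's worth, a minimal sketch of checking which file extensions
Hadoop's codec registry recognizes, assuming the 0.19-era
CompressionCodecFactory API and hypothetical file names:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class CodecCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        CompressionCodecFactory factory = new CompressionCodecFactory(conf);
        // .gz resolves to GzipCodec; .tgz has no registered codec, so a
        // .tgz input would be read as plain bytes.
        for (String name : new String[] {"input.gz", "input.tgz"}) {
            CompressionCodec codec = factory.getCodec(new Path(name));
            System.out.println(name + " -> "
                + (codec == null ? "no codec" : codec.getClass().getName()));
        }
    }
}

Note that gzip is not a splittable format, so each .gz input file still
becomes a single map, consistent with the original impression above.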
More questions on component-failure handling. Can anyone confirm (or
correct) the following?
1) When a TaskTracker crashes, the JobTracker, having heard no heartbeat
from it within a timeout period, will conclude that it has crashed and
re-allocate its unfinished tasks to other TaskTrackers. Correct? (A sketch
of the relevant timeout setting follows below.)
2) If the
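A minimal sketch of reading that timeout, assuming the 0.x-era property
name mapred.tasktracker.expiry.interval (believed to default to 10
minutes), after which the JobTracker declares a TaskTracker lost:

import org.apache.hadoop.mapred.JobConf;

public class ExpiryInterval {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // How long the JobTracker waits without a heartbeat before it
        // marks a TaskTracker as lost and re-schedules its tasks.
        long ms = conf.getLong("mapred.tasktracker.expiry.interval", 600000L);
        System.out.println("TaskTracker expiry interval: " + ms + " ms");
    }
}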
Hi,
Is there any API that copies files from one folder to another on the same
Hadoop cluster? (DistCp can be used, but its performance is not effective.)
Something like CopyFromLocal, but with source and destination both on the
same Hadoop cluster.
Cheers,
Wasim
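One option that exists in this era's API is FileUtil.copy with the same
FileSystem as both source and destination; a minimal sketch, with
hypothetical paths:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class HdfsCopy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // deleteSource=false keeps the original file in place.
        boolean ok = FileUtil.copy(fs,
                new Path("/user/wasim/input/part-00000"),
                fs,
                new Path("/user/wasim/backup/part-00000"),
                false, conf);
        System.out.println("copied: " + ok);
    }
}

The caveat is that FileUtil.copy streams all bytes through the single
client machine, so for large directory trees DistCp's parallel copy may
still be faster despite its overhead.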
Hi,
I have an existing enterprise system that uses web services. I'd like an
event in the web service to eventually result in a map/reduce job being
run. It would be very desirable to be able to package the map/reduce
classes into a jar that gets deployed inside the war file for the
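A hedged sketch of submitting a job programmatically with the
0.18/0.19-era mapred API, which is one way a web-service handler could
trigger a map/reduce run; the paths and job name are placeholders, and
IdentityMapper stands in for real job code:

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.lib.IdentityMapper;

public class SubmitFromService {
    // Called from a web-service handler; returns without waiting for the job.
    public static RunningJob submit() throws IOException {
        JobConf conf = new JobConf(SubmitFromService.class); // locate the jar by class
        conf.setJobName("triggered-by-web-service");
        conf.setMapperClass(IdentityMapper.class);
        conf.setNumReduceTasks(0);
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(conf, new Path("/data/in"));
        FileOutputFormat.setOutputPath(conf, new Path("/data/out"));
        return new JobClient(conf).submitJob(conf); // non-blocking submit
    }
}

Whether JobConf can locate the job jar when the classes live inside a
war's WEB-INF/lib rather than a plain jar is exactly the open question
here; packaging the map/reduce classes as a separate jar inside the war
and pointing conf.setJar() at it is one possible workaround.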
Billy Pearson wrote:
We are looking for a way to also support smaller clusters that might
overrun their heap size, causing the cluster to crash.
Support for namespaces larger than RAM would indeed be a good feature to
have. Implementing this without impacting large cluster in-memory
Hardware/memory problems?
SIGBUS is relatively rare; it sometimes indicates a hardware error in
the memory system, depending on your arch.
Brian
On Dec 1, 2008, at 3:00 PM, Sagar Naik wrote:
A couple of the datanodes crashed with the following error.
/tmp is 15% occupied.
#
# An
This looks like it could be a great feature for EC2-based Hadoop users:
http://aws.amazon.com/publicdatasets/
Has anyone tried it yet? Any datasets to share?
Doug
Brian Bockelman wrote:
Hardware/memory problems?
I'm not sure.
SIGBUS is relatively rare; it sometimes indicates a hardware error in
the memory system, depending on your arch.
uname -a:
Linux hdimg53 2.6.15-1.2054_FC5smp #1 SMP Tue Mar 14 16:05:46 EST 2006
i686 i686 i386 GNU/Linux
I'd run memcheck overnight on the nodes that caused the problem, just
to be sure.
Another (unlikely) possibility is that the JNI callouts for the native
libraries Hadoop uses (for the compression codecs, I believe) have
crashed or were set up wrong, and died fatally enough to take out the
None of the jobs use compression, that's for sure.
-Sagar
Brian Bockelman wrote:
I'd run memcheck overnight on the nodes that caused the problem, just
to be sure.
Another (unlikely) possibility is that the JNI callouts for the native
libraries Hadoop uses (for the compression codecs, I believe) have
Was there anything mentioned as part of the tombstone message about a
problematic frame? What Java are you using? There are a few
reasons for SIGBUS errors; one is illegal address alignment, but from
Java that's very unlikely. There were some issues with the native zip
library in older
Hi,
I don't have additional information on it. If you know of any other flag
that I need to turn on, please tell me. The flags that are currently on are
-XX:+HeapDumpOnOutOfMemoryError -XX:+UseParallelGC
-Dcom.sun.management.jmxremote
But this is what is listed in the stdout (datanode.out) file:
Java
FYI: the Datanode does not run any user code and does not link with any
native/JNI code.
Raghu.
Chris Collins wrote:
Was there anything mentioned as part of the tombstone message about a
problematic frame? What Java are you using? There are a few reasons
for SIGBUS errors; one is illegal
Is there any way to determine which replica of each chunk is read by a
map-reduce program? I've been looking through the Hadoop code, and it
seems like it tries to hide those kinds of details from the higher-level
API. Ideally, I'd like the host the task was running on, the
file name, and
A task may read from more than one block. For example, in line-oriented
input, lines frequently cross block boundaries. And a block may be read
from more than one host. For example, if a datanode dies midway through
providing a block, the client will switch to using a different datanode.
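So a task's reads cannot be pinned to one replica; still, for completeness,
a minimal sketch of listing the replica locations per block with
FileSystem.getFileBlockLocations (present in this era's API), using a
hypothetical path:

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockHosts {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus stat = fs.getFileStatus(new Path("/data/input.txt"));
        BlockLocation[] blocks = fs.getFileBlockLocations(stat, 0, stat.getLen());
        for (BlockLocation b : blocks) {
            // Offset and length of the block, plus every host holding a replica.
            System.out.println("offset " + b.getOffset()
                + " len " + b.getLength()
                + " hosts " + Arrays.toString(b.getHosts()));
        }
    }
}

As noted above, this lists the candidate replicas for each block, not the
one a given task actually read.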
On 25-Nov-08, at 7:38 AM, Chris Quach wrote:
Hi,
I'm testing Hadoop to see if we could use it for complex calculations
next to the 'standard' implementation. I've set up a grid with 10 nodes,
and if I run the RandomTextWriter example only 2 nodes are used as
mappers, while I specified 10
I have some code where I create my own Hadoop job and then use the
JobClient to submit it. I noticed that the JobClient class has a
killJob() method. I was planning to play around and try to kill a running
Hadoop job. Does anybody know the status of the killJob method? I'm using Hadoop
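For what it's worth, a minimal sketch of killing a job, assuming the
0.19-era mapred API, where killJob() lives on RunningJob obtained from
JobClient; the job id below is a placeholder:

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class KillJob {
    public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        RunningJob job = client.getJob(JobID.forName("job_200812010000_0001"));
        if (job != null && !job.isComplete()) {
            job.killJob(); // asks the JobTracker to kill the running job
        }
    }
}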
File append is a major change, not a small bugfix. Probably you need
to bite the bullet and upgrade to a newer JDK. :(
On Mon, Dec 1, 2008 at 4:29 AM, Sandeep Dhawan, Noida [EMAIL PROTECTED] wrote:
Hello,
I am currently using hadoop-0.18.0. I am not able to append files in
DFS. I came