Hi everyone,
I had lots of errors in Hadoop 0.20.2, like:
Error: java.lang.OutOfMemoryError: Java heap space
    at sun.net.www.http.ChunkedInputStream.processRaw(ChunkedInputStream.java:336)
    at sun.net.www.http.ChunkedInputStream.readAheadNonBlocking(ChunkedInputStream.java:493)
Hi all,
HDFS nodes can be started using the sh scripts provided with Hadoop, and from
what I read it's all based on script files.
Is it possible to start a standalone HDFS from a Java application via the API?
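The closest thing I have found is MiniDFSCluster from the Hadoop test jar; here
is a minimal sketch of what I mean (I am not sure this is the intended way, and
it is aimed at tests rather than production):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class EmbeddedHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // 1 datanode, format the namenode on startup, default rack assignment
        MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
        cluster.waitActive();

        FileSystem fs = cluster.getFileSystem();
        fs.mkdirs(new Path("/demo"));
        System.out.println("HDFS is up at " + fs.getUri());

        cluster.shutdown();
    }
}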
Thanks
Thanks a lot guys.
Another query, for production: is there any way to purge the HDFS job and
history logs on a time basis?
For example, we want to keep only the last 30 days of logs, and their size is
increasing a lot in production.
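For concreteness, this is the kind of purge we have in mind, sketched with the
FileSystem API (the log directory path here is made up; assume it points at
wherever the history logs land on your cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PurgeOldLogs {
    public static void main(String[] args) throws Exception {
        // Keep only the last 30 days of logs.
        final long cutoff = System.currentTimeMillis() - 30L * 24 * 60 * 60 * 1000;
        FileSystem fs = FileSystem.get(new Configuration());
        Path logDir = new Path("/var/log/hadoop/history"); // hypothetical location
        FileStatus[] entries = fs.listStatus(logDir);
        if (entries == null) return; // directory does not exist
        for (FileStatus stat : entries) {
            if (stat.getModificationTime() < cutoff) {
                fs.delete(stat.getPath(), true); // true = recursive
            }
        }
    }
}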
Thanks again
Regards,
Jagaran
Hi Jagaran,
Short answer: the append feature is not in any release; in this sense, it is
not stable. Below are more details on the append feature status.
- 0.20.x (includes release 0.20.2): there are known bugs in append, and those
bugs may cause data loss.
- 0.20-append: there were efforts on this branch to fix the append bugs
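For reference, this is roughly what the client-side call looks like on branches
that expose it (illustrative only; the file path is made up, and turning on the
flag does not remove the bugs described above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // On 0.20.x the client-side switch is dfs.support.append; enabling it
        // does not fix the known data-loss bugs.
        conf.setBoolean("dfs.support.append", true);
        FileSystem fs = FileSystem.get(conf);
        FSDataOutputStream out = fs.append(new Path("/logs/events.log"));
        out.writeBytes("one more record\n");
        out.close();
    }
}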
Please help me on this.
I need it very urgently.
Regards,
Jagaran
- Forwarded Message
From: jagaran das
To: common-user@hadoop.apache.org
Sent: Thu, 16 June, 2011 9:51:51 PM
Subject: Re: HDFS File Appending URGENT
Thanks a lot Xiabo.
I have tried the code below with HDFS version
Hello.
I can see that if a data node receives some IO error, this can cause a
checkDir storm.
What I mean:
1) any error produces a DataNode.checkDiskError call
2) this call locks the volume:
java.lang.Thread.State: RUNNABLE
at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
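To show what I mean in code (this is not Hadoop's actual source, just a sketch
of the pattern): if every IO error triggers a full directory scan while holding
the volume lock, a burst of errors serializes everything behind that lock.

public class CheckDirStormSketch {
    private final Object volumeLock = new Object();

    // Called on every IO error, per step 1 above.
    void onIoError() {
        synchronized (volumeLock) {   // step 2: readers and writers block here
            scanAllDirectories();     // slow: stats every file on the volume
        }
    }

    private void scanAllDirectories() {
        // java.io.File checks (canRead, isDirectory, ...) bottom out in
        // UnixFileSystem.getBooleanAttributes0, the frame in the trace above.
        new java.io.File("/data/dfs").isDirectory();
    }
}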
I see it is not so obvious, and potentially dangerous, so I will be learning
and experimenting first.
Thanks for the tip.
2011/6/17 Steve Loughran
> On 16/06/11 14:19, MilleBii wrote:
>
>> But if my Filesystem is up & running fine... do I have to worry at all, or
>> will the copy (ftp transfer) of hdfs be enough?
Hello.
My environment is: HDFS 0.21, NameNode + BackupNode.
After some time the BackupNode crashed with an exception (stack trace below).
Problem #1 - the process did not exit.
I tried to run the SecondaryNameNode to perform a checkpoint and got a similar
crash, but it did exit.
I backed up my data and restarted the NameNode
On 16/06/11 14:19, MilleBii wrote:
But if my Filesystem is up & running fine... do I have to worry at all, or
will the copy (ftp transfer) of hdfs be enough?
I'm not going to make any predictions there as to if/when things go wrong:
- you do need to shut down the FS before the move
- you oug