Thanks for the solutions you all provided.
Actually, I found one solution:
if we add the path of any external jar to Hadoop's classpath, which is set in
$hadoop_home/hadoop-$version/conf/hadoop-env.sh, it works.
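To make that concrete, here is a minimal sketch of the hadoop-env.sh change
(the jar path below is hypothetical):

    # conf/hadoop-env.sh -- prepend an external jar to Hadoop's classpath
    export HADOOP_CLASSPATH=/path/to/external-lib.jar:$HADOOP_CLASSPATH

The bin/hadoop launcher appends HADOOP_CLASSPATH to the JVM classpath, so
anything started through it picks the jar up.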
Thanks again for the help.
2008/10/20 Gerardo Velez [EMAIL PROTECTED]
Hi!
Actually I got
On 10/21/08 3:33 AM, Jean-Adrien [EMAIL PROTECTED] wrote:
I expected to keep 3.75 GB free,
but the free space drops below 1 GB, as if I had kept the default settings.
I noticed that you're running on /. In general, this is a bad idea, as
space can disappear in various ways and you'll never know.
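If the intent was to reserve space for non-DFS use, the usual knob (an
assumption here, since the message doesn't name the setting) is
dfs.datanode.du.reserved in hadoop-site.xml, with the value in bytes:

    <!-- hadoop-site.xml sketch: reserve ~3.75 GB per data volume -->
    <property>
      <name>dfs.datanode.du.reserved</name>
      <value>4026531840</value>
    </property>

Note the reservation applies per volume, and the datanode can only honor it
for space it controls; other processes writing to the same partition
(especially on /) will still eat into the headroom.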
On Mon, Oct 20, 2008 at 9:02 PM, Edward J. Yoon [EMAIL PROTECTED] wrote:
This RDF proposal is from a good while ago. Now we'd like to settle
down to research it again. I've attached our proposal; we'd love to hear
your feedback!
Hello, Edward,
I'm very glad to see this idea moving forward.
I'm not sure I get this.
1. If you format the filesystem (which I thought is usually done on the
master node, but anyway), don't you erase all your data?
2. I guess I need to add the new machine to the conf/slaves file,
but then do I run start-all.sh again from the master node while my cluster
Hi,
I have a dfs cluster with replication set to 3. In dfshealth.jsp, I see a node
with:
Size(GB) = 930.25
Used(%) = 9.83
Remaining(GB) = 631.08
How is this possible? Why doesn't (size - remaining) equal (used% * size)? Are
Size and Remaining measured in different units (replicated vs. not)?
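Plugging in the numbers makes the gap concrete:

    size - remaining = 930.25 - 631.08 ≈ 299.17 GB
    used% * size     = 0.0983 * 930.25 ≈  91.44 GB

Assuming the 0.18-era dfshealth.jsp semantics, the two differ because
capacity = DFS used + non-DFS used + remaining, and Used(%) counts only the
DFS-used portion; the missing ~208 GB is non-DFS data (or filesystem
overhead) sitting on the same volumes.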
At
When starting Hadoop, it stays in safe mode forever. Looking into the issue,
it seems there is a problem with block replication.
The command hadoop fsck / shows error messages:
/X/part-7.deflate: CORRUPT block blk_1402039344260425079
/X/part-7.deflate: MISSING 1 blocks of total size
http://markmail.org/message/2xtywnnppacywsya shows we can exit safe mode
explicitly and just delete these corrupted files.
But I don't know how to exit safe mode explicitly.
Zheng
-----Original Message-----
From: Joey Pan [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 21, 2008 2:01 PM
To:
Hey Zheng,
You can explicitly exit safemode by
hadoop dfsadmin -safemode leave
Then, you can delete the file with the corrupt blocks by:
hadoop fsck / -delete
Joey: You may list the block locations for a specific file by:
hadoop fsck /user/brian/path/to/file -files -blocks
However, if they
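Putting those commands together, a minimal recovery sequence might look like
the sketch below (run against the namenode; note that fsck -delete removes
the affected files permanently, with no trash):

    # Check whether the namenode is still in safe mode.
    bin/hadoop dfsadmin -safemode get
    # Leave safe mode explicitly so the namespace can be modified.
    bin/hadoop dfsadmin -safemode leave
    # List files with missing or corrupt blocks.
    bin/hadoop fsck / -files -blocks | grep -i -e CORRUPT -e MISSING
    # Permanently delete the corrupted files.
    bin/hadoop fsck / -delete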
Hi,
Just received an email from ApacheCon US organizers that they are giving 50%
discount for Hadoop Training
(http://us.apachecon.com/c/acus2008/sessions/93).
We just created a discount code for you to give people 50% off of the
cost of the training. The code is Hadoop50. The discount
On 20-Oct-08, at 11:59 PM, 晋光峰 wrote:
Dear all,
I use Hadoop 0.18.0 [...]
0.18.0 is not a stable release. Try 0.17.x or 0.18.1. I had a
similar problem which was solved by switching. There are suggestions
on the web and in the archives of this list for dealing with the issue, but
try
---
Jim Kellerman, Powerset (Live Search, Microsoft Corporation)
-----Original Message-----
From: Milind Bhandarkar [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 21, 2008 3:37 PM
To: core-user@hadoop.apache.org; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; pig-
[EMAIL
Can you give me a detailed explanation of how to deal with this issue?
I haven't found related archives of this list.
Regards
Guangfeng
On Tue, Oct 21, 2008 at 6:19 PM, Karl Anderson [EMAIL PROTECTED] wrote:
On 20-Oct-08, at 11:59 PM, 晋光峰 wrote:
Dear all,
I use Hadoop 0.18.0 [...]
You just start the new data-node while the cluster is running, using
bin/hadoop datanode
The configuration on the new data-node should be the same as on other nodes.
The data-node should join the cluster automatically.
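A minimal sketch of those steps, assuming a 0.18-era layout (run on the new
machine, with conf/ copied from an existing node):

    # Start the datanode daemon; it registers with the namenode named
    # by fs.default.name in the shared configuration.
    bin/hadoop-daemon.sh start datanode
    # (or run it in the foreground with: bin/hadoop datanode)
    # Optionally add the host to conf/slaves on the master so that
    # bin/start-all.sh and bin/stop-all.sh manage it as well.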
Formatting will destroy your file system.
--Konstantin
David Wei wrote:
Well, in
What you said is really different from what I see on my cluster:
performing the format command does not destroy the file system. And if I do
not perform the format command on the datanode, the datanode is not able to
find the master.
Hopefully somebody can tell me the reason.
Konstantin Shvachko wrote: