Bull's eye. I am using 0.17.1.
Taeho Kang wrote:
Gert,
What version of Hadoop are you using?
One of the people at my work who is using 0.17.1 is reporting a similar
problem - namenode's heapspace filling up too fast.
This is the status of his cluster (17 node cluster with version 0.17.1)
- 174541 files and directories, 121000 blocks = 295541 total
It looks like you have the whole file system flattened into one directory.
Both fsck and ls call the same method on the name-node, getListing(), which
returns an array of FileStatus objects, one per file in the directory.
I think that fsck works in this case because it does not use RPC and
therefore does not c[...]
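For illustration, a minimal client-side sketch of the call that ls makes
(the directory path is a placeholder). listStatus() goes through a single
getListing() RPC, so the name-node has to materialize and marshal a
FileStatus for every entry in the directory into one response:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListBigDir {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // This one call triggers a single getListing() RPC; with ~1.8M
        // entries the name-node must build ~1.8M FileStatus objects and
        // send them back in one reply, which is where it falls over.
        FileStatus[] children = fs.listStatus(new Path("/path/to/big/dir"));
        System.out.println("entries: " + children.length);
      }
    }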
There I have:
export HADOOP_HEAPSIZE=8000
which should be enough (although in this case I am not sure).
Running fsck on the directory, it turned out that there are 1785959
files in this dir... I have no clue how I can get the data out of there.
Can I somehow calculate how much heap a name node needs?
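A back-of-the-envelope attempt, assuming the often-quoted figure of very
roughly 150 bytes of namenode heap per file, directory, or block (a rule of
thumb, not an exact number):

    1788874 files and directories + 1465394 blocks = 3254268 objects
    3254268 objects * ~150 bytes = ~490 MB

So the name space itself should fit easily in 8 GB; the pressure presumably
comes from transient allocations, such as building the getListing() reply
for the 1785959-entry directory.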
Check how much memory is allocated for the JVM running the namenode.
In the file HADOOP_INSTALL/conf/hadoop-env.sh
you should change the line that starts with "export HADOOP_HEAPSIZE=1000".
It's set to 1 GB by default.
On Fri, Jul 25, 2008 at 2:51 AM, Gert Pfeifer <[EMAIL PROTECTED]>
wrote:
Update on this one...
I put some more memory in the machine running the name node. Now fsck is
running. Unfortunately, ls fails with a time-out.
I identified one directory that causes the trouble. I can run fsck on it
but not ls.
What could be the problem?
Gert
Gert Pfeifer wrote:
Hi,
I am running a Hadoop DFS on a cluster of 5 data nodes with a name node
and one secondary name node.
I have 1788874 files and directories, 1465394 blocks = 3254268 total.
Heap Size max is 3.47 GB.
My problem is that I produce many small files. Therefore I have a cron
job which just runs daily
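One idea for getting rid of the small files (a minimal sketch, untested;
the paths in main() are placeholders) would be to have the cron job pack
each day's files into a single SequenceFile, keyed by the original file
name, with the raw bytes as the value:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class PackSmallFiles {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path dir = new Path(args[0]);    // directory of small files, e.g. one day's output
        Path packed = new Path(args[1]); // single SequenceFile to write

        // One record per small file: key = file name, value = raw contents.
        SequenceFile.Writer writer = SequenceFile.createWriter(
            fs, conf, packed, Text.class, BytesWritable.class);
        try {
          for (FileStatus stat : fs.listStatus(dir)) {
            if (stat.isDir()) continue;
            byte[] buf = new byte[(int) stat.getLen()];
            FSDataInputStream in = fs.open(stat.getPath());
            try {
              in.readFully(0, buf); // files are small, so one buffer each is fine
            } finally {
              in.close();
            }
            writer.append(new Text(stat.getPath().getName()),
                          new BytesWritable(buf));
          }
        } finally {
          writer.close();
        }
      }
    }

After verifying the packed file, the small originals could be deleted,
which frees one namenode object per file and per block.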