There is a configuration property that allows you to reserve some disk space
on datanode servers:
dfs.datanode.du.reserved
Reserved space in bytes. Always leave this much space free
for non dfs use
10
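For reference, a property like this goes into hadoop-site.xml. A minimal sketch follows; the 10 GB value is purely illustrative, not a default:

```xml
<!-- hadoop-site.xml: reserve space on each datanode volume for non-DFS use.
     The value is illustrative (10 GB expressed in bytes). -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
  <description>Reserved space in bytes. Always leave this much space
  free for non dfs use.</description>
</property>
```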
-Igor
-----Original Message-----
From: Michael Bieniosek [mailto:[EMAIL PROTECTED]]
[ https://issues.apache.org/jira/browse/HADOOP-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12490139 ]
Igor Bolotin commented on HADOOP-1170:
--
There is another issue, HADOOP-1200, that was opened exactly for this
[ https://issues.apache.org/jira/browse/HADOOP-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Igor Bolotin updated HADOOP-1170:
-
Status: Patch Available (was: Open)
> Very high CPU usage on data nodes because
[ https://issues.apache.org/jira/browse/HADOOP-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Igor Bolotin updated HADOOP-1170:
-
Attachment: 1170-v2.patch
This patch removes all FSDataset.checkDataDir() calls from DataNode
[ https://issues.apache.org/jira/browse/HADOOP-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485583 ]
Igor Bolotin commented on HADOOP-1170:
--
I'll prepare a patch with all calls removed later today
> Very
[ https://issues.apache.org/jira/browse/HADOOP-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Igor Bolotin updated HADOOP-1170:
-
Status: Open (was: Patch Available)
> Very high CPU usage on data nodes because
[ https://issues.apache.org/jira/browse/HADOOP-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Igor Bolotin updated HADOOP-1170:
-
Status: Patch Available (was: Open)
> Very high CPU usage on data nodes because
[ https://issues.apache.org/jira/browse/HADOOP-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Igor Bolotin updated HADOOP-1170:
-
Attachment: 1170.patch
Attached patch removes checkDataDir() calls from DataXceiveServer.run
Project: Hadoop
Issue Type: Bug
Components: dfs
Affects Versions: 0.11.2
Reporter: Igor Bolotin
While investigating performance issues in our Hadoop DFS/MapReduce cluster I
saw very high CPU usage by DataNode processes.
The stack trace showed the following on most of the
h also means for every
task executed in the cluster. Once I commented out the check and
restarted the datanodes, the performance went up and CPU usage dropped to a
reasonable level.
Now the question is: am I missing something here, or should this check
really be removed?
Best regards
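To make the reported cost concrete, here is an illustrative Java sketch (not Hadoop code; the class and method names are invented) of why a recursive directory check performed on every block operation burns CPU, and the periodic alternative that amortizes the walk over many requests:

```java
import java.io.File;

// Illustrative only: contrasts a per-request recursive health check
// of a data directory with a time-throttled periodic check.
public class DirCheckSketch {
    private final File dataDir;
    private long lastCheck = 0;
    private static final long CHECK_INTERVAL_MS = 60_000;

    public DirCheckSketch(File dataDir) { this.dataDir = dataDir; }

    // Per-request check: walks the whole directory tree for every
    // block served -- O(files) work on the hot path.
    public boolean checkEveryRequest() {
        return walk(dataDir);
    }

    // Periodic check: skips the walk unless the interval has elapsed,
    // so the cost is paid once a minute instead of once per request.
    public boolean checkPeriodically() {
        long now = System.currentTimeMillis();
        if (now - lastCheck < CHECK_INTERVAL_MS) return true;
        lastCheck = now;
        return walk(dataDir);
    }

    private boolean walk(File f) {
        if (!f.canRead()) return false;
        File[] children = f.listFiles();
        if (children != null) {
            for (File c : children) {
                if (!walk(c)) return false;
            }
        }
        return true;
    }
}
```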
[ http://issues.apache.org/jira/browse/HADOOP-108?page=comments#action_12375436 ]
Igor Bolotin commented on HADOOP-108:
-
This problem didn't happen to us anymore after upgrading Hadoop.
Also, based on the description, this one looks like a duplicate
[ http://issues.apache.org/jira/browse/HADOOP-129?page=comments#action_12374672 ]
Igor Bolotin commented on HADOOP-129:
-
Does it make sense to create a class that would extend File and override
unsupported operations to throw
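The idea in that comment can be sketched as follows. This is a hypothetical class, not anything in Hadoop, and only one overridden method is shown; the comment's truncated sentence presumably ends with UnsupportedOperationException:

```java
import java.io.File;

// Hypothetical sketch: a File subclass whose unsupported operations
// fail loudly instead of silently doing the wrong thing.
public class RestrictedFile extends File {
    public RestrictedFile(String pathname) { super(pathname); }

    // One example override; a real class would do the same for each
    // operation the underlying filesystem does not support.
    @Override
    public boolean delete() {
        throw new UnsupportedOperationException("delete() is not supported");
    }
}
```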
[ http://issues.apache.org/jira/browse/HADOOP-139?page=all ]
Igor Bolotin updated HADOOP-139:
Attachment: deadlock.patch
Attached is the proposed patch. I removed the entire lock/release method
synchronization and replaced it with a critical section
Deadlock in LocalFileSystem lock/release
-
Key: HADOOP-139
URL: http://issues.apache.org/jira/browse/HADOOP-139
Project: Hadoop
Type: Bug
Components: fs
Reporter: Igor Bolotin
LocalFileSystem lock/release methods
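Illustratively (this is not the actual HADOOP-139 patch; the class and its fields are invented), replacing method-level synchronization with a narrow critical section looks like this:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: hold the monitor only around the shared-state
// update, never across blocking work, so two callers waiting on each
// other cannot deadlock the way fully synchronized methods can.
public class LockTable {
    private final Set<String> locked = new HashSet<>();

    // Before (deadlock-prone): `public synchronized boolean tryLock(...)`
    // holds the object monitor for the whole method body.
    // After: only the set mutation sits inside the critical section.
    public boolean tryLock(String path) {
        synchronized (locked) {           // short critical section
            return locked.add(path);
        }
    }

    public void release(String path) {
        synchronized (locked) {
            locked.remove(path);
        }
    }
}
```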
[ http://issues.apache.org/jira/browse/HADOOP-108?page=comments#action_12372167 ]
Igor Bolotin commented on HADOOP-108:
-
Found the difference: it looks like it happens only when using Hadoop with the
patch from HADOOP-107
> EOFException in DataN
EOFException in DataNode$DataXceiver.run
Key: HADOOP-108
URL: http://issues.apache.org/jira/browse/HADOOP-108
Project: Hadoop
Type: Bug
Components: dfs
Reporter: Igor Bolotin
This morning - after upgrade of the
[ http://issues.apache.org/jira/browse/HADOOP-107?page=comments#action_12372041 ]
Igor Bolotin commented on HADOOP-107:
-
Just tested the patch and now it works as expected.
Thanks!
> Namenode errors "Failed to complete filename.crc
[ http://issues.apache.org/jira/browse/HADOOP-107?page=comments#action_12372028 ]
Igor Bolotin commented on HADOOP-107:
-
This is correct: I tried to write log files directly to DFS and, depending on
activity, it could take a pretty long time between calls
Project: Hadoop
Type: Bug
Components: dfs
Environment: Linux
Reporter: Igor Bolotin
We're getting lot of these errors and here is what I see in namenode log:
060327 002016 Removing lease [Lease. Holder: DFSClient_1897466025, heldlocks:
0, pendingcreates: 0], leases remaining: 1
0603