Hi,
I'm trying to decommission some nodes. The process I tried to follow is:
1) add them to conf/excluding (hadoop-site points there)
2) invoke hadoop dfsadmin -refreshNodes
This returns immediately, so I thought it was done; I then killed off
the cluster and rebooted without those nodes, but
I'm starting to think I'm doing things wrong.
I have dfs.hosts.exclude set to an absolute path to a file listing what I want
decommissioned, and dfs.hosts set to a file listing those I want to remain
commissioned (this points to the slaves file).
Nothing seems to do anything...
What am I missing?
-- David
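For reference, a minimal hadoop-site.xml sketch of the two properties involved in decommissioning (the file paths below are placeholders, not David's actual paths):

  <property>
    <name>dfs.hosts</name>
    <value>/path/to/conf/slaves</value>    <!-- hosts allowed to connect -->
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/path/to/conf/exclude</value>   <!-- hosts to decommission -->
  </property>

After editing the exclude file, bin/hadoop dfsadmin -refreshNodes only starts the decommission; the namenode still has to re-replicate the excluded nodes' blocks before they are fully decommissioned.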
Dear,
I want to set up a 4-node Hadoop cluster, but it failed. Can anyone help me
understand why, and how can I start it? Thanks.
Environment:
CentOS 5.2
hadoop-site.xml settings:
fs.default.name = hdfs://master.cloud:9000
mapred.job.tracker = hdfs://master.cloud:9001
hadoop.tmp.dir
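As a reference, those settings would normally be written into conf/hadoop-site.xml roughly as below (the hadoop.tmp.dir value is only a placeholder, since it was cut off above; note that mapred.job.tracker is usually given as host:port, without an hdfs:// scheme):

  <property>
    <name>fs.default.name</name>
    <value>hdfs://master.cloud:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>master.cloud:9001</value>    <!-- usually host:port, no hdfs:// -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/path/to/hadoop-tmp</value>  <!-- placeholder; the original value was cut off -->
  </property>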
Check whether you are able to create a file in the /tmp directory as the same
user. I ask because there is an error - /tmp/hadoop-user-namenode.pid:
Permission denied
Thanks and Regards
Nishi Gupta
Tata Consultancy Services
Mailto: [EMAIL PROTECTED]
Website: http://www.tcs.com
Leeau wrote:
Dear,
I want to set up a 4-node Hadoop cluster, but it failed. Can anyone help me
understand why, and how can I start it? Thanks.
you need to learn to read stack traces
2008-12-04 17:59:11,674 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2008-12-04 17:59:11,730 ERROR
On Dec 4, 2008, at 6:58 AM, Steve Loughran wrote:
Leeau wrote:
Dear,
I want to set up a 4-node Hadoop cluster, but it failed. Can anyone help me
understand why, and how can I start it? Thanks.
you need to learn to read stack traces
Ah, the common claim of the Java programmer to the Unix admin.
Brian Bockelman wrote:
On Dec 4, 2008, at 6:58 AM, Steve Loughran wrote:
Leeau wrote:
Dear,
I want to set up a 4-node Hadoop cluster, but it failed. Can anyone help me
understand why, and how can I start it? Thanks.
you need to learn to read stack traces
Ah, the common claim of the Java programmer to the Unix admin.
Hey David,
Look at the web interface. Here's mine:
http://dcache-head.unl.edu:8088/dfshealth.jsp
The admin state column says "in service" for normal nodes, and
"decommissioning in progress" for the rest. When the decommissioning
is done, the nodes will migrate to the list of dead nodes and
Hi,
I have a 5-node cluster for Hadoop usage. All nodes are multi-core.
I am running a shell command in the Map function of my program, and this shell
command takes one file as input. Many such files are copied into the
HDFS.
So, in summary, each map function will run a command like ./run file1
I'm seeing some strange behavior with bzip2 files and release
0.19.0. I'm wondering if anyone can shed some light on what I'm seeing.
Basically it _looks_ like the processing of a particular bzip2 input
file is stopping after the first bzip2 block. Below is a comparison of
tests between
Just for reference, these links:
http://wiki.apache.org/hadoop/FAQ#17
http://hadoop.apache.org/core/docs/r0.19.0/hdfs_user_guide.html#DFSAdmin+Command
Decommissioning does not happen all at once.
-refreshNodes just starts the process; it does not complete it.
There could be a lot of blocks on
Well, Map/Reduce and Hadoop by definition run maps in parallel. I think
you're interested in the following two configuration settings:
mapred.tasktracker.map.tasks.maximum
mapred.tasktracker.reduce.tasks.maximum
These go in hadoop-site.xml and will set the maximum number of concurrent map and reduce tasks
for each node.
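A sketch of how those two settings would look in hadoop-site.xml (the values are only examples; choose them according to the cores and memory of each node):

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>   <!-- example: concurrent map tasks per tasktracker -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>   <!-- example: concurrent reduce tasks per tasktracker -->
  </property>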
Currently in Hadoop you cannot split bzip2 files:
http://issues.apache.org/jira/browse/HADOOP-4012
However, gzip files can be split:
http://issues.apache.org/jira/browse/HADOOP-437
Hope this helps.
Alex
On Thu, Dec 4, 2008 at 9:11 AM, Andy Sautins [EMAIL PROTECTED] wrote:
I'm seeing
I'm getting this error message when I am doing
*bash-3.2$ bin/hadoop dfs -put urls urls*
Please let me know the resolution; I have a project submission in a few hours.
You didn't say what the error was.
But you can try this; it should do the same thing:
bin/hadoop dfs -cat urls/part-0* urls
elangovan anbalahan wrote:
I'm getting this error message when I am doing
*bash-3.2$ bin/hadoop dfs -put urls urls*
Please let me know the resolution; I have a project
Thanks for the link.
I followed that guide, and now I have rather strange behavior. If I
have dfs.hosts set (I didn't when I wrote my last email) to an empty
file when I start the cluster, nothing happens when I run refreshNodes; I
take it that's expected. If it's set to the hosts I want to keep,
Check your conf in the classpath.
Check if the Namenode is running.
You are not able to connect to the intended Namenode.
-Sagar
elangovan anbalahan wrote:
I'm getting this error message when I am doing
*bash-3.2$ bin/hadoop dfs -put urls urls*
Please let me know the resolution; I have a project
I tried that, but nothing happened.
bash-3.2$ bin/hadoop dfs -put urll urll
put: java.io.IOException: failed to create file /user/nutch/urll/.urls.crc
on client 192.168.1.6 because target-length is 0, below MIN_REPLICATION (1)
bash-3.2$ bin/hadoop dfs -cat urls/part-0* urls
bash-3.2$ bin/hadoop
The namenode is running,
but I did not understand: what should I check in the classpath?
On Thu, Dec 4, 2008 at 1:34 PM, Sagar Naik [EMAIL PROTECTED] wrote:
Check your conf in the classpath.
Check if the Namenode is running.
You are not able to connect to the intended Namenode.
-Sagar
elangovan anbalahan
Hadoop version?
Command: bin/hadoop version
-Sagar
elangovan anbalahan wrote:
I tried that, but nothing happened.
bash-3.2$ bin/hadoop dfs -put urll urll
put: java.io.IOException: failed to create file /user/nutch/urll/.urls.crc
on client 192.168.1.6 because target-length is 0, below
Andy,
As you said, you suspect that only one bzip2 block is being decompressed
and used; is your bzip2 file the concatenation of multiple bzip2 files (i.e.
are you doing something like cat a.bz2 b.bz2 c.bz2 > yourFile.bz2)? In such
a case, there will be many bzip2 end-of-stream markers in a
Hadoop 0.12.2
On Thu, Dec 4, 2008 at 1:54 PM, Sagar Naik [EMAIL PROTECTED] wrote:
Hadoop version?
Command: bin/hadoop version
-Sagar
elangovan anbalahan wrote:
I tried that, but nothing happened.
bash-3.2$ bin/hadoop dfs -put urll urll
put: java.io.IOException: failed to create
Thanks for the response, Abdul.
So, the bzip2 file in question is _kind of_ a concatenation of
multiple bzip2 files. It's not concatenated using cat a.bz2 b.bz2 > yourFile.bz2,
but it is created using pbzip2 (pbzip2 v1.0.2 running on
CentOS 5.2, installed from the EPEL repository). My
Please tell me why I am getting this error;
it is becoming hard for me to find a solution.
*put: java.io.IOException: failed to create file
/user/nutch/urls/urls/.urllist.txt.crc on client 127.0.0.1 because
target-length is 0, below MIN_REPLICATION (1)*
I am getting this when I do
bin/hadoop dfs
Andy,
As was mentioned earlier, splitting support is being added for bzip2 files,
and the patch is actually under review now. I think pbzip2-generated files
should work fine with that, because the split algorithm finds the next
start-of-block marker and does not use the end-of-stream marker. We
The Apache ZooKeeper team is proud to announce Apache ZooKeeper version
3.0.1.
ZooKeeper is a high-performance coordination service for distributed
applications. It exposes common services - such as naming, configuration
management, synchronization, and group services - in a simple interface
Thanks, Abdul. Very exciting that Hadoop will soon be able to handle
not only pbzip2 files but also to split bzip2 files.
I will apply the patch and report back.
Thank you
Andy
-Original Message-
From: Abdul Qadeer [mailto:[EMAIL PROTECTED]
Sent: Thursday,
Hi Aayush,
Do you want one map to run one command? You can give an input file
consisting of lines of the form "file outputfile". Use NLineInputFormat, which
splits N lines of input as one split, i.e. gives N lines to one map for
processing. By default, N is one. Then your map can just run the shell
command
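A sketch of the job configuration for this approach (the property names are what the 0.19-era org.apache.hadoop.mapred.lib.NLineInputFormat uses, as far as I recall; treat them as assumptions and verify against your release):

  <property>
    <name>mapred.input.format.class</name>
    <value>org.apache.hadoop.mapred.lib.NLineInputFormat</value>
  </property>
  <property>
    <name>mapred.line.input.format.linespermap</name>
    <value>1</value>   <!-- N: lines of the input file handed to each map -->
  </property>

The same thing can be done in code with JobConf.setInputFormat(NLineInputFormat.class).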