- Original Message -
From: Arko Provo Mukherjee arkoprovomukher...@gmail.com
Date: Tuesday, November 8, 2011 1:26 pm
Subject: Issues with Distributed Caching
To: mapreduce-user@hadoop.apache.org
Hello,
I am having the following problem with Distributed Caching.
*In the driver
- Original Message -
From: donal0412 donal0...@gmail.com
Date: Tuesday, November 8, 2011 1:04 pm
Subject: dfs.write.packet.size set to 2G
To: hdfs-user@hadoop.apache.org
Hi,
I want to store lots of files in HDFS, the file size is = 2G.
I don't want the file to split into blocks, because
You can look at BlockPoolSliceScanner#scan method. This is in trunk code.
You can find this logic in DataBlockScanner#run in earlier versions.
Regards,
Uma
- Original Message -
From: kartheek muthyala kartheek0...@gmail.com
Date: Monday, November 7, 2011 7:31 pm
Subject: Any daemon?
forwarding to mapreduce
---BeginMessage---
Am I being completely silly asking about this? Does anyone know?
On Wed, Nov 2, 2011 at 6:27 PM, Meng Mao meng...@gmail.com wrote:
Is there any mechanism in place to remove failed task attempt directories
from the TaskTracker's jobcache?
It
in this issue
and
tried different ways but no result.^^
BS.
Masoud
On 11/03/2011 08:34 PM, Uma Maheswara Rao G 72686 wrote:
it won't display anything on the console.
If you get any error while executing the command, then only it
will display on the console. In your case it might have executed
This problem may come if you don't configure the host mappings properly.
Can you check whether your tasktrackers are pingable from each other with the
configured hostnames?
Regards,
Uma
- Original Message -
From: Russell Brown misterr...@gmail.com
Date: Friday, November 4, 2011 9:00 pm
- Original Message -
From: Russell Brown misterr...@gmail.com
Date: Friday, November 4, 2011 9:11 pm
Subject: Re: Never ending reduce jobs, error Error reading task
outputConnection refused
To: mapreduce-user@hadoop.apache.org
On 4 Nov 2011, at 15:35, Uma Maheswara Rao G 72686 wrote
- Original Message -
From: Russell Brown misterr...@gmail.com
Date: Friday, November 4, 2011 9:18 pm
Subject: Re: Never ending reduce jobs, error Error reading task
outputConnection refused
To: mapreduce-user@hadoop.apache.org
On 4 Nov 2011, at 15:44, Uma Maheswara Rao G 72686 wrote
Looks like the folder was deleted before the file was completed.
In HDFS, files can be deleted at any time. The application needs to take
care of file completeness depending on its usage.
Do you have any DFSClient-side logs in MapReduce showing when exactly the delete
command was issued?
- Original
- Original Message -
From: kartheek muthyala kartheek0...@gmail.com
Date: Thursday, November 3, 2011 11:23 am
Subject: Packets-Block
To: common-user@hadoop.apache.org
Hi all,
I need some info related to the code section which handles the
following operations.
Basically DataXceiver.c
.
On Thu, Nov 3, 2011 at 12:55 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
- Original Message -
From: kartheek muthyala kartheek0...@gmail.com
Date: Thursday, November 3, 2011 11:23 am
Subject: Packets-Block
To: common-user@hadoop.apache.org
Hi all,
I
this folder permission via cygwin, NO RESULT.
I'm really confused. ...
any idea please ...?
Thanks,
B.S
On 11/01/2011 05:38 PM, Uma Maheswara Rao G 72686 wrote:
Looks like that is a permissions-related issue on local dirs.
There is an issue filed in mapred related to this problem:
https
Can you please give some trace?
- Original Message -
From: Masoud mas...@agape.hanyang.ac.kr
Date: Tuesday, November 1, 2011 11:08 am
Subject: under cygwin JUST tasktracker run by cyg_server user, Permission
denied .
To: common-user@hadoop.apache.org
Hi
I have problem in running
.
On 11/01/2011 04:33 PM, Uma Maheswara Rao G 72686 wrote:
Can you please give some trace?
- Original Message -
From: Masoud mas...@agape.hanyang.ac.kr
Date: Tuesday, November 1, 2011 11:08 am
Subject: under cygwin JUST tasktracker run by cyg_server user,
Permission denied
If you want to trace one particular block associated with a file, you can first
check the file name and find the NameSystem.allocateBlock: message in your NN logs.
There you can find the allocated block ID. After this, you just grep for this
block ID in your huge logs. Take the time stamps for each
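For example (a rough sketch; the file name, block ID and log file names are hypothetical, just to show the flow):

grep "NameSystem.allocateBlock: /user/foo/myfile" hadoop-namenode-*.log
# suppose the matching line shows blk_1234567890123456789
grep "blk_1234567890123456789" hadoop-namenode-*.log hadoop-datanode-*.log

Comparing the time stamps of those matches shows when the block was allocated and when each node handled it.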
- Original Message -
From: Jay Vyas jayunit...@gmail.com
Date: Saturday, October 29, 2011 8:27 pm
Subject: can't format namenode
To: common-user@hadoop.apache.org
Hi guys: In order to fix some issues I'm having (recently posted),
I've decided to try to make sure my name node is
- Original Message -
From: Josu Lazkano josu.lazk...@barcelonamedia.org
Date: Thursday, October 27, 2011 9:38 pm
Subject: Permission denied for normal users
To: hdfs-user@hadoop.apache.org
Hello list, I am new to Hadoop, I configured a 3-slave and 1-master
Hadoop cluster.
The problem
Hi,
First of all, welcome to Hadoop.
- Original Message -
From: panamamike panamam...@hotmail.com
Date: Sunday, October 23, 2011 8:29 pm
Subject: Need help understanding Hadoop Architecture
To: core-u...@hadoop.apache.org
I'm new to Hadoop. I've read a few articles and presentations
- Original Message -
From: Ossi los...@gmail.com
Date: Friday, October 21, 2011 2:57 pm
Subject: lost data with 1 failed datanode and replication factor 3 in 6 node
cluster
To: common-user@hadoop.apache.org
hi,
We managed to lose data when 1 datanode broke down in a cluster of 6
- Original Message -
From: Mark question markq2...@gmail.com
Date: Saturday, October 22, 2011 5:57 am
Subject: Remote Blocked Transfer count
To: common-user common-user@hadoop.apache.org
Hello,
I wonder if there is a way to measure how many of the data blocks
have transferred over
use case can you guys
point to?
I am not sure what your exact question is here. Can you please clarify more on
this?
~Kartheek
On Mon, Oct 17, 2011 at 12:53 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
AFAIK, the append option is there in the 20Append branch. It mainly
supports
- Original Message -
From: bourne1900 bourne1...@yahoo.cn
Date: Tuesday, October 18, 2011 3:21 pm
Subject: could not complete file...
To: common-user common-user@hadoop.apache.org
Hi,
There are 20 threads which put files into HDFS ceaselessly; every
file is 2k.
When 1 million files
get your question clearly.
~Kartheek
On Tue, Oct 18, 2011 at 12:14 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
- Original Message -
From: kartheek muthyala kartheek0...@gmail.com
Date: Tuesday, October 18, 2011 11:54 am
Subject: Re: Does hadoop support append
- Original Message -
From: Oleg Ruchovets oruchov...@gmail.com
Date: Tuesday, October 18, 2011 4:11 pm
Subject: execute hadoop job from remote web application
To: common-user@hadoop.apache.org
Hi, what is the way to execute a Hadoop job on a remote cluster? I
want to
execute my Hadoop
access for this site.
Also please go through these docs:
http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html#Example%3A+WordCount+v2.0
Here is the wordcount example.
On Tue, Oct 18, 2011 at 1:13 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
- Original
AFAIK, the append option is there in the 20Append branch. It mainly supports sync, but
there are some issues with that.
The same has been merged to the 20.205 branch and will be released soon (rc2
available). Many bugs have also been fixed in this branch. As per our basic testing
it is pretty good as of now. Need to
So is there a client program to call this?
Can one write their own simple client to call this method from all
disks on the cluster?
How about a map reduce job to collect from all disks on the cluster?
On 10/15/11 4:51 AM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote
?
P.s. I'd moved this conversation to hdfs-user@ earlier on, but now I
see it being cross-posted into mr-user, common-user, and common-dev --
Why?
On Mon, Oct 17, 2011 at 9:25 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
We can write a simple program and you can call this API
- Original Message -
From: Mayuran Yogarajah mayuran.yogara...@casalemedia.com
Date: Tuesday, October 18, 2011 4:24 am
Subject: Hadoop node disk failure - reinstall question
To: common-user@hadoop.apache.org common-user@hadoop.apache.org
One of our nodes died today, it looks like the
Which version of Hadoop are you using?
Please check the recent discussion, which will help you with this problem.
http://search-hadoop.com/m/PPgvNPUoL2subj=Re+Starting+Datanode
Regards,
Uma
- Original Message -
From: Majid Azimi majid.merk...@gmail.com
Date: Sunday, October 16,
Are you able to ping the other node with the configured hostnames?
Make sure that you are able to ping the other machine with the
configured hostname in the etc/hosts files.
Regards,
Uma
- Original Message -
From: praveenesh kumar praveen...@gmail.com
Date: Sunday, October 16, 2011
19:52, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
Are you able to ping the other node with the configured hostnames?
Make sure that you are able to ping the other machine
with the
configured hostname in the etc/hosts files.
Regards,
Uma
- Original Message
/** Return the disk usage of the filesystem, including total capacity,
* used space, and remaining space */
public DiskStatus getDiskStatus() throws IOException {
return dfs.getDiskStatus();
}
DistributedFileSystem has the above API on the Java API side.
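As a rough sketch of such a simple client (assuming the 0.20.x API, where DistributedFileSystem.DiskStatus exposes getCapacity()/getDfsUsed()/getRemaining(); the class name and output format are only illustrative):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class DfsDiskStatus {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();   // picks up core-site.xml/hdfs-site.xml from the classpath
    FileSystem fs = FileSystem.get(conf);       // assumes the default FS is HDFS
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    DistributedFileSystem.DiskStatus ds = dfs.getDiskStatus();
    System.out.println("capacity  = " + ds.getCapacity());
    System.out.println("dfs used  = " + ds.getDfsUsed());
    System.out.println("remaining = " + ds.getRemaining());
    fs.close();
  }
}

A MapReduce job is not needed for the totals; the NameNode already aggregates the per-DataNode usage that this call reports.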
Regards,
Uma
- Original
I think the link below can give you more info about it.
http://developer.yahoo.com/blogs/hadoop/posts/2009/08/the_anatomy_of_hadoop_io_pipel/
Nice explanation by Owen here.
Regards,
Uma
- Original Message -
From: Yang Xiaoliang yangxiaoliang2...@gmail.com
Date: Wednesday, October 5, 2011 4:27 pm
Yes, the FileStatus class would be the equivalent for list.
FileStatus has the APIs isDir and getPath. Both APIs should satisfy
your further usage. :-)
I think one small difference is that FileStatus will ensure sorted order.
Regards,
Uma
- Original Message -
From: John Conwell
Hi,
It looks to me that the problem is with your NFS; it is not supporting locks. Which
version of NFS are you using?
Please check your NFS locking support by writing a simple program for file
locking.
I think NFSv4 supports locking (I have not tried it).
http://nfs.sourceforge.net/
A6. What are the
DistCp will run as a MapReduce job.
Here the tasktrackers require the hostname mappings to contact the other nodes.
Please configure the mappings correctly on both machines and try again.
Regards,
Uma
- Original Message -
From: trang van anh anh...@vtc.vn
Date: Wednesday, October 5, 2011 1:41 pm
Hello Kiran,
Can you check whether that block is present in the DN, and check the generation timestamp in
the meta file (if you are aware of it)?
Can you grep blk_-8354424441116992221 from your logs and paste it here?
We have seen this when recovery is in progress and a read happens in parallel (in 0.20x
versions). If
FileSystem objects will be cached in the JVM.
When it tries to get the FS object using FileSystem.get(..) (SequenceFile
internally will use it), it will return the same FS object if the scheme and authority
are the same for the URI.
The FS cache key's equals implementation is below
static boolean
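A small sketch of how that caching looks from the client side (the NameNode host/port is a placeholder):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsCacheDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs1 = FileSystem.get(URI.create("hdfs://nn-host:9000/"), conf);
    FileSystem fs2 = FileSystem.get(URI.create("hdfs://nn-host:9000/some/dir"), conf);
    // Same scheme + authority (and same user), so the cached instance is returned.
    System.out.println(fs1 == fs2);   // prints true
    // Consequence: closing fs1 also closes the instance that SequenceFile
    // obtained internally through FileSystem.get(..).
  }
}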
hi,
Here is some useful info:
A small file is one which is significantly smaller than the HDFS block size
(default 64MB). If you’re storing small files, then you probably have lots of
them (otherwise you wouldn’t turn to Hadoop), and the problem is that HDFS
can’t handle lots of files.
Every
Java 6 and Cygwin (maven + TortoiseSVN are for building Hadoop) should be enough
for running standalone mode on Windows.
Regards,
Uma
- Original Message -
From: Mark Kerzner markkerz...@gmail.com
Date: Saturday, September 24, 2011 4:58 am
Subject: How to run Hadoop in standalone mode in
, 2011 at 10:03 AM, He Chen airb...@gmail.com wrote:
Hi
It is interesting that a guy from Huawei is also working on
Hadoop project.
:)
Chen
On Sun, Sep 25, 2011 at 11:29 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
Hi,
You can find the Code
Hello Abdelrahman,
Are you able to ping from one machine to the other with the configured hostname?
Configure both the hostnames in the /etc/hosts file properly and try.
Regards,
Uma
- Original Message -
From: Abdelrahman Kamel abdouka...@gmail.com
Date: Monday, September 26, 2011 8:47 pm
Hello Joris,
Looks like you have configured mapred.map.child.java.opts to -Xmx512M;
to spawn a child process, that much memory is required.
Can you check what other processes are occupying memory on your machine? Because your
current task is not getting enough memory to initialize. Or try to reduce
Hi,
You can find the code in DFSOutputStream.java.
There is one thread there, the DataStreamer thread. This thread will pick
packets from the DataQueue and write them to the sockets.
Before this, when actually writing the chunks, based on the block size
parameter passed from the client, it will
,
Arun
On Wed, Sep 21, 2011 at 2:03 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
Hello Arun,
If you want to apply MAPREDUCE-1253 on the 21 version,
applying the patch directly using commands may not work because of
codebase changes.
So, you take the patch and apply
from storing metadata info, is there anything more the NN/JT
machines are doing??
So can I say I can survive with a poor NN if I am not dealing with
lots of
files in HDFS?
On Thu, Sep 22, 2011 at 11:08 AM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
By just changing the configs
Hello Arun,
On which code base are you trying to apply the patch?
The code should match for the patch to apply.
Regards,
Uma
- Original Message -
From: ArunKumar arunk...@gmail.com
Date: Wednesday, September 21, 2011 11:33 am
Subject: Making Mumak work with capacity scheduler
To:
is not able to start after crashing without
enough HD space.
Wei
-Original Message-
From: Uma Maheswara Rao G 72686 [mailto:mahesw...@huawei.com]
Sent: Tuesday, September 20, 2011 9:30 PM
To: common-user@hadoop.apache.org
Subject: Re: RE: java.io.IOException: Incorrect data format
Looks like those patches are based on the 0.22 version, so you cannot apply them
directly.
You may need to merge them logically (back-port them).
One more point to note here: the 0.21 version of Hadoop is not a stable version.
Presently the 0.20xx versions are stable.
Regards,
Uma
- Original Message -
Hello Arun,
If you want to apply MAPREDUCE-1253 on the 21 version,
applying the patch directly using commands may not work because of codebase
changes.
So, take the patch and apply the lines in your code base manually. I am
not sure of any other way to do this.
Did I misunderstand your
Hi,
You need not copy the files to the NameNode.
Hadoop provides client code as well to copy the files.
To copy the files from another node (non-DFS), you need to put the
hadoop**.jar's into the classpath and use the below code snippet.
FileSystem fs = new DistributedFileSystem();
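A slightly more complete sketch of such a client (the NameNode URI and paths are placeholders; FileSystem.get(conf) is used here, which also works):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCopyClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://namenode-host:9000");   // placeholder NN address
    FileSystem fs = FileSystem.get(conf);
    // copy a file from the local (non-DFS) node into HDFS
    fs.copyFromLocalFile(new Path("/local/data/file.txt"),
                         new Path("/user/hadoop/file.txt"));
    fs.close();
  }
}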
For more understanding of the flows, I would recommend you go through the
docs below once:
http://hadoop.apache.org/common/docs/r0.16.4/hdfs_design.html#The+File+System+Namespace
Regards,
Uma
- Original Message -
From: Uma Maheswara Rao G 72686 mahesw...@huawei.com
Date: Wednesday, September
,
Praveenesh
On Wed, Sep 21, 2011 at 2:37 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
For more understanding of the flows, I would recommend you go
through the
docs below once:
http://hadoop.apache.org/common/docs/r0.16.4/hdfs_design.html#The+File+System+Namespace
Regards,
Uma
?
or is there any other way to do it from java code ?
Thanks,
Praveenesh
-- Forwarded message --
From: Uma Maheswara Rao G 72686 mahesw...@huawei.com
Date: Wed, Sep 21, 2011 at 3:27 PM
Subject: Re: Any other way to copy to HDFS ?
To: common-user@hadoop.apache.org
When
Hi,
Did any cluster restart happen? Is your NameNode detecting DataNodes as live?
Looks like the DNs did not report any blocks to the NN yet. You have 13 blocks persisted in
the NameNode namespace. At least 12 blocks should be reported from your DNs; otherwise
it will not come out of safemode automatically.
pm
Subject: Re: Problem with MR job
To: common-user@hadoop.apache.org
Cc: Uma Maheswara Rao G 72686 mahesw...@huawei.com
Hi,
Some more logs, specifically from the JobTracker:
2011-09-21 10:22:43,482 INFO
org.apache.hadoop.mapred.JobInProgress:
Initializing job_201109211018_0001
2011
issue HDFS-1623 to
build. (In progress.) This may take a couple of months to integrate.
-Jignesh
On Sep 17, 2011, at 12:08 AM, Uma Maheswara Rao G 72686 wrote:
Hi Kobina,
Some experiences which may be helpful for you with respect to DFS.
1. Selecting the correct version.
I
You copy the same installation to the new machine and change the IP address.
After that, configure the new NN address in your clients and DNs.
Also, does the Namenode/JobTracker machine's configuration need to be better
than the datanodes'/tasktrackers'?
I did not get this question.
Regards,
Uma
-
, 2011, at 12:08 AM, Uma Maheswara Rao G 72686 wrote:
Hi Kobina,
Some experiences which may be helpful for you with respect
to DFS.
1. Selecting the correct version.
I would recommend using the 0.20X version. This is a pretty
stable version
and all other organizations
, 2011 at 10:20 AM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
You copy the same installation to the new machine and change the IP
address. After that configure the new NN address in your
clients and DNs.
Also, does the Namenode/JobTracker machine's configuration need to
be better
Hadoop has its own RPC mechanism, mainly Writables, to overcome some of the
disadvantages of normal serialization.
For more info:
http://www.lexemetech.com/2008/07/rpc-and-serialization-with-hadoop.html
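To illustrate, a minimal hand-rolled Writable (the record type and its fields are made up; only the Writable interface with write()/readFields() comes from Hadoop):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Compact binary form with no per-object class metadata, which is one of the
// advantages over plain Java serialization.
public class PointWritable implements Writable {
  private int x;
  private int y;

  public void write(DataOutput out) throws IOException {
    out.writeInt(x);
    out.writeInt(y);
  }

  public void readFields(DataInput in) throws IOException {
    x = in.readInt();
    y = in.readInt();
  }
}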
Regards,
Uma
- Original Message -
From: jie_zhou jie_z...@xa.allyes.com
Date:
)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1326)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1448)
Wei
-Original Message-
From: Uma Maheswara Rao G 72686 [mailto:mahesw...@huawei.com]
Sent: Tuesday, September 20, 2011 9:10 PM
Hello,
You need to configure the heap size for child tasks using the below property:
mapred.child.java.opts in mapred-site.xml.
By default it will be 200mb, but your io.sort.mb (300) is more than that.
So, configure more heap space for the child tasks.
ex:
-Xmx512m
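For example, in mapred-site.xml (the value is only a suggestion; size it to your tasks):

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>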
Regards,
Uma
- Original Message -
should I set?
Thanks
On Mon, Sep 19, 2011 at 6:28 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
Hello,
You need to configure the heap size for child tasks using the below property:
mapred.child.java.opts in mapred-site.xml.
By default it will be 200mb, but your io.sort.mb (300
One more point to check.
Did you copy the .sh files from a Windows box? If yes, please do a dos2unix conversion
if your target OS is Linux.
The other point is,
it is clear that the format has aborted. You need to give the Y option instead of y
(Harsh mentioned it).
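For example, for the standard scripts under bin/:

dos2unix hadoop-daemon.sh start-dfs.sh start-mapred.sh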
Thanks
Uma
- Original Message -
Did you give permissions recursively?
$ sudo chown -R hduser:hadoop hadoop
Regards,
Uma
- Original Message -
From: ArunKumar arunk...@gmail.com
Date: Sunday, September 18, 2011 12:00 pm
Subject: Submitting Jobs from different user to a queue in capacity scheduler
To:
Hello Arun,
Now we have reached Hadoop permissions ;)
If you really do not need to worry about permissions, then you can disable them and
proceed (dfs.permissions = false).
Else you can set the required permissions for the user as well.
permissions guide.
Hi Arun,
Setting the mapreduce.jobtracker.staging.root.dir property value to /user might fix
this issue...
Another way could be to just execute the below command:
hadoop fs -chmod 777 /
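That is, something like this in mapred-site.xml (sketch):

<property>
  <name>mapreduce.jobtracker.staging.root.dir</name>
  <value>/user</value>
</property>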
Regards,
Uma
- Original Message -
From: ArunKumar arunk...@gmail.com
Date: Sunday, September 18, 2011 8:38 pm
-user@hadoop.apache.org
Cc: hadoop-u...@lucene.apache.org
On Sun, Sep 18, 2011 at 9:35 AM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
Another way could be to just execute the below command:
hadoop fs -chmod 777 /
I wouldn't do this - it's overkill, and there's no way to go back
To: common-user@hadoop.apache.org
Cc: Uma Maheswara Rao G 72686 mahesw...@huawei.com
Hi,
When you say that 0.20.205 will support appends, do you mean for
general-purpose
writes on HDFS, or only HBase?
Thanks,
George
On 9/17/2011 7:08 AM, Uma Maheswara Rao G 72686 wrote:
6
and is known to be buggy. *sync*
support is what HBase needs and what 0.20.205 will support. Before 205
is released, you can also find these features in CDH3 or by building
your own release from SVN.
-Todd
On Sat, Sep 17, 2011 at 4:59 AM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote
Hi,
please find the below links
https://media.blackhat.com/bh-us-10/whitepapers/Becherer/BlackHat-USA-2010-Becherer-Andrew-Hadoop-Security-wp.pdf
http://markmail.org/download.xqy?id=yjdqleg3zv5pr54tnumber=1
Which will help you to understand more.
Regards,
Uma
- Original Message -
From:
Hello,
First of all, where are you planning to use Hadoop?
Regards,
Uma
- Original Message -
From: Kobina Kwarko kobina.kwa...@gmail.com
Date: Saturday, September 17, 2011 0:41 am
Subject: risks of using Hadoop
To: common-user common-user@hadoop.apache.org
Hello,
Please can someone
September 2011 20:34, Uma Maheswara Rao G 72686
mahesw...@huawei.comwrote:
Hello,
First of all, where are you planning to use Hadoop?
Regards,
Uma
- Original Message -
From: Kobina Kwarko kobina.kwa...@gmail.com
Date: Saturday, September 17, 2011 0:41 am
Subject: risks
Hi,
It is very much possible. In fact, that is the main use case for Hadoop :-)
You need to put the hadoop-hdfs*.jar and hadoop-common*.jar's in your classpath
on the node from where you want to run the client program.
At the client node side, use the below sample code:
Configuration conf = new Configuration();
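A rough continuation of that sample (the NameNode URI and path are placeholders), for example to read a file back from the cluster:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://namenode-host:9000");   // placeholder
    FileSystem fs = FileSystem.get(conf);
    FSDataInputStream in = fs.open(new Path("/user/hadoop/input.txt"));
    BufferedReader reader = new BufferedReader(new InputStreamReader(in));
    String line;
    while ((line = reader.readLine()) != null) {
      System.out.println(line);
    }
    reader.close();
    fs.close();
  }
}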
Hi John,
Most likely the problem is with your Java. This problem can come if your java link
refers to java-gcj.
Please check some related links:
http://jeffchannell.com/Flex-3/gc-warning.html
Regards,
Uma
- Original Message -
From: john smith js1987.sm...@gmail.com
Date: Sunday, September 4,
Hi Florin,
./hadoop fs -ls path
The above command will give the timestamp also.
Regards,
Uma Mahesh
- Original Message -
From: Florin P florinp...@yahoo.com
Date: Tuesday, August 9, 2011 12:52 pm
Subject: Listing the content of a HDFS folder order by timestamp using shell
To:
Hi Florin,
Recently I provided a patch for controlling .crc files at the client side.
Please look at https://issues.apache.org/jira/browse/HADOOP-7178.
It provides one extra API in FileSystem.java:
public void copyToLocalFile(boolean delSrc, Path src, Path dst, boolean
useRawLocalFileSystem)
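A usage sketch (paths are placeholders): delSrc=false keeps the HDFS copy, and useRawLocalFileSystem=true writes the local copy without the side .crc file:

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
fs.copyToLocalFile(false, new Path("/user/florin/dirxx/import_2011_07_15"),
                   new Path("/tmp/import_2011_07_15"), true);
fs.close();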
/dirxx/import_2011_07_15
Regards,
Florin
--- On Tue, 8/9/11, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
From: Uma Maheswara Rao G 72686 mahesw...@huawei.com
Subject: Re: Listing the content of a HDFS folder order by
timestamp using shell
To: hdfs-user@hadoop.apache.org
Date
Hi Rahul,
One possibility could be system time updates:
can you check whether the system time changed on your machine?
Since the heartbeats depend on system time, that will affect sending the
heartbeats to the NN.
Which version of Hadoop are you using?
Approximately how many blocks will be there in
Hi,
Before starting, you need to format the namenode:
./hdfs namenode -format
Then these directories will be created.
The respective configuration is 'dfs.namenode.name.dir';
the default configuration exists in hdfs-default.xml.
If you want to configure your own directory path, you can add the property in hdfs-site.xml.
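For example, in hdfs-site.xml (the path is just a placeholder):

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/hadoop/dfs/name</value>
</property>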
and many Thanks :D
From: Uma Maheswara Rao G 72686 mahesw...@huawei.com
To: common-user@hadoop.apache.org; A Df
abbey_dragonfor...@yahoo.com Cc: common-user@hadoop.apache.org
common-user@hadoop.apache.org
Sent: Wednesday, 27 July 2011, 17:31
Subject: Re
Hi A Df,
Did you format the NameNode first?
Can you check in the NN logs whether the NN started or not?
Regards,
Uma
Hi Vighnesh,
Step 1) Download the code base from apache svn repository.
Step 2) In root folder you can find build.xml file. In that folder just execute
a)ant and b)ant eclipse
this will generate the eclipse project setings files.
After this directly you can import this project in you
Hi Mahesh,
When starting the NN, it will throw an exception with your provided configuration.
Please check the code snippet below for where exactly the validation happens,
in NameNode:
public static InetSocketAddress getAddress(URI filesystemURI) {
String authority =
Hi,
We have already had thoughts about it.
Looks like you are talking about these features, right:
https://issues.apache.org/jira/browse/HDFS-1640
https://issues.apache.org/jira/browse/HDFS-2115
But the implementation is not yet ready in trunk.
Regards,
Uma