Re: Hadoop PseudoDistributed configuration

2011-11-08 Thread Uma Maheswara Rao G 72686
Did you configure fs.default.name with the DFS address? You might have configured file:///. If yes, please update the DFS address to hdfs://xx.xx.xx.xx:port and try again. You need to add this in the core-site.xml file. Regards, Uma - Original Message - From: Joey Echeverria Date: Tuesday, November 8, 2011
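A minimal sketch of the fix described above, applied programmatically via the Hadoop Configuration API (the address is a placeholder, not a value from this thread; the usual place for the setting is core-site.xml):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class FsDefaultNameSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Equivalent of the core-site.xml property; file:/// here would
        // resolve to the local filesystem instead of HDFS.
        conf.set("fs.default.name", "hdfs://xx.xx.xx.xx:9000"); // placeholder host:port
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Connected to: " + fs.getUri());
      }
    }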

Re: Any daemon?

2011-11-07 Thread Uma Maheswara Rao G 72686
You can look at the BlockPoolSliceScanner#scan method. This is in the trunk code. You can find this logic in DataBlockScanner#run in earlier versions. Regards, Uma - Original Message - From: kartheek muthyala Date: Monday, November 7, 2011 7:31 pm Subject: Any daemon? To: common-user@hadoop.a

Re: under cygwin JUST tasktracker run by cyg_server user, Permission denied .....

2011-11-04 Thread Uma Maheswara Rao G 72686
ts more than 4 days I'm working on this issue > and > tried different ways but no result.^^ > > BS. > Masoud > > On 11/03/2011 08:34 PM, Uma Maheswara Rao G 72686 wrote: > > it won't display anything on the console. > > If you get any error while executing the co

Re: Question about superuser and permissions

2011-11-03 Thread Uma Maheswara Rao G 72686
Also, I would suggest you look at the TestDFSPermission & TestDFSShell test cases for DFS permissions, which will help you with writing your unit tests. Thanks Uma - Original Message - From: Joey Echeverria Date: Friday, November 4, 2011 4:02 am Subject: Re: Question about superuser and pe

Re: under cygwin JUST tasktracker run by cyg_server user, Permission denied .....

2011-11-03 Thread Uma Maheswara Rao G 72686
lder permission via cygwin, NO RESULT. > I'm really confused. ... > > any idea please ...? > > Thanks, > B.S > > > On 11/01/2011 05:38 PM, Uma Maheswara Rao G 72686 wrote: > > Looks like that is a permissions-related issue on the local dirs > > There is an i

Re: Packets->Block

2011-11-03 Thread Uma Maheswara Rao G 72686
heek. > > On Thu, Nov 3, 2011 at 12:55 PM, Uma Maheswara Rao G 72686 < > mahesw...@huawei.com> wrote: > > > - Original Message - > > From: kartheek muthyala > > Date: Thursday, November 3, 2011 11:23 am > > Subject: Packets->Block > >

Re: Packets->Block

2011-11-03 Thread Uma Maheswara Rao G 72686
- Original Message - From: kartheek muthyala Date: Thursday, November 3, 2011 11:23 am Subject: Packets->Block To: common-user@hadoop.apache.org > Hi all, > I need some info related to the code section which handles the > following operations. > > Basically DataXceiver.c on the client si

Re: under cygwin JUST tasktracker run by cyg_server user, Permission denied .....

2011-11-01 Thread Uma Maheswara Rao G 72686
main(TaskTracker.java:3611) > > 2011-11-01 14:26:54,479 INFO org.apache.hadoop.mapred.TaskTracker: > SHUTDOWN_MSG: > /******** > > > Thanks, > BR. > > On 11/01/2011 04:33 PM, Uma Maheswara Rao G 72686 wrote: > > Can you please give some trace? > > - Original Messag

Re: under cygwin JUST tasktracker run by cyg_server user, Permission denied .....

2011-11-01 Thread Uma Maheswara Rao G 72686
Can you please give some trace? - Original Message - From: Masoud Date: Tuesday, November 1, 2011 11:08 am Subject: under cygwin JUST tasktracker run by cyg_server user, Permission denied . To: common-user@hadoop.apache.org > Hi > I have problem in running hadoop under cygwin 1.7 > o

Re: Server log files, order of importance ?

2011-10-31 Thread Uma Maheswara Rao G 72686
If you want to trace one particular block associated with a file, you can first check the file name and find the NameSystem.allocateBlock: entry in your NN logs. Here you can find the allocated block ID. After this, you just grep for this block ID in your huge logs. Take the time stamps for each op

Re: can't format namenode....

2011-10-29 Thread Uma Maheswara Rao G 72686
- Original Message - From: Jay Vyas Date: Saturday, October 29, 2011 8:27 pm Subject: can't format namenode To: common-user@hadoop.apache.org > Hi guys : In order to fix some issues I'm having (recently posted), > I've decided to try to make sure my name node is formatted But > th

Re: Need help understanding Hadoop Architecture

2011-10-23 Thread Uma Maheswara Rao G 72686
Hi, First of all, welcome to Hadoop. - Original Message - From: panamamike Date: Sunday, October 23, 2011 8:29 pm Subject: Need help understanding Hadoop Architecture To: core-u...@hadoop.apache.org > > I'm new to Hadoop. I've read a few articles and presentations > which are > direct

Re: Remote Blocked Transfer count

2011-10-22 Thread Uma Maheswara Rao G 72686
- Original Message - From: Mark question Date: Saturday, October 22, 2011 5:57 am Subject: Remote Blocked Transfer count To: common-user > Hello, > > I wonder if there is a way to measure how many of the data blocks > have transferred over the network? Or more generally, how many times

Re: lost data with 1 failed datanode and replication factor 3 in 6 node cluster

2011-10-22 Thread Uma Maheswara Rao G 72686
- Original Message - From: Ossi Date: Friday, October 21, 2011 2:57 pm Subject: lost data with 1 failed datanode and replication factor 3 in 6 node cluster To: common-user@hadoop.apache.org > hi, > > We managed to lose data when 1 datanode broke down in a cluster of 6 > datanodes with >

Re: execute hadoop job from remote web application

2011-10-18 Thread Uma Maheswara Rao G 72686
, 2011 at 5:00 PM, Oleg Ruchovets > wrote: > > Excellent. Can you give a small example of code. > > Good sample by Bejoy; hope you have access to this site. Also please go through these docs: http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html#Example%3A+Wo

Re: execute hadoop job from remote web application

2011-10-18 Thread Uma Maheswara Rao G 72686
- Original Message - From: Oleg Ruchovets Date: Tuesday, October 18, 2011 4:11 pm Subject: execute hadoop job from remote web application To: common-user@hadoop.apache.org > Hi, what is the way to execute a hadoop job on a remote cluster? I > want to > execute my hadoop job from a remote web

Re: Does hadoop support append option?

2011-10-18 Thread Uma Maheswara Rao G 72686
our question clearly. > > ~Kartheek > > On Tue, Oct 18, 2011 at 12:14 PM, Uma Maheswara Rao G 72686 < > mahesw...@huawei.com> wrote: > > > - Original Message - > > From: kartheek muthyala > > Date: Tuesday, October 18, 2011 11:54 am > > Su

Re: could not complete file...

2011-10-18 Thread Uma Maheswara Rao G 72686
- Original Message - From: bourne1900 Date: Tuesday, October 18, 2011 3:21 pm Subject: could not complete file... To: common-user > Hi, > > There are 20 threads which put files into HDFS ceaselessly; every > file is 2k. > When 1 million files have finished, the client begins throwing "could not

Re: Does hadoop support append option?

2011-10-17 Thread Uma Maheswara Rao G 72686
ber remain the same?. Any typical use case can you guys > point to? > I am not sure what your exact question is here. Can you please clarify more on this? > ~Kartheek > > On Mon, Oct 17, 2011 at 12:53 PM, Uma Maheswara Rao G 72686 < > mahesw...@huawei.com> wrot

Re: Hadoop node disk failure - reinstall question

2011-10-17 Thread Uma Maheswara Rao G 72686
- Original Message - From: Mayuran Yogarajah Date: Tuesday, October 18, 2011 4:24 am Subject: Hadoop node disk failure - reinstall question To: "common-user@hadoop.apache.org" > One of our nodes died today, it looks like the disk containing the > OS > expired. I will need to reinstall

Re: Is there a good way to see how full hdfs is

2011-10-17 Thread Uma Maheswara Rao G 72686
you > >guarantee perfect homogeneity of path names in your cluster. > > > >But I wonder, why won't using a general monitoring tool (such as > >nagios) for this purpose cut it? What's the end goal here? > > > >P.s. I'd moved this conversation to hdfs-u

Re: Is there a good way to see how full hdfs is

2011-10-17 Thread Uma Maheswara Rao G 72686
> So is there a client program to call this? > > Can one write their own simple client to call this method from all > disks on the cluster? > > How about a map reduce job to collect from all disks on the cluster? > > On 10/15/11 4:51 AM, "Uma Maheswara Rao G 7268

Re: Does hadoop support append option?

2011-10-17 Thread Uma Maheswara Rao G 72686
AFAIK, the append option is there in the 20Append branch; it mainly supports sync, but there are some issues with that. The same has been merged to the 20.205 branch and will be released soon (rc2 is available). Many bugs have also been fixed in this branch. As per our basic testing it is pretty good as of now. Need to wai
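For context, a minimal sketch of the sync support mentioned above (the path is an illustrative assumption; in the 0.20-era API, FSDataOutputStream exposes sync() to flush written data to the DataNodes so readers can see it):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SyncSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/sync-demo.log")); // illustrative path
        out.writeBytes("record 1\n");
        out.sync(); // flush to the DataNodes so other readers can see the data
        out.writeBytes("record 2\n");
        out.close();
      }
    }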

Re: Too much fetch failure

2011-10-16 Thread Uma Maheswara Rao G 72686
9:52, Uma Maheswara Rao G 72686 > wrote: > > Are you able to ping the other node with the configured hostnames? > > > > Make sure that you are able to ping the other machine > with the > > configured hostname in the etc/hosts files. > > > > Regards, > >

Re: Too much fetch failure

2011-10-16 Thread Uma Maheswara Rao G 72686
Are you able to ping the other node with the configured hostnames? Make sure that you are able to ping the other machine with the configured hostname in the etc/hosts files. Regards, Uma - Original Message - From: praveenesh kumar Date: Sunday, October 16, 2011 6:46 pm Subject: Re:

Re: Unrecognized option: -jvm

2011-10-16 Thread Uma Maheswara Rao G 72686
Which version of Hadoop are you using? Please check the recent discussion, which will help you with this problem. http://search-hadoop.com/m/PPgvNPUoL2&subj=Re+Starting+Datanode Regards, Uma - Original Message - From: Majid Azimi Date: Sunday, October 16, 2011 2:22 am Subject: Un

Re: Is there a good way to see how full hdfs is

2011-10-15 Thread Uma Maheswara Rao G 72686
/** Return the disk usage of the filesystem, including total capacity,
 * used space, and remaining space */
public DiskStatus getDiskStatus() throws IOException {
  return dfs.getDiskStatus();
}

DistributedFileSystem has the above API on the Java API side. Regards, Uma - Original Mess
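A small usage sketch for the API quoted above (it assumes fs.default.name points at HDFS, so the cast succeeds; the printed fields follow the DiskStatus accessors):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem.DiskStatus;

    public class DiskStatusSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        DistributedFileSystem dfs = (DistributedFileSystem) fs; // assumes an HDFS default FS
        DiskStatus ds = dfs.getDiskStatus();
        System.out.println("Capacity : " + ds.getCapacity());
        System.out.println("DFS used : " + ds.getDfsUsed());
        System.out.println("Remaining: " + ds.getRemaining());
      }
    }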

Re: How to get number of live nodes in hadoop

2011-10-11 Thread Uma Maheswara Rao G 72686
Hello Raimon, In DFS, to know the DN status you can use the getDataNodeStats API from DistributedFileSystem. In MR, to know the number of active trackers, you can use getClusterStatus from JobClient. It will give other stats as well. Hope this will help. Regards, Uma - Original Message
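A hedged sketch of both calls named above (it assumes fs.default.name points at HDFS and a JobTracker is reachable; class names match the 0.20-era API):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
    import org.apache.hadoop.mapred.ClusterStatus;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class LiveNodesSketch {
      public static void main(String[] args) throws Exception {
        // DFS side: one DatanodeInfo per datanode known to the NameNode.
        DistributedFileSystem dfs =
            (DistributedFileSystem) FileSystem.get(new Configuration());
        DatanodeInfo[] nodes = dfs.getDataNodeStats();
        System.out.println("Datanodes reported: " + nodes.length);

        // MR side: active tasktracker count from the JobTracker.
        JobClient jc = new JobClient(new JobConf());
        ClusterStatus status = jc.getClusterStatus();
        System.out.println("Active trackers: " + status.getTaskTrackers());
      }
    }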

Re: Error using hadoop distcp

2011-10-10 Thread Uma Maheswara Rao G 72686
Distcp will run as a MapReduce job. Here the tasktrackers require the hostname mappings to contact the other nodes. Please configure the mapping correctly on both machines and try again. Regards, Uma - Original Message - From: trang van anh Date: Wednesday, October 5, 2011 1:41 pm Subject: Re: Err

Re: Secondary namenode fsimage concept

2011-10-10 Thread Uma Maheswara Rao G 72686
Hi, It looks to me that the problem is with your NFS: it is not supporting locks. Which version of NFS are you using? Please check your NFS locking support by writing a simple program for file locking. I think NFSv4 supports locking (I have not tried it). http://nfs.sourceforge.net/ A6. What are the m
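A minimal sketch of the "simple program for file locking" suggested above (the path is a placeholder; point it at a file on the NFS mount being tested):

    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;

    public class NfsLockProbe {
      public static void main(String[] args) throws Exception {
        // Use a file on the NFS mount in question.
        RandomAccessFile raf = new RandomAccessFile("/mnt/nfs/lock-probe.tmp", "rw");
        FileChannel channel = raf.getChannel();
        FileLock lock = channel.lock(); // fails with an IOException if the server cannot lock
        System.out.println("Lock acquired: " + lock.isValid());
        lock.release();
        raf.close();
      }
    }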

Re: How to iterate over a hdfs folder with hadoop

2011-10-10 Thread Uma Maheswara Rao G 72686
Yes, the FileStatus class would be the equivalent for list. FileStatus has the APIs isDir and getPath. Both APIs can satisfy your further usage. :-) I think a small difference would be that FileStatus will ensure sorted order. Regards, Uma - Original Message - From: John Conwell Da
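A short sketch of iterating over an HDFS folder with the two APIs named above (the directory path is an illustrative assumption):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListFolderSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus[] entries = fs.listStatus(new Path("/user/data")); // placeholder dir
        for (FileStatus entry : entries) {
          if (entry.isDir()) { // 0.20-era API; later versions use isDirectory()
            System.out.println("dir : " + entry.getPath());
          } else {
            System.out.println("file: " + entry.getPath());
          }
        }
      }
    }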

Re: hadoop input buffer size

2011-10-10 Thread Uma Maheswara Rao G 72686
I think the link below can give you more info about it. http://developer.yahoo.com/blogs/hadoop/posts/2009/08/the_anatomy_of_hadoop_io_pipel/ There is a nice explanation by Owen here. Regards, Uma - Original Message - From: Yang Xiaoliang Date: Wednesday, October 5, 2011 4:27 pm Subject: Re: hadoop input bu

Re: Block Size

2011-09-29 Thread Uma Maheswara Rao G 72686
Hi, here is some useful info: A small file is one which is significantly smaller than the HDFS block size (default 64MB). If you’re storing small files, then you probably have lots of them (otherwise you wouldn’t turn to Hadoop), and the problem is that HDFS can’t handle lots of files. Every

Re: FileSystem closed

2011-09-29 Thread Uma Maheswara Rao G 72686
FileSystem objects will be cached in the JVM. When it tries to get the FS object by using FileSystem.get(..) (SequenceFile internally will use it), it will return the same FS object if the scheme and authority are the same for the URI. The FS cache key's equals implementation is below: static boolean isEqual(Obj
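A small sketch of the caching behaviour described above: two get() calls with the same scheme and authority return the identical object, so close() on one closes it for the other caller too, which is how the "FileSystem closed" error in this thread typically arises:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class FsCacheSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem a = FileSystem.get(conf);
        FileSystem b = FileSystem.get(conf); // same scheme + authority -> same cached object
        System.out.println("Same instance: " + (a == b)); // prints true
        a.close();
        // b now refers to a closed FileSystem; using it would throw
        // an IOException: "Filesystem closed".
      }
    }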

Re: Too many fetch failures. Help!

2011-09-26 Thread Uma Maheswara Rao G 72686
Hello Abdelrahman, Are you able to ping from one machine to the other with the configured hostname? Configure both hostnames in the /etc/hosts file properly and try again. Regards, Uma - Original Message - From: Abdelrahman Kamel Date: Monday, September 26, 2011 8:47 pm Subject: Too many fetch fa

Re: HDFS file into Blocks

2011-09-25 Thread Uma Maheswara Rao G 72686
Sep 26, 2011 at 10:03 AM, He Chen wrote: > > > Hi > > > > It is interesting that a guy from Huawei is also working on > Hadoop project. > > :) > > > > Chen > > > > On Sun, Sep 25, 2011 at 11:29 PM, Uma Maheswara Rao G 72686 < > > ma

Re: How to run Hadoop in standalone mode in Windows

2011-09-25 Thread Uma Maheswara Rao G 72686
Java 6 and Cygwin (Maven + TortoiseSVN are for building Hadoop) should be enough for running standalone mode on Windows. Regards, Uma - Original Message - From: Mark Kerzner Date: Saturday, September 24, 2011 4:58 am Subject: How to run Hadoop in standalone mode in Windows To: common-user

Re: HDFS file into Blocks

2011-09-25 Thread Uma Maheswara Rao G 72686
Hi, You can find the code in DFSOutputStream.java. Here there will be one DataStreamer thread. This thread will pick the packets from the DataQueue and write them onto the sockets. Before this, when actually writing the chunks, based on the block size parameter passed from the client, it will s

Re: RE: Making Mumak work with capacity scheduler

2011-09-22 Thread Uma Maheswara Rao G 72686
umak$ > bin/mumak.sh src/test/data/19-jobs.trace.json.gz > src/test/data/19-jobs.topology.json.gz > it gets stuck at some point. Log is here > <http://pastebin.com/9SNUHLFy> > Thanks, > Arun > > > > > > On Wed, Sep 21, 2011 at 2:03 PM, Uma Maheswa

Re: Can we replace namenode machine with some other machine ?

2011-09-22 Thread Uma Maheswara Rao G 72686
etadata info, Is there anything more NN/JT > machines are doing ?? . > So I can say I can survive with poor NN if I am not dealing with > lots of > files in HDFS ? > > On Thu, Sep 22, 2011 at 11:08 AM, Uma Maheswara Rao G 72686 < > mahesw...@huawei.com> wrote: > >

Re: Can we replace namenode machine with some other machine ?

2011-09-21 Thread Uma Maheswara Rao G 72686
ave ? > > Thanks, > Praveenesh > > > On Thu, Sep 22, 2011 at 10:20 AM, Uma Maheswara Rao G 72686 < > mahesw...@huawei.com> wrote: > > > You copy the same installations to new machine and change ip > address.> After that configure the new NN address

Re: RE: risks of using Hadoop

2011-09-21 Thread Uma Maheswara Rao G 72686
o know about .20.2 is that stable? Is it same as > the one you > > > mention in your email(Federation changes), If I need scaled > nameNode and > > > append support, which version I should choose. > > > > > > Regarding Single point of failure, I believe Ho

Re: Can we replace namenode machine with some other machine ?

2011-09-21 Thread Uma Maheswara Rao G 72686
You can copy the same installation to the new machine and change the IP address. After that, configure the new NN address in your clients and DNs. >Also Does Namenode/JobTracker machine's configuration needs to be better >than datanodes/tasktracker's ?? I did not get this question. Regards, Uma - Ori

Re: risks of using Hadoop

2011-09-21 Thread Uma Maheswara Rao G 72686
an get the Federation implementation. But there has been no release from the 0.23 branch yet. Regarding NameNode High Availability, there is one issue, HDFS-1623, to build it (in progress). This may take a couple of months to integrate. > > > > -Jignesh > > > > On Sep 17, 2011, at 12:08

Re: Problem with MR job

2011-09-21 Thread Uma Maheswara Rao G 72686
: Problem with MR job To: common-user@hadoop.apache.org Cc: Uma Maheswara Rao G 72686 > > Hi, > > Some more logs, specifically from the JobTracker: > > 2011-09-21 10:22:43,482 INFO > org.apache.hadoop.mapred.JobInProgress: > Initializing job_201109211018_0001 > 20

Re: Problem with MR job

2011-09-21 Thread Uma Maheswara Rao G 72686
Hi, Did any cluster restart happen? Is your NameNode detecting the DataNodes as live? It looks like the DNs did not report any blocks to the NN yet. You have 13 blocks persisted in the NameNode namespace; at least 12 blocks should be reported from your DNs. Otherwise it will not automatically come out of safemode. Re

Re: Fwd: Any other way to copy to HDFS ?

2011-09-21 Thread Uma Maheswara Rao G 72686
vokeMethod(RetryInvocationHandler.java:82) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59) > at $Proxy0.create(Unknown Source) > at > org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2833) > ... 10 more > As far as I k

Re: Any other way to copy to HDFS ?

2011-09-21 Thread Uma Maheswara Rao G 72686
? > > Thanks, > Praveenesh > > On Wed, Sep 21, 2011 at 2:37 PM, Uma Maheswara Rao G 72686 < > mahesw...@huawei.com> wrote: > > > For more understanding the flows, i would recommend you to go > through once > > below docs > > > > > ht

Re: Any other way to copy to HDFS ?

2011-09-21 Thread Uma Maheswara Rao G 72686
For a better understanding of the flows, I would recommend you go through the docs below once: http://hadoop.apache.org/common/docs/r0.16.4/hdfs_design.html#The+File+System+Namespace Regards, Uma - Original Message - From: Uma Maheswara Rao G 72686 Date: Wednesday, September 21, 2011 2:36 pm

Re: Any other way to copy to HDFS ?

2011-09-21 Thread Uma Maheswara Rao G 72686
Hi, You need not copy the files to the NameNode. Hadoop provides client code as well to copy the files. To copy the files from another node (non-DFS), you need to put the hadoop**.jar files into the classpath and use the code snippet below. FileSystem fs = new DistributedFileSystem(); fs.initialize("NAMENO
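Since the snippet above is cut off, here is a hedged completion of the same idea (the NameNode URI and paths are placeholders, not the original values; FileSystem.get with a URI is the more common equivalent of the initialize() call):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsCopySketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; substitute the real host:port.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:9000"), conf);
        fs.copyFromLocalFile(new Path("/local/data.txt"), new Path("/user/data/data.txt"));
        fs.close();
      }
    }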

Re: Making Mumak work with capacity scheduler

2011-09-21 Thread Uma Maheswara Rao G 72686
Hello Arun, If you want to apply MAPREDUCE-1253 on the 0.21 version, applying the patch directly using commands may not work because of codebase changes. So, take the patch and apply the lines in your code base manually. I am not sure of any other way for this. Did I wrongly understand your intenti

Re: Making Mumak work with capacity scheduler

2011-09-20 Thread Uma Maheswara Rao G 72686
It looks like those patches are based on the 0.22 version, so you cannot apply them directly. You may need to merge them logically (back-port them). One more point to note here: the 0.21 version of Hadoop is not a stable version. Presently the 0.20.xx versions are stable. Regards, Uma - Original Message - Fr

Re: RE: RE: java.io.IOException: Incorrect data format

2011-09-20 Thread Uma Maheswara Rao G 72686
s not able to start after crashing without > enough HD space. > > Wei > > -Original Message----- > From: Uma Maheswara Rao G 72686 [mailto:mahesw...@huawei.com] > Sent: Tuesday, September 20, 2011 9:30 PM > To: common-user@hadoop.apache.org > Subject: Re: RE: ja

Re: Making Mumak work with capacity scheduler

2011-09-20 Thread Uma Maheswara Rao G 72686
Hello Arun, On which code base are you trying to apply the patch? The code should match for the patch to apply. Regards, Uma - Original Message - From: ArunKumar Date: Wednesday, September 21, 2011 11:33 am Subject: Making Mumak work with capacity scheduler To: hadoop-u...@lucene.apache.org

Re: RE: java.io.IOException: Incorrect data format

2011-09-20 Thread Uma Maheswara Rao G 72686
server.datanode.DataNode.instantiateDataNode(DataNode.java:1318) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1326) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1448) > Wei > > -Original Message-

Re: java.io.IOException: Incorrect data format

2011-09-20 Thread Uma Maheswara Rao G 72686
Can you check what the value of the command 'df -h' is on the NN machine? I think one more possibility could be that the image was corrupted while being saved. To avoid such cases, this has already been handled in trunk. For more details: https://issues.apache.org/jira/browse/HDFS-1594 Regards,

Re: Namenode server is not starting for lily.

2011-09-19 Thread Uma Maheswara Rao G 72686
One more point to check: did you copy the .sh files from a Windows box? If yes, please do a dos2unix conversion if your target OS is Linux. The other point is, it is clear that the format was aborted. You need to give the Y option instead of y. (Harsh mentioned it) Thanks Uma - Original Message - F

Re: Out of heap space errors on TTs

2011-09-19 Thread Uma Maheswara Rao G 72686
al memory . So how many per node maximum tasks should I set? > > Thanks > > On Mon, Sep 19, 2011 at 6:28 PM, Uma Maheswara Rao G 72686 < > mahesw...@huawei.com> wrote: > > > Hello, > > > > You need configure heap size for child tasks using below propr

Re: Out of heap space errors on TTs

2011-09-19 Thread Uma Maheswara Rao G 72686
Hello, You need to configure the heap size for child tasks using the property below: "mapred.child.java.opts" in mapred-site.xml. By default it will be 200 MB, but your io.sort.mb (300) is more than that. So, configure more heap space for the child tasks, e.g. -Xmx512m. Regards, Uma - Original Message -
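The same setting can also be applied from job code, as in this sketch (values are the ones discussed above; mapred-site.xml remains the usual place for a cluster-wide default):

    import org.apache.hadoop.mapred.JobConf;

    public class ChildHeapSketch {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Give child tasks more heap than the 200 MB default,
        // so an io.sort.mb of 300 fits.
        conf.set("mapred.child.java.opts", "-Xmx512m");
        conf.setInt("io.sort.mb", 300);
        System.out.println(conf.get("mapred.child.java.opts"));
      }
    }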

Re: Submitting Jobs from different user to a queue in capacity scheduler

2011-09-18 Thread Uma Maheswara Rao G 72686
uler To: common-user@hadoop.apache.org Cc: hadoop-u...@lucene.apache.org > On Sun, Sep 18, 2011 at 9:35 AM, Uma Maheswara Rao G 72686 < > mahesw...@huawei.com> wrote: > > > or other way could be, just execute below command > > hadoop fs -chmod 777 / > > > > I woul

Re: Submitting Jobs from different user to a queue in capacity scheduler

2011-09-18 Thread Uma Maheswara Rao G 72686
Hi Arun, Setting the mapreduce.jobtracker.staging.root.dir property value to /user might fix this issue... or the other way could be to just execute the command below: hadoop fs -chmod 777 / Regards, Uma - Original Message - From: ArunKumar Date: Sunday, September 18, 2011 8:38 pm Subject: Re: Submi

Re: Submitting Jobs from different user to a queue in capacity scheduler

2011-09-18 Thread Uma Maheswara Rao G 72686
Hi Arun, Here the NameNode is in safe mode. This is not related to a permission problem. It looks to me that you have synced some blocks and restarted the NN. So, the NN will expect some blocks in order to come out of safemode. But in your version of Hadoop, those partial blocks will not be reported again from the DN. or

Re: Submitting Jobs from different user to a queue in capacity scheduler

2011-09-18 Thread Uma Maheswara Rao G 72686
Hello Arun, Now we have reached Hadoop permissions ;) If you really need not worry about permissions, then you can disable them and proceed (dfs.permissions = false); else you can set the required permissions for the user as well. Permissions guide: http://hadoop.apache.org/common/docs/current/hdfs_per

Re: Submitting Jobs from different user to a queue in capacity scheduler

2011-09-18 Thread Uma Maheswara Rao G 72686
Did you give permissions recursively? $ sudo chown -R hduser:hadoop hadoop Regards, Uma - Original Message - From: ArunKumar Date: Sunday, September 18, 2011 12:00 pm Subject: Submitting Jobs from different user to a queue in capacity scheduler To: hadoop-u...@lucene.apache.org > Hi ! >

Re: risks of using Hadoop

2011-09-17 Thread Uma Maheswara Rao G 72686
be buggy. *sync* > support is what HBase needs and what 0.20.205 will support. Before 205 > is released, you can also find these features in CDH3 or by building > your own release from SVN. > > -Todd > > On Sat, Sep 17, 2011 at 4:59 AM, Uma Maheswara Rao G 72686 > wrote: >

Re: risks of using Hadoop

2011-09-17 Thread Uma Maheswara Rao G 72686
@hadoop.apache.org Cc: Uma Maheswara Rao G 72686 > > Hi, > > When you say that 0.20.205 will support appends, you mean for > general > purpose writes on the HDFS? or only Hbase? > > Thanks, > George > > On 9/17/2011 7:08 AM, Uma Maheswara Rao G 72686 wrote: &g

Re: risks of using Hadoop

2011-09-16 Thread Uma Maheswara Rao G 72686
ank you. > > On 16 September 2011 20:34, Uma Maheswara Rao G 72686 > wrote: > > > Hello, > > > > First of all where you are planning to use Hadoop? > > > > Regards, > > Uma > > - Original Message - > > From: Kobina Kwarko > &g

Re: risks of using Hadoop

2011-09-16 Thread Uma Maheswara Rao G 72686
Hello, First of all, where are you planning to use Hadoop? Regards, Uma - Original Message - From: Kobina Kwarko Date: Saturday, September 17, 2011 0:41 am Subject: risks of using Hadoop To: common-user > Hello, > > Please can someone point out some of the risks we may incur if we > decid

Re: Tutorial about Security in Hadoop

2011-09-16 Thread Uma Maheswara Rao G 72686
Hi, please find the links below: https://media.blackhat.com/bh-us-10/whitepapers/Becherer/BlackHat-USA-2010-Becherer-Andrew-Hadoop-Security-wp.pdf http://markmail.org/download.xqy?id=yjdqleg3zv5pr54t&number=1 These will help you to understand more. Regards, Uma - Original Message - From:

Re: Is it possible to access the HDFS via Java OUTSIDE the Cluster?

2011-09-05 Thread Uma Maheswara Rao G 72686
Hi, It is very much possible. In fact, that is the main use case for Hadoop :-) You need to put the hadoop-hdfs*.jar and hadoop-common*.jar files in your classpath on the node from where you want to run the client program. At the client node side, use the sample code below: Configuration conf = new Configuration(); //y
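A hedged completion of the truncated snippet above, reading a file from a client outside the cluster (the NameNode address and file path are placeholders):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RemoteHdfsReadSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://namenode-host:9000"); // placeholder address
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:9000"), conf);
        FSDataInputStream in = fs.open(new Path("/user/data/sample.txt")); // placeholder path
        byte[] buf = new byte[4096];
        int read = in.read(buf);
        System.out.println("Read " + read + " bytes");
        in.close();
        fs.close();
      }
    }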

Re: Out of Memory Exception while building hadoop

2011-09-04 Thread Uma Maheswara Rao G 72686
Hi John, Most likely the problem is with your Java. This problem can come up if your java link refers to java-gcj. Please check some related links: http://jeffchannell.com/Flex-3/gc-warning.html Regards, Uma - Original Message - From: john smith Date: Sunday, September 4, 2011 10:22 pm Subject:

Re: Any related paper on how to resolve hadoop SPOF issue?

2011-08-25 Thread Uma Maheswara Rao G 72686
Hi, The community has started working on NameNode High Availability; you can refer to: https://issues.apache.org/jira/browse/HDFS-1623 Regards, Uma - Original Message - From: George Kousiouris Date: Thursday, August 25, 2011 3:15 pm Subject: Any related paper on how to resolve hadoop SPOF issue?

Re: cygwin not connecting to Hadoop server

2011-07-28 Thread Uma Maheswara Rao G 72686
any Thanks :D > > > > > >________________ > >From: Uma Maheswara Rao G 72686 > >To: common-user@hadoop.apache.org; A Df > >Cc: "common-user@hadoop.apache.org" > > >Sent: Wednesday, 27 July 2011, 17:31 > >Subject: Re: cygwin not connecting to Hadoop server

Re: /tmp/hadoop-oracle/dfs/name is in an inconsistent state

2011-07-28 Thread Uma Maheswara Rao G 72686
Hi, Before starting, you need to format the NameNode: ./hdfs namenode -format. Then these directories will be created. The respective configuration is 'dfs.namenode.name.dir'; the default configuration exists in hdfs-default.xml. If you want to configure your own directory path, you can add the above

Re: cygwin not connecting to Hadoop server

2011-07-27 Thread Uma Maheswara Rao G 72686
Hi A Df, Did you format the NameNode first? Can you check in the NN logs whether the NN started or not? Regards, Uma

Re: Build Hadoop 0.20.2 from source

2011-07-26 Thread Uma Maheswara Rao G 72686
Hi Vighnesh, Step 1) Download the code base from the Apache SVN repository. Step 2) In the root folder you can find the build.xml file. In that folder just execute a) ant and b) ant eclipse; this will generate the Eclipse project settings files. After this you can directly import this project into your eclips

Re: FW: Question about property "fs.default.name"

2011-07-23 Thread Uma Maheswara Rao G 72686
Hi Mahesh, When starting the NN, it will throw an exception with your provided configuration. Please check the code snippet below for where exactly the validation happens, in NameNode:

public static InetSocketAddress getAddress(URI filesystemURI) {
  String authority = filesystemURI.getAuthority();

Re: replicate data in HDFS with smarter encoding

2011-07-18 Thread Uma Maheswara Rao G 72686
Hi, We have already had thoughts about it. It looks like you are talking about these features: https://issues.apache.org/jira/browse/HDFS-1640 https://issues.apache.org/jira/browse/HDFS-2115 but the implementation is not yet ready in trunk. Regards, Uma