Re: Fwd: Any other way to copy to HDFS ?

2011-09-21 Thread Uma Maheswara Rao G 72686
? or is there any other way to do it from Java code? Thanks, Praveenesh -- Forwarded message -- From: Uma Maheswara Rao G 72686 mahesw...@huawei.com Date: Wed, Sep 21, 2011 at 3:27 PM Subject: Re: Any other way to copy to HDFS ? To: common-user@hadoop.apache.org When
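
The thread is truncated here, but the underlying question (copying into HDFS from Java code) is usually handled with FileSystem.copyFromLocalFile. A minimal sketch, assuming a hypothetical NameNode address and placeholder paths:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CopyToHdfs {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; substitute your cluster's fs.default.name.
        conf.set("fs.default.name", "hdfs://namenode-host:9000");
        FileSystem fs = FileSystem.get(conf);
        // Copy a local file into HDFS; the local source is kept.
        fs.copyFromLocalFile(new Path("/tmp/sample.txt"),
                             new Path("/user/data/sample.txt"));
        fs.close();
      }
    }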

Re: Problem with MR job

2011-09-21 Thread Uma Maheswara Rao G 72686
Hi, Did any cluster restart happen? Is your NameNode detecting the DataNodes as live? It looks like the DNs have not reported any blocks to the NN yet. You have 13 blocks persisted in the NameNode namespace; at least 12 blocks should be reported from your DNs, otherwise it will not come out of safemode automatically.
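
For reference, the safemode state can also be queried programmatically. A minimal sketch against the 0.20-era HDFS API referenced in these threads (class names moved in later releases); it is the equivalent of hadoop dfsadmin -safemode get:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.FSConstants;

    public class SafeModeStatus {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        // SAFEMODE_GET only queries the state; it does not enter or leave safemode.
        boolean inSafeMode = dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET);
        System.out.println("NameNode in safemode: " + inSafeMode);
      }
    }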

Re: Problem with MR job

2011-09-21 Thread Uma Maheswara Rao G 72686
pm Subject: Re: Problem with MR job To: common-user@hadoop.apache.org Cc: Uma Maheswara Rao G 72686 mahesw...@huawei.com Hi, Some more logs, specifically from the JobTracker: 2011-09-21 10:22:43,482 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201109211018_0001 2011

Re: risks of using Hadoop

2011-09-21 Thread Uma Maheswara Rao G 72686
issue HDFS-1623 to build (in progress). This may take a couple of months to integrate. -Jignesh On Sep 17, 2011, at 12:08 AM, Uma Maheswara Rao G 72686 wrote: Hi Kobina, Some experiences which may be helpful for you with respect to DFS. 1. Selecting the correct version. I

Re: Can we replace namenode machine with some other machine ?

2011-09-21 Thread Uma Maheswara Rao G 72686
You copy the same installation to the new machine and change the IP address. After that, configure the new NN address on your clients and DNs. Also, "Does the Namenode/JobTracker machine's configuration need to be better than the datanodes'/tasktrackers'?" I did not get this question. Regards, Uma -

Re: RE: risks of using Hadoop

2011-09-21 Thread Uma Maheswara Rao G 72686
, 2011, at 12:08 AM, Uma Maheswara Rao G 72686 wrote: Hi Kobina, Some experiences which may be helpful for you with respect to DFS. 1. Selecting the correct version. I would recommend using a 0.20.X version. This is a pretty stable version, and all other organizations

Re: Can we replace namenode machine with some other machine ?

2011-09-21 Thread Uma Maheswara Rao G 72686
, 2011 at 10:20 AM, Uma Maheswara Rao G 72686 mahesw...@huawei.com wrote: You copy the same installation to the new machine and change the IP address. After that, configure the new NN address on your clients and DNs. Also, "Does the Namenode/JobTracker machine's configuration need to be better

Re: A question about RPC

2011-09-21 Thread Uma Maheswara Rao G 72686
Hadoop has its own RPC mechanism, based mainly on Writables, to overcome some of the disadvantages of normal serialization. For more info: http://www.lexemetech.com/2008/07/rpc-and-serialization-with-hadoop.html Regards, Uma - Original Message - From: jie_zhou jie_z...@xa.allyes.com Date:

Re: RE: java.io.IOException: Incorrect data format

2011-09-20 Thread Uma Maheswara Rao G 72686
) at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode. java:1326) at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1448) Wei -Original Message- From: Uma Maheswara Rao G 72686 [mailto:mahesw...@huawei.com] Sent: Tuesday, September 20, 2011 9:10 PM

Re: Out of heap space errors on TTs

2011-09-19 Thread Uma Maheswara Rao G 72686
Hello, You need to configure the heap size for child tasks using the property below: mapred.child.java.opts in mapred-site.xml. By default it is 200 MB, but your io.sort.mb (300) is more than that, so configure more heap space for the child tasks, e.g. -Xmx512m. Regards, Uma - Original Message -
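
A minimal sketch of setting this per job through the old mapred API (the same property can instead be set cluster-wide in mapred-site.xml, as the reply suggests; the values are the ones from the thread):

    import org.apache.hadoop.mapred.JobConf;

    public class ChildHeapSettings {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // The sort buffer lives inside the child JVM heap, so the heap (-Xmx)
        // must be larger than io.sort.mb (300 in this thread).
        conf.setInt("io.sort.mb", 300);
        conf.set("mapred.child.java.opts", "-Xmx512m");
        System.out.println(conf.get("mapred.child.java.opts"));
      }
    }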

Re: Out of heap space errors on TTs

2011-09-19 Thread Uma Maheswara Rao G 72686
should I set? Thanks On Mon, Sep 19, 2011 at 6:28 PM, Uma Maheswara Rao G 72686 mahesw...@huawei.com wrote: Hello, You need to configure the heap size for child tasks using the property below: mapred.child.java.opts in mapred-site.xml. By default it is 200 MB, but your io.sort.mb (300

Re: Namenode server is not starting for lily.

2011-09-19 Thread Uma Maheswara Rao G 72686
One more point to check: did you copy the .sh files from a Windows box? If yes, please do a dos2unix conversion if your target OS is Linux. The other point is that the format has clearly aborted: you need to give the Y option instead of y (Harsh mentioned it). Thanks, Uma - Original Message -

Re: Submitting Jobs from different user to a queue in capacity scheduler

2011-09-18 Thread Uma Maheswara Rao G 72686
Did you give permissions recursively? $ sudo chown -R hduser:hadoop hadoop Regards, Uma - Original Message - From: ArunKumar arunk...@gmail.com Date: Sunday, September 18, 2011 12:00 pm Subject: Submitting Jobs from different user to a queue in capacity scheduler To:

Re: Submitting Jobs from different user to a queue in capacity scheduler

2011-09-18 Thread Uma Maheswara Rao G 72686
Hello Arun, Now we have reached Hadoop permissions ;) If you really do not need to worry about permissions, then you can disable them and proceed (dfs.permissions = false); otherwise you can set the required permissions for the user as well. See the permissions guide.

Re: Submitting Jobs from different user to a queue in capacity scheduler

2011-09-18 Thread Uma Maheswara Rao G 72686
Hi Arun, Setting the mapreduce.jobtracker.staging.root.dir property value to /user might fix this issue... or another way could be to just execute the command below: hadoop fs -chmod 777 / Regards, Uma - Original Message - From: ArunKumar arunk...@gmail.com Date: Sunday, September 18, 2011 8:38 pm

Re: Submitting Jobs from different user to a queue in capacity scheduler

2011-09-18 Thread Uma Maheswara Rao G 72686
-user@hadoop.apache.org Cc: hadoop-u...@lucene.apache.org On Sun, Sep 18, 2011 at 9:35 AM, Uma Maheswara Rao G 72686 mahesw...@huawei.com wrote: or other way could be, just execute below command hadoop fs -chmod 777 / I wouldn't do this - it's overkill, and there's no way to go back

Re: risks of using Hadoop

2011-09-17 Thread Uma Maheswara Rao G 72686
To: common-user@hadoop.apache.org Cc: Uma Maheswara Rao G 72686 mahesw...@huawei.com Hi, When you say that 0.20.205 will support appends, do you mean general-purpose writes on HDFS, or only HBase? Thanks, George On 9/17/2011 7:08 AM, Uma Maheswara Rao G 72686 wrote: 6

Re: risks of using Hadoop

2011-09-17 Thread Uma Maheswara Rao G 72686
and is known to be buggy. *sync* support is what HBase needs and what 0.20.205 will support. Before 205 is released, you can also find these features in CDH3 or by building your own release from SVN. -Todd On Sat, Sep 17, 2011 at 4:59 AM, Uma Maheswara Rao G 72686 mahesw...@huawei.com wrote

Re: Tutorial about Security in Hadoop

2011-09-16 Thread Uma Maheswara Rao G 72686
Hi, Please find the links below: https://media.blackhat.com/bh-us-10/whitepapers/Becherer/BlackHat-USA-2010-Becherer-Andrew-Hadoop-Security-wp.pdf http://markmail.org/download.xqy?id=yjdqleg3zv5pr54tnumber=1 These should help you understand more. Regards, Uma - Original Message - From:

Re: risks of using Hadoop

2011-09-16 Thread Uma Maheswara Rao G 72686
Hello, First of all, where are you planning to use Hadoop? Regards, Uma - Original Message - From: Kobina Kwarko kobina.kwa...@gmail.com Date: Saturday, September 17, 2011 0:41 am Subject: risks of using Hadoop To: common-user common-user@hadoop.apache.org Hello, Please can someone

Re: risks of using Hadoop

2011-09-16 Thread Uma Maheswara Rao G 72686
September 2011 20:34, Uma Maheswara Rao G 72686 mahesw...@huawei.com wrote: Hello, First of all, where are you planning to use Hadoop? Regards, Uma - Original Message - From: Kobina Kwarko kobina.kwa...@gmail.com Date: Saturday, September 17, 2011 0:41 am Subject: risks

Re: Is it possible to access the HDFS via Java OUTSIDE the Cluster?

2011-09-05 Thread Uma Maheswara Rao G 72686
Hi, It is very much possible. In fact, that is the main use case for Hadoop :-) You need to put the hadoop-hdfs*.jar and hadoop-common*.jar files on your classpath from wherever you want to run the client program. On the client node, use the sample code below: Configuration conf=new Configuration();
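
The snippet cuts off at the Configuration line; a minimal self-contained sketch of such an external client, assuming a hypothetical NameNode host and port:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RemoteHdfsClient {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the remote cluster's NameNode (hypothetical address).
        conf.set("fs.default.name", "hdfs://namenode-host:9000");
        FileSystem fs = FileSystem.get(conf);
        // List the HDFS root as a connectivity check.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
          System.out.println(status.getPath());
        }
        fs.close();
      }
    }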

Re: Out of Memory Exception while building hadoop

2011-09-04 Thread Uma Maheswara Rao G 72686
Hi John, Most likely the problem is with your Java installation. This problem can occur if your java link refers to java-gcj. Please check some related links: http://jeffchannell.com/Flex-3/gc-warning.html Regards, Uma - Original Message - From: john smith js1987.sm...@gmail.com Date: Sunday, September 4,

Re: Listing the content of a HDFS folder ordered by timestamp using shell

2011-08-09 Thread Uma Maheswara Rao G 72686
Hi Florin, ./hadoop fs -ls path The above command will also print the timestamp. Regards, Uma Mahesh - Original Message - From: Florin P florinp...@yahoo.com Date: Tuesday, August 9, 2011 12:52 pm Subject: Listing the content of a HDFS folder ordered by timestamp using shell To:
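
fs -ls prints the modification time but does not sort by it. If shell-side sorting is not enough, a small client can sort by modification time; a sketch with a placeholder path:

    import java.util.Arrays;
    import java.util.Comparator;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LsByTimestamp {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus[] entries = fs.listStatus(new Path("/user/florin"));
        // Sort ascending by HDFS modification time.
        Arrays.sort(entries, new Comparator<FileStatus>() {
          public int compare(FileStatus a, FileStatus b) {
            return Long.valueOf(a.getModificationTime())
                       .compareTo(Long.valueOf(b.getModificationTime()));
          }
        });
        for (FileStatus s : entries) {
          System.out.println(s.getModificationTime() + "\t" + s.getPath());
        }
      }
    }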

Re: Get rid of crc files when using FileUtil.copy(FileSystem srcFS, Path src, File dst, boolean deleteSource, Configuration conf) is used

2011-08-09 Thread Uma Maheswara Rao G 72686
Hi Florin, Recently I contributed a patch for controlling .crc files on the client side. Please look at https://issues.apache.org/jira/browse/HADOOP-7178. It provides one extra API in FileSystem.java: public void copyToLocalFile(boolean delSrc, Path src, Path dst, boolean useRawLocalFileSystem)
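
A usage sketch of that API with placeholder paths; passing useRawLocalFileSystem=true writes through the raw local filesystem instead of ChecksumFileSystem, which is what prevents the .crc sidecar files:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CopyWithoutCrc {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // delSrc=false keeps the HDFS source; useRawLocalFileSystem=true
        // bypasses ChecksumFileSystem so no local .crc files are created.
        fs.copyToLocalFile(false,
            new Path("/user/florin/data.txt"),   // HDFS source (placeholder)
            new Path("/tmp/data.txt"),           // local destination (placeholder)
            true);
        fs.close();
      }
    }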

Re: Listing the content of a HDFS folder ordered by timestamp using shell

2011-08-09 Thread Uma Maheswara Rao G 72686
/dirxx/import_2011_07_15 Regards, Florin --- On Tue, 8/9/11, Uma Maheswara Rao G 72686 mahesw...@huawei.com wrote: From: Uma Maheswara Rao G 72686 mahesw...@huawei.com Subject: Re: Listing the content of a HDFS folder ordered by timestamp using shell To: hdfs-user@hadoop.apache.org Date

Re: Datanode not sending the Heartbeat messages to Namenode

2011-08-03 Thread Uma Maheswara Rao G 72686
Hi Rahul, One possibility could be a system time update: can you check whether the system time changed on your machine? Since the heartbeats depend on system time, such a change will affect sending heartbeats to the NN. Which version of Hadoop are you using? Approximately how many blocks will be there in

Re: /tmp/hadoop-oracle/dfs/name is in an inconsistent state

2011-07-28 Thread Uma Maheswara Rao G 72686
Hi, Before starting, you need to format the namenode: ./hdfs namenode -format. Then these directories will be created. The corresponding configuration is 'dfs.namenode.name.dir'; the default configuration exists in hdfs-default.xml. If you want to configure your own directory path, you can add the

Re: cygwin not connecting to Hadoop server

2011-07-28 Thread Uma Maheswara Rao G 72686
and many Thanks :D From: Uma Maheswara Rao G 72686 mahesw...@huawei.com To: common-user@hadoop.apache.org; A Df abbey_dragonfor...@yahoo.com Cc: common-user@hadoop.apache.org common-user@hadoop.apache.org Sent: Wednesday, 27 July 2011, 17:31 Subject: Re

Re: cygwin not connecting to Hadoop server

2011-07-27 Thread Uma Maheswara Rao G 72686
Hi A Df, Did you format the NameNode first? Can you check in the NN logs whether the NN started or not? Regards, Uma

Re: Build Hadoop 0.20.2 from source

2011-07-26 Thread Uma Maheswara Rao G 72686
Hi Vighnesh, Step 1) Download the code base from the Apache SVN repository. Step 2) In the root folder you can find the build.xml file. In that folder just execute a) ant and b) ant eclipse; this will generate the Eclipse project settings files. After this you can directly import this project into your

Re: FW: Question about property fs.default.name

2011-07-23 Thread Uma Maheswara Rao G 72686
Hi Mahesh, When starting the NN, it will throw an exception with the configuration you provided. Please check the code snippet below for where exactly the validation happens, in NameNode: public static InetSocketAddress getAddress(URI filesystemURI) { String authority =
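
The quoted snippet is truncated; a paraphrased sketch of that validation follows (not the verbatim 0.20 source — the exception message and default port here are approximations):

    import java.net.InetSocketAddress;
    import java.net.URI;
    import org.apache.hadoop.net.NetUtils;

    public class NameNodeAddressCheck {
      // Paraphrase of the 0.20-era check in NameNode.getAddress(URI).
      public static InetSocketAddress getAddress(URI filesystemURI) {
        String authority = filesystemURI.getAuthority();
        if (authority == null) {
          // Thrown at NN startup when fs.default.name has no host:port authority.
          throw new IllegalArgumentException(String.format(
              "Invalid URI for NameNode address (check fs.default.name): %s has no authority.",
              filesystemURI.toString()));
        }
        return NetUtils.createSocketAddr(authority, 8020); // 8020 = customary NN default
      }

      public static void main(String[] args) {
        System.out.println(getAddress(URI.create("hdfs://namenode-host:9000")));
      }
    }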

Re: replicate data in HDFS with smarter encoding

2011-07-18 Thread Uma Maheswara Rao G 72686
Hi, We have already thought about this. It looks like you are talking about these features: https://issues.apache.org/jira/browse/HDFS-1640 https://issues.apache.org/jira/browse/HDFS-2115 but the implementation is not yet ready in trunk. Regards, Uma
