Re: Query over DFSClient

2010-03-31 Thread Pallavi Palleti
could shed some light on this. I am essentially asking whether the DFSClient approaches the namenode in the case of failure of all the datanodes that the namenode has previously given it for a given data block. Thanks Pallavi On 03/30/2010 05:01 PM, Pallavi Palleti wrote: Hi, Could someone kindly let me kn
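The behaviour being asked about can be sketched as a toy model: try the datanodes the namenode handed out, and if every node in that pipeline fails, go back to the "namenode" for a fresh list and retry. This is an illustration of the question, not DFSClient source; all names and the retry bound are assumptions.

```java
import java.util.List;
import java.util.Set;
import java.util.function.Function;

public class PipelineRetryDemo {
    // Toy model: "namenode" is a function from attempt number to a list of
    // candidate datanodes; "deadNodes" simulates datanode failures. If all
    // datanodes of one attempt fail, we ask for a fresh list, up to
    // maxAttempts times.
    static String writeBlock(Function<Integer, List<String>> namenode,
                             Set<String> deadNodes, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            for (String dn : namenode.apply(attempt)) {
                if (!deadNodes.contains(dn)) {
                    return dn; // write succeeded on this datanode
                }
            }
            // every datanode in this pipeline failed: go back for a new list
        }
        throw new RuntimeException("could not place block after retries");
    }

    public static void main(String[] args) {
        Set<String> dead = Set.of("dn1", "dn2");
        // first attempt returns only dead nodes, second returns a healthy one
        Function<Integer, List<String>> nn =
                a -> a == 0 ? List.of("dn1", "dn2") : List.of("dn3");
        System.out.println(writeBlock(nn, dead, 3)); // prints dn3
    }
}
```

Whether the real 0.20-era client actually requests a new block from the namenode in this situation is exactly the open question of the thread.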

Re: Redirecting hadoop log messages to a log file at client side

2010-03-31 Thread Pallavi Palleti
variable. I would also recommend using one logging system in the code, which will be commons-logging in this case. Alex K On Tue, Mar 30, 2010 at 12:12 AM, Pallavi Palleti <pallavi.pall...@corp.aol.com> wrote: Hi Alex, Thanks for the reply. I have already created a logger

Query over DFSClient

2010-03-30 Thread Pallavi Palleti
Hi, Could someone kindly let me know if the DFSClient takes care of datanode failures and attempts to write to another datanode if the primary datanode (and the replicated datanodes) fail. I looked into the source code of DFSClient and figured out that it attempts to write to one of the datanodes in p

Re: Redirecting hadoop log messages to a log file at client side

2010-03-30 Thread Pallavi Palleti
's log4j, you need to modify (or create) a log4j.properties file and point your code (via classpath) to it. A sample log4j.properties is in the conf directory (either Apache or CDH distributions). Alex K On Mon, Mar 29, 2010 at 11:25 PM, Pallavi Palleti <pallavi.pall...@corp.aol.com> wrot
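A minimal client-side log4j.properties along the lines the reply describes might look like the sketch below; the file path, appender name, and sizes are assumptions to adapt, and the file must sit on the client's classpath ahead of any bundled configuration.

```properties
# Sketch of a client-side log4j.properties (paths/levels are assumptions).
# Route all log output, including hadoop's, to a rolling file instead of stdout.
log4j.rootLogger=INFO, clientfile

log4j.appender.clientfile=org.apache.log4j.RollingFileAppender
log4j.appender.clientfile.File=/var/log/hdfs-client.log
log4j.appender.clientfile.MaxFileSize=10MB
log4j.appender.clientfile.MaxBackupIndex=5
log4j.appender.clientfile.layout=org.apache.log4j.PatternLayout
log4j.appender.clientfile.layout.ConversionPattern=%d{ISO8601} %-5p %c: %m%n
```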

Redirecting hadoop log messages to a log file at client side

2010-03-29 Thread Pallavi Palleti
Hi, I am copying certain data from a client machine (which is not part of the cluster) to HDFS using DFSClient. During this process, I am encountering some issues and the error/info logs are going to stdout. Is there a way I can configure the property on the client side so that the error/info lo

Re: MapFile throwing IOException though reading data properly

2009-09-17 Thread Pallavi Palleti
information with this group so that it can be useful for others. However, I am still puzzled about the difference between them. Thanks Pallavi - Original Message - From: "Pallavi Palleti" To: common-user@hadoop.apache.org Cc: core-u...@hadoop.apache.org Sent: Thursday, Se

MapFile throwing IOException though reading data properly

2009-09-17 Thread Pallavi Palleti
Hi all, I came across this strange error where my MapFile is reading data into the object that is passed to it, yet throws an IOException saying java.io.IOException: @e5b723 read 2628 bytes, should read 2628 When I went through the code of SequenceFile.java (line no: 1796), I could see this snippet of
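The odd "read 2628 bytes, should read 2628" message can arise when the check that fires is not a comparison of the two printed counts but a test for leftover bytes in the record's stream, while the message interpolates buffer positions that happen to be equal. The following is a self-contained paraphrase of such a check, written for illustration; it is an assumption about the shape of the code, not the actual Hadoop source.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class LeftoverBytesDemo {
    // Paraphrased sanity check: after deserializing a record, throw if the
    // record's stream still has unread data. The printed counts come from
    // positions, so the message can show equal numbers even though the
    // leftover-bytes test fails.
    static void checkFullyConsumed(ByteArrayInputStream in,
                                   int bytesRead,
                                   int recordLength) throws IOException {
        if (in.read() > 0) { // leftover data is the real trigger
            throw new IOException("read " + bytesRead
                    + " bytes, should read " + recordLength);
        }
    }

    public static void main(String[] args) throws IOException {
        // A fully consumed stream passes the check:
        checkFullyConsumed(new ByteArrayInputStream(new byte[0]), 2628, 2628);
        // A stream with a stray trailing byte throws, with EQUAL counts:
        try {
            checkFullyConsumed(new ByteArrayInputStream(new byte[]{7}), 2628, 2628);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```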

Re: File is closed but data is not visible

2009-08-13 Thread Pallavi Palleti
ot visible Pallavi Palleti wrote: > yes. Then you can check the NameNode log for such a file name. If it is closed then you will notice a 'completeFile...' message with the filename. This will also show if there was anything odd with the file. Raghu. > - Original Message - >

Re: File is closed but data is not visible

2009-08-12 Thread Pallavi Palleti
into the currently opened HDFS file. If it belongs to a new interval, the old file is closed and a new file is created. I have been logging the time at which the file is being created and at which the file is being closed at my

Re: File is closed but data is not visible

2009-08-11 Thread Pallavi Palleti
visible Please provide information on what version of hadoop you are using and the method of opening and closing the file. On Tue, Aug 11, 2009 at 12:48 AM, Pallavi Palleti <pallavi.pall...@corp.aol.com> wrote: Hi all,

File is closed but data is not visible

2009-08-11 Thread Pallavi Palleti
Hi all, We have an application where we pull logs from an external server (far from the hadoop cluster) to the hadoop cluster. Sometimes, we see a huge delay (of 1 hour or more) before the data actually appears in HDFS, even though the file has been closed and the variable set to null from the externa

No Space Left On Device though space is available

2009-08-02 Thread Pallavi Palleti
Hi all, We have a 60 node cluster running hadoop-0.18.2. We are seeing "No Space Left On Device" and the detailed error is org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.RuntimeException: javax.xml.transform.TransformerException: java.io.IOException: No space left

Re: Remote access to cluster using user as hadoop

2009-07-30 Thread Pallavi Palleti
New Delhi Subject: Re: Remote access to cluster using user as hadoop Pallavi Palleti wrote: Hi all, I have made changes to the hadoop-0.18.2 code to allow hadoop superuser access only from a specified IP range. If it is an untrusted IP, it throws an exception. I would

Re: Remote access to cluster using user as hadoop

2009-07-30 Thread Pallavi Palleti
Hi all, I have made changes to the hadoop-0.18.2 code to allow hadoop superuser access only from a specified IP range. If it is an untrusted IP, it throws an exception. I would like to add it as a patch so that people can use it if needed in their environment. Can someone tell me what is the

Re: Getting Slaves list in hadoop

2009-07-27 Thread Pallavi Palleti
Mon, Jul 27, 2009 at 3:19 PM, Pallavi Palleti <pallavi.pall...@corp.aol.com> wrote: Hi all, Is there an easy way to get the slaves list in Server.java code? Thanks Pallavi

Getting Slaves list in hadoop

2009-07-27 Thread Pallavi Palleti
Hi all, Is there an easy way to get the slaves list in Server.java code? Thanks Pallavi
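One low-tech answer, sketched below: the slaves list is kept in the conf/slaves file (one hostname per line, with '#' comments and blank lines ignored), so server-side code that can see the conf directory can simply read it. The helper name and file location are assumptions for illustration, not an existing Hadoop API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class SlavesList {
    // Hypothetical helper: read hostnames from a slaves-style file,
    // skipping blank lines and '#' comments.
    static List<String> readSlaves(Path slavesFile) throws IOException {
        return Files.readAllLines(slavesFile).stream()
                .map(String::trim)
                .filter(l -> !l.isEmpty() && !l.startsWith("#"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate with a temporary file standing in for conf/slaves:
        Path f = Files.createTempFile("slaves", ".txt");
        Files.write(f, List.of("# cluster slaves", "node1", "node2", ""));
        System.out.println(readSlaves(f)); // prints [node1, node2]
        Files.delete(f);
    }
}
```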

Re: Remote access to cluster using user as hadoop

2009-07-24 Thread Pallavi Palleti
I guess I forgot to restart the namenode after the changes. It is working fine now. Apologies for the spam. Thanks Pallavi - Original Message - From: "Pallavi Palleti" To: common-user@hadoop.apache.org Sent: Friday, July 24, 2009 6:45:02 PM GMT +05:30 Chennai, Kolkata, Mumbai,

Re: Remote access to cluster using user as hadoop

2009-07-24 Thread Pallavi Palleti
Hi all, I tried to track down the place where I can add some conditions for not allowing any remote user with username hadoop (root user), other than from some specific hostnames or IP addresses. I could see the call path as FsShell -> DistributedFileSystem -> DFSClient -> ClientProtocol. As there
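The shape of the guard being described might look like the sketch below: reject the superuser unless the caller's address falls inside a trusted range. This is an illustration only, not the actual 0.18.2 patch; the user name, the crude prefix match (a real patch would want proper CIDR handling), and the addresses are all assumptions.

```java
public class SuperuserIpGuard {
    // Illustrative check: the "hadoop" superuser is allowed only from a
    // trusted IPv4 prefix; every other user passes through untouched.
    static void checkAccess(String user, String callerIp, String trustedPrefix) {
        if ("hadoop".equals(user) && !callerIp.startsWith(trustedPrefix)) {
            throw new SecurityException(
                    "superuser access denied from untrusted address " + callerIp);
        }
    }

    public static void main(String[] args) {
        checkAccess("hadoop", "10.0.1.17", "10.0.1.");  // trusted range: allowed
        checkAccess("alice", "203.0.113.9", "10.0.1."); // non-superuser: allowed
        try {
            checkAccess("hadoop", "203.0.113.9", "10.0.1.");
        } catch (SecurityException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```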

Re: Issue with HDFS Client when datanode is temporarily unavailable

2009-07-24 Thread Pallavi Palleti
a kind of checkpointing to resume from where the data failed to copy to HDFS, which will add overhead for a solution that is near real time. Thanks Pallavi - Original Message - From: "Pallavi Palleti" To: common-user@hadoop.apache.org Sent: Wednesday, July 22, 20