could shed some light on this. I am essentially asking whether the
DFSClient goes back to the namenode when all of the datanodes that the
namenode previously provided for a given data block have failed.
Thanks
Pallavi
On 03/30/2010 05:01 PM, Pallavi Palleti wrote:
Hi,
Could someone kindly let me know
variable.
I would also recommend using a single logging system in the code, which would be
commons-logging in this case.
Alex K
On Tue, Mar 30, 2010 at 12:12 AM, Pallavi Palleti<
pallavi.pall...@corp.aol.com> wrote:
Hi Alex,
Thanks for the reply. I have already created a logger
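[For reference, a minimal sketch of the kind of logger creation being discussed, using commons-logging as Alex suggests; the class name here is hypothetical, and log4j is picked up from the classpath at runtime.]

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Hypothetical client class; commons-logging delegates to the underlying
// log4j configuration found on the classpath.
public class HdfsCopyClient {
    private static final Log LOG = LogFactory.getLog(HdfsCopyClient.class);

    public void copy() {
        LOG.info("starting copy to HDFS");
    }
}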
Hi,
Could someone kindly let me know if the DFSClient takes care of
datanode failures and attempts to write to another datanode if the primary
datanode (and the replicated datanodes) fail. I looked into the source code
of DFSClient and figured out that it attempts to write to one of the
datanodes in the pipeline
To configure the client's log4j, you
need to modify (or create) a log4j.properties file and point your code (via
the classpath) to it.
A sample log4j.properties is in the conf directory (in either the Apache or CDH
distribution).
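[As a rough illustration, not the exact file shipped in conf, a client-side log4j.properties that sends logs to a rolling file instead of stdout might look like the following; the log file path and sizes are assumptions.]

log4j.rootLogger=INFO,RFA

log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=/var/log/dfsclient/client.log
log4j.appender.RFA.MaxFileSize=10MB
log4j.appender.RFA.MaxBackupIndex=5
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n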
Alex K
On Mon, Mar 29, 2010 at 11:25 PM, Pallavi Palleti<
pallavi.pall...@corp.aol.com> wrote:
Hi,
I am copying certain data from a client machine (which is not part of
the cluster) to HDFS using DFSClient. During this process, I am
encountering some issues, and the error/info logs are going to stdout. Is
there a way I can configure the property on the client side so that the
error/info logs go to a log file instead?
information with this
group so that it can be useful for others. However, I am still puzzled about
what the difference between them is.
Thanks
Pallavi
- Original Message -
From: "Pallavi Palleti"
To: common-user@hadoop.apache.org
Cc: core-u...@hadoop.apache.org
Sent: Thursday, Se
Hi all,
I came across this strange error where my MapFile is reading data into the object
that is passed to it and throws an IOException saying
java.io.IOException: @e5b723 read 2628 bytes, should read 2628
When I went through the code of SequenceFile.java (line no: 1796), I could see this
snippet of code.
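[For context, a minimal sketch of the kind of MapFile read in question; the path and the key/value types here are assumptions, not the poster's actual job.]

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;

public class MapFileLookup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // The reader deserializes the stored bytes into the 'value' object
        // passed in; the IOException quoted above is thrown from that read path.
        MapFile.Reader reader = new MapFile.Reader(fs, "/data/my-mapfile", conf);
        try {
            Text key = new Text("some-key");
            IntWritable value = new IntWritable();
            if (reader.get(key, value) != null) {
                System.out.println(key + " -> " + value);
            }
        } finally {
            reader.close();
        }
    }
}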
not visible
Pallavi Palleti wrote:
> yes.
Then you can check the NameNode log for that file name. If it was closed,
you will notice a 'completeFile...' message with the filename. This
will also show if there was anything odd with the file.
Raghu.
> - Original Message -
>>> into the currently opened HDFS file. If it belongs to a new interval,
>>> the old file is closed and a new file is created. I have been logging
>>> the time at which the file is being created and at which the file is
>>> being closed at my
visible
>
> Please provide information on what version of hadoop you are using and
> the method of opening and closing the file.
>
>
> On Tue, Aug 11, 2009 at 12:48 AM, Pallavi Palleti <
> pallavi.pall...@corp.aol.com> wrote:
>
>> Hi all,
>>
Hi all,
We have an application where we pull logs from an external server (far apart
from the hadoop cluster) to the hadoop cluster. Sometimes we see a huge delay (of
1 hour or more) before the data actually appears in HDFS, even though the file has
been closed and the variable is set to null from the external
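[For reference, a minimal sketch of the write path being described, under the assumption that the external process uses the standard FileSystem API; the class and path names are illustrative. On 0.18.x, readers generally see the data only once close() has completed successfully.]

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LogPusher {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        FSDataOutputStream out = fs.create(new Path("/logs/pulled/part-0001"));
        try {
            out.write("one batch of pulled log lines\n".getBytes("UTF-8"));
        } finally {
            // Data should become visible to other readers only after a
            // successful close; a close that fails or hangs silently is one
            // place such a delay could hide.
            out.close();
        }
    }
}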
Hi all,
We have a 60-node cluster running hadoop-0.18.2. We are seeing "No Space
Left On Device" errors; the detailed error is
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
java.lang.RuntimeException: javax.xml.transform.TransformerException:
java.io.IOException: No space left
New Delhi
Subject: Re: Remote access to cluster using user as hadoop
Pallavi Palleti wrote:
> Hi all,
>
> I have made changes to the hadoop-0.18.2 code to allow hadoop superuser
> access only from a specified IP range. If the IP is untrusted, it throws an
> exception. I would
Hi all,
I have made changes to the hadoop-0.18.2 code to allow hadoop superuser access
only from a specified IP range. If the IP is untrusted, it throws an
exception. I would like to add it as a patch so that people can use it if
needed in their environment. Can someone tell me what is the
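[Without the actual patch in front of us, a sketch of the kind of check described might look like the following; the class name, the trusted prefix, and the hook point are all assumptions, not the real change.]

import java.io.IOException;
import java.net.InetAddress;

public class SuperUserIpCheck {
    private static final String SUPER_USER = "hadoop";
    // Assumed trusted range; the described change uses a configurable
    // "specified IP Range" rather than a hard-coded prefix.
    private static final String TRUSTED_PREFIX = "10.1.2.";

    // Reject the superuser when the connection comes from outside the range.
    public static void check(String user, InetAddress remote) throws IOException {
        if (SUPER_USER.equals(user)
                && !remote.getHostAddress().startsWith(TRUSTED_PREFIX)) {
            throw new IOException("Superuser access denied from untrusted IP: "
                    + remote.getHostAddress());
        }
    }
}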
On Mon, Jul 27, 2009 at 3:19 PM, Pallavi Palleti <
pallavi.pall...@corp.aol.com> wrote:
> Hi all,
>
> Is there an easy way to get the slaves list in Server.java code?
>
> Thanks
> Pallavi
>
Hi all,
Is there an easy way to get the slaves list in Server.java code?
Thanks
Pallavi
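[One possible approach, assuming the standard conf/slaves file layout of one hostname per line with # comments; whether reading it inside Server.java is appropriate is a separate question.]

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class SlavesFileReader {
    // Reads hostnames from <confDir>/slaves, skipping blanks and comments.
    public static List<String> readSlaves(File confDir) throws IOException {
        List<String> slaves = new ArrayList<String>();
        BufferedReader in = new BufferedReader(
                new FileReader(new File(confDir, "slaves")));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                line = line.trim();
                if (line.length() > 0 && !line.startsWith("#")) {
                    slaves.add(line);
                }
            }
        } finally {
            in.close();
        }
        return slaves;
    }
}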
I guess I forgot to restart the namenode after the changes. It is working fine now.
Apologies for the spam.
Thanks
Pallavi
- Original Message -
From: "Pallavi Palleti"
To: common-user@hadoop.apache.org
Sent: Friday, July 24, 2009 6:45:02 PM GMT +05:30 Chennai, Kolkata, Mumbai,
Hi all,
I tried to track down the place where I can add some conditions for not
allowing any remote user with the username hadoop (root user), other than from some
specific hostnames or IP addresses. I could see the call path as FsShell ->
DistributedFileSystem -> DFSClient -> ClientProtocol. As there
a kind of
checkpointing to resume from where the data failed to copy to HDFS,
which adds overhead to a solution that needs to be near real time.
Thanks
Pallavi
- Original Message -
From: "Pallavi Palleti"
To: common-user@hadoop.apache.org
Sent: Wednesday, July 22, 20