Hi Pallavi,

DFSClient uses log4j.properties for configuration.  What is your classpath?
I need to know exactly how you invoke your program (java, hadoop script,
etc.).  The log level and appender are driven by the hadoop.root.logger
config variable.
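
As a minimal sketch (the appender name DRFA, the log file path, and the
MyClient class name below are placeholders for illustration, not anything
Hadoop mandates), a log4j.properties along these lines routes everything,
including DFSClient's output, to a file:

    # Default level and appender; the hadoop scripts override this by
    # passing -Dhadoop.root.logger on the JVM command line.
    hadoop.root.logger=INFO,DRFA
    log4j.rootLogger=${hadoop.root.logger}

    # Daily rolling file appender writing to a local log file.
    log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.DRFA.File=/var/log/myclient/client.log
    log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
    log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
    log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

If you launch with plain java, make sure the directory holding
log4j.properties is on the classpath, for example:

    java -cp /path/to/conf:myclient.jar -Dhadoop.root.logger=INFO,DRFA MyClient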

I would also recommend using a single logging system in your code, which in
this case would be commons-logging.
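
For instance (the class name is illustrative), getting the logger through
commons-logging in the client means its output goes wherever DFSClient's
does, since commons-logging discovers log4j on the classpath and delegates
to it:

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    public class MyClient {
        // Delegates to log4j, so these statements honor the same
        // log4j.properties (and file appender) as DFSClient's logging.
        private static final Log LOG = LogFactory.getLog(MyClient.class);

        public static void main(String[] args) {
            LOG.info("Starting copy to HDFS");
        }
    }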

Alex K

On Tue, Mar 30, 2010 at 12:12 AM, Pallavi Palleti <
pallavi.pall...@corp.aol.com> wrote:

> Hi Alex,
>
> Thanks for the reply. I have already created a logger (from
> log4j.Logger) and configured it to write to a file, and it captures all
> of the log statements in my client code. However, the error/info logs
> from DFSClient are going to stdout.  The DFSClient code uses a log from
> commons-logging.jar. I am wondering how to redirect those logs (which
> currently go to stdout) so they are appended to the existing log file
> created in the client code.
>
> Thanks
> Pallavi
>
>
>
> On 03/30/2010 12:06 PM, Alex Kozlov wrote:
>
>> Hi Pallavi,
>>
>> It depends on what logging configuration you are using.  If it's log4j,
>> you need to modify (or create) the log4j.properties file and point your
>> code (via the classpath) to it.
>>
>> A sample log4j.properties is in the conf directory (of either the Apache
>> or CDH distribution).
>>
>> Alex K
>>
>> On Mon, Mar 29, 2010 at 11:25 PM, Pallavi Palleti <
>> pallavi.pall...@corp.aol.com> wrote:
>>
>>> Hi,
>>>
>>> I am copying data from a client machine (which is not part of the
>>> cluster) to HDFS using DFSClient. During this process I am encountering
>>> some issues, and the error/info logs are going to stdout. Is there a way
>>> to configure a property on the client side so that the error/info logs
>>> are appended to the existing log file (created by the logger in the
>>> client code) rather than written to stdout?
>>>
>>> Thanks
>>> Pallavi
>>>
>>
>
