unsubscribe

2016-02-26 Thread Ram



unsubscribe

2016-02-26 Thread Bhasin, Taran
taran.bha...@mhfi.com




Re: Namenode shutdown due to long GC Pauses

2016-02-26 Thread bappa kon
Can you also share the GC details and jmap histogram output?
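
For reference, a live-object histogram can usually be captured from the
running NameNode with the standard JDK jmap tool (the pid here is a
placeholder, not from the thread):

    jmap -histo:live <namenode-pid> > nn-histo.txt

The GC details should already be in the NameNode's GC log, given the
-verbose:gc / -XX:+PrintGCDetails flags quoted below.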

Thanks

On Thu, Feb 25, 2016 at 4:21 PM, Gokulakannan M (Engineering - Data
Platform)  wrote:

> Hi Jitendra,
>
> Trying to find the pattern, but one thing observed is that the metric
> RpcDetailedActivity.GetServerDefaultsNumOps is pretty high (around 14
> million) when the long pause happened.
>
> G1 garbage collector is used already. These are the main JVM parameters.
>
> -XX:+UseG1GC
> -XX:ParallelGCThreads=8 -XX:ConcGCThreads=8 -XX:+UseNUMA
> -XX:MaxGCPauseMillis=500 -XX:GCPauseIntervalMillis=1000
> -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xss256k
> -XX:StringTableSize=103 -XX:+UseTLAB -XX:+UseCondCardMark
> -XX:+UseFastAccessorMethods -XX:+AggressiveOpts -XX:+UseCompressedOops
> -server -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
> -XX:+PrintGCDateStamps
> -Xms75776m -Xmx75776m
>
> On Thu, Feb 25, 2016 at 3:46 PM, bappa kon  wrote:
>
>> Which garbage collector are you currently using in your environment? Can
>> you share the JVM parameters? If you are using CMS and have already
>> optimized its parameters, then you could probably look at the G1 garbage
>> collector.
>>
>> First you should look at the GC stats and pattern to find out the cause
>> of long GC.
>>
>> Regards
>> Jitendra
>>
>>
>>
>> On Thu, Feb 25, 2016 at 3:24 PM, Sandeep Nemuri 
>> wrote:
>>
>>> You may need to tune your GC settings.
>>>
>>>
>>> On Thu, Feb 25, 2016 at 3:04 PM, Namikaze Minato 
>>> wrote:
>>>
 This happened to us. Our namenodes are on a virtual machine, and
 reducing the number of replication locations of the journal node to
 1 (it's backed by a safe RAID array anyway) solved the problem.

 Regards,
 LLoyd

 On 25 February 2016 at 06:39, Gokulakannan M (Engineering - Data
 Platform)  wrote:
 > Hi,
 >
 > It is known that the namenode shuts down when a long GC pause happens
 > while the NN writes edits to the journal nodes - the namenode thinks the
 > journal nodes didn't respond, but actually it was the long GC pause. Any
 > pointers on solving this issue?
 >

 -
 To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
 For additional commands, e-mail: user-h...@hadoop.apache.org


>>>
>>>
>>> --
>>> Regards
>>> Sandeep Nemuri
>>>
>>
>>
>

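A commonly suggested mitigation for this failure mode, worth verifying
against your Hadoop version, is to raise the quorum-journal write timeouts
in hdfs-site.xml so that a long but survivable GC pause does not exceed the
write deadline. The values below are illustrative (the defaults are 20000
ms):

    dfs.qjournal.write-txns.timeout.ms = 60000
    dfs.qjournal.start-segment.timeout.ms = 60000

The trade-off is that a genuinely unresponsive journal quorum then takes
correspondingly longer to detect.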

Re: unsubscribe

2016-02-26 Thread Mohammad Tariq
You need to use user-unsubscr...@hadoop.apache.org for that.



Tariq, Mohammad
about.me/mti



On Fri, Feb 26, 2016 at 7:27 PM, Ram  wrote:

>
>


RE: unsubscribe

2016-02-26 Thread Bourre, Marc
marc.bou...@ehealthontario.on.ca

From: Bhasin, Taran [mailto:taran.bha...@mhfi.com]
Sent: Friday, February 26, 2016 9:09 AM
To: user@hadoop.apache.org
Subject: unsubscribe

taran.bha...@mhfi.com




unsubscribe

2016-02-26 Thread Anand Sreenivasan
-- 
Thanks,
Anand


Re: YARN control on external Hadoop Streaming

2016-02-26 Thread Prabhu Joseph
It looks like mapred.child.ulimit is the parameter I am looking for, but it
seems it has been removed. Does anyone know why this parameter was removed?

On Fri, Feb 26, 2016 at 5:21 AM, Prabhu Joseph 
wrote:

> Hi All,
>
>   A Hadoop Streaming job which runs on YARN triggers separate external
> processes, and these can take more memory / CPU. I just want to check
> whether there is any way we can control the memory and CPU of the
> external processes through YARN.
>
>
>
>
> Thanks,
> Prabhu Joseph
>

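For background on the question above: the YARN NodeManager monitors the
full process tree of each container, so external processes forked by a
streaming task are already counted against the container's memory limit
(e.g. mapreduce.map.memory.mb), and the container is killed if the tree
exceeds it. CPU, by contrast, is only enforced when cgroups are enabled. A
minimal sketch of the relevant yarn-site.xml settings, assuming Hadoop 2.x
on a cgroups-capable Linux host (values illustrative):

    yarn.nodemanager.container-executor.class =
        org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
    yarn.nodemanager.linux-container-executor.resources-handler.class =
        org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler
    yarn.nodemanager.linux-container-executor.cgroups.hierarchy = /hadoop-yarn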

Re: libhdfs force close hdfsFile

2016-02-26 Thread Chris Nauroth
Hello Ken,

The closest thing to what you're requesting is in the Java API, there is the 
slightly dodgy, semi-private, we-hope-only-HBase-calls-it method 
DistributedFileSystem#recoverLease.  This is capable of telling the NameNode to 
recover the lease (and ultimately close the file if necessary) based on any 
specified path.  This method is not exposed through libhdfs though, and just so 
it's clear, I wouldn't recommend using it even if it were.

When I hear questions like this, it's often because an application is writing 
to a file at a certain path and there is a desire for recoverability if the 
application terminates prematurely, such as due to a server crash.  Users would 
like another process to be able to take over right away and start writing to 
the file again, but the NameNode won't allow this until after expiration of the 
old client's lease.  Is this the use case you had in mind?

If so, then a pattern that can work well is for the application to create and 
write to a unique temporary file name instead of the final destination path.  
Then, after writing all data, the application renames the temporary file to the 
desired final destination.  Since the leases are tracked on the file paths 
being written, the old client's lease on its temporary file won't block the new 
client from writing to a different temporary file.
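
A minimal sketch of that pattern using the public FileSystem API (the class
and path names here are illustrative, not from the thread):

    import java.io.IOException;
    import java.util.UUID;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TempFileThenRename {
        // Write to a unique temp file, then atomically rename it to the
        // final destination. A crashed writer's lease is held on its own
        // temp path, so it never blocks a new writer's different temp path.
        public static void write(Configuration conf, Path dest, byte[] data)
                throws IOException {
            FileSystem fs = FileSystem.get(conf);
            Path tmp = new Path(dest.getParent(),
                    "." + dest.getName() + "." + UUID.randomUUID() + ".tmp");
            try (FSDataOutputStream out = fs.create(tmp, false)) {
                out.write(data);
            }
            if (!fs.rename(tmp, dest)) {  // rename is atomic within HDFS
                throw new IOException("rename failed: " + tmp + " -> " + dest);
            }
        }
    }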

--Chris Nauroth

From: Ken Huang <dnion...@gmail.com>
Date: Thursday, February 25, 2016 at 5:49 PM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: libhdfs force close hdfsFile

Hi,

Does anyone know how to close an hdfsFile when the connection between the
hdfsClient and the NameNode has been lost?

Thanks
Ken Huang