Thank you.
Please unsubscribe me from all Hadoop lists.
Hi,
Why not write a simple bash script for this? I understand this is not exactly
the answer to your question, but unless you have to use Java code, it's not a
bad option.
Regards,
Sid Raskar.
On Wed, Apr 27, 2016 at 6:29 AM, Ashutosh Kumar
wrote:
>
> http://www.drdobbs.com/database/hadoop-writin
Hello.
My team is working on moving some Hadoop 1 jobs (using an old AWS EMR AMI) to
YARN / Hadoop 2 (using the newer AWS EMR Release 4.x). We have an edge node
with Hadoop 2.7.2 installed from which jobs get submitted to the cluster. It
appears that we must have the yarn.application.classpat
The biggest win I've seen for stability of Hadoop components is to give
them their own hard disks, or alternatively their own hosts.
Obviously, you'll also want to check the usual suspects of resource and
processor contention.
On Wed, May 4, 2016 at 3:59 PM, Anandha L Ranganathan wrote:
> The R
The RM keeps going down, and here is the error message we are getting.
How do we fix this issue?
ZK and RM are on the same host.
2016-05-04 19:17:36,132 INFO resourcemanager.RMAppManager
(RMAppManager.java:checkAppNumCompletedLimit(247)) - Max number of
completed apps kept in state store
Sandeep, Brahma,
Thanks for your answer, that helps a lot! We'll see whether we go with the custom
StandbyException handling or with the third-party module.
Regards, Adam.
From: Sandeep Nemuri
Sent: Wednesday, May 4, 2016 11:18
To: Brahma Reddy Battula
Cc: Cecile, Ad
This could help you: https://pypi.python.org/pypi/PyHDFS/0.1.0
Thanks,
Sandeep
On Wed, May 4, 2016 at 2:40 PM, Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:
> 1. Have a list of namenodes, built from configurations.
> 2. Execute the op on each namenode until it succeeds.
> 3. Ha
1. Have a list of namenodes, built from configurations.
2. Execute the op on each namenode until it succeeds.
3. Use the successful namenode URL as the active namenode, and use the same for
subsequent operations.
4. Whenever a StandbyException or some network exception (other than remote
exceptions) occu
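For illustration, here is a minimal Python sketch of that strategy against the
WebHDFS REST API (the class name, the run_op helper, and the assumption of
JSON responses are illustrative for this sketch, not PyHDFS's actual
implementation):

import requests

class HAWebHdfsClient:
    # Illustrative sketch of the failover strategy described above.

    def __init__(self, namenodes):
        # Step 1: namenode host:port pairs, e.g. taken from hdfs-site.xml.
        self.namenodes = list(namenodes)
        self.active = None  # step 3: remember which namenode answered

    def run_op(self, hdfs_path, op):
        # Step 2: try the cached active namenode first, then the rest.
        ordered = ([self.active] if self.active else []) + \
                  [nn for nn in self.namenodes if nn != self.active]
        last_err = None
        for nn in ordered:
            url = "http://%s/webhdfs/v1%s" % (nn, hdfs_path)
            try:
                resp = requests.get(url, params={"op": op}, timeout=10)
                body = resp.json()
                exc = body.get("RemoteException", {}).get("exception")
                if exc == "StandbyException":
                    # Step 4: a standby namenode answers with a remote
                    # StandbyException; move on to the next candidate.
                    last_err = exc
                    continue
                self.active = nn  # step 3: cache the active namenode
                return body
            except requests.exceptions.ConnectionError as e:
                # Step 4: plain network failure; try the next namenode.
                last_err = e
        raise RuntimeError("no active namenode found (last error: %s)" % last_err)

For example, HAWebHdfsClient(["nn1:50070", "nn2:50070"]).run_op("/tmp", "LISTSTATUS")
would return the listing from whichever namenode is currently active.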
Hi,
I did enable the vmem check. But what I really want to know is why so much
virtual memory (17.9 GB) is used when the JVM option -XX:+PrintGCDetails is added.
The application I ran was: hadoop jar hadoop-mapreduce-examples-2.7.2.jar
pi -D mapreduce.map.java.opts=-XX:+PrintGCDetails 5 100
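For reference, the vmem check mentioned here is controlled by two yarn-site.xml
properties (the values shown are the Hadoop 2.7 defaults):

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>

When the check is enabled, containers whose virtual memory exceeds
vmem-pmem-ratio times their allocated physical memory are killed.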
Hi all,
for research purposes (we are working on determining the completion time of a
Hadoop computation; if you are interested, feel free to shoot me an email)
I want to know when the sort phase starts for each reducer.
Without writing any code, is it possible to know when the sort phase starts?
This i
Hello,
I'm not sure I understand your answer; may I add a little piece of code:

def _build_hdfs_url(self, hdfs_path, hdfs_operation, opt_query_param_tuples=None):
    """
    :type hdfs_path: str
    :type hdfs_operation: str
    """
    if opt_query_param_tuples is None:  # avoid a mutable default argument
        opt_query_param_tuples = []
    if not hdfs_path.startswith("/"):
        hdfs_path = "/" + hdfs_path  # ensure the path is absolute
I think you can simply use the nameservice (dfs.nameservices) that is defined
in hdfs-site.xml.
The HDFS client should be able to resolve the current active namenode and
get the necessary information.
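For reference, a typical HA nameservice definition in hdfs-site.xml looks
roughly like this (the nameservice ID, namenode IDs, and hosts here are
placeholders):

<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>host1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>host2:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

Clients can then address the filesystem as hdfs://mycluster, and the configured
failover proxy provider resolves whichever namenode is currently active.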
Thanks,
Sandeep Nemuri
On Wed, May 4, 2016 at 12:04 PM, Cecile, Adam wrote:
> Hello All,
>
>