Which scheduler is being used? Capacity/Fair/Something else?
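If it helps: the active scheduler is normally set through yarn.resourcemanager.scheduler.class in yarn-site.xml, so checking that property should answer this. A sketch of what the entry looks like when the CapacityScheduler (the usual default) is in use:

  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>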
From: Nicolae Marasoiu [mailto:nicolae.maras...@adswizz.com]
Sent: Monday, November 23, 2015 7:59 AM
To: user@hadoop.apache.org
Subject: yarn does not allocate enough tasks/containers to my available node
Hi,
Tasks are
Have a look at the logs for your attempt_1448325816071_0002_m_03_0.
Regards,
LLoyd
Hi,
To get the entire log, the yarn logs command can help you:
yarn logs -applicationId application_1448325816071_0002
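Note that yarn logs only returns anything once the application's container logs have been aggregated; if the command comes back empty, it is worth checking that log aggregation is enabled in yarn-site.xml, roughly:

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>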
Thanks,
- Tsuyoshi
On Tue, Nov 24, 2015 at 7:43 PM, Namikaze Minato wrote:
> Have a look at the logs for your attempt_1448325816071_0002_m_03_0.
Hello,
I'm using fake-s3 to test "s3a://"-backed storage locally. This
requires path-style access, which cannot be enabled via configuration.
I'm aware of [HDFS-8727], which states that setting a custom endpoint
switches to path-style access automatically. However, this is not
working for me.
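For reference, this is roughly the relevant part of my configuration; fs.s3a.endpoint is the standard property, and http://localhost:4567 is just where my local fake-s3 instance happens to listen:

  <property>
    <name>fs.s3a.endpoint</name>
    <value>http://localhost:4567</value>
  </property>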
Hello,
I need to decommission a datanode from a running cluster. My problem is
that the "dfs.hosts.exclude" property was not set in either
"hdfs-site.xml" or "core-site.xml", and adding it seems to require an
HDFS restart (which I can't do). How do I decommission that datanode?
Hello,
Seems like I was updating "hdfs-site.xml" on the datanode I want to
decommission instead of on the namenode. I added the exclude property to
"hdfs-site.xml" on the namenode, executed the refresh command, and it worked
without restarting the cluster... it is now decommissioning.
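For anyone searching later, the steps that worked were roughly the following (the exclude-file path is just from my setup):

1. In hdfs-site.xml on the namenode:

  <property>
    <name>dfs.hosts.exclude</name>
    <value>/etc/hadoop/conf/dfs.exclude</value>
  </property>

2. Add the hostname of the datanode to /etc/hadoop/conf/dfs.exclude.

3. Run the refresh so the namenode re-reads the file:

  hdfs dfsadmin -refreshNodes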
Hello Arpan and Neeraj,
Thanks for the response. I have checked the settings in Hive and Sentry on
my production servers; they are exactly the same. It works fine in Prod but
not in my test environment. Is there anything else I am missing?
Thanks
Jay
We are facing the below-mentioned error when storing a dataset using
HCatStorer. Can someone please help us?
STORE F INTO 'default.CONTENT_SVC_USED' using
org.apache.hive.hcatalog.pig.HCatStorer();
ERROR hive.log - Got exception: java.net.URISyntaxException Malformed
escape pair at index 9:
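In case it helps: that particular message usually comes from java.net.URI, which rejects any literal '%' in a path or partition value that is not followed by two hex digits (for example a value like "100%complete"). Since the error is truncated here, it is worth checking the full stack trace for the offending URI and, if a literal percent sign is intended, encoding it as %25.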