Any idea? Or should I ask another user group? Thanks.
On Mon, Nov 26, 2018 at 2:02 PM Lian Jiang wrote:
> On HDP3, I cannot get the full log of a failing spark job by using the yarn
> api:
>
> curl -k -u guest:"" -X GET
> https://myhost.com/gateway/ui/resourcemanager/v1/cluster/apps/
//www.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_600_incompatible_changes.html#hadoop_600_ic
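For getting the complete log of a finished application, two hedged alternatives (the application id, host, and gateway path below are placeholders, not taken from the thread):

```shell
# With YARN log aggregation enabled, the CLI fetches all container logs:
yarn logs -applicationId application_1543000000000_0001

# The ResourceManager REST API can list an application's attempts, whose
# entries include container log links:
curl -k -u guest:"" -X GET \
  "https://myhost.com/gateway/ui/resourcemanager/v1/cluster/apps/application_1543000000000_0001/appattempts"
```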
>
> Please use webhdfs or httpfs instead.
>
> On Thu, Aug 30, 2018 at 9:36 AM Lian Jiang wrote:
>
I am using HDP3.0, which uses HADOOP3.1.0 and Spark 2.3.1. My spark
streaming jobs that run fine in HDP2.6.4 (HADOOP2.7.3, spark 2.2.0) fail in
HDP3:
java.lang.IllegalAccessError: class
org.apache.hadoop.hdfs.web.HftpFileSystem cannot access its superinterface
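The earlier suggestion to use webhdfs instead of the removed HftpFileSystem could look like this (a sketch; the Knox gateway topology path and file path are placeholder assumptions):

```shell
# Reading a file through WebHDFS rather than hftp://
# (host, topology, and path are placeholders):
curl -k -u guest:"" -X GET \
  "https://myhost.com/gateway/default/webhdfs/v1/tmp/test.txt?op=OPEN"
```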
I am using HDP3.0 and an ambari 2.7 blueprint to install the cluster.
Namenode/journal node failed to start with this exception:
a:main(1715)) - Failed to start namenode.
java.lang.AbstractMethodError:
> To fix your problem:
> 1. Set NameNode checkpoint.
> 2. Review threshold for uncommitted transactions.
>
> Thanks
>
> On Aug 5, 2018, at 7:10 PM, Lian Jiang wrote:
Hi,
The primary namenode of my HA cluster using HDP2.6 goes into safemode daily
due to "Namenode last checkpoint". The default
dfs.namenode.checkpoint.period is 6 hours while
https://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
says the default is 1 hour.
Any idea? Thanks.
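A hedged way to inspect the checkpoint settings actually in effect and to recover from safemode, using standard HDFS tooling (run on a NameNode host):

```shell
# Effective checkpoint interval (seconds) and the uncommitted-transaction
# threshold that also triggers a checkpoint:
hdfs getconf -confKey dfs.namenode.checkpoint.period
hdfs getconf -confKey dfs.namenode.checkpoint.txns

# Force a checkpoint (saveNamespace requires safemode), then leave safemode:
hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace
hdfs dfsadmin -safemode leave
```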
On Mon, Jul 30, 2018 at 11:25 AM, Lian Jiang wrote:
This document mentions namenode formatting and bootstrapping:
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/data-storage/content/format_namenodes.html
Can an ambari blueprint take care of this? How can I streamline the HDP
installation automation?
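For reference, the manual commands behind that document are the standard ones (a sketch; whether the blueprint runs the equivalent for you is exactly the open question here):

```shell
# On the first (active) NameNode of a fresh cluster:
hdfs namenode -format

# On the standby NameNode, pull over the freshly formatted metadata:
hdfs namenode -bootstrapStandby

# When converting an existing non-HA NameNode to HA, initialize the
# JournalNodes from its edit log:
hdfs namenode -initializeSharedEdits
```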
On Mon, Jul 30, 2018 at 9:32 AM, Lian Jiang wrote:
Hi,
I am using an ambari 2.7 blueprint to install HDP3.0. After installing, the
active namenode cannot start due to this error:
2018-07-30 04:41:03,839 WARN namenode.FSNamesystem
(FSNamesystem.java:loadFromDisk(716)) - Encountered exception loading
fsimage
java.io.IOException: NameNode is not
> Your value of fs.defaultFS is supposed to have a port number, e.g.
> hdfs://test-cluster:9000.
>
> On 7/18/18 4:28 PM, Lian Jiang wrote:
Hi,
I am enabling HA for hdfs and yarn on my hdp2.6 cluster. HDFS can start but
yarn cannot due to error:
2018-07-18 18:11:23,967 FATAL
applicationhistoryservice.ApplicationHistoryServer
(ApplicationHistoryServer.java:launchAppHistoryServer(177)) - Error
starting ApplicationHistoryServer
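The fs.defaultFS advice above can be verified on the cluster (a sketch; note that with NameNode HA the value is typically the logical nameservice URI with no port, e.g. hdfs://mycluster, while non-HA setups use a host:port URI as suggested):

```shell
# Print the configured default filesystem URI:
hdfs getconf -confKey fs.defaultFS
```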