Hi,

Please refer to my previous mail for the complete logs.

Thanks,

On Mon, Jun 18, 2018 at 1:17 PM Till Rohrmann <trohrm...@apache.org> wrote:

> Could you also please share the complete log file with us?
>
> Cheers,
> Till
>
> On Sat, Jun 16, 2018 at 5:22 PM Ted Yu <yuzhih...@gmail.com> wrote:
>
>> The error for core-default.xml is interesting.
>>
>> Flink doesn't ship this file; it probably comes from YARN/Hadoop. Please
>> check the Hadoop version Flink was built against versus the Hadoop version
>> running in your cluster.
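>>
>> As a quick sanity check (a minimal sketch, assuming you can run a small
>> class on the same classpath the Flink client uses; the class name
>> XIncludeCheck is only illustrative), something like this prints the Hadoop
>> version the client sees and whether the XML parser it loads accepts the
>> XInclude feature that Hadoop's Configuration turns on before parsing
>> core-default.xml:
>>
>> import javax.xml.parsers.DocumentBuilderFactory;
>> import org.apache.hadoop.util.VersionInfo;
>>
>> public class XIncludeCheck {
>>     public static void main(String[] args) throws Exception {
>>         // Hadoop version visible on this classpath (compare with the cluster's).
>>         System.out.println("Hadoop on classpath: " + VersionInfo.getVersion());
>>
>>         DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
>>         // Which JAXP implementation actually gets loaded; an old Xerces jar
>>         // here is the usual source of "Feature
>>         // 'http://apache.org/xml/features/xinclude' is not recognized".
>>         System.out.println("Parser factory: " + dbf.getClass().getName());
>>
>>         // Mirror what Hadoop's Configuration does before reading core-default.xml.
>>         dbf.setXIncludeAware(true);
>>         dbf.newDocumentBuilder();
>>         System.out.println("XInclude is supported by this parser.");
>>     }
>> }
>>
>> If the factory printed is an org.apache.xerces class pulled in by an extra
>> xercesImpl jar (for example inside the user/fat jar), that jar is a likely
>> source of the conflict.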
>>
>> Thanks
>>
>> -------- Original message --------
>> From: Garvit Sharma <garvit...@gmail.com>
>> Date: 6/16/18 7:23 AM (GMT-08:00)
>> To: trohrm...@apache.org
>> Cc: Chesnay Schepler <ches...@apache.org>, user@flink.apache.org
>> Subject: Re: Exception while submitting jobs through Yarn
>>
>> I am not able to figure this out and have been stuck on it for the last week.
>> Any help would be appreciated.
>>
>>
>> 2018-06-16 19:25:10,523 DEBUG
>> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator  -
>> Parallelism set: 1 for 8
>>
>> 2018-06-16 19:25:10,578 DEBUG
>> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator  -
>> Parallelism set: 1 for 1
>>
>> 2018-06-16 19:25:10,588 DEBUG
>> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator  -
>> CONNECTED: KeyGroupStreamPartitioner - 1 -> 8
>>
>> 2018-06-16 19:25:10,591 DEBUG
>> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator  -
>> Parallelism set: 1 for 5
>>
>> 2018-06-16 19:25:10,597 DEBUG
>> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator  -
>> CONNECTED: KeyGroupStreamPartitioner - 5 -> 8
>>
>> 2018-06-16 19:25:10,618 FATAL org.apache.hadoop.conf.Configuration
>>                     - error parsing conf core-default.xml
>>
>> javax.xml.parsers.ParserConfigurationException: Feature '
>> http://apache.org/xml/features/xinclude' is not recognized.
>>
>> at
>> org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown
>> Source)
>>
>> at
>> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2482)
>>
>> at
>> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2444)
>>
>> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2361)
>>
>> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1188)
>>
>> at
>> org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider.getRecordFactory(RecordFactoryProvider.java:49)
>>
>> at org.apache.hadoop.yarn.util.Records.<clinit>(Records.java:32)
>>
>> at
>> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getQueueInfoRequest(YarnClientImpl.java:495)
>>
>> at
>> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getAllQueues(YarnClientImpl.java:525)
>>
>> at
>> org.apache.flink.yarn.AbstractYarnClusterDescriptor.checkYarnQueues(AbstractYarnClusterDescriptor.java:658)
>>
>> at
>> org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:486)
>>
>> at
>> org.apache.flink.yarn.YarnClusterDescriptor.deployJobCluster(YarnClusterDescriptor.java:75)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:235)
>>
>> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:210)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1020)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1096)
>>
>> at java.security.AccessController.doPrivileged(Native Method)
>>
>> at javax.security.auth.Subject.doAs(Subject.java:422)
>>
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
>>
>> at
>> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>
>> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1096)
>>
>> 2018-06-16 19:25:10,620 WARN  
>> org.apache.flink.yarn.AbstractYarnClusterDescriptor
>>           - Error while getting queue information from YARN: null
>>
>> 2018-06-16 19:25:10,621 DEBUG
>> org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Error
>> details
>>
>> java.lang.ExceptionInInitializerError
>>
>> at
>> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getQueueInfoRequest(YarnClientImpl.java:495)
>>
>> at
>> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getAllQueues(YarnClientImpl.java:525)
>>
>> at
>> org.apache.flink.yarn.AbstractYarnClusterDescriptor.checkYarnQueues(AbstractYarnClusterDescriptor.java:658)
>>
>> at
>> org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:486)
>>
>> at
>> org.apache.flink.yarn.YarnClusterDescriptor.deployJobCluster(YarnClusterDescriptor.java:75)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:235)
>>
>> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:210)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1020)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1096)
>>
>

-- 

Garvit Sharma
github.com/garvitlnmiit/

Nobody is a scholar by birth; it's only hard work and strong determination
that makes him a master.
