Hi,
I have recently migrated from Flink 1.8 to Flink 1.10.
When I start the job using the YarnClusterDescriptor.deployJobCluster
method, everything works fine.
However, when I start the job from a shell script, it fails with the following messages:
*Shell script reports:*
Cluster specification: ClusterSpecificat
What version do the flink jar files in lib/ have?
>
> If you are launching Flink like this...
>
> ./bin/flink run -m yarn-cluster -p 4 -yjm 1024m -ytm 4096m ./my/project.jar
>
> ... it will use the files in lib/ for starting Flink.
>
> Best,
> Robert
>
>
> On Mon, Mar 30, 2020 at 5:39 PM V
It is released separately from Flink as
> part of flink-shaded (a project that bundles various dependencies to be
> used by Flink).
>
> As of right now there are hardly any practical differences between the two.
>
> On 30/03/2020 23:31, Vitaliy Semochkin wrote:
>
> Thank you very much Sivap
It has builds for various Hadoop
> versions.
> https://search.maven.org/artifact/org.apache.flink/flink-shaded-hadoop-2
>
> On Mon, Mar 30, 2020 at 10:13 PM Vitaliy Semochkin
> wrote:
>
>> Hi,
>>
>> I cannot find flink-shaded-hadoop2 for Flink 1.10 in Maven repositories.
Hi,
I cannot find flink-shaded-hadoop2 for Flink 1.10 in Maven repositories.
According to Maven Central
https://search.maven.org/artifact/org.apache.flink/flink-shaded-hadoop
the latest released version was 1.8.3.
Is it going to be released soon, should one build it oneself, or am I
searching in the wrong place?
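For anyone hitting the same gap: the pre-bundled Hadoop 2 jar is published under a separate artifact whose version string combines the Hadoop version with a flink-shaded release. A hedged pom.xml fragment; the version "2.8.3-10.0" is only an example, pick the one matching your cluster's Hadoop:

```xml
<!-- pom.xml fragment: pre-bundled Hadoop for the Flink classpath.
     Version "2.8.3-10.0" = Hadoop 2.8.3 + flink-shaded release 10.0;
     adjust both parts to your environment. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-shaded-hadoop-2-uber</artifactId>
    <version>2.8.3-10.0</version>
    <scope>provided</scope>
</dependency>
```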
>
> Caused by: java.lang.IllegalAccessError: tried to access method
>> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object;
>> from class
>> org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
>
> On Mon, Mar 30, 202
This is only a local
> file path; I am afraid it cannot understand HDFS paths.
>
>
> [1]
> https://github.com/apache/flink/blob/ae3b0ff80b93a83a358ab474060473863d2c30d6/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/BootstrapTools.java#L420
>
> Best
Hi,
When I launch a Flink Application Cluster I keep getting the message
"Log file environment variable 'log.file' is not set."
I use console logging via log4j,
and I read logs via yarn logs -applicationId.
What is the purpose of the log.file property?
What will this file contain, and on which host should I look for it?
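For context, Flink's default log4j.properties wires its file appender to that very property, which the startup scripts normally set per process; a fragment along these lines (paraphrased from the default config shipped with Flink, so treat the exact lines as an assumption):

```properties
# log4j.properties fragment: the 'log.file' system property feeds the
# file appender. When it is unset (e.g. pure console logging collected
# via "yarn logs"), the warning above is harmless.
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.layout=org.apache.log4j.PatternLayout
```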
resolvable? And whether port 43757 of that
> machine is permitted to be accessed?
>
> Thanks,
> Zhu Zhu
>
> On Fri, Mar 27, 2020 at 1:54 AM, Vitaliy Semochkin wrote:
>
>> Hi,
>>
>> I'm facing an issue similar to
>> https://issues.apache.org/jira/browse/FLINK-14074
>
> Thank you~
>
> Xintong Song
>
>
>
> On Tue, Mar 24, 2020 at 5:59 AM Vitaliy Semochkin
> wrote:
>
>> Hi,
>>
>> what is ClusterSpecificationBuilder.taskManagerMemoryMB for in Flink 1.10?
>> Its only usage I see is in YarnClusterDescriptor.validateClusterResources
Hi,
I'm facing an issue similar to
https://issues.apache.org/jira/browse/FLINK-14074
Job starts and then yarn logs report "*Could not resolve ResourceManager
address akka.tcp://flink*"
A fragment from yarn logs looks like this:
LazyFromSourcesSchedulingStrategy]
16:54:21,279 INFO org.apache.fli
Hi,
I create a job with the following parameters:
org.apache.flink.configuration.Configuration{
yarn.containers.vcores=2
yarn.appmaster.vcores=1
}
ClusterSpecification{
taskManagerMemoryMB=1024
slotsPerTaskManager=1
}
After I launch the job programmatically, I run:
yarn node -list -showDetails
Configure
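For reference, the programmatic settings above have flink-conf.yaml equivalents; option names are as in the Flink 1.10 YARN configuration, and the values are placeholders:

```yaml
# flink-conf.yaml fragment: YARN vcore requests (Flink 1.10 option names).
yarn.containers.vcores: 2   # vcores requested per TaskManager container
yarn.appmaster.vcores: 1    # vcores for the JobManager/ApplicationMaster container
```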
Hi,
what is ClusterSpecificationBuilder.taskManagerMemoryMB for in Flink 1.10?
Its only usage I see is in YarnClusterDescriptor.validateClusterResources
and I do not get the meaning of it.
How is it different from taskmanager.memory.process.size?
And what is the point of having it, if it is not used?
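As a point of comparison for the question above: in Flink 1.10 the documented way to size a TaskManager is the unified memory model (FLIP-49), with taskmanager.memory.process.size as the top-level knob. A flink-conf.yaml sketch, where the values are placeholders:

```yaml
# flink-conf.yaml fragment: Flink 1.10 unified memory model (FLIP-49).
# process.size caps the whole TaskManager process, JVM overhead included;
# Flink derives heap, managed, and network memory from it.
taskmanager.memory.process.size: 4096m
taskmanager.numberOfTaskSlots: 1
```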
ojects/flink/flink-docs-release-1.10/ops/deployment/yarn_setup.html#start-a-session
>
> On Thu, Mar 12, 2020 at 3:56 AM Vitaliy Semochkin
> wrote:
>
>> Hi,
>>
>> How can I specify a yarn queue when I start a new job programmatically?
>>
>> Regards,
>> Vitaliy
>>
>
Hi,
How can I specify a yarn queue when I start a new job programmatically?
Regards,
Vitaliy
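For the record, the queue can be set either via the CLI flag -yqu or via configuration; the option name below is as in the Flink 1.10 YARN docs and can also be set programmatically on the Configuration passed to the cluster descriptor (queue name is a placeholder):

```yaml
# flink-conf.yaml fragment: YARN queue for the deployed application.
yarn.application.queue: my-queue
```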