ress from ZK.
>
> Please ensure that you have specified the HA-related config options on the
> CLI via -D or set them in flink-conf.yaml.
>
> Best,
> Yang
>
> sidhant gupta wrote on Wed, Feb 3, 2021 at 10:02 PM:
>
>> Is it possible to use flink CLI instead of flink client for connec
]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source)
~[?:?]
... 6 more
Please help me on this.
Thanks
Sidhant Gupta
flink-conf.yaml
Description: application/yaml
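For reference, the ZooKeeper HA options discussed above can be set in flink-conf.yaml; a minimal sketch, where the quorum addresses, storage path, and cluster id are all placeholders:

```yaml
# ZooKeeper-based HA settings (all values below are placeholders).
high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.storageDir: s3://my-bucket/flink/ha/
high-availability.cluster-id: /my-flink-cluster
```

The same options can alternatively be passed on the CLI with -D, e.g. -Dhigh-availability=zookeeper.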
k load balancer.
> The Flink client is not aware of whether it is a network load balancer or
> multiple ZooKeeper server addresses.
> The Flink client will then retrieve the active leader JobManager address
> via the ZooKeeperHAService
> and submit the job via the REST client.
>
> Best,
>
terms of request or response ?
Can we use the Flink CLI in Jenkins to upload the jar to the Flink cluster and
run the jobs?
[1]
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Best-way-to-find-the-current-alive-jobmanager-with-HA-mode-zookeeper-td21787.html
Thanks
Sidhant Gupta
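To answer the Jenkins question concretely, a build step could call the Flink CLI directly; a minimal sketch, where the JobManager address and the jar path are placeholders:

```shell
# Submit a job jar to a running Flink cluster in detached mode.
# jobmanager-host:8081 and the jar path are placeholders.
flink run -m jobmanager-host:8081 -d /path/to/my-flink-job.jar
```

With -m pointing at the cluster's REST endpoint (or the load balancer in front of it, in the HA case), the CLI both uploads the jar and submits the job in one step.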
Hi Till,
Thanks for the clarification and suggestions
Regards
Sidhant Gupta
On Wed, Dec 2, 2020, 10:10 PM Till Rohrmann wrote:
> Hi Sidhant,
>
> Have you seen this discussion [1]? If you want to use S3, then you need to
> make sure that you start your Flink processes with the
(SimpleChannelInboundHandler.java:105)
[flink-dist_2.11-1.11.2.jar:1.11.2]
It looks like web.upload.dir only supports a local path.
Any suggestions on how to upload and submit a job jar in a Flink HA cluster
setup via the web UI, and also from the CLI?
Thanks and regards
Sidhant Gupta
on the 8th (Thu) at 3:30 PM wrote:
>
>> The easiest way to suppress this error would be to disable the logging
>> for TaskManagerStdoutFileHandler in your log4j.properties file.
>>
>> Cheers,
>> Till
>>
>> On Wed, Oct 7, 2020 at 8:48 PM sidhant gupta wrote:
>>
>>> Hi Till,
>
work around ?
Thanks
Sidhant Gupta
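Till's suggestion to silence that handler could look like this in log4j.properties; the fully qualified class name below is an assumption based on Flink's REST handler package:

```properties
# Turn off logging for the handler that serves TaskManager stdout files
# (class name is an assumption; adjust to your Flink version).
log4j.logger.org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerStdoutFileHandler=OFF
```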
On Wed, Oct 7, 2020, 9:15 PM Till Rohrmann wrote:
> Hi Sidhant,
>
> when using Flink's Docker image, the cluster won't create the .out
> files. Instead, the components write directly to STDOUT, which is
> captured by Kubernetes and can
to see the logs in the Logs tab but not the stdout logs in the
web UI, and getting the below-mentioned error after running the job.
Thanks
Sidhant Gupta
On Wed, Oct 7, 2020, 8:00 PM 大森林 wrote:
> it's easy,
> just restart your Flink cluster (standalone mode)
>
> if you run flink
++ user
On Wed, Oct 7, 2020, 6:47 PM sidhant gupta wrote:
> Hi
>
> I checked in $FLINK_HOME/logs. The .out file was not there. Can you
> suggest what the action item should be?
>
> Thanks
> Sidhant Gupta
>
>
> On Wed, Oct 7, 2020, 7:17 AM 大森林 wrote:
o be added in log4j.properties in the docker
container, e.g. log4j.rootLogger=INFO, file
Are there any other properties which need to be configured in any of the
other property files, or does any jar need to be added to the /opt/flink path?
Thanks
Sidhant Gupta
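The log4j.rootLogger=INFO, file line mentioned above implies a matching file appender definition; a minimal Log4j 1-style sketch, where the appender class and log path are assumptions:

```properties
# Send INFO and above to a file appender named "file" (path is a placeholder).
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=/opt/flink/log/flink.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```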
On Tue, Aug 25, 2020, 11:51 AM Till Rohrmann wrote:
> Hi Sidhant,
>
> the cluster components use TCP to communicate with each other. If you are
> not using Flink's HA services, then the TaskManager nodes need to be
> configured with the JobManager's address to
++d...@flink.apache.org
On Mon, Aug 24, 2020, 7:31 PM sidhant gupta wrote:
> Hi User
>
> How do the JobManager and TaskManager communicate with each other? How to set
> up a connection between a JobManager and TaskManager running in different/same
> EC2 instances? Is it HTTP or TCP?
Hi User
How do the JobManager and TaskManager communicate with each other? How to set
up a connection between a JobManager and TaskManager running in different/same
EC2 instances? Is it HTTP or TCP? How does the service discovery work?
Thanks
Sidhant Gupta
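To make Till's point concrete: without HA services, each TaskManager finds the JobManager through static configuration in flink-conf.yaml; a sketch, assuming a placeholder host name:

```yaml
# TaskManagers connect to the JobManager over TCP at this address.
# jobmanager-host is a placeholder; use the EC2 host name or IP.
jobmanager.rpc.address: jobmanager-host
jobmanager.rpc.port: 6123
```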
netes operators.
>
> [1] https://www.ververica.com/getting-started
>
> On Tue, Aug 11, 2020 at 3:23 PM Till Rohrmann
> wrote:
>
>> Hi Sidhant,
>>
>> see the inline comments for answers
>>
>> On Tue, Aug 11, 2020 at 3:10 PM sidhant gupta
>> wr
manager ? Refer:
https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/production_ready.html#set-an-explicit-max-parallelism
Regards
Sidhant Gupta
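Setting an explicit max parallelism, as the linked production-readiness page recommends, is a single call on the execution environment; a sketch, where the value 128 is an arbitrary example:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MaxParallelismExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Pin the number of key groups up front so keyed state can later be
        // rescaled to any parallelism up to this bound. 128 is arbitrary.
        env.setMaxParallelism(128);
    }
}
```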
On Tue, 11 Aug 2020, 5:19 PM Till Rohrmann wrote:
> Hi Sidhant,
>
> I am not an expert on AWS services but I believe that
a data
streaming API which reads records from a Kafka topic, processes them, and then
writes them to another Kafka topic. Please let me know your thoughts on this.
Thanks
Sidhant Gupta
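The described pipeline (read from Kafka, process, write to another topic) could be sketched with the Flink 1.11-era Kafka connector; the topic names, bootstrap server, consumer group, and the map function are all placeholders:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class KafkaToKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder
        props.setProperty("group.id", "example-group");       // placeholder

        // Read from one topic, apply a placeholder transformation, write to another.
        env.addSource(new FlinkKafkaConsumer<>("input-topic", new SimpleStringSchema(), props))
           .map(String::toUpperCase) // placeholder for the real processing logic
           .addSink(new FlinkKafkaProducer<>("output-topic", new SimpleStringSchema(), props));

        env.execute("kafka-to-kafka");
    }
}
```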