This is the code I was looking for; it allows me to connect programmatically
to a remote JobManager, just like a Spark remote master.
The Spark master shares the compute load with its slaves; in Flink's case,
the JobManager does so with the TaskManagers.
Configuration conf = new Configuration();
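For reference, a minimal sketch of how that Configuration object can feed a remote environment, based on the StreamExecutionEnvironment.createRemoteEnvironment overload that takes a client Configuration; the host, port, and jar path below are placeholders:

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RemoteJobSketch {
    public static void main(String[] args) throws Exception {
        // Client-side settings, roughly analogous to Spark's .master(...)
        Configuration conf = new Configuration();

        // Connect to a remote JobManager instead of the default localhost:6123.
        // "jobmanager.example.com" and the jar path are placeholders.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                "jobmanager.example.com", 6123, conf, "/path/to/your-job.jar");

        env.fromElements(1, 2, 3).print();
        env.execute("remote-example");
    }
}
```

Running this requires a reachable Flink cluster and the built job jar on the client machine.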
Glad to hear that.
On Mon, 20 Apr 2020, 08:08 Som Lima wrote:
I will, thanks. Once I had it set up and working,
I switched my computers around from client to server and server to client.
With your excellent instructions I was able to do it in 5 minutes.
On Mon, 20 Apr 2020, 00:05 Jeff Zhang wrote:
Som, let us know when you have any problems.
On Mon, 20 Apr 2020, 02:31 Som Lima wrote:
Thanks for the info and links.
I had a lot of problems; I am not sure what I was doing wrong.
Maybe there were conflicts with the Apache Spark setup. I think I may need to
set up separate users for each development environment.
Anyway, I kept doing fresh installs, about four altogether I think.
Everything works fine now.
Hi Som,
You can take a look at Flink on Zeppelin. In Zeppelin you can connect to a
remote Flink cluster with a few configuration settings, and you don't need to
worry about the jars; the Flink interpreter will ship the necessary jars for
you. Here's a list of tutorials.
1) Get started
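To illustrate the "few configuration settings" mentioned above, a sketch of the Zeppelin Flink interpreter properties for a remote cluster; the property names are from the Zeppelin Flink interpreter documentation, and the host and port values are placeholders:

```
flink.execution.mode=remote
flink.execution.remote.host=jobmanager.example.com
flink.execution.remote.port=8081
```

With these set, paragraphs run in the Flink interpreter are submitted to the remote cluster, and Zeppelin ships the interpreter's jars for you.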
Hi Tison,
I think I may have found what I want in example 22.
https://www.programcreek.com/java-api-examples/?api=org.apache.flink.configuration.Configuration
I need to create a Configuration object first, as shown.
Also, I think the flink-conf.yaml file may contain configuration for the
client rather than the cluster.
Thanks.
Thanks.
flink-conf.yaml does allow me to do what I need to do without making any
changes to the client source code.
But the RemoteStreamEnvironment constructor also expects a jar file as the
third parameter.
RemoteStreamEnvironment
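For the record, a sketch of that constructor form; the host, port, and jar path are placeholders. The jar parameter is there because the client ships it to the cluster so the remote TaskManagers can load the job's user classes:

```java
import org.apache.flink.streaming.api.environment.RemoteStreamEnvironment;

public class RemoteWithJarSketch {
    public static void main(String[] args) throws Exception {
        // Host, port, and jar path are placeholders; the jar is uploaded to
        // the cluster so TaskManagers can resolve the job's classes.
        RemoteStreamEnvironment env = new RemoteStreamEnvironment(
                "jobmanager.example.com", 6123, "/path/to/your-job.jar");
        env.fromElements("a", "b").print();
        env.execute("remote-with-jar");
    }
}
```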
You can set the flink-conf.yaml options "jobmanager.rpc.address" and
"jobmanager.rpc.port" before running the program, or take a look at
RemoteStreamEnvironment, which lets you configure the host and port.
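A minimal client-side flink-conf.yaml fragment for this (the host value is a placeholder; jobmanager.rpc.address and jobmanager.rpc.port are the key names in Flink's configuration reference):

```yaml
# flink-conf.yaml on the client: point the client at the remote JobManager
jobmanager.rpc.address: jobmanager.example.com
jobmanager.rpc.port: 6123
```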
Best,
tison.
On Sun, 19 Apr 2020, 17:58 Som Lima wrote:
Hi,
After running
$ ./bin/start-cluster.sh
The following line of code defaults the JobManager to localhost:6123:
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
which is the same idea as in Spark:
val spark =
SparkSession.builder.master("local[*]").appName("anapp").getOrCreate()