K8s is self-managed on EC2 nodes.
After submitting the job and getting an exception, I checked:
1. SSH into the machine and verify, using the CLI, that the pod has access.
2. In the job's main method, instantiate an S3 client from the SDK (once with the default credential chain and once with an access key and secret key).
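For reference, check 2 looked roughly like this (a minimal sketch assuming the AWS SDK for Java v1; the bucket name, region, and keys are placeholders):

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3AccessCheck {
    public static void main(String[] args) {
        // Attempt 1: default credential chain (env vars, system properties,
        // ~/.aws/credentials, then the EC2 instance profile).
        AmazonS3 withChain = AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1") // placeholder region
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                .build();
        withChain.putObject("my-flink-bucket", "access-check/chain.txt", "ok");

        // Attempt 2: explicit access key / secret key, bypassing the instance profile.
        AmazonS3 withKeys = AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1") // placeholder region
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_ACCESS_KEY")))
                .build();
        withKeys.putObject("my-flink-bucket", "access-check/keys.txt", "ok");
    }
}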
Hi Oran,
How is that k8s deployed? Are you sure all nodes have the same IAM role?
Can you try and see if this is fixed by granting permissions on that bucket to the IAM role in use?
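For example, something along these lines attached to the node role (a sketch; the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-flink-bucket",
        "arn:aws:s3:::my-flink-bucket/*"
      ]
    }
  ]
}

Running aws sts get-caller-identity from inside the pod will also tell you which role the pod is actually using.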
On Sun, Jan 31, 2021 at 5:15 PM OranShuster wrote:
> I made some more tests and the issue is still not resolved
I made some more tests and the issue is still not resolved.
Since the submitted job's main method is executed before the execution graph is submitted, I added the AWS SDK as a dependency and used it to upload files to the bucket in the main method.
Once with the default credentials provider, this
So I've been really stumped on this for a couple of days now.
Some general info -
Flink version 1.12.1, using the Kubernetes HA service. The k8s cluster is self-managed on AWS.
Our checkpoints and savepoints are on S3; I created a new bucket just for them
and set the proper permissions for the k8s node.
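For reference, a Kubernetes HA setup with S3 state storage is configured along these lines in flink-conf.yaml (a sketch; the bucket name, paths, and cluster id are placeholders):

kubernetes.cluster-id: my-flink-cluster
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://my-flink-bucket/ha
state.checkpoints.dir: s3://my-flink-bucket/checkpoints
state.savepoints.dir: s3://my-flink-bucket/savepoints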
The job manager is