Anuj,
>
> I recalled another ticket on this topic, which had some things to test. I
> don't know if that resolved the issue, can you verify it? See
> https://issues.apache.org/jira/browse/FLINK-31095
>
> Best regards,
>
> Martijn
>
> On Tue, May 23, 2023 at 7:04 AM Anuj Jain wrote:
Hello,
Please provide some pointers on this issue.
Thanks!
Regards
Anuj
On Fri, May 19, 2023 at 1:34 PM Anuj Jain wrote:
> Hi Community,
> Looking forward to some advice on the problem.
>
> I also found this similar Jira, but not sure if a fix has been done for
> the Hadoop
487>
Is there any other way to integrate Flink source/sink with AWS IAM from EKS?
Regards
Anuj
On Thu, May 18, 2023 at 12:41 PM Anuj Jain wrote:
> Hi,
> I have a flink job running on EKS, reading and writing data records to S3
> buckets.
> I am trying to set up access credential
and that works.
Am I using the correct credential provider for IAM integration? I am not sure
if Hadoop S3A supports it.
https://issues.apache.org/jira/browse/HADOOP-18154
Please advise if I am doing anything wrong in setting up credentials via
IAM.
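For context, HADOOP-18154 (linked above) concerns S3A support for the AWS
web-identity credentials provider, which is what IAM roles for service
accounts (IRSA) on EKS use. A minimal sketch of the relevant flink-conf.yaml
fragment, assuming a Hadoop/S3A build that includes that provider and relying
on Flink forwarding `s3.*` options to the s3a filesystem:

```yaml
# Sketch only: assumes an S3A build with web-identity support (HADOOP-18154)
# and a pod running under an IRSA-annotated service account. Flink forwards
# options prefixed with "s3." to the underlying s3a filesystem.
s3.aws.credentials.provider: com.amazonaws.auth.WebIdentityTokenCredentialsProvider
```

With IRSA, the token file and role ARN are injected as environment variables
by EKS, so no access keys need to appear in the configuration.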
Regards
Anuj Jain
>> stored in flink-conf.yaml. The
>> recommended method for setting up credentials is by using IAM, not via
>> Access Keys. See
>> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/filesystems/s3/#configure-access-credentials
>> for more details.
>>
>
ntials-from-hashicorp-vault>
> for
> more detailed instructions.
> Besides, it should be possible to override Configuration object in your
> job code. Are you using Application mode to run the job?
>
> Best regards,
> Biao Geng
>
> Anuj Jain wrote on Mon, May 8, 2023, at 13:55
. I
think Flink creates the connection pool at startup, even before the job is
started.
Thanks and Regards
Anuj Jain
Hi Community,
I am trying to use flink-parquet for reading and writing parquet files from
the Flink filesystem connectors.
In the file source I would be decoding Parquet files and converting them to
Avro records, and similarly in the file sink I would be encoding Avro records
to Parquet files.
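The Parquet-to-Avro round trip described above can be sketched with the Avro
bridge in flink-parquet (AvroParquetReaders / AvroParquetWriters, available in
recent Flink releases). The bucket paths and schema below are hypothetical
placeholders:

```java
// Sketch only: assumes flink-parquet and flink-avro on the classpath.
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.AvroParquetReaders;
import org.apache.flink.formats.parquet.avro.AvroParquetWriters;

Schema schema = new Schema.Parser().parse("...");  // your Avro schema

// Source: decode Parquet files into Avro GenericRecords.
FileSource<GenericRecord> source = FileSource
        .forRecordStreamFormat(AvroParquetReaders.forGenericRecord(schema),
                new Path("s3a://my-bucket/in/"))   // hypothetical path
        .build();

// Sink: encode Avro GenericRecords back into Parquet files.
FileSink<GenericRecord> sink = FileSink
        .forBulkFormat(new Path("s3a://my-bucket/out/"),  // hypothetical path
                AvroParquetWriters.forGenericRecord(schema))
        .build();
```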
For
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-196%3A+Source+API+stability+guarantees
>
> Best,
> Yangze Guo
>
> On Wed, May 3, 2023 at 12:08 PM Anuj Jain wrote:
> >
> > Hi Community,
> > I saw some flink classes annotated with
> >
Hi Community,
I saw some flink classes annotated with
@Experimental
@PublicEvolving
@Internal
What do these annotations mean? Can I use these classes in production?
How would the class APIs evolve in the future? Can they break backward
compatibility in terms of API declaration or implementation, in
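For reference, these annotations are Flink's API-stability markers, whose
guarantees are spelled out in FLIP-196 (linked earlier in this thread). An
illustrative (hypothetical) example of how they appear on a class:

```java
// Illustrative only; the interface name is made up.
// @Internal       - implementation detail, no stability guarantee; avoid in jobs.
// @Experimental   - early-stage API, may change or be removed in any release.
// @PublicEvolving - intended for users, but may still change between minor
//                   releases; @Public APIs are stable within a major version.
@PublicEvolving
public interface MyUserFacingApi {
    void doSomething();
}
```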
Hi Community,
Does Flink File Sink support compression of output files, to reduce the
file size?
I think the file source supports reading compressed formats like gzip, bzip2,
etc.; is there any way to sink the files in a compressed format?
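One possible direction, assuming the flink-compress module applies to your
setup (its CompressWriters factory wraps row-encoded records with a Hadoop
compression codec), is a bulk-format sink along these lines; the path is a
hypothetical placeholder:

```java
// Sketch only: assumes flink-compress and a Hadoop codec (here gzip) are
// available on the classpath.
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.compress.CompressWriters;
import org.apache.flink.formats.compress.extractor.DefaultExtractor;

FileSink<String> sink = FileSink
        .forBulkFormat(new Path("s3a://my-bucket/out/"),   // hypothetical path
                CompressWriters.forExtractor(new DefaultExtractor<String>())
                        .withHadoopCompression("Gzip"))
        .build();
```

Note that columnar bulk formats such as Parquet usually configure compression
on the format writer itself (e.g. a Parquet compression codec) rather than by
wrapping the sink.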
Any help is appreciated.
Regards
Anuj