[ https://issues.apache.org/jira/browse/FLINK-30745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679514#comment-17679514 ]

Surendra Singh Lilhore edited comment on FLINK-30745 at 1/22/23 5:54 AM:
-------------------------------------------------------------------------

[~dheerajpanangat], sorry for the late reply.

As mentioned in the Flink docs ([Azure Blob Storage|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/filesystems/azure/]), you need to configure the ABFS properties in *flink-conf.yaml*.

[HadoopConfigLoader|https://github.com/apache/flink/blob/master/flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/util/HadoopConfigLoader.java#L82] loads this configuration from the Flink configuration.
{quote}Provided the shaded classes instead of Hadoop classes
{quote}
You are correct.
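For illustration, here is a rough sketch of that idea (not the actual HadoopConfigLoader code; the class and method names are made up): every fs.azure.* key from flink-conf.yaml is mirrored into the (shaded) Hadoop configuration that the ABFS client reads.
{code:java}
// Illustrative sketch only, not the real HadoopConfigLoader: copy every
// fs.azure.* key from the Flink configuration into a Hadoop Configuration
// so the ABFS client sees the exact keys it expects. In the shaded
// filesystem jar, the target would be the shaded Hadoop Configuration class.
import org.apache.flink.configuration.Configuration;

public class AbfsConfigMirrorSketch {

    public static org.apache.hadoop.conf.Configuration mirror(Configuration flinkConfig) {
        org.apache.hadoop.conf.Configuration hadoopConfig = new org.apache.hadoop.conf.Configuration();
        for (String key : flinkConfig.keySet()) {
            if (key.startsWith("fs.azure.")) {
                String value = flinkConfig.getString(key, null);
                if (value != null) {
                    // Copy the value verbatim so the ABFS client finds the key it expects.
                    hadoopConfig.set(key, value);
                }
            }
        }
        return hadoopConfig;
    }
}
{code}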

 

Please configure the properties below in flink-conf.yaml in the Kubernetes cluster and try again.
{noformat}
fs.azure.account.auth.type: OAuth
fs.azure.account.oauth.provider.type: org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
fs.azure.account.oauth2.client.id: <application_id>
fs.azure.account.oauth2.client.secret: <secret>
fs.azure.account.oauth2.client.endpoint: https://XXXXXXXXXXXXXX.com/XXXXXXXXXXXXXXXXXXXXX/oauth2/token
{noformat}
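Once these keys are picked up from flink-conf.yaml, the checkpoint path itself can point at ABFS. A minimal sketch (container, account, and path are placeholders):
{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AbfsCheckpointSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 60 seconds.
        env.enableCheckpointing(60_000);

        // Placeholder container/account/path; the fs.azure.* OAuth keys from
        // flink-conf.yaml above are what the ABFS client will use here.
        env.getCheckpointConfig().setCheckpointStorage(
                "abfss://<container>@<account>.dfs.core.windows.net/flink/checkpoints");

        // ... define sources/sinks and call env.execute("job name")
    }
}
{code}
The abfss:// scheme uses TLS; plain abfs:// also works if that is what your setup expects.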
 

 


> Check-pointing with Azure Data Lake Storage
> -------------------------------------------
>
>                 Key: FLINK-30745
>                 URL: https://issues.apache.org/jira/browse/FLINK-30745
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / FileSystem
>    Affects Versions: 1.15.2, 1.14.6
>            Reporter: Dheeraj Panangat
>            Priority: Major
>
> Hi,
> While checkpointing to Azure Blob Storage using Flink, we get the following error:
> {code:java}
> Caused by: Configuration property <accountname>.dfs.core.windows.net not found.
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getStorageAccountKey(AbfsConfiguration.java:372)
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:1133)
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:174)
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:110)
>  {code}
> We have also set the following configurations in core-site.xml:
> {code:java}
> fs.hdfs.impl
> fs.abfs.impl -> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem
> fs.file.impl
> fs.azure.account.auth.type
> fs.azure.account.oauth.provider.type
> fs.azure.account.oauth2.client.id
> fs.azure.account.oauth2.client.secret
> fs.azure.account.oauth2.client.endpoint
> fs.azure.createRemoteFileSystemDuringInitialization -> true {code}
> On debugging, we found that Flink reads from core-default-shaded.xml, but even if
> the properties are specified there, the default configs are not loaded and we
> get a different exception:
> {code:java}
> Caused by: Unable to load key provider class.
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getTokenProvider(AbfsConfiguration.java:540)
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:1136)
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:174)
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:110)
>  {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
