galenwarren edited a comment on pull request #15599:
URL: https://github.com/apache/flink/pull/15599#issuecomment-1018971976


   @xintongsong I've realized something as I'm working through the docs and 
checking things.
   
   The existing GCS 
[documentation](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/filesystems/gcs/)
 describes two authentication methods for accessing GCS. The first is via 
changes to core-site.xml and flink-conf.yaml; the second is by setting the 
GOOGLE_APPLICATION_CREDENTIALS environment variable.
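   
   For concreteness, a minimal sketch of the second method (the key-file path 
is a placeholder; for the first method, the Hadoop connector's credential 
properties, e.g. `google.cloud.auth.service.account.json.keyfile`, would go in 
core-site.xml instead):
   
   ```sh
   # Point all Google client libraries at a service-account key file via
   # Application Default Credentials (path is a placeholder).
   export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
   ```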
   
   So far, I've been doing all my work using the second method, and I think 
that's Google's generally preferred method at this point; setting that 
environment variable is the standard approach that works across all Google 
client libraries. Right now, if someone were to set credentials in the Hadoop 
configuration as documented, that would work for the `FileSystem` part of the 
GS FileSystem but not for the `RecoverableWriter` part.
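   
   For context, my understanding of why the environment variable covers both 
code paths: the Hadoop connector can fall back to Application Default 
Credentials, and the native `google-cloud-storage` client backing the 
`RecoverableWriter` resolves them the same way. Roughly (assuming the writer 
builds its client with defaults):
   
   ```java
   import com.google.auth.oauth2.GoogleCredentials;
   import com.google.cloud.storage.Storage;
   import com.google.cloud.storage.StorageOptions;
   
   import java.io.IOException;
   
   public class AdcResolution {
       public static void main(String[] args) throws IOException {
           // Resolves Application Default Credentials; if
           // GOOGLE_APPLICATION_CREDENTIALS is set, this reads the key file
           // it points to.
           GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();
   
           // A client built with defaults picks up those same credentials,
           // so no Hadoop configuration is consulted here.
           Storage storage = StorageOptions.getDefaultInstance().getService();
       }
   }
   ```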
   
   So, I can see two paths forward:
   * Remove the documentation for how to set credentials in the Hadoop config 
and simply require users to set the GOOGLE_APPLICATION_CREDENTIALS environment 
variable, if needed, for authentication
   * Change the `RecoverableWriter` implementation to look for credentials in 
the Hadoop config and, if they exist, use them (roughly sketched below)
   
   The first would be easiest, obviously, but if someone were already using the 
Hadoop config to access GCS, they would need to change that in order to use 
`flink-gs-fs-hadoop`. 
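   
   If we went with the second option, I'm picturing something like the 
following (`GsStorageFactory`/`createStorage` are hypothetical names, and the 
property key is the connector's service-account keyfile setting; treat both as 
illustrative):
   
   ```java
   import com.google.auth.oauth2.GoogleCredentials;
   import com.google.cloud.storage.Storage;
   import com.google.cloud.storage.StorageOptions;
   import org.apache.hadoop.conf.Configuration;
   
   import java.io.FileInputStream;
   import java.io.IOException;
   
   class GsStorageFactory {
   
       // Build the native Storage client, preferring a key file named in the
       // Hadoop config and falling back to Application Default Credentials.
       static Storage createStorage(Configuration hadoopConfig) throws IOException {
           String keyFile =
                   hadoopConfig.get("google.cloud.auth.service.account.json.keyfile");
           final GoogleCredentials credentials;
           if (keyFile != null) {
               try (FileInputStream stream = new FileInputStream(keyFile)) {
                   credentials = GoogleCredentials.fromStream(stream);
               }
           } else {
               credentials = GoogleCredentials.getApplicationDefault();
           }
           return StorageOptions.newBuilder()
                   .setCredentials(credentials)
                   .build()
                   .getService();
       }
   }
   ```
   
   That would keep the two parts of the filesystem consistent for anyone 
already authenticating through the Hadoop config.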
   
   Any thoughts? Thanks.
   
   EDIT: For now, I've gone ahead and just documented the one way to specify 
credentials, via GOOGLE_APPLICATION_CREDENTIALS.

