cgivre commented on code in PR #2714:
URL: https://github.com/apache/drill/pull/2714#discussion_r1045149074
##########
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemSchemaFactory.java:
##########
@@ -82,13 +83,21 @@ public class FileSystemSchema extends AbstractSchema {
     public FileSystemSchema(String name, SchemaConfig schemaConfig) throws IOException {
       super(Collections.emptyList(), name);
       final DrillFileSystem fs = ImpersonationUtil.createFileSystem(schemaConfig.getUserName(), plugin.getFsConf());
+      // Set OAuth Information
+      OAuthConfig oAuthConfig = plugin.getConfig().oAuthConfig();
+      if (oAuthConfig != null) {
+        OAuthEnabledFileSystem underlyingFileSystem = (OAuthEnabledFileSystem) fs.getUnderlyingFs();
Review Comment:
@jnturton Good question. I think that may be possible, but it would take a lot of
refactoring. I don't fully understand the file system creation process, but from
following the flow, that's the impression I get.
On an unrelated note, Hadoop seems to ship with other classes that extend
`FileSystem`, such as FTP, SFTP, and a few others. It may be possible for Drill
to query those by simply adding a few import statements; a rough sketch of what
that could look like is below.
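
To illustrate the idea (a minimal sketch, not part of this PR): Hadoop bundles `org.apache.hadoop.fs.ftp.FTPFileSystem` (and, in recent releases, an SFTP counterpart), and it can be reached through the standard `FileSystem` API once the relevant `fs.ftp.*` properties are set on a Hadoop `Configuration`. The host and credentials below are hypothetical, and wiring this into Drill would presumably go through the storage plugin's Hadoop configuration rather than a standalone `main` like this.

```java
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FtpFileSystemSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Register Hadoop's bundled FTP implementation for the ftp:// scheme.
    conf.set("fs.ftp.impl", "org.apache.hadoop.fs.ftp.FTPFileSystem");
    // Hypothetical host and credentials, purely for illustration.
    conf.set("fs.ftp.host", "ftp.example.com");
    conf.set("fs.ftp.user.ftp.example.com", "anonymous");
    conf.set("fs.ftp.password.ftp.example.com", "");

    // Ask the FileSystem factory for the ftp:// scheme and list a directory.
    try (FileSystem ftp = FileSystem.get(URI.create("ftp://ftp.example.com/"), conf)) {
      for (FileStatus status : ftp.listStatus(new Path("/pub"))) {
        System.out.println(status.getPath());
      }
    }
  }
}
```

If that pans out, the remaining question would be whether Drill's DFS plugin needs anything beyond the scheme mapping and credentials in its Hadoop configuration.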