[jira] [Created] (HADOOP-14145) Ensure GenericOptionParser is used for S3Guard CLI
Sean Mackrory created HADOOP-14145:
--------------------------------------

             Summary: Ensure GenericOptionParser is used for S3Guard CLI
                 Key: HADOOP-14145
                 URL: https://issues.apache.org/jira/browse/HADOOP-14145
             Project: Hadoop Common
          Issue Type: Sub-task
            Reporter: Sean Mackrory
            Assignee: Sean Mackrory

As discussed in HADOOP-14094.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
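For context, GenericOptionsParser is the Hadoop utility that strips standard options such as "-D key=value" out of a command line before tool-specific parsing, so configuration overrides work uniformly across CLIs. Below is a simplified, self-contained stand-in illustrating that splitting behavior; it is not the real class (which also handles -conf, -fs, -files and more, and accepts further option forms), and the class name is made up for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for the "-D key=value" handling that
// GenericOptionsParser performs before tool-specific parsing.
class GenericOptionSketch {
    final Map<String, String> confOverrides = new HashMap<>();
    final List<String> remainingArgs = new ArrayList<>();

    GenericOptionSketch(String[] args) {
        for (int i = 0; i < args.length; i++) {
            if ("-D".equals(args[i]) && i + 1 < args.length) {
                // The key=value pair is the next argument.
                String[] kv = args[++i].split("=", 2);
                confOverrides.put(kv[0], kv.length > 1 ? kv[1] : "");
            } else {
                // Everything else is left for the tool itself to parse.
                remainingArgs.add(args[i]);
            }
        }
    }
}
```

A tool parsed this way sees only its own arguments, while the -D overrides land in the Configuration before the tool runs.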
[jira] [Created] (HADOOP-14144) s3guard: CLI import does not yield an empty diff.
Aaron Fabbri created HADOOP-14144:
-------------------------------------

             Summary: s3guard: CLI import does not yield an empty diff.
                 Key: HADOOP-14144
                 URL: https://issues.apache.org/jira/browse/HADOOP-14144
             Project: Hadoop Common
          Issue Type: Sub-task
            Reporter: Aaron Fabbri
            Priority: Minor

I expected the following steps to yield zero diff from the `hadoop s3guard diff` command:

(1) hadoop s3guard init ... (create fresh table)
(2) hadoop s3guard import (fresh table, existing bucket with data in it)
(3) hadoop s3guard diff ...

Instead I still get a non-zero diff on step #3, and also noticed some entries are printed twice.

{noformat}
dude@computer:~/Code/hadoop$ hadoop s3guard diff -meta dynamodb://dude-dev -region us-west-2 s3a://dude-dev
S3 D s3a://fabbri-dev/user/fabbri/test/parentdirdest
S3 D s3a://fabbri-dev/user/fabbri/test/parentdirdest
{noformat}
[jira] [Created] (HADOOP-14143) S3A Path Style Being Ignore
Vishnu Vardhan created HADOOP-14143:
---------------------------------------

             Summary: S3A Path Style Being Ignore
                 Key: HADOOP-14143
                 URL: https://issues.apache.org/jira/browse/HADOOP-14143
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Vishnu Vardhan

Hi:

In the following example, the path style specification is being ignored.

scala> :paste
sc.setLogLevel("DEBUG")
sc.hadoopConfiguration.set("fs.s3a.impl","org.apache.hadoop.fs.s3a.S3AFileSystem")
sc.hadoopConfiguration.set("fs.s3a.endpoint","webscaledemo.netapp.com:8082")
sc.hadoopConfiguration.set("fs.s3a.access.key","")
sc.hadoopConfiguration.set("fs.s3a.secret.key","")
sc.hadoopConfiguration.set("fs.s3a.path.style.access","false")
val s3Rdd = sc.textFile("s3a://myBkt8")
s3Rdd.count()

Debug log:

application/x-www-form-urlencoded; charset=utf-8
Thu, 02 Mar 2017 22:46:56 GMT
/myBkt8/"
17/03/02 14:46:56 DEBUG request: Sending Request: GET https://webscaledemo.netapp.com:8082 /myBkt8/ Parameters: (max-keys: 1, prefix: user/vardhan/, delimiter: /, ) Headers: (Authorization: AWS 2SNAJYEMQU45YPVYC89D:PIQqLcr6FV61H0+Ay7tw3WygGFo=, User-Agent: aws-sdk-java/1.7.4 Mac_OS_X/10.12.3 Java_HotSpot(TM)_64-Bit_Server_VM/25.60-b23/1.8.0_60, Date: Thu, 02 Mar 2017 22:46:56 GMT, Content-Type: application/x-www-form-urlencoded; charset=utf-8, )
17/03/02 14:46:56 DEBUG PoolingClientConnectionManager: Connection request: [route: {s}->https://webscaledemo.netapp.com:8082][total kept alive: 0; route allocated: 0 of 15; total allocated: 0 of 15]
17/03/02 14:46:56 DEBUG PoolingClientConnectionManager: Connection leased: [id: 2][route: {s}->https://webscaledemo.netapp.com:8082][total kept alive: 0; route allocated: 1 of 15; total allocated: 1 of 15]
17/03/02 14:46:56 DEBUG DefaultClientConnectionOperator: Connecting to webscaledemo.netapp.com:8082
17/03/02 14:46:57 DEBUG RequestAddCookies: CookieSpec selected: default
17/03/02 14:46:57 DEBUG RequestAuthCache: Auth cache not set in the context
17/03/02 14:46:57 DEBUG RequestProxyAuthentication: Proxy auth state: UNCHALLENGED
17/03/02 14:46:57 DEBUG SdkHttpClient: Attempt
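For reference on what the setting controls: with path-style access the bucket name goes into the URL path, while virtual-hosted style puts it into the hostname. A self-contained sketch of the two URL forms (the endpoint and bucket names are hypothetical; the real client constructs these inside the AWS SDK):

```java
import java.net.URI;

// Illustrates the two S3 addressing styles selected by
// fs.s3a.path.style.access; endpoint/bucket values are hypothetical.
class S3AddressingStyle {
    static URI requestUri(String endpoint, String bucket, boolean pathStyle) {
        if (pathStyle) {
            // Path style: bucket appears in the URL path.
            return URI.create("https://" + endpoint + "/" + bucket + "/");
        }
        // Virtual-hosted style: bucket appears in the hostname.
        return URI.create("https://" + bucket + "." + endpoint + "/");
    }
}
```

Note that the debug log above shows the bucket in the path ("GET https://webscaledemo.netapp.com:8082 /myBkt8/"), which is useful when checking which style was actually used.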
[jira] [Created] (HADOOP-14142) S3A - Adding unexpected prefix
Vishnu Vardhan created HADOOP-14142:
---------------------------------------

             Summary: S3A - Adding unexpected prefix
                 Key: HADOOP-14142
                 URL: https://issues.apache.org/jira/browse/HADOOP-14142
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Vishnu Vardhan
            Priority: Critical

Hi:

S3A seems to add an unexpected prefix to my S3 path. Specifically, in the debug log below the following line is unexpected:

> GET /myBkt8/?max-keys=1&prefix=user%2Fvardhan%2F&delimiter=%2F HTTP/1.1

It is not clear where the "prefix" is coming from and why. I executed the following commands:

sc.setLogLevel("DEBUG")
sc.hadoopConfiguration.set("fs.s3a.impl","org.apache.hadoop.fs.s3a.S3AFileSystem")
sc.hadoopConfiguration.set("fs.s3a.endpoint","webscaledemo.netapp.com:8082")
sc.hadoopConfiguration.set("fs.s3a.access.key","")
sc.hadoopConfiguration.set("fs.s3a.secret.key","")
sc.hadoopConfiguration.set("fs.s3a.path.style.access","false")
val s3Rdd = sc.textFile("s3a://myBkt98")
s3Rdd.count()

The debug log is below:

application/x-www-form-urlencoded; charset=utf-8
Thu, 02 Mar 2017 22:40:25 GMT
/myBkt8/"
17/03/02 14:40:25 DEBUG request: Sending Request: GET https://webscaledemo.netapp.com:8082 /myBkt8/ Parameters: (max-keys: 1, prefix: user/vardhan/, delimiter: /, ) Headers: (Authorization: AWS 2SNAJYEMQU45YPVYC89D:M8GbLXUuAJ2w5pGx4WJ6hJF3324=, User-Agent: aws-sdk-java/1.7.4 Mac_OS_X/10.12.3 Java_HotSpot(TM)_64-Bit_Server_VM/25.60-b23/1.8.0_60, Date: Thu, 02 Mar 2017 22:40:25 GMT, Content-Type: application/x-www-form-urlencoded; charset=utf-8, )
17/03/02 14:40:25 DEBUG PoolingClientConnectionManager: Connection request: [route: {s}->https://webscaledemo.netapp.com:8082][total kept alive: 0; route allocated: 0 of 15; total allocated: 0 of 15]
17/03/02 14:40:25 DEBUG PoolingClientConnectionManager: Connection leased: [id: 10][route: {s}->https://webscaledemo.netapp.com:8082][total kept alive: 0; route allocated: 1 of 15; total allocated: 1 of 15]
17/03/02 14:40:25 DEBUG DefaultClientConnectionOperator: Connecting to webscaledemo.netapp.com:8082
17/03/02 14:40:25 DEBUG PoolingClientConnectionManager: Closing connections idle longer than 60 SECONDS
17/03/02 14:40:25 DEBUG PoolingClientConnectionManager: Closing connections idle longer than 60 SECONDS
17/03/02 14:40:26 DEBUG RequestAddCookies: CookieSpec selected: default
17/03/02 14:40:26 DEBUG RequestAuthCache: Auth cache not set in the context
17/03/02 14:40:26 DEBUG RequestProxyAuthentication: Proxy auth state: UNCHALLENGED
17/03/02 14:40:26 DEBUG SdkHttpClient: Attempt 1 to execute request
17/03/02 14:40:26 DEBUG DefaultClientConnection: Sending request: GET /myBkt8/?max-keys=1&prefix=user%2Fvardhan%2F&delimiter=%2F HTTP/1.1
17/03/02 14:40:26 DEBUG wire: >> "GET /myBkt8/?max-keys=1&prefix=user%2Fvardhan%2F&delimiter=%2F HTTP/1.1[\r][\n]"
17/03/02 14:40:26 DEBUG wire: >> "Host: webscaledemo.netapp.com:8082[\r][\n]"
17/03/02 14:40:26 DEBUG wire: >> "Authorization: AWS 2SNAJYEMQU45YPVYC89D:M8GbLXUuAJ2w5pGx4WJ6hJF3324=[\r][\n]"
17/03/02 14:40:26 DEBUG wire: >> "User-Agent: aws-sdk-java/1.7.4 Mac_OS_X/10.12.3 Java_HotSpot(TM)_64-Bit_Server_VM/25.60-b23/1.8.0_60[\r][\n]"
17/03/02 14:40:26 DEBUG wire: >> "Date: Thu, 02 Mar 2017 22:40:25 GMT[\r][\n]"
17/03/02 14:40:26 DEBUG wire: >> "Content-Type: application/x-www-form-urlencoded; charset=utf-8[\r][\n]"
17/03/02 14:40:26 DEBUG wire: >> "Connection: Keep-Alive[\r][\n]"
17/03/02 14:40:26 DEBUG wire: >> "[\r][\n]"
17/03/02 14:40:26 DEBUG headers: >> GET /myBkt8/?max-keys=1&prefix=user%2Fvardhan%2F&delimiter=%2F HTTP/1.1
17/03/02 14:40:26 DEBUG headers: >> Host: webscaledemo.netapp.com:8082
17/03/02 14:40:26 DEBUG headers: >> Authorization: AWS 2SNAJYEMQU45YPVYC89D:M8GbLXUuAJ2w5pGx4WJ6hJF3324=
17/03/02 14:40:26 DEBUG headers: >> User-Agent: aws-sdk-java/1.7.4 Mac_OS_X/10.12.3 Java_HotSpot(TM)_64-Bit_Server_VM/25.60-b23/1.8.0_60
17/03/02 14:40:26 DEBUG headers: >> Date: Thu, 02 Mar 2017 22:40:25 GMT
17/03/02 14:40:26 DEBUG headers: >> Content-Type: application/x-www-form-urlencoded; charset=utf-8
17/03/02 14:40:26 DEBUG headers: >> Connection: Keep-Alive
17/03/02 14:40:26 DEBUG wire: << "HTTP/1.1 200 OK[\r][\n]"
17/03/02 14:40:26 DEBUG wire: << "Date: Thu, 02 Mar 2017 22:40:26 GMT[\r][\n]"
17/03/02 14:40:26 DEBUG wire: << "Connection: KEEP-ALIVE[\r][\n]"
17/03/02 14:40:26 DEBUG wire: << "Server: StorageGRID/10.3.0.1[\r][\n]"
17/03/02 14:40:26 DEBUG wire: << "x-amz-request-id: 563477649[\r][\n]"
17/03/02 14:40:26 DEBUG wire: << "Content-Length: 266[\r][\n]"
17/03/02 14:40:26 DEBUG wire: << "Content-Type: application/xml[\r][\n]"
17/03/02 14:40:26 DEBUG wire: << "[\r][\n]"
17/03/02 14:40:26 DEBUG DefaultClientConnection: Receiving response: HTTP/1.1 200 OK
17/03/02 14:40:26 DEBUG headers: << HTTP/1.1 200 OK
17/03/02 14:40:26 DEBUG headers: << Date: Thu, 02 Mar 2017 22:40:26 GMT
17/03/02 14:40:26 DEBUG headers: << Connection: KEEP-ALIVE
17/03/02 14:40:26 DEBUG headers: <<
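A plausible source of the user/vardhan/ prefix, not confirmed in this report: Hadoop filesystems qualify relative paths against a working directory, which for S3A defaults to /user/&lt;username&gt;, so a bucket URI with no path component picks up that prefix. A self-contained sketch of that qualification rule (class and method names are illustrative, not Hadoop's):

```java
// Sketch of the working-directory qualification rule that can introduce a
// "user/<name>/" prefix: relative (or empty) paths resolve against a
// default working directory of /user/<username>.
class PathQualify {
    static String qualify(String path, String userName) {
        if (path.startsWith("/")) {
            return path; // absolute paths pass through unchanged
        }
        String workingDir = "/user/" + userName;
        return path.isEmpty() ? workingDir + "/" : workingDir + "/" + path;
    }
}
```

Under this rule, s3a://myBkt8 with an empty path would list under /user/vardhan/, matching the "prefix: user/vardhan/" seen in the request parameters.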
[jira] [Created] (HADOOP-14140) S3A Not Working 3rd party S3 Interface
Vishnu Vardhan created HADOOP-14140:
---------------------------------------

             Summary: S3A Not Working 3rd party S3 Interface
                 Key: HADOOP-14140
                 URL: https://issues.apache.org/jira/browse/HADOOP-14140
             Project: Hadoop Common
          Issue Type: Bug
    Affects Versions: 2.7.3
            Reporter: Vishnu Vardhan
            Priority: Blocker

Hi:

Connecting S3A to a 3rd-party object store does not work. This is a publicly hosted grid and I can provide credentials if required. Please see the debug log below.

There are two problems:
1. The path style setting is ignored, and S3A always uses host-style addressing.
2. Even when host style is specified, it is unable to proceed; see the debug log.

17/03/02 13:35:03 DEBUG HadoopRDD: Creating new JobConf and caching it for later re-use
17/03/02 13:35:03 DEBUG InternalConfig: Configuration override awssdk_config_override.json not found.
17/03/02 13:35:03 DEBUG AWSCredentialsProviderChain: Loading credentials from BasicAWSCredentialsProvider
17/03/02 13:35:03 DEBUG S3Signer: Calculated string to sign:
"HEAD

application/x-www-form-urlencoded; charset=utf-8
Thu, 02 Mar 2017 21:35:03 GMT
/solidfire/"
17/03/02 13:35:03 DEBUG request: Sending Request: HEAD https://solidfire.vmasgwwebg01-tst.webscaledemo.netapp.com:8082 / Headers: (Authorization: AWS 2SNAJYEMQU45YPVYC89D:WO0R+mPeYoQ2V29L4dMUJSSSVsQ=, User-Agent: aws-sdk-java/1.7.4 Mac_OS_X/10.12.3 Java_HotSpot(TM)_64-Bit_Server_VM/25.60-b23/1.8.0_60, Date: Thu, 02 Mar 2017 21:35:03 GMT, Content-Type: application/x-www-form-urlencoded; charset=utf-8, )
17/03/02 13:35:03 DEBUG PoolingClientConnectionManager: Connection request: [route: {s}->https://solidfire.vmasgwwebg01-tst.webscaledemo.netapp.com:8082][total kept alive: 0; route allocated: 0 of 15; total allocated: 0 of 15]
17/03/02 13:35:03 DEBUG PoolingClientConnectionManager: Connection leased: [id: 0][route: {s}->https://solidfire.vmasgwwebg01-tst.webscaledemo.netapp.com:8082][total kept alive: 0; route allocated: 1 of 15; total allocated: 1 of 15]
17/03/02 13:35:03 DEBUG DefaultClientConnectionOperator: Connecting to solidfire.vmasgwwebg01-tst.webscaledemo.netapp.com:8082
17/03/02 13:35:03 DEBUG RequestAddCookies: CookieSpec selected: default
17/03/02 13:35:03 DEBUG RequestAuthCache: Auth cache not set in the context
17/03/02 13:35:03 DEBUG RequestProxyAuthentication: Proxy auth state: UNCHALLENGED
17/03/02 13:35:03 DEBUG SdkHttpClient: Attempt 1 to execute request
17/03/02 13:35:03 DEBUG DefaultClientConnection: Sending request: HEAD / HTTP/1.1
17/03/02 13:35:03 DEBUG wire: >> "HEAD / HTTP/1.1[\r][\n]"
17/03/02 13:35:03 DEBUG wire: >> "Host: solidfire.vmasgwwebg01-tst.webscaledemo.netapp.com:8082[\r][\n]"
17/03/02 13:35:03 DEBUG wire: >> "Authorization: AWS 2SNAJYEMQU45YPVYC89D:WO0R+mPeYoQ2V29L4dMUJSSSVsQ=[\r][\n]"
17/03/02 13:35:03 DEBUG wire: >> "User-Agent: aws-sdk-java/1.7.4 Mac_OS_X/10.12.3 Java_HotSpot(TM)_64-Bit_Server_VM/25.60-b23/1.8.0_60[\r][\n]"
17/03/02 13:35:03 DEBUG wire: >> "Date: Thu, 02 Mar 2017 21:35:03 GMT[\r][\n]"
17/03/02 13:35:03 DEBUG wire: >> "Content-Type: application/x-www-form-urlencoded; charset=utf-8[\r][\n]"
17/03/02 13:35:03 DEBUG wire: >> "Connection: Keep-Alive[\r][\n]"
17/03/02 13:35:03 DEBUG wire: >> "[\r][\n]"
17/03/02 13:35:03 DEBUG headers: >> HEAD / HTTP/1.1
17/03/02 13:35:03 DEBUG headers: >> Host: solidfire.vmasgwwebg01-tst.webscaledemo.netapp.com:8082
17/03/02 13:35:03 DEBUG headers: >> Authorization: AWS 2SNAJYEMQU45YPVYC89D:WO0R+mPeYoQ2V29L4dMUJSSSVsQ=
17/03/02 13:35:03 DEBUG headers: >> User-Agent: aws-sdk-java/1.7.4 Mac_OS_X/10.12.3 Java_HotSpot(TM)_64-Bit_Server_VM/25.60-b23/1.8.0_60
17/03/02 13:35:03 DEBUG headers: >> Date: Thu, 02 Mar 2017 21:35:03 GMT
17/03/02 13:35:03 DEBUG headers: >> Content-Type: application/x-www-form-urlencoded; charset=utf-8
17/03/02 13:35:03 DEBUG headers: >> Connection: Keep-Alive
17/03/02 13:35:03 DEBUG wire: << "HTTP/1.1 200 OK[\r][\n]"
17/03/02 13:35:03 DEBUG wire: << "Date: Thu, 02 Mar 2017 21:35:03 GMT[\r][\n]"
17/03/02 13:35:03 DEBUG wire: << "Connection: KEEP-ALIVE[\r][\n]"
17/03/02 13:35:03 DEBUG wire: << "Server: StorageGRID/10.3.0.1[\r][\n]"
17/03/02 13:35:03 DEBUG wire: << "x-amz-request-id: 640939184[\r][\n]"
17/03/02 13:35:03 DEBUG wire: << "Content-Length: 0[\r][\n]"
17/03/02 13:35:03 DEBUG wire: << "[\r][\n]"
17/03/02 13:35:03 DEBUG DefaultClientConnection: Receiving response: HTTP/1.1 200 OK
17/03/02 13:35:03 DEBUG headers: << HTTP/1.1 200 OK
17/03/02 13:35:03 DEBUG headers: << Date: Thu, 02 Mar 2017 21:35:03 GMT
17/03/02 13:35:03 DEBUG headers: << Connection: KEEP-ALIVE
17/03/02 13:35:03 DEBUG headers: << Server: StorageGRID/10.3.0.1
17/03/02 13:35:03 DEBUG headers: << x-amz-request-id: 640939184
17/03/02 13:35:03 DEBUG headers: << Content-Length: 0
17/03/02 13:35:03 DEBUG SdkHttpClient: Connection can be kept alive indefinitely
17/03/02 13:35:04 DEBUG
[jira] [Created] (HADOOP-14139) Tracing canonized server name from HTTP request during SPNEGO
Xiaoyu Yao created HADOOP-14139:
-----------------------------------

             Summary: Tracing canonized server name from HTTP request during SPNEGO
                 Key: HADOOP-14139
                 URL: https://issues.apache.org/jira/browse/HADOOP-14139
             Project: Hadoop Common
          Issue Type: Bug
          Components: security
            Reporter: Xiaoyu Yao
            Assignee: Hanisha Koneru
            Priority: Minor

The serverName can be helpful to troubleshoot SPNEGO-related authentication issues.

{code}
final String serverName = InetAddress.getByName(request.getServerName())
    .getCanonicalHostName();
{code}
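The snippet above can be wrapped in a small helper using the standard java.net API; this is an illustrative sketch (the class and method names here are made up, not taken from the patch):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative wrapper around the canonical-hostname lookup shown above,
// for logging the resolved server name during SPNEGO troubleshooting.
class ServerNameResolver {
    static String canonicalServerName(String requestServerName) {
        try {
            return InetAddress.getByName(requestServerName).getCanonicalHostName();
        } catch (UnknownHostException e) {
            // Fall back to the raw name so the trace still logs something.
            return requestServerName;
        }
    }
}
```

Note that getCanonicalHostName performs a reverse lookup, so logging it on every request has a DNS cost; the original snippet would also need to handle UnknownHostException as above.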
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/

[Mar 1, 2017 8:59:02 PM] (arp) HDFS-11479. Socket re-use address option should be used in
[Mar 1, 2017 10:53:47 PM] (rkanter) YARN-5280. Allow YARN containers to run with Java Security Manager
[Mar 2, 2017 4:10:24 AM] (yqlin) HDFS-11478. Update EC commands in HDFSCommands.md. Contributed by Yiqun
[Mar 2, 2017 4:23:52 AM] (mingma) HDFS-11412. Maintenance minimum replication config value allowable range

-1 overall

The following subsystems voted -1:
    compile unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    Failed junit tests:
        hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
        hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.yarn.server.timeline.TestRollingLevelDB
        hadoop.yarn.server.timeline.TestTimelineDataManager
        hadoop.yarn.server.timeline.TestLeveldbTimelineStore
        hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
        hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
        hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
        hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
        hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
        hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
        hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
        hadoop.yarn.applications.distributedshell.TestDistributedShell
        hadoop.mapred.TestShuffleHandler
        hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService

    Timed out junit tests:
        org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache

   compile:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-compile-root.txt [140K]

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-compile-root.txt [140K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-compile-root.txt [140K]

   unit:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [264K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [16K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [52K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [72K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [324K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt [28K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt [4.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt [16K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/245/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [44K]

Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org
[jira] [Created] (HADOOP-14138) Remove S3A ref from META-INF service discovery, rely on existing core-default entry
Steve Loughran created HADOOP-14138:
---------------------------------------

             Summary: Remove S3A ref from META-INF service discovery, rely on existing core-default entry
                 Key: HADOOP-14138
                 URL: https://issues.apache.org/jira/browse/HADOOP-14138
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 2.9.0
            Reporter: Steve Loughran
            Assignee: Steve Loughran
            Priority: Critical

As discussed in HADOOP-14132, the shaded AWS library is killing the performance of starting all Hadoop operations, due to classloading on FS service discovery. This is despite the fact that there is an entry for fs.s3a.impl in core-default.xml; *we don't need service discovery here*.

Proposed:
# cut the entry from {{/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
# when HADOOP-14132 is in, move to that, including declaring an XML file exclusively for s3a entries

I want this one in first as it's a major performance regression, and one we could actually backport to 2.7.x, just to improve load time slightly there too.
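For reference, the core-default.xml entry the proposal relies on looks like this (the value is as shipped in hadoop-common; the description text may vary between releases):

```xml
<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  <description>The implementation class of the S3A Filesystem</description>
</property>
```

With this mapping present, FileSystem.get() can resolve the s3a:// scheme directly from configuration without loading the service-discovery entry that triggers the AWS SDK classloading.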