[jira] [Resolved] (HADOOP-18132) S3 exponential backoff
[ https://issues.apache.org/jira/browse/HADOOP-18132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-18132. - Resolution: Not A Problem S3A already performs retries on S3 errors. For details, please check out https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Retry_and_Recovery. > S3 exponential backoff > -- > > Key: HADOOP-18132 > URL: https://issues.apache.org/jira/browse/HADOOP-18132 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Reporter: Holden Karau >Priority: Major > > S3 API has limits which we can exceed when using a large number of > writers/readers/or listers. We should add randomized-exponential back-off to > the s3 client when it encounters: > > com.amazonaws.services.s3.model.AmazonS3Exception: Please reduce your request > rate. (Service: Amazon S3; Status Code: 503; Error Code: SlowDown; > > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
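[Editorial note] As background for the randomized exponential back-off requested above, here is a minimal, self-contained Java sketch of the technique itself. It is not S3A code; the class, method, and parameter names are illustrative only, and S3A's actual behaviour is governed by the retry policies documented at the link in the resolution.

{code:java}
import java.util.Random;
import java.util.concurrent.Callable;

/** Illustrative randomized ("full jitter") exponential back-off; not S3A code. */
public class BackoffSketch {
  private static final Random RANDOM = new Random();

  /** Retries the call, sleeping a random delay of up to baseDelayMs * 2^attempt ms. */
  static <T> T retryWithBackoff(Callable<T> call, int maxAttempts, long baseDelayMs)
      throws Exception {
    Exception last = null;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      try {
        return call.call();
      } catch (Exception e) {                             // real code would retry only
        last = e;                                         // throttling errors like 503 SlowDown
        long cap = baseDelayMs << attempt;                // exponential growth of the window
        Thread.sleep((long) (RANDOM.nextDouble() * cap)); // random point within the window
      }
    }
    throw last;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(retryWithBackoff(() -> "ok", 5, 100L));
  }
}
{code}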
[jira] [Resolved] (HADOOP-14961) Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed
[ https://issues.apache.org/jira/browse/HADOOP-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-14961. - Resolution: Duplicate Fixed by HADOOP-14816. > Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed > -- > > Key: HADOOP-14961 > URL: https://issues.apache.org/jira/browse/HADOOP-14961 > Project: Hadoop Common > Issue Type: Bug > Components: build, test >Affects Versions: 3.1.0 >Reporter: John Zhuge >Priority: Major > > https://builds.apache.org/job/PreCommit-HADOOP-Build/13546/console > {noformat} > Downloading Oracle Java 8... > [0m[91m--2017-10-18 18:28:11-- > http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz > > [0m[91mResolving download.oracle.com (download.oracle.com)... > [0m[91m23.59.190.131, 23.59.190.130 > Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... > [0m[91mconnected. > HTTP request sent, awaiting response... [0m[91m302 Moved Temporarily > Location: > https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz > [following] > --2017-10-18 18:28:11-- > https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz > > [0m[91mResolving edelivery.oracle.com (edelivery.oracle.com)... > [0m[91m23.39.16.136, 2600:1409:a:39c::2d3e, 2600:1409:a:39e::2d3e > Connecting to edelivery.oracle.com > (edelivery.oracle.com)|23.39.16.136|:443... [0m[91mconnected. > [0m[91mHTTP request sent, awaiting response... [0m[91m302 Moved > Temporarily > Location: > http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c > [following] > --2017-10-18 18:28:11-- > http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c > > Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... > [0m[91mconnected. > HTTP request sent, awaiting response... [0m[91m404 Not Found > [0m[91m2017-10-18 18:28:12 ERROR 404: Not Found. > [0m[91mdownload failed > Oracle JDK 8 is NOT installed. > {noformat} > Looks like Oracle JDK 8u144 is no longer available for download using that > link. 8u151 and 8u152 are available. > Many of last 10 https://builds.apache.org/job/PreCommit-HADOOP-Build/ jobs > failed the same way, all on build host H1 and H6. > [~aw] has a patch available in HADOOP-14816 "Update Dockerfile to use Xenial" > for a long term fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-14961) Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed
[ https://issues.apache.org/jira/browse/HADOOP-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge reopened HADOOP-14961: - > Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed > -- > > Key: HADOOP-14961 > URL: https://issues.apache.org/jira/browse/HADOOP-14961 > Project: Hadoop Common > Issue Type: Bug > Components: build, test >Affects Versions: 3.1.0 >Reporter: John Zhuge >Priority: Major > > https://builds.apache.org/job/PreCommit-HADOOP-Build/13546/console > {noformat} > Downloading Oracle Java 8... > [0m[91m--2017-10-18 18:28:11-- > http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz > > [0m[91mResolving download.oracle.com (download.oracle.com)... > [0m[91m23.59.190.131, 23.59.190.130 > Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... > [0m[91mconnected. > HTTP request sent, awaiting response... [0m[91m302 Moved Temporarily > Location: > https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz > [following] > --2017-10-18 18:28:11-- > https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz > > [0m[91mResolving edelivery.oracle.com (edelivery.oracle.com)... > [0m[91m23.39.16.136, 2600:1409:a:39c::2d3e, 2600:1409:a:39e::2d3e > Connecting to edelivery.oracle.com > (edelivery.oracle.com)|23.39.16.136|:443... [0m[91mconnected. > [0m[91mHTTP request sent, awaiting response... [0m[91m302 Moved > Temporarily > Location: > http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c > [following] > --2017-10-18 18:28:11-- > http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c > > Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... > [0m[91mconnected. > HTTP request sent, awaiting response... [0m[91m404 Not Found > [0m[91m2017-10-18 18:28:12 ERROR 404: Not Found. > [0m[91mdownload failed > Oracle JDK 8 is NOT installed. > {noformat} > Looks like Oracle JDK 8u144 is no longer available for download using that > link. 8u151 and 8u152 are available. > Many of last 10 https://builds.apache.org/job/PreCommit-HADOOP-Build/ jobs > failed the same way, all on build host H1 and H6. > [~aw] has a patch available in HADOOP-14816 "Update Dockerfile to use Xenial" > for a long term fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-15012) Add readahead, dropbehind, and unbuffer to StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-15012. - Resolution: Fixed Fix Version/s: 3.1.0 Committed to trunk together with HADOOP-14872. Code review was done there. {noformat} 6c32ddad302 HADOOP-14872. CryptoInputStream should implement unbuffer. Contributed by John Zhuge. bf6a660232b HADOOP-15012. Add readahead, dropbehind, and unbuffer to StreamCapabilities. Contributed by John Zhuge. {noformat} > Add readahead, dropbehind, and unbuffer to StreamCapabilities > - > > Key: HADOOP-15012 > URL: https://issues.apache.org/jira/browse/HADOOP-15012 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.9.0 >Reporter: John Zhuge >Assignee: John Zhuge > Fix For: 3.1.0 > > > A split from HADOOP-14872 to track changes that enhance StreamCapabilities > class with READAHEAD, DROPBEHIND, and UNBUFFER capability. > Discussions and code reviews are done in HADOOP-14872. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15012) Enhance StreamCapabilities with READAHEAD, DROPBEHIND, and UNBUFFER
John Zhuge created HADOOP-15012: --- Summary: Enhance StreamCapabilities with READAHEAD, DROPBEHIND, and UNBUFFER Key: HADOOP-15012 URL: https://issues.apache.org/jira/browse/HADOOP-15012 Project: Hadoop Common Issue Type: Improvement Components: fs Affects Versions: 2.9.0 Reporter: John Zhuge Priority: Major A split from HADOOP-14872 to track changes that enhance StreamCapabilities class with READAHEAD, DROPBEHIND, and UNBUFFER capability. Discussions and code reviews are done in HADOOP-14872. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
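[Editorial note] For readers wondering how these capabilities are meant to be consumed, the sketch below probes a stream before calling unbuffer(). It assumes an FSDataInputStream that implements StreamCapabilities (as in recent Hadoop releases); the capability string "in:unbuffer" is my assumption about the constant added here and should be treated as illustrative.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Sketch: probe stream capabilities before relying on them. Takes a path argument. */
public class CapabilityProbe {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataInputStream in = fs.open(new Path(args[0]))) {
      // "in:unbuffer" is an assumed capability name; adjust to the real constant.
      if (in.hasCapability("in:unbuffer")) {
        in.unbuffer();                       // free buffers and cached connections
      } else {
        System.out.println("unbuffer not supported by "
            + in.getWrappedStream().getClass().getName());
      }
    }
  }
}
{code}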
[jira] [Resolved] (HADOOP-14974) org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation fails in trunk
[ https://issues.apache.org/jira/browse/HADOOP-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-14974. - Resolution: Fixed Fix Version/s: 3.1.0 Target Version/s: 3.1.0 > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation > fails in trunk > --- > > Key: HADOOP-14974 > URL: https://issues.apache.org/jira/browse/HADOOP-14974 > Project: Hadoop Common > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: John Zhuge >Priority: Blocker > Fix For: 3.1.0 > > > {code} > org.apache.hadoop.metrics2.MetricsException: Metrics source > QueueMetrics,q0=root already exists! > at > org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152) > at > org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:239) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueMetrics.forQueue(CSQueueMetrics.java:141) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.(AbstractCSQueue.java:131) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.(ParentQueue.java:90) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:267) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.initializeQueues(CapacitySchedulerQueueManager.java:158) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:639) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initScheduler(CapacityScheduler.java:331) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.serviceInit(CapacityScheduler.java:391) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) > at > org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:756) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1152) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:317) > at > org.apache.hadoop.yarn.server.resourcemanager.MockRM.serviceInit(MockRM.java:1313) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) > at > org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:161) > at > org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:140) > at > org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:136) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation.testExcessReservationThanNodeManagerCapacity(TestContainerAllocation.java:90) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized
[ https://issues.apache.org/jira/browse/HADOOP-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge reopened HADOOP-14954: - Reverted because it broke a bunch of YARN tests, e.g., TestContainerAllocation. > MetricsSystemImpl#init should increment refCount when already initialized > - > > Key: HADOOP-14954 > URL: https://issues.apache.org/jira/browse/HADOOP-14954 > Project: Hadoop Common > Issue Type: Bug > Components: metrics >Affects Versions: 2.7.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Attachments: HADOOP-14954.001.patch, HADOOP-14954.002.patch, > HADOOP-14954.002a.patch > > > {{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in > {{shutdown}}. > {code:java} > public synchronized MetricsSystem init(String prefix) { > if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) { > LOG.warn(this.prefix +" metrics system already initialized!"); > return this; > } > this.prefix = checkNotNull(prefix, "prefix"); > ++refCount; > {code} > Move {{++refCount}} to the beginning of this method. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
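[Editorial note] To make the requested symmetry concrete, the hypothetical sketch below counts every init() call and only tears down on the last shutdown(). It is not the real MetricsSystemImpl, and, as the revert above shows, the real fix has additional constraints.

{code:java}
/** Hypothetical illustration of the init/shutdown symmetry discussed above. */
public class RefCountedSystem {
  private int refCount = 0;
  private boolean monitoring = false;

  public synchronized RefCountedSystem init(String prefix) {
    ++refCount;                 // count every init, even if already started
    if (monitoring) {
      System.out.println(prefix + " system already initialized!");
      return this;
    }
    monitoring = true;          // real start-up work would happen here
    return this;
  }

  public synchronized void shutdown() {
    if (--refCount > 0) {
      return;                   // other callers still hold a reference
    }
    monitoring = false;         // last reference gone: really stop
  }
}
{code}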
[jira] [Created] (HADOOP-14963) Add HTrace to ADLS connector
John Zhuge created HADOOP-14963: --- Summary: Add HTrace to ADLS connector Key: HADOOP-14963 URL: https://issues.apache.org/jira/browse/HADOOP-14963 Project: Hadoop Common Issue Type: Sub-task Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Add Apache HTrace support to Hadoop ADLS connector in order to support distributed tracing. Make sure the connector and the ADLS SDK support B3 Propagation so that tracer/span IDs are sent via HTTP request to ADLS backend. To build an entire distributed tracing solution for ADLS, we will also need these components: * ADLS backend should support one of the Tracers. See http://opentracing.io/documentation/pages/supported-tracers.html. * Zipkin Collector: Event Hub * Zipkin Storage: MySQL -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14961) Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed
John Zhuge created HADOOP-14961: --- Summary: Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed Key: HADOOP-14961 URL: https://issues.apache.org/jira/browse/HADOOP-14961 Project: Hadoop Common Issue Type: Bug Reporter: John Zhuge https://builds.apache.org/job/PreCommit-HADOOP-Build/13546/console {noformat} Downloading Oracle Java 8... [0m[91m--2017-10-18 18:28:11-- http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz [0m[91mResolving download.oracle.com (download.oracle.com)... [0m[91m23.59.190.131, 23.59.190.130 Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... [0m[91mconnected. HTTP request sent, awaiting response... [0m[91m302 Moved Temporarily Location: https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz [following] --2017-10-18 18:28:11-- https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz [0m[91mResolving edelivery.oracle.com (edelivery.oracle.com)... [0m[91m23.39.16.136, 2600:1409:a:39c::2d3e, 2600:1409:a:39e::2d3e Connecting to edelivery.oracle.com (edelivery.oracle.com)|23.39.16.136|:443... [0m[91mconnected. [0m[91mHTTP request sent, awaiting response... [0m[91m302 Moved Temporarily Location: http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c [following] --2017-10-18 18:28:11-- http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... [0m[91mconnected. HTTP request sent, awaiting response... [0m[91m404 Not Found [0m[91m2017-10-18 18:28:12 ERROR 404: Not Found. [0m[91mdownload failed Oracle JDK 8 is NOT installed. {noformat} Looks like Oracle JDK 8u144 is no longer available for download using that link. 8u151 and 8u152 are available. Many of last 10 https://builds.apache.org/job/PreCommit-HADOOP-Build/ jobs failed the same way, all on build host H1 and H6. [~aw] has a patch available in HADOOP-14816 "Update Dockerfile to use Xenial" for a long term fix. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized
John Zhuge created HADOOP-14954: --- Summary: MetricsSystemImpl#init should increment refCount when already initialized Key: HADOOP-14954 URL: https://issues.apache.org/jira/browse/HADOOP-14954 Project: Hadoop Common Issue Type: Bug Components: metrics Affects Versions: 2.7.0 Reporter: John Zhuge Priority: Minor {{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in {{shutdown}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14925) hadoop-aliyun has missing dependencies
John Zhuge created HADOOP-14925: --- Summary: hadoop-aliyun has missing dependencies Key: HADOOP-14925 URL: https://issues.apache.org/jira/browse/HADOOP-14925 Project: Hadoop Common Issue Type: Bug Components: fs/oss Affects Versions: 3.0.0-beta1 Reporter: John Zhuge Priority: Minor Saw these errors uncovered by dist-tools-hooks-maker during build: {noformat} ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14924) hadoop-azure-datalake has missing dependencies
John Zhuge created HADOOP-14924: --- Summary: hadoop-azure-datalake has missing dependencies Key: HADOOP-14924 URL: https://issues.apache.org/jira/browse/HADOOP-14924 Project: Hadoop Common Issue Type: Bug Components: fs/adl Affects Versions: 3.0.0-beta1 Reporter: John Zhuge Priority: Minor Saw these errors uncovered by dist-tools-hooks-maker during build: {noformat} ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14923) hadoop-azure has missing dependencies
John Zhuge created HADOOP-14923: --- Summary: hadoop-azure has missing dependencies Key: HADOOP-14923 URL: https://issues.apache.org/jira/browse/HADOOP-14923 Project: Hadoop Common Issue Type: Bug Components: fs/azure Affects Versions: 3.0.0-beta1 Reporter: John Zhuge Priority: Minor Saw these errors uncovered by dist-tools-hooks-maker during build: {noformat} ERROR: hadoop-azure has missing dependencies: jetty-util-ajax-9.3.19.v20170502.jar {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14917) AdlFileSystem should support getStorageStatistics
John Zhuge created HADOOP-14917: --- Summary: AdlFileSystem should support getStorageStatistics Key: HADOOP-14917 URL: https://issues.apache.org/jira/browse/HADOOP-14917 Project: Hadoop Common Issue Type: Sub-task Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge AdlFileSystem should support the storage statistics introduced by HADOOP-13065, so any execution framework gathering the statistics can include them, and tests can log them. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14872) CryptoInputStream should implement unbuffer
John Zhuge created HADOOP-14872: --- Summary: CryptoInputStream should implement unbuffer Key: HADOOP-14872 URL: https://issues.apache.org/jira/browse/HADOOP-14872 Project: Hadoop Common Issue Type: Improvement Components: fs Affects Versions: 2.6.4 Reporter: John Zhuge Discovered in IMPALA-5909. CryptoInputStream extending FSDataInputStream should implement unbuffer method * Release buffer and cache when instructed * Avoid calling super unbuffer method that throws UOE. Applications may not handle the UOE very well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
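[Editorial note] A rough sketch of the shape such a change could take, not the actual HADOOP-14872 patch: a wrapping stream drops its own cached state and forwards unbuffer() only when the underlying stream supports it, instead of calling a super method that throws UnsupportedOperationException.

{code:java}
import java.io.FilterInputStream;
import java.io.InputStream;
import org.apache.hadoop.fs.CanUnbuffer;

/** Hypothetical decorating stream illustrating the unbuffer pattern discussed above. */
public class BufferingStream extends FilterInputStream implements CanUnbuffer {
  private byte[] buffer = new byte[8192];   // stand-in for cached decryption state

  public BufferingStream(InputStream in) {
    super(in);
  }

  @Override
  public void unbuffer() {
    buffer = null;                          // drop our own cached state
    if (in instanceof CanUnbuffer) {        // forward only when supported,
      ((CanUnbuffer) in).unbuffer();        // rather than blindly calling a
    }                                       // super method that may throw UOE
  }
}
{code}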
[jira] [Created] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
John Zhuge created HADOOP-14864: --- Summary: FSDataInputStream#unbuffer UOE exception should print the stream class name Key: HADOOP-14864 URL: https://issues.apache.org/jira/browse/HADOOP-14864 Project: Hadoop Common Issue Type: Improvement Components: fs Affects Versions: 2.6.4 Reporter: John Zhuge Priority: Minor The current exception message: {noformat} org/apache/hadoop/fs/ failed: error: UnsupportedOperationException: this stream does not support unbuffering.java.lang.UnsupportedOperationException: this stream does not support unbuffering. at org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
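[Editorial note] A small, hypothetical sketch of the improvement being asked for, where the exception message names the concrete stream class; this is not the committed change.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.InputStream;

/** Sketch of an unbuffer error message that names the wrapped stream class. */
public class UnbufferError {
  static void rejectUnbuffer(InputStream wrapped) {
    throw new UnsupportedOperationException(
        wrapped.getClass().getCanonicalName()
        + " does not support unbuffering.");   // class name aids debugging
  }

  public static void main(String[] args) {
    try {
      rejectUnbuffer(new ByteArrayInputStream(new byte[0]));
    } catch (UnsupportedOperationException e) {
      System.out.println(e.getMessage());      // names java.io.ByteArrayInputStream
    }
  }
}
{code}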
[jira] [Created] (HADOOP-14862) Metrics for AdlFileSystem
John Zhuge created HADOOP-14862: --- Summary: Metrics for AdlFileSystem Key: HADOOP-14862 URL: https://issues.apache.org/jira/browse/HADOOP-14862 Project: Hadoop Common Issue Type: Sub-task Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Add a Metrics2 source {{AdlFileSystemInstrumentation}} for {{AdlFileSystem}}. Consider per-thread statistics data if possible. Atomic variables are not totally free in multi-core arch. Don't think Java can do per-cpu data structure. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14832) listing s3a bucket without credentials gives Interrupted error
John Zhuge created HADOOP-14832: --- Summary: listing s3a bucket without credentials gives Interrupted error Key: HADOOP-14832 URL: https://issues.apache.org/jira/browse/HADOOP-14832 Project: Hadoop Common Issue Type: Improvement Components: fs/s3 Affects Versions: 3.0.0-beta1 Reporter: John Zhuge Priority: Minor In trunk pseudo distributed mode, without setting s3a credentials, listing an s3a bucket only gives "Interrupted" error : {noformat} $ hadoop fs -ls s3a://bucket/ ls: Interrupted {noformat} In comparison, branch-2 gives a much better error message: {noformat} (branch-2)$ hadoop_env hadoop fs -ls s3a://bucket/ ls: doesBucketExist on hdfs-cce: com.amazonaws.AmazonClientException: No AWS Credentials provided by BasicAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Unable to load credentials from service endpoint {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14808) Hadoop keychain
John Zhuge created HADOOP-14808: --- Summary: Hadoop keychain Key: HADOOP-14808 URL: https://issues.apache.org/jira/browse/HADOOP-14808 Project: Hadoop Common Issue Type: New Feature Reporter: John Zhuge Extend the idea from HADOOP-6520 "UGI should load tokens from the environment" to a generic lightweight "keychain" design. Load keys (secrets) into a keychain in UGI (secret map) at startup. YARN will distribute them securely into each container. The Hadoop code running in the container can then retrieve the credentials from UGI. The use case is Bring Your Own Key (BYOK) credentials for cloud connectors (adl, wasb, s3a, etc.), while Hadoop authentication is still Kerberos. No configuration change, no admin involved. It will support YARN applications initially, e.g., DistCp, Tera Suite, Spark-on-Yarn, etc. Implementation is surprisingly simple because almost all pieces are in place: * Retrieve secrets from UGI using {{conf.getPassword}} backed by the existing Credential Provider class {{UserProvider}} * Reuse Credential Provider classes and interface to define local permanent or transient credential store, e.g., LocalJavaKeyStoreProvider * New: create a new transient Credential Provider that logs into AAD with username/password or device code, and then put the Client ID and Refresh Token into the keychain * New: create a new permanent Credential Provider based on Hadoop configuration XML, for dev/testing purpose. Links * HADOOP-11766 Generic token authentication support for Hadoop * HADOOP-11744 Support OAuth2 in Hadoop * HADOOP-10959 A Kerberos based token authentication approach * HADOOP-9392 Token based authentication and Single Sign On -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
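[Editorial note] To illustrate just the retrieval half of the proposal, the sketch below reads a secret through Configuration.getPassword(), which consults any providers configured via hadoop.security.credential.provider.path before falling back to the plain configuration value. The key name used is only an example.

{code:java}
import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;

/** Sketch: read a secret via the credential provider chain. The key name is only an example. */
public class KeychainReadSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // getPassword() consults configured credential providers first,
    // then falls back to the plain configuration value.
    char[] secret = conf.getPassword("fs.adl.oauth2.refresh.token");
    if (secret == null) {
      System.out.println("secret not found");
    } else {
      Arrays.fill(secret, '\0');   // wipe the secret after use
    }
  }
}
{code}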
[jira] [Created] (HADOOP-14794) Standalone MiniKdc server
John Zhuge created HADOOP-14794: --- Summary: Standalone MiniKdc server Key: HADOOP-14794 URL: https://issues.apache.org/jira/browse/HADOOP-14794 Project: Hadoop Common Issue Type: New Feature Components: security, test Affects Versions: 2.7.0 Reporter: John Zhuge Assignee: John Zhuge Add a new subcommand {{hadoop minikdc}} to start a standalone MiniKdc server. This will make it easier to test Kerberos in pseudo-distributed mode without an external KDC server. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14791) SimpleKdcServer: Fail to delete krb5 conf
John Zhuge created HADOOP-14791: --- Summary: SimpleKdcServer: Fail to delete krb5 conf Key: HADOOP-14791 URL: https://issues.apache.org/jira/browse/HADOOP-14791 Project: Hadoop Common Issue Type: Bug Components: minikdc Affects Versions: 3.0.0-beta1 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor Run MiniKdc in a terminal and then press Ctrl-C: {noformat} Do <CTRL-C> or kill <pid> to stop it --- ^C2017-08-19 22:52:23,607 INFO impl.DefaultInternalKdcServerImpl: Default Internal kdc server stopped. 2017-08-19 22:53:21,358 INFO server.SimpleKdcServer: Fail to delete krb5 conf. java.io.IOException 2017-08-19 22:53:22,363 INFO minikdc.MiniKdc: MiniKdc stopped. {noformat} The reason for "Fail to delete krb5 conf" is that MiniKdc renames SimpleKdcServer's krb5 conf file. During shutdown, SimpleKdcServer attempts to delete its krb5 conf file, and cannot find it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14786) HTTP default servlets do not require authentication when kerberos is enabled
John Zhuge created HADOOP-14786: --- Summary: HTTP default servlets do not require authentication when kerberos is enabled Key: HADOOP-14786 URL: https://issues.apache.org/jira/browse/HADOOP-14786 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge The default HttpServer2 servlets /jmx, /conf, /logLevel, and /stacks do not require authentication when Kerberos is enabled. {code:java|title=HttpServer2#addDefaultServlets} // set up default servlets addServlet("stacks", "/stacks", StackServlet.class); addServlet("logLevel", "/logLevel", LogLevel.Servlet.class); addServlet("jmx", "/jmx", JMXJsonServlet.class); addServlet("conf", "/conf", ConfServlet.class); {code} {code:java|title=HttpServer2#addServlet} public void addServlet(String name, String pathSpec, Class clazz) { addInternalServlet(name, pathSpec, clazz, false); addFilterPathMapping(pathSpec, webAppContext); {code} {code:java|title=HttpServer2#addInternalServlet} addInternalServlet(…, boolean requireAuth) … if(requireAuth && UserGroupInformation.isSecurityEnabled()) { LOG.info("Adding Kerberos (SPNEGO) filter to " + name); {code} {{requireAuth}} is {{false}} for the default servlets inside {{addInternalServlet}}. The issue can be verified by running the following curl command against the NameNode web address when Kerberos is enabled: {noformat} kdestroy curl --negotiate -u: -k -sS 'https://<namenode-host>:9871/jmx' {noformat} Expect curl to fail, but it returns JMX anyway. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14438) Make ADLS doc of setting up client key up to date
[ https://issues.apache.org/jira/browse/HADOOP-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-14438. - Resolution: Duplicate Assignee: John Zhuge Take care of both issues in HADOOP-14627. > Make ADLS doc of setting up client key up to date > - > > Key: HADOOP-14438 > URL: https://issues.apache.org/jira/browse/HADOOP-14438 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/adl >Reporter: Mingliang Liu >Assignee: John Zhuge > > In the doc {{hadoop-tools/hadoop-azure-datalake/src/site/markdown/index.md}}, > we have such a statement: > {code:title=Note down the properties you will need to auth} > ... > - Resource: Always https://management.core.windows.net/ , for all customers > {code} > Is the {{Resource}} useful here? It seems not necessary to me. > {code:title=Adding the service principal to your ADL Account} > - ... > - Select Users under Settings > ... > {code} > According to the portal, it should be "Access control (IAM)" under "Settings" -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14765) AdlFsInputStream to implement CanUnbuffer
John Zhuge created HADOOP-14765: --- Summary: AdlFsInputStream to implement CanUnbuffer Key: HADOOP-14765 URL: https://issues.apache.org/jira/browse/HADOOP-14765 Project: Hadoop Common Issue Type: Sub-task Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Priority: Minor HBase relies on FileSystems implementing CanUnbuffer.unbuffer() to force input streams to free up remote connections (HBASE-9393). This works for HDFS, but not elsewhere. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14764) Über-jira adl:// Azure Data Lake Phase II: Performance and Testing
John Zhuge created HADOOP-14764: --- Summary: Über-jira adl:// Azure Data Lake Phase II: Performance and Testing Key: HADOOP-14764 URL: https://issues.apache.org/jira/browse/HADOOP-14764 Project: Hadoop Common Issue Type: Improvement Components: fs/adl Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Assignee: John Zhuge Uber JIRA to track things needed for Azure Data Lake to be considered stable and adl:// ready for wide use. Based on the experience with other object stores, the things which usually surface once a stabilizing FS is picked up and used are * handling of many GB files, up and down, be it: efficiency of read, when the writes take place, file leakage, time for close() and filesystem shutdown * resilience to transient failures * reporting of problems/diagnostics * security option tuning * race conditions * differences between implementation and what actual applications expect -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14754) TestCommonConfigurationFields failed: core-default.xml has 2 properties missing in class
John Zhuge created HADOOP-14754: --- Summary: TestCommonConfigurationFields failed: core-default.xml has 2 properties missing in class Key: HADOOP-14754 URL: https://issues.apache.org/jira/browse/HADOOP-14754 Project: Hadoop Common Issue Type: Bug Components: common, fs/azure Affects Versions: 2.9.0, 3.0.0-beta1 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor https://builds.apache.org/job/PreCommit-HADOOP-Build/13004/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt: {noformat} core-default.xml has 2 properties missing in class org.apache.hadoop.fs.CommonConfigurationKeys class org.apache.hadoop.fs.CommonConfigurationKeysPublic class org.apache.hadoop.fs.local.LocalConfigKeys class org.apache.hadoop.fs.ftp.FtpConfigKeys class org.apache.hadoop.ha.SshFenceByTcpPort class org.apache.hadoop.security.LdapGroupsMapping class org.apache.hadoop.ha.ZKFailoverController class org.apache.hadoop.security.ssl.SSLFactory class org.apache.hadoop.security.CompositeGroupsMapping class org.apache.hadoop.io.erasurecode.CodecUtil {noformat} Unfortunately, it does not show which 2 properties missing. Ran test manually got: {noformat} fs.wasbs.impl fs.wasb.impl {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14753) Add WASB FileContext tests
John Zhuge created HADOOP-14753: --- Summary: Add WASB FileContext tests Key: HADOOP-14753 URL: https://issues.apache.org/jira/browse/HADOOP-14753 Project: Hadoop Common Issue Type: Improvement Components: fs/azure, test Affects Versions: 2.8.0 Reporter: John Zhuge Priority: Minor Add FileContext contract tests for WASB. See ITestS3AFileContextURI and friends for example. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14737) Sort out hadoop-common contract-test-options.xml
John Zhuge created HADOOP-14737: --- Summary: Sort out hadoop-common contract-test-options.xml Key: HADOOP-14737 URL: https://issues.apache.org/jira/browse/HADOOP-14737 Project: Hadoop Common Issue Type: Bug Components: documentation, fs, test Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor Follow up to HADOOP-14103. Update hadoop-common testing.md in a similar fashion. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14721) Add StreamCapabilities support to Aliyun OSS
John Zhuge created HADOOP-14721: --- Summary: Add StreamCapabilities support to Aliyun OSS Key: HADOOP-14721 URL: https://issues.apache.org/jira/browse/HADOOP-14721 Project: Hadoop Common Issue Type: Sub-task Components: fs/oss Affects Versions: 3.0.0-alpha4 Reporter: John Zhuge -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14720) Add StreamCapabilities support to Swift
John Zhuge created HADOOP-14720: --- Summary: Add StreamCapabilities support to Swift Key: HADOOP-14720 URL: https://issues.apache.org/jira/browse/HADOOP-14720 Project: Hadoop Common Issue Type: Sub-task Components: fs/swift Affects Versions: 3.0.0-alpha4 Reporter: John Zhuge -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14719) Add StreamCapabilities support to WASB
John Zhuge created HADOOP-14719: --- Summary: Add StreamCapabilities support to WASB Key: HADOOP-14719 URL: https://issues.apache.org/jira/browse/HADOOP-14719 Project: Hadoop Common Issue Type: Sub-task Components: fs/azure Affects Versions: 3.0.0-alpha4 Reporter: John Zhuge -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14718) Add StreamCapabilities support to ADLS
John Zhuge created HADOOP-14718: --- Summary: Add StreamCapabilities support to ADLS Key: HADOOP-14718 URL: https://issues.apache.org/jira/browse/HADOOP-14718 Project: Hadoop Common Issue Type: Sub-task Components: fs/adl Affects Versions: 3.0.0-alpha4 Reporter: John Zhuge -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14712) Document support for AWS Snowball
John Zhuge created HADOOP-14712: --- Summary: Document support for AWS Snowball Key: HADOOP-14712 URL: https://issues.apache.org/jira/browse/HADOOP-14712 Project: Hadoop Common Issue Type: Sub-task Components: documentation, fs/s3 Affects Versions: 2.8.0 Environment: Document Hadoop support for AWS Snowball: * Commands and parameters * Performance tuning * Caveats * Troubleshooting Reporter: John Zhuge -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14711) Test data transfer between Hadoop and AWS Snowball
John Zhuge created HADOOP-14711: --- Summary: Test data transfer between Hadoop and AWS Snowball Key: HADOOP-14711 URL: https://issues.apache.org/jira/browse/HADOOP-14711 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3, test Affects Versions: 2.8.0 Reporter: John Zhuge Test data transfer between Hadoop and AWS Snowball: * fs -cp * DistCp * Scale tests -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14710) Uber-JIRA: Support AWS Snowball
John Zhuge created HADOOP-14710: --- Summary: Uber-JIRA: Support AWS Snowball Key: HADOOP-14710 URL: https://issues.apache.org/jira/browse/HADOOP-14710 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge Support data transfer between Hadoop and [AWS Snowball|http://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14695) Allow disabling chunked encoding
John Zhuge created HADOOP-14695: --- Summary: Allow disabling chunked encoding Key: HADOOP-14695 URL: https://issues.apache.org/jira/browse/HADOOP-14695 Project: Hadoop Common Issue Type: Improvement Components: fs/s3 Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge [Using the Amazon S3 Adapter for Snowball |http://docs.aws.amazon.com/snowball/latest/ug/using-adapter.html] indicates that we need to disable chunked coding and set path style access. HADOOP-12963 enables setting path style access. This JIRA will enable disabling chunked encoding. A new property {{fs.s3a.disable.chunked.encoding}} is proposed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
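[Editorial note] For context on what the proposed {{fs.s3a.disable.chunked.encoding}} property would toggle underneath, the AWS SDK v1 client builder exposes both switches, to the best of my recollection of that API; the endpoint and region values below are placeholders for a Snowball adapter.

{code:java}
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

/** Sketch of the SDK switches the proposed S3A property would control; values are placeholders. */
public class SnowballClientSketch {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
            "http://snowball-adapter:8080", "us-east-1"))   // placeholder endpoint/region
        .withPathStyleAccessEnabled(true)      // path-style access, as the adapter requires
        .withChunkedEncodingDisabled(true)     // disable chunked encoding
        .build();
    System.out.println(s3.getClass().getName());
  }
}
{code}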
[jira] [Created] (HADOOP-14679) Obtain ADLS access token provider type from credential provider
John Zhuge created HADOOP-14679: --- Summary: Obtain ADLS access token provider type from credential provider Key: HADOOP-14679 URL: https://issues.apache.org/jira/browse/HADOOP-14679 Project: Hadoop Common Issue Type: Improvement Components: fs/adl Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor Found it convenient to add {{fs.adl.oauth2.access.token.provider.type}} along with ADLS credentials to the credential store. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14678) AdlFilesystem#initialize swallows exception when getting user name
John Zhuge created HADOOP-14678: --- Summary: AdlFilesystem#initialize swallows exception when getting user name Key: HADOOP-14678 URL: https://issues.apache.org/jira/browse/HADOOP-14678 Project: Hadoop Common Issue Type: Bug Components: fs/adl Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor https://github.com/apache/hadoop/blob/5c61ad24887f76dfc5a5935b2c5dceb6bfd99417/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java#L122 It should log the exception. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
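[Editorial note] A minimal sketch of the logging being asked for, with hypothetical class and method names rather than the actual AdlFileSystem code: catch the failure, record it, then fall back.

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Hypothetical sketch: log why the user name lookup failed instead of swallowing it. */
public class UserNameLookup {
  private static final Logger LOG = LoggerFactory.getLogger(UserNameLookup.class);

  static String currentUserOrDefault(String fallback) {
    try {
      return UserGroupInformation.getCurrentUser().getShortUserName();
    } catch (IOException e) {
      LOG.warn("Failed to get current user, falling back to {}", fallback, e); // log, don't swallow
      return fallback;
    }
  }
}
{code}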
[jira] [Created] (HADOOP-14608) KMS JMX servlet path not backwards compatible
John Zhuge created HADOOP-14608: --- Summary: KMS JMX servlet path not backwards compatible Key: HADOOP-14608 URL: https://issues.apache.org/jira/browse/HADOOP-14608 Project: Hadoop Common Issue Type: Bug Components: kms Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor HADOOP-13597 switched KMS from Tomcat to Jetty. The implementation changed JMX path from /kms/jmx to /jmx, which is inline with other HttpServer2 based servlets. If there is a desire for the same JMX path, please vote here. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14519) Client$Connection#waitForWork may suffer spurious wakeup
John Zhuge created HADOOP-14519: --- Summary: Client$Connection#waitForWork may suffer spurious wakeup Key: HADOOP-14519 URL: https://issues.apache.org/jira/browse/HADOOP-14519 Project: Hadoop Common Issue Type: Bug Components: ipc Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge Priority: Critical {{Client$Connection#waitForWork}} may suffer spurious wakeup because the {{wait}} is not surrounded by a loop. See [https://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#wait()]. {code:title=Client$Connection#waitForWork} if (calls.isEmpty() && !shouldCloseConnection.get() && running.get()) { long timeout = maxIdleTime- (Time.now()-lastActivity.get()); if (timeout>0) { try { wait(timeout); << spurious wakeup } catch (InterruptedException e) {} } } {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
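[Editorial note] The standard guard against spurious wakeups is to re-check the condition and the remaining timeout in a loop around wait(); below is a generic sketch of that pattern, not the actual Client.java change.

{code:java}
/** Generic wait-in-a-loop pattern guarding against spurious wakeups. */
public class WaitLoop {
  private boolean workAvailable = false;

  /** Waits up to timeoutMs for work; returns true if work arrived in time. */
  public synchronized boolean waitForWork(long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    long remaining = timeoutMs;
    while (!workAvailable && remaining > 0) {   // loop re-checks the condition
      wait(remaining);                          // may return spuriously
      remaining = deadline - System.currentTimeMillis();
    }
    return workAvailable;
  }

  public synchronized void postWork() {
    workAvailable = true;
    notifyAll();
  }
}
{code}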
[jira] [Created] (HADOOP-14471) Upgrade Jetty to latest version
John Zhuge created HADOOP-14471: --- Summary: Upgrade Jetty to latest version Key: HADOOP-14471 URL: https://issues.apache.org/jira/browse/HADOOP-14471 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0-alpha4 Reporter: John Zhuge Assignee: John Zhuge The current Jetty version is {{9.3.11.v20160721}}. Should we upgrade it to the latest 9.3.x which is {{9.3.19.v20170502}}? Or 9.4? 9.3.x changes: https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/VERSION.txt 9.4.x changes: https://github.com/eclipse/jetty.project/blob/jetty-9.4.x/VERSION.txt -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14464) hadoop-aws doc header warning #5 line wrapped
John Zhuge created HADOOP-14464: --- Summary: hadoop-aws doc header warning #5 line wrapped Key: HADOOP-14464 URL: https://issues.apache.org/jira/browse/HADOOP-14464 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4 Reporter: John Zhuge Assignee: John Zhuge Priority: Trivial The line was probably automatically wrapped by the editor: {code} Warning #5: The S3 client provided by Amazon EMR are not from the Apache Software foundation, and are only supported by Amazon. {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14421) TestAdlFileSystemContractLive#testListStatus assertion failed
[ https://issues.apache.org/jira/browse/HADOOP-14421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-14421. - Resolution: Duplicate Fix Version/s: 2.8.1 Sorry for the false alarm. This issue is fixed by HADOOP-14230. > TestAdlFileSystemContractLive#testListStatus assertion failed > - > > Key: HADOOP-14421 > URL: https://issues.apache.org/jira/browse/HADOOP-14421 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Affects Versions: 2.8.0 >Reporter: John Zhuge >Assignee: Atul Sikaria > Fix For: 2.8.1 > > > TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:273 > expected:<1> but was:<11> > {noformat} > Tests run: 32, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 35.118 sec > <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive > testListStatus(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive) > Time elapsed: 0.518 sec <<< FAILURE! > junit.framework.AssertionFailedError: expected:<1> but was:<11> > at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.failNotEquals(Assert.java:329) > at junit.framework.Assert.assertEquals(Assert.java:78) > at junit.framework.Assert.assertEquals(Assert.java:234) > at junit.framework.Assert.assertEquals(Assert.java:241) > at junit.framework.TestCase.assertEquals(TestCase.java:409) > at > org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:273) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at junit.framework.TestCase.runTest(TestCase.java:176) > at > org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:60) > {noformat} > This is the first time we saw the issue. The test store {{rwj2dm}} was > created on the fly and destroyed after the test. > The code base does not have HADOOP-14230 which cleans up the test dir better. > Trying to determine whether this might help. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-14435) TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed
[ https://issues.apache.org/jira/browse/HADOOP-14435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge reopened HADOOP-14435: - Great idea! Re-opened this as a doc JIRA to add a new section {{Troubleshooting}} to {{index.md}}. Document what I encountered here. > TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed > -- > > Key: HADOOP-14435 > URL: https://issues.apache.org/jira/browse/HADOOP-14435 > Project: Hadoop Common > Issue Type: Bug > Components: documentation, fs/adl >Affects Versions: 2.9.0, 3.0.0-alpha3 >Reporter: John Zhuge >Assignee: John Zhuge > > Saw the following assertion failure in branch-2 and trunk: > {noformat} > Tests run: 43, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 80.189 sec > <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive > testMkdirsWithUmask(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive) > Time elapsed: 0.71 sec <<< FAILURE! > junit.framework.AssertionFailedError: expected:<461> but was:<456> > at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.failNotEquals(Assert.java:329) > at junit.framework.Assert.assertEquals(Assert.java:78) > at junit.framework.Assert.assertEquals(Assert.java:219) > at junit.framework.Assert.assertEquals(Assert.java:226) > at junit.framework.TestCase.assertEquals(TestCase.java:392) > at > org.apache.hadoop.fs.FileSystemContractBaseTest.testMkdirsWithUmask(FileSystemContractBaseTest.java:242) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at junit.framework.TestCase.runTest(TestCase.java:176) > at > org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:59) > Results : > Failed tests: > > TestAdlFileSystemContractLive.runTest:59->FileSystemContractBaseTest.testMkdirsWithUmask:242 > expected:<461> but was:<456> > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14435) TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed
[ https://issues.apache.org/jira/browse/HADOOP-14435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-14435. - Resolution: Not A Bug Release Note: The "Other" entry in the default permissions of the ADL store can impact the file system contract test expecting certain permissions. > TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed > -- > > Key: HADOOP-14435 > URL: https://issues.apache.org/jira/browse/HADOOP-14435 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Affects Versions: 2.9.0, 3.0.0-alpha3 >Reporter: John Zhuge >Assignee: John Zhuge > > Saw the following assertion failure in branch-2 and trunk: > {noformat} > Tests run: 43, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 80.189 sec > <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive > testMkdirsWithUmask(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive) > Time elapsed: 0.71 sec <<< FAILURE! > junit.framework.AssertionFailedError: expected:<461> but was:<456> > at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.failNotEquals(Assert.java:329) > at junit.framework.Assert.assertEquals(Assert.java:78) > at junit.framework.Assert.assertEquals(Assert.java:219) > at junit.framework.Assert.assertEquals(Assert.java:226) > at junit.framework.TestCase.assertEquals(TestCase.java:392) > at > org.apache.hadoop.fs.FileSystemContractBaseTest.testMkdirsWithUmask(FileSystemContractBaseTest.java:242) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at junit.framework.TestCase.runTest(TestCase.java:176) > at > org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:59) > Results : > Failed tests: > > TestAdlFileSystemContractLive.runTest:59->FileSystemContractBaseTest.testMkdirsWithUmask:242 > expected:<461> but was:<456> > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14435) TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed
John Zhuge created HADOOP-14435: --- Summary: TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed Key: HADOOP-14435 URL: https://issues.apache.org/jira/browse/HADOOP-14435 Project: Hadoop Common Issue Type: Bug Components: fs/adl Affects Versions: 2.9.0, 3.0.0-alpha3 Reporter: John Zhuge Assignee: John Zhuge {noformat} Tests run: 43, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 80.189 sec <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive testMkdirsWithUmask(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive) Time elapsed: 0.71 sec <<< FAILURE! junit.framework.AssertionFailedError: expected:<461> but was:<456> at junit.framework.Assert.fail(Assert.java:57) at junit.framework.Assert.failNotEquals(Assert.java:329) at junit.framework.Assert.assertEquals(Assert.java:78) at junit.framework.Assert.assertEquals(Assert.java:219) at junit.framework.Assert.assertEquals(Assert.java:226) at junit.framework.TestCase.assertEquals(TestCase.java:392) at org.apache.hadoop.fs.FileSystemContractBaseTest.testMkdirsWithUmask(FileSystemContractBaseTest.java:242) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at junit.framework.TestCase.runTest(TestCase.java:176) at org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:59) Results : Failed tests: TestAdlFileSystemContractLive.runTest:59->FileSystemContractBaseTest.testMkdirsWithUmask:242 expected:<461> but was:<456> {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14421) TestAdlFileSystemContractLive#testListStatus assertion failed
John Zhuge created HADOOP-14421: --- Summary: TestAdlFileSystemContractLive#testListStatus assertion failed Key: HADOOP-14421 URL: https://issues.apache.org/jira/browse/HADOOP-14421 Project: Hadoop Common Issue Type: Bug Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: Atul Sikaria TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:273 expected:<1> but was:<11> {noformat} Tests run: 32, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 35.118 sec <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive testListStatus(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive) Time elapsed: 0.518 sec <<< FAILURE! junit.framework.AssertionFailedError: expected:<1> but was:<11> at junit.framework.Assert.fail(Assert.java:57) at junit.framework.Assert.failNotEquals(Assert.java:329) at junit.framework.Assert.assertEquals(Assert.java:78) at junit.framework.Assert.assertEquals(Assert.java:234) at junit.framework.Assert.assertEquals(Assert.java:241) at junit.framework.TestCase.assertEquals(TestCase.java:409) at org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:273) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at junit.framework.TestCase.runTest(TestCase.java:176) at org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:60) {noformat} This is the first time we saw the issue. The test store {{rwj2dm}} was created on the fly and destroyed after the test. The code base does not have HADOOP-14230 which cleans up the test dir better. Trying to determine whether this might help. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14417) Update cipher list for KMS
John Zhuge created HADOOP-14417: --- Summary: Update cipher list for KMS Key: HADOOP-14417 URL: https://issues.apache.org/jira/browse/HADOOP-14417 Project: Hadoop Common Issue Type: Improvement Components: kms, security Affects Versions: 2.9.0 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor In Oracle Linux 6.8 configurations, the curl command cannot connect to certain CDH services that run on Apache Tomcat when the cluster has been configured for TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services reject connection attempts because the default cipher configuration uses weak temporary server keys (based on Diffie-Hellman key exchange protocol). https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14352) Make some HttpServer2 SSL properties optional
John Zhuge created HADOOP-14352: --- Summary: Make some HttpServer2 SSL properties optional Key: HADOOP-14352 URL: https://issues.apache.org/jira/browse/HADOOP-14352 Project: Hadoop Common Issue Type: Improvement Components: kms Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor {{HttpServer2#loadSSLConfiguration}} loads 5 SSL properties, but only the keystore location and password are required; the remaining three (keystore key password, truststore location, and truststore password) can be optional. According to http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html: * If there is no keymanagerpassword, then the keystorepassword is used instead. * Trust store is typically set to the same path as the keystore. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
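The fallback rules above can be shown in a minimal sketch; the property names follow the usual {{ssl-server.xml}} conventions, but the helper itself is only an approximation of what {{HttpServer2#loadSSLConfiguration}} might do, not the actual implementation:
{code}
import org.apache.hadoop.conf.Configuration;

public class SslConfSketch {
  // Sketch only: two required SSL properties, three optional ones with fallbacks.
  static void load() {
    Configuration sslConf = new Configuration(false);
    sslConf.addResource("ssl-server.xml");
    String keystoreLocation = sslConf.get("ssl.server.keystore.location");  // required
    String keystorePassword = sslConf.get("ssl.server.keystore.password");  // required
    // Optional: fall back to the keystore password when no key password is given.
    String keyPassword = sslConf.get("ssl.server.keystore.keypassword", keystorePassword);
    // Optional: default the truststore to the keystore itself.
    String truststoreLocation =
        sslConf.get("ssl.server.truststore.location", keystoreLocation);
    String truststorePassword =
        sslConf.get("ssl.server.truststore.password", keystorePassword);
  }
}
{code}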
[jira] [Created] (HADOOP-14347) Make KMS Jetty connection backlog configurable
John Zhuge created HADOOP-14347: --- Summary: Make KMS Jetty connection backlog configurable Key: HADOOP-14347 URL: https://issues.apache.org/jira/browse/HADOOP-14347 Project: Hadoop Common Issue Type: Improvement Components: kms Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Assignee: John Zhuge HADOOP-14003 enabled the customization of the Tomcat attributes {{protocol}}, {{acceptCount}}, and {{acceptorThreadCount}} for KMS in branch-2. See https://tomcat.apache.org/tomcat-6.0-doc/config/http.html. KMS switched from Tomcat to Jetty in trunk. Only {{acceptCount}} has a counterpart in Jetty, {{acceptQueueSize}}. See http://www.eclipse.org/jetty/documentation/9.3.x/configuring-connectors.html. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
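For illustration, a minimal Jetty sketch of the change described above; the configuration key {{hadoop.kms.http.socket.backlog}} and the port are placeholders, not necessarily what the patch uses:
{code}
import org.apache.hadoop.conf.Configuration;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class KmsBacklogSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder key and default; read the desired backlog from configuration.
    int backlog = conf.getInt("hadoop.kms.http.socket.backlog", 500);

    Server server = new Server();
    ServerConnector connector = new ServerConnector(server);
    connector.setPort(9600);  // placeholder port
    // Jetty's acceptQueueSize plays the role of Tomcat's acceptCount.
    connector.setAcceptQueueSize(backlog);
    server.addConnector(connector);
    server.start();
  }
}
{code}
{{acceptQueueSize}} sets the listen backlog of the server socket, so it covers the same need as the Tomcat {{acceptCount}} attribute; as the report notes, the other two Tomcat attributes have no direct Jetty counterpart.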
[jira] [Resolved] (HADOOP-14344) Revert HADOOP-13606 swift FS to add a service load metadata file
[ https://issues.apache.org/jira/browse/HADOOP-14344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-14344. - Resolution: Fixed Fix Version/s: 3.0.0-alpha3 2.8.1 2.9.0 The revert patch is in HADOOP-13606: https://issues.apache.org/jira/secure/attachment/12856766/HADOOP-13606.002.patch > Revert HADOOP-13606 swift FS to add a service load metadata file > > > Key: HADOOP-14344 > URL: https://issues.apache.org/jira/browse/HADOOP-14344 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 2.8.0 >Reporter: John Zhuge >Assignee: John Zhuge > Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3 > > > Create the revert JIRA for release notes. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14344) Revert HADOOP-13606 swift FS to add a service load metadata file
John Zhuge created HADOOP-14344: --- Summary: Revert HADOOP-13606 swift FS to add a service load metadata file Key: HADOOP-14344 URL: https://issues.apache.org/jira/browse/HADOOP-14344 Project: Hadoop Common Issue Type: Task Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge As titled -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14341) Support multi-line value for ssl.server.exclude.cipher.list
John Zhuge created HADOOP-14341: --- Summary: Support multi-line value for ssl.server.exclude.cipher.list Key: HADOOP-14341 URL: https://issues.apache.org/jira/browse/HADOOP-14341 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.7.4 Reporter: John Zhuge Assignee: John Zhuge The multi-line value for {{ssl.server.exclude.cipher.list}} shown in {{ssl-server.xml.example}} does not work. The property value
{code}
<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
  SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_RC4_128_MD5</value>
  <description>Optional. The weak security cipher suites that you want excluded
  from SSL communication.</description>
</property>
{code}
is actually parsed into:
* "TLS_ECDHE_RSA_WITH_RC4_128_SHA"
* "SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA"
* "\nSSL_RSA_WITH_DES_CBC_SHA"
* "SSL_DHE_RSA_WITH_DES_CBC_SHA"
* "\nSSL_RSA_EXPORT_WITH_RC4_40_MD5"
* "SSL_RSA_EXPORT_WITH_DES40_CBC_SHA"
* "\nSSL_RSA_WITH_RC4_128_MD5"
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
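A minimal sketch of the parsing difference, not the committed fix; it assumes the file above is loaded as a configuration resource:
{code}
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;

public class CipherListParseSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.addResource("ssl-server.xml");

    // A plain split keeps the leading newline/indentation of each wrapped line...
    String raw = conf.get("ssl.server.exclude.cipher.list", "");
    System.out.println(Arrays.toString(raw.split(",")));

    // ...while the trimmed accessor strips the whitespace around every entry.
    String[] clean = conf.getTrimmedStrings("ssl.server.exclude.cipher.list");
    System.out.println(Arrays.toString(clean));
  }
}
{code}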
[jira] [Created] (HADOOP-14340) Enable KMS and HttpFS to exclude weak SSL ciphers
John Zhuge created HADOOP-14340: --- Summary: Enable KMS and HttpFS to exclude weak SSL ciphers Key: HADOOP-14340 URL: https://issues.apache.org/jira/browse/HADOOP-14340 Project: Hadoop Common Issue Type: Improvement Components: kms Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor HADOOP-12668 added {{HttpServer2$Builder#excludeCiphers}} to exclude SSL ciphers. Enable KMS and HttpFS to use this feature by modifying {{HttpServer2$Builder#loadSSLConfiguration}}, which is called by both. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-14241) Add ADLS sensitive config keys to default list
[ https://issues.apache.org/jira/browse/HADOOP-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge reopened HADOOP-14241: - Reopen to run pre-commit > Add ADLS sensitive config keys to default list > -- > > Key: HADOOP-14241 > URL: https://issues.apache.org/jira/browse/HADOOP-14241 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/adl, security >Affects Versions: 2.8.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14241.001.patch, HADOOP-14241.002.patch, > HADOOP-14241.branch-2.002.patch > > > ADLS sensitive credential config keys should be added to the default list for > {{hadoop.security.sensitive-config-keys}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14317) KMSWebServer$deprecateEnv may leak secret
John Zhuge created HADOOP-14317: --- Summary: KMSWebServer$deprecateEnv may leak secret Key: HADOOP-14317 URL: https://issues.apache.org/jira/browse/HADOOP-14317 Project: Hadoop Common Issue Type: Bug Components: kms, security Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Assignee: John Zhuge May print secret in warning message: {code} LOG.warn("Environment variable {} = '{}' is deprecated and overriding" + " property {} = '{}', please set the property in {} instead.", varName, value, propName, propValue, confFile); {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
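One possible way to avoid the leak, shown here only as a sketch (the actual fix may differ), is to keep the variable and property names in the warning but drop their values:
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DeprecateEnvSketch {
  private static final Logger LOG = LoggerFactory.getLogger(DeprecateEnvSketch.class);

  // Sketch only: warn about the deprecated variable without echoing its value,
  // since the environment variable may hold a secret such as a keystore password.
  static void deprecateEnv(String varName, String propName, String confFile) {
    LOG.warn("Environment variable {} is deprecated and overriding property {},"
        + " please set the property in {} instead.", varName, propName, confFile);
  }
}
{code}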
[jira] [Resolved] (HADOOP-14151) Swift treats 0-len file as directory
[ https://issues.apache.org/jira/browse/HADOOP-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-14151. - Resolution: Duplicate > Swift treats 0-len file as directory > > > Key: HADOOP-14151 > URL: https://issues.apache.org/jira/browse/HADOOP-14151 > Project: Hadoop Common > Issue Type: Bug > Components: fs/swift >Affects Versions: 3.0.0-alpha3 >Reporter: John Zhuge > > Unit test {{TestSwiftContractRootDir#testRmNonEmptyRootDirNonRecursive}} > fails at {{assertIsFile(file)}}. This leads me to suspect swift treats 0-len > file as directory. Confirmed by the following experiment: > {noformat} > $ ls -l /tmp/zero /tmp/abc > -rw-rw-r-- 1 jzhuge wheel 4 Mar 7 13:19 /tmp/abc > -rw-rw-r-- 1 jzhuge wheel 0 Mar 7 13:19 /tmp/zero > $ bin/hadoop fs -put /tmp/zero /tmp/abc swift://jzswift.rackspace/ > 2017-03-07 13:24:09,321 INFO snative.SwiftNativeFileSystemStore: mv > jzswift/zero._COPYING_ swift://jzswift.rackspace/zero > $ bin/hadoop fs -touchz swift://jzswift.rackspace/touchz > $ bin/hadoop fs -ls swift://jzswift.rackspace/ > Found 3 items > -rw-rw-rw- 1 4 2017-03-07 13:36 swift://jzswift.rackspace/abc > drwxrwxrwx - 0 2017-03-07 13:28 swift://jzswift.rackspace/touchz > drwxrwxrwx - 0 2017-03-07 13:32 swift://jzswift.rackspace/zero > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14292) Transient TestAdlContractRootDirLive failure
[ https://issues.apache.org/jira/browse/HADOOP-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-14292. - Resolution: Not A Problem Assignee: Atul Sikaria (was: Vishwajeet Dusane) Thanks [~snehav]! {{bobdir}} probably didn't have the permission for this test case to pass. This test case expects a clean account. Filed HADOOP-14304 so that the path will not be swallowed when a remote exception occurs. > Transient TestAdlContractRootDirLive failure > > > Key: HADOOP-14292 > URL: https://issues.apache.org/jira/browse/HADOOP-14292 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Affects Versions: 3.0.0-alpha3 >Reporter: John Zhuge >Assignee: Atul Sikaria > > Got the test failure once, but could not reproduce it the second time. Maybe > a transient ADLS error? > {noformat} > Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 13.641 sec > <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive > testRecursiveRootListing(org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive) > Time elapsed: 3.841 sec <<< ERROR! > org.apache.hadoop.security.AccessControlException: LISTSTATUS failed with > error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource > does not exist or the user is not authorized to perform the requested > operation.). > [db432517-4060-4d96-9aad-7309f8469489][2017-04-07T10:24:54.1708810-07:00] > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > com.microsoft.azure.datalake.store.ADLStoreClient.getRemoteException(ADLStoreClient.java:1144) > at > com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1106) > at > com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectoryInternal(ADLStoreClient.java:527) > at > com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:504) > at > com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:368) > at > org.apache.hadoop.fs.adl.AdlFileSystem.listStatus(AdlFileSystem.java:473) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1824) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1866) > at org.apache.hadoop.fs.FileSystem$4.(FileSystem.java:2028) > at > org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2027) > at > org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2010) > at > org.apache.hadoop.fs.FileSystem$5.handleFileStat(FileSystem.java:2168) > at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2145) > at > org.apache.hadoop.fs.contract.ContractTestUtils$TreeScanResults.(ContractTestUtils.java:1252) > at > org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRecursiveRootListing(AbstractContractRootDirectoryTest.java:219) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14304) AdlStoreClient#getExceptionFromResponse should not swallow defaultMessage
John Zhuge created HADOOP-14304: --- Summary: AdlStoreClient#getExceptionFromResponse should not swallow defaultMessage Key: HADOOP-14304 URL: https://issues.apache.org/jira/browse/HADOOP-14304 Project: Hadoop Common Issue Type: Sub-task Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Discovered the issue in HADOOP-14292. In {{AdlStoreClient}}, {{enumerateDirectoryInternal}} called {{getExceptionFromResponse}} with {{defaultMessage}} set to {{"Error enumerating directory " + path}}. This useful message was swallowed at https://github.com/Azure/azure-data-lake-store-java/blob/2.1.4/src/main/java/com/microsoft/azure/datalake/store/ADLStoreClient.java#L1106. Actually {{getExceptionFromResponse}} swallows {{defaultMessage}} at several places. Suggest always displaying the {{defaultMessage}} in some way. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14294) Rename ADLS mountpoint properties
John Zhuge created HADOOP-14294: --- Summary: Rename ADLS mountpoint properties Key: HADOOP-14294 URL: https://issues.apache.org/jira/browse/HADOOP-14294 Project: Hadoop Common Issue Type: Bug Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge Follow up to HADOOP-14038. Rename the prefix of {{dfs.adls..mountpoint}} and {{dfs.adls..hostname}} to {{fs.adl.}}. Borrow code from https://issues.apache.org/jira/secure/attachment/12857500/HADOOP-14038.006.patch and add a few unit tests. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14292) Transient TestAdlContractRootDirLive failure
John Zhuge created HADOOP-14292: --- Summary: Transient TestAdlContractRootDirLive failure Key: HADOOP-14292 URL: https://issues.apache.org/jira/browse/HADOOP-14292 Project: Hadoop Common Issue Type: Bug Components: fs/adl Affects Versions: 3.0.0-alpha3 Reporter: John Zhuge Assignee: Vishwajeet Dusane Got the test failure once, but could not reproduce it the second time. Maybe a transient ADLS error? {noformat} Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 13.641 sec <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive testRecursiveRootListing(org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive) Time elapsed: 3.841 sec <<< ERROR! org.apache.hadoop.security.AccessControlException: LISTSTATUS failed with error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource does not exist or the user is not authorized to perform the requested operation.). [db432517-4060-4d96-9aad-7309f8469489][2017-04-07T10:24:54.1708810-07:00] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:422) at com.microsoft.azure.datalake.store.ADLStoreClient.getRemoteException(ADLStoreClient.java:1144) at com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1106) at com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectoryInternal(ADLStoreClient.java:527) at com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:504) at com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:368) at org.apache.hadoop.fs.adl.AdlFileSystem.listStatus(AdlFileSystem.java:473) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1824) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1866) at org.apache.hadoop.fs.FileSystem$4.(FileSystem.java:2028) at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2027) at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2010) at org.apache.hadoop.fs.FileSystem$5.handleFileStat(FileSystem.java:2168) at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2145) at org.apache.hadoop.fs.contract.ContractTestUtils$TreeScanResults.(ContractTestUtils.java:1252) at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRecursiveRootListing(AbstractContractRootDirectoryTest.java:219) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14243) Add S3A sensitive config keys to default list
[ https://issues.apache.org/jira/browse/HADOOP-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-14243. - Resolution: Not A Problem {{fs.s3a.secret.key}} already on the default list. {{fs.s3a.access.key}} is not on the default list by design. > Add S3A sensitive config keys to default list > - > > Key: HADOOP-14243 > URL: https://issues.apache.org/jira/browse/HADOOP-14243 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, security >Affects Versions: 2.8.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > > S3A sensitive credential config keys should be added to the default list for > {{hadoop.security.sensitive-config-keys}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14259) Verify viewfs works with ADLS
John Zhuge created HADOOP-14259: --- Summary: Verify viewfs works with ADLS Key: HADOOP-14259 URL: https://issues.apache.org/jira/browse/HADOOP-14259 Project: Hadoop Common Issue Type: Test Components: fs/adl, viewfs Affects Versions: 2.8.0 Reporter: John Zhuge Priority: Minor Many clusters can share a single ADL store as the default filesystem. In order to prevent directories of the same names but from different clusters to collide, use viewfs over ADLS filesystem: * Set {{fs.defaultFS}} to {{viewfs://clusterX}} for cluster X * Set {{fs.defaultFS}} to {{viewfs://clusterY}} for cluster Y * The viewfs client mount table should have entry clusterX and ClusterY Tasks * Verify all filesystem operations work as expected, especially rename and concat * Verify homedir entry works -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
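A minimal sketch of the intended setup; the store name {{example.azuredatalakestore.net}} and the mount paths are placeholders:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViewFsOverAdlSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "viewfs://clusterX");
    // Each cluster mounts its own subtree of the shared ADL store.
    conf.set("fs.viewfs.mounttable.clusterX.link./user",
        "adl://example.azuredatalakestore.net/clusterX/user");
    conf.set("fs.viewfs.mounttable.clusterX.link./tmp",
        "adl://example.azuredatalakestore.net/clusterX/tmp");
    // Home directory base for viewfs://clusterX.
    conf.set("fs.viewfs.mounttable.clusterX.homedir", "/user");

    FileSystem fs = FileSystem.get(conf);
    System.out.println(fs.getFileStatus(new Path("/user")));
  }
}
{code}
Cluster Y would use the same pattern with a {{clusterY}} mount table pointing at its own subtree of the store.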
[jira] [Created] (HADOOP-14258) Verify and document ADLS client mount table feature
John Zhuge created HADOOP-14258: --- Summary: Verify and document ADLS client mount table feature Key: HADOOP-14258 URL: https://issues.apache.org/jira/browse/HADOOP-14258 Project: Hadoop Common Issue Type: Improvement Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Priority: Minor ADLS connector supports a simple form of client mount table (chrooted) so that multiple clusters can share a single store as the default filesystem without sharing any directories. Verify and document this feature. How to setup: * Set property {{dfs.adls..hostname}} to {{.azuredatalakestore.net}} * Set property {{dfs.adls..mountpoint}} to {{}} * URI {{adl:///...}} will be translated to {{adl://.azuredatalakestore.net/}} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14251) Credential provider should handle property key deprecation
John Zhuge created HADOOP-14251: --- Summary: Credential provider should handle property key deprecation Key: HADOOP-14251 URL: https://issues.apache.org/jira/browse/HADOOP-14251 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 2.6.0 Reporter: John Zhuge Assignee: John Zhuge Properties stored in a credential store under their old (deprecated) keys cannot be read via the new property keys. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
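A sketch of the symptom with a hypothetical renamed key; {{Configuration#getPassword}} consults the configured credential providers:
{code}
import org.apache.hadoop.conf.Configuration;

public class CredentialDeprecationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.credential.provider.path",
        "jceks://file/tmp/creds.jceks");

    // The secret was stored under a hypothetical old key name "old.secret.key".
    // Looking it up through the new key name returns null because the credential
    // provider lookup does not apply the property deprecation mapping.
    char[] secret = conf.getPassword("new.secret.key");
    System.out.println(secret == null ? "not found" : "found");
  }
}
{code}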
[jira] [Created] (HADOOP-14243) Add S3A sensitive keys to default Hadoop sensitive keys
John Zhuge created HADOOP-14243: --- Summary: Add S3A sensitive keys to default Hadoop sensitive keys Key: HADOOP-14243 URL: https://issues.apache.org/jira/browse/HADOOP-14243 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor S3A credential sensitive keys should be added to the default list for hadoop.security.sensitive-config-keys. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14242) Configure KMS Tomcat SSL property sslEnabledProtocols
John Zhuge created HADOOP-14242: --- Summary: Configure KMS Tomcat SSL property sslEnabledProtocols Key: HADOOP-14242 URL: https://issues.apache.org/jira/browse/HADOOP-14242 Project: Hadoop Common Issue Type: Improvement Components: kms Affects Versions: 2.6.0 Reporter: John Zhuge Assignee: John Zhuge Allow users to configure KMS Tomcat SSL property {{sslEnabledProtocols}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14241) Add ADLS credential keys to Hadoop sensitive key list
John Zhuge created HADOOP-14241: --- Summary: Add ADLS credential keys to Hadoop sensitive key list Key: HADOOP-14241 URL: https://issues.apache.org/jira/browse/HADOOP-14241 Project: Hadoop Common Issue Type: Bug Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor ADLS credential config keys should be added to the default value for {{hadoop.security.sensitive-config-keys}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14234) Improve ADLS FileSystem tests with JUnit4
John Zhuge created HADOOP-14234: --- Summary: Improve ADLS FileSystem tests with JUnit4 Key: HADOOP-14234 URL: https://issues.apache.org/jira/browse/HADOOP-14234 Project: Hadoop Common Issue Type: Improvement Components: fs/adl, test Affects Versions: 2.8.0 Reporter: John Zhuge Priority: Minor HADOOP-14180 switches FileSystem contract tests to JUnit4 and makes various enhancements. Improve ADLS FileSystem contract tests based on that. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up
John Zhuge created HADOOP-14230: --- Summary: TestAdlFileSystemContractLive fails to clean up Key: HADOOP-14230 URL: https://issues.apache.org/jira/browse/HADOOP-14230 Project: Hadoop Common Issue Type: Sub-task Components: fs/adl, test Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor TestAdlFileSystemContractLive fails to clean up test directories after the tests. This is the leftover after {{testListStatus}}: {nonformat} $ bin/hadoop fs -ls -R / drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:17 /user drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:17 /user/jzhuge drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:17 /user/jzhuge/FileSystemContractBaseTest drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:17 /user/jzhuge/FileSystemContractBaseTest/testListStatus drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:17 /user/jzhuge/FileSystemContractBaseTest/testListStatus/a drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:17 /user/jzhuge/FileSystemContractBaseTest/testListStatus/b drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:17 /user/jzhuge/FileSystemContractBaseTest/testListStatus/c drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:17 /user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1 {noformat} This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}: {noformat} $ bin/hadoop fs -ls -R / drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:22 /user drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:22 /user/jzhuge drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:22 /user/jzhuge/FileSystemContractBaseTest drwxr-xr-x - ADLSAccessApp loginapp 0 2017-03-24 08:22 /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile -rw-r--r-- 1 ADLSAccessApp loginapp 2048 2017-03-24 08:22 /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14206) TestSFTPFileSystem#testFileExists failure: Invalid encoding for signature
John Zhuge created HADOOP-14206: --- Summary: TestSFTPFileSystem#testFileExists failure: Invalid encoding for signature Key: HADOOP-14206 URL: https://issues.apache.org/jira/browse/HADOOP-14206 Project: Hadoop Common Issue Type: Test Components: fs, test Affects Versions: 2.9.0 Reporter: John Zhuge https://builds.apache.org/job/PreCommit-HADOOP-Build/11862/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_121.txt: {noformat} Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.454 sec <<< FAILURE! - in org.apache.hadoop.fs.sftp.TestSFTPFileSystem testFileExists(org.apache.hadoop.fs.sftp.TestSFTPFileSystem) Time elapsed: 0.19 sec <<< ERROR! java.io.IOException: com.jcraft.jsch.JSchException: Session.connect: java.security.SignatureException: Invalid encoding for signature at com.jcraft.jsch.Session.connect(Session.java:565) at com.jcraft.jsch.Session.connect(Session.java:183) at org.apache.hadoop.fs.sftp.SFTPConnectionPool.connect(SFTPConnectionPool.java:168) at org.apache.hadoop.fs.sftp.SFTPFileSystem.connect(SFTPFileSystem.java:149) at org.apache.hadoop.fs.sftp.SFTPFileSystem.getFileStatus(SFTPFileSystem.java:663) at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1626) at org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testFileExists(TestSFTPFileSystem.java:190) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) at org.apache.hadoop.fs.sftp.SFTPConnectionPool.connect(SFTPConnectionPool.java:180) at 
org.apache.hadoop.fs.sftp.SFTPFileSystem.connect(SFTPFileSystem.java:149) at org.apache.hadoop.fs.sftp.SFTPFileSystem.getFileStatus(SFTPFileSystem.java:663) at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1626) at org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testFileExists(TestSFTPFileSystem.java:190) {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14205) No FileSystem for scheme: adl
John Zhuge created HADOOP-14205: --- Summary: No FileSystem for scheme: adl Key: HADOOP-14205 URL: https://issues.apache.org/jira/browse/HADOOP-14205 Project: Hadoop Common Issue Type: Bug Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge {noformat} $ bin/hadoop fs -ls / ls: No FileSystem for scheme: adl {noformat} The problem is {{core-default.xml}} misses property {{fs.adl.impl}} and {{fs.AbstractFileSystem.adl.impl}}. After adding these 2 properties to {{etc/hadoop/core-sitex.xml}}, got this error: {noformat} $ bin/hadoop fs -ls / -ls: Fatal internal error java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.adl.AdlFileSystem not found at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231) at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325) at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245) at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228) at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103) at org.apache.hadoop.fs.shell.Command.run(Command.java:175) at org.apache.hadoop.fs.FsShell.run(FsShell.java:315) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at org.apache.hadoop.fs.FsShell.main(FsShell.java:378) Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.adl.AdlFileSystem not found at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137) at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229) ... 18 more {noformat} The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
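A workaround sketch for the two gaps above (illustrative only; the store URI is a placeholder and {{org.apache.hadoop.fs.adl.Adl}} is assumed to be the AbstractFileSystem wrapper class): declare the implementations explicitly and keep the ADLS jars on the classpath:
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class AdlSchemeWorkaroundSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Equivalent to adding these two properties to core-site.xml.
    conf.set("fs.adl.impl", "org.apache.hadoop.fs.adl.AdlFileSystem");
    conf.set("fs.AbstractFileSystem.adl.impl", "org.apache.hadoop.fs.adl.Adl");

    // Still requires hadoop-azure-datalake and the ADLS SDK jars on the classpath.
    FileSystem fs = FileSystem.get(
        URI.create("adl://example.azuredatalakestore.net/"), conf);
    System.out.println(fs.getUri());
  }
}
{code}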
[jira] [Created] (HADOOP-14197) Fix ADLS doc section for credential provider
John Zhuge created HADOOP-14197: --- Summary: Fix ADLS doc section for credential provider Key: HADOOP-14197 URL: https://issues.apache.org/jira/browse/HADOOP-14197 Project: Hadoop Common Issue Type: Bug Components: documentation, fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge There are a few errors in section {{Protecting the Credentials with Credential Providers}} of {{index.md}}: * Should add {{dfs.adls.oauth2.client.id}} instead of {{dfs.adls.oauth2.credential}} to the cred store * Should add {{dfs.adls.oauth2.access.token.provider.type}} to core-site.xml or DistCp command line -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14185) Remove service loader config file for Har fs
John Zhuge created HADOOP-14185: --- Summary: Remove service loader config file for Har fs Key: HADOOP-14185 URL: https://issues.apache.org/jira/browse/HADOOP-14185 Project: Hadoop Common Issue Type: Improvement Components: fs Affects Versions: 2.7.3 Reporter: John Zhuge Priority: Minor Per discussion in HADOOP-14132. Remove line {{org.apache.hadoop.fs.HarFileSystem}} from the service loader config file hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem and add property {{fs.har.impl}} to {{core-default.xml}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14184) Remove service loader config file for ftp fs
John Zhuge created HADOOP-14184: --- Summary: Remove service loader config file for ftp fs Key: HADOOP-14184 URL: https://issues.apache.org/jira/browse/HADOOP-14184 Project: Hadoop Common Issue Type: Improvement Components: fs Affects Versions: 2.7.3 Reporter: John Zhuge Priority: Minor Per discussion in HADOOP-14132. Remove line {{org.apache.hadoop.fs.ftp.FTPFileSystem}} from the service loader config file hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem and add property {{fs.ftp.impl}} to {{core-default.xml}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14183) No service loader for wasb fs
John Zhuge created HADOOP-14183: --- Summary: No service loader for wasb fs Key: HADOOP-14183 URL: https://issues.apache.org/jira/browse/HADOOP-14183 Project: Hadoop Common Issue Type: Improvement Components: fs/azure Affects Versions: 2.7.3 Reporter: John Zhuge Priority: Minor Per discussion in HADOOP-14132. Remove the service loader config file hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem and add property {{fs.wasb.impl}} to {{core-default.xml}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14175) NPE when ADL store URI contains underscore
John Zhuge created HADOOP-14175: --- Summary: NPE when ADL store URI contains underscore Key: HADOOP-14175 URL: https://issues.apache.org/jira/browse/HADOOP-14175 Project: Hadoop Common Issue Type: Sub-task Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Priority: Minor Please note the underscore {{_}} in the store name {{jzhuge_adls}}. Same NPE wherever the underscore in the URI. {noformat} $ bin/hadoop fs -ls adl://jzhuge_adls.azuredatalakestore.net/ -ls: Fatal internal error java.lang.NullPointerException at org.apache.hadoop.fs.adl.AdlFileSystem.initialize(AdlFileSystem.java:145) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3257) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3306) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3274) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361) at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325) at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:249) at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:232) at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103) at org.apache.hadoop.fs.shell.Command.run(Command.java:176) at org.apache.hadoop.fs.FsShell.run(FsShell.java:326) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at org.apache.hadoop.fs.FsShell.main(FsShell.java:389) {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
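A small sketch of the likely trigger, which is standard {{java.net.URI}} behaviour rather than anything ADLS-specific: a hostname containing an underscore does not parse as a host, so {{getHost()}} returns null while the authority is still present, and unguarded code then dereferences the null:
{code}
import java.net.URI;

public class UnderscoreHostSketch {
  public static void main(String[] args) {
    URI uri = URI.create("adl://jzhuge_adls.azuredatalakestore.net/");
    System.out.println(uri.getAuthority()); // jzhuge_adls.azuredatalakestore.net
    System.out.println(uri.getHost());      // null, because '_' is not valid in a host name
  }
}
{code}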
[jira] [Created] (HADOOP-14174) Set default ADLS access token provider type to ClientCredential
John Zhuge created HADOOP-14174: --- Summary: Set default ADLS access token provider type to ClientCredential Key: HADOOP-14174 URL: https://issues.apache.org/jira/browse/HADOOP-14174 Project: Hadoop Common Issue Type: Sub-task Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge Split off from a big patch in HADOOP-14038. Switch {{fs.adl.oauth2.access.token.provider.type}} default from {{Custom}} to {{ClientCredential}} and add ADLS properties to {{core-default.xml}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
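For reference, a sketch of the ClientCredential configuration that the new default targets; the tenant, client id, and secret values are placeholders:
{code}
import org.apache.hadoop.conf.Configuration;

public class AdlClientCredentialSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // With the new default this property no longer has to be set explicitly.
    conf.set("fs.adl.oauth2.access.token.provider.type", "ClientCredential");
    // Placeholder values; supply the Azure AD token endpoint and app credentials.
    conf.set("fs.adl.oauth2.refresh.url",
        "https://login.microsoftonline.com/TENANT_ID/oauth2/token");
    conf.set("fs.adl.oauth2.client.id", "CLIENT_ID");
    conf.set("fs.adl.oauth2.credential", "CLIENT_SECRET");
  }
}
{code}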
[jira] [Created] (HADOOP-14173) Remove unused AdlConfKeys#ADL_EVENTS_TRACKING_SOURCE
John Zhuge created HADOOP-14173: --- Summary: Remove unused AdlConfKeys#ADL_EVENTS_TRACKING_SOURCE Key: HADOOP-14173 URL: https://issues.apache.org/jira/browse/HADOOP-14173 Project: Hadoop Common Issue Type: Sub-task Components: fs/adl Affects Versions: 2.8.0 Reporter: John Zhuge Assignee: John Zhuge Priority: Trivial Split off from a big patch in HADOOP-14038. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14152) Operation not authorized in swift unit test
John Zhuge created HADOOP-14152: --- Summary: Operation not authorized in swift unit test Key: HADOOP-14152 URL: https://issues.apache.org/jira/browse/HADOOP-14152 Project: Hadoop Common Issue Type: Bug Components: fs/swift Affects Versions: 3.0.0-alpha3 Reporter: John Zhuge Got this error during a full live unit tests. The tests lasted 63 minutes. This failure occurred at about 47 minute mark. {noformat} testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.fs.swift.TestSwiftFileSystemContract) Time elapsed: 1.888 sec <<< ERROR! org.apache.hadoop.fs.swift.exceptions.SwiftAuthenticationFailedException: Operation not authorized- current access token =AccessToken{id='AABYbCCD_cnfiEgeb-_8HEmFJMChOVX7mw-YmJyyB1aCVXdEAv7l4y58dM-TqOzD1zP6bjNTeVYx5GPRSuxzzEpSnDGFgjd1U9kLgL7LvQazirjoI9ggRq5Q4ZfWEWphJjjh2grC6Z3XAA', tenant=org.apache.hadoop.fs.swift.auth.entities.Tenant@42bc14c1, expires='2017-03-08T16:50:24.078Z'} at org.apache.hadoop.fs.swift.http.SwiftRestClient.buildException(SwiftRestClient.java:1466) at org.apache.hadoop.fs.swift.http.SwiftRestClient.perform(SwiftRestClient.java:1384) at org.apache.hadoop.fs.swift.http.SwiftRestClient.headRequest(SwiftRestClient.java:999) at org.apache.hadoop.fs.swift.http.SwiftRestClient.createContainer(SwiftRestClient.java:1236) at org.apache.hadoop.fs.swift.http.SwiftRestClient.createDefaultContainer(SwiftRestClient.java:1221) at org.apache.hadoop.fs.swift.http.SwiftRestClient.access$1600(SwiftRestClient.java:94) at org.apache.hadoop.fs.swift.http.SwiftRestClient$AuthenticationPost.extractResult(SwiftRestClient.java:1199) at org.apache.hadoop.fs.swift.http.SwiftRestClient$AuthenticationPost.extractResult(SwiftRestClient.java:1067) at org.apache.hadoop.fs.swift.http.SwiftRestClient.perform(SwiftRestClient.java:1388) at org.apache.hadoop.fs.swift.http.SwiftRestClient.authenticate(SwiftRestClient.java:1062) at org.apache.hadoop.fs.swift.http.SwiftRestClient.authIfNeeded(SwiftRestClient.java:1280) at org.apache.hadoop.fs.swift.http.SwiftRestClient.preRemoteCommand(SwiftRestClient.java:1296) at org.apache.hadoop.fs.swift.http.SwiftRestClient.headRequest(SwiftRestClient.java:998) at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.stat(SwiftNativeFileSystemStore.java:259) at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:214) at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:183) at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.getFileStatus(SwiftNativeFileSystem.java:173) at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.shouldCreate(SwiftNativeFileSystem.java:398) at org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.mkdirs(SwiftNativeFileSystem.java:333) at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2229) at org.apache.hadoop.fs.FileSystemContractBaseTest.writeAndRead(FileSystemContractBaseTest.java:826) at org.apache.hadoop.fs.FileSystemContractBaseTest.writeReadAndDelete(FileSystemContractBaseTest.java:298) at org.apache.hadoop.fs.FileSystemContractBaseTest.testWriteReadAndDeleteTwoBlocks(FileSystemContractBaseTest.java:287) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at junit.framework.TestCase.runTest(TestCase.java:176) at 
junit.framework.TestCase.runBare(TestCase.java:141) at junit.framework.TestResult$1.protect(TestResult.java:122) at junit.framework.TestResult.runProtected(TestResult.java:142) at junit.framework.TestResult.run(TestResult.java:125) at junit.framework.TestCase.run(TestCase.java:129) at junit.framework.TestSuite.runTest(TestSuite.java:255) at junit.framework.TestSuite.run(TestSuite.java:250) at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) at org.apache.maven.surefire.boo
[jira] [Created] (HADOOP-14151) Swift treats 0-len file as directory
John Zhuge created HADOOP-14151: --- Summary: Swift treats 0-len file as directory Key: HADOOP-14151 URL: https://issues.apache.org/jira/browse/HADOOP-14151 Project: Hadoop Common Issue Type: Bug Components: fs/swift Affects Versions: 3.0.0-alpha3 Reporter: John Zhuge Unit test {{TestSwiftContractRootDir#testRmNonEmptyRootDirNonRecursive}} fails at {{assertIsFile(file)}}. This leads me to suspect swift treats 0-len file as directory. Confirmed by the following experiment: {noformat} $ ls -l /tmp/zero /tmp/abc -rw-rw-r-- 1 jzhuge wheel 4 Mar 7 13:19 /tmp/abc -rw-rw-r-- 1 jzhuge wheel 0 Mar 7 13:19 /tmp/zero $ bin/hadoop fs -put /tmp/zero /tmp/abc swift://jzswift.rackspace/ 2017-03-07 13:24:09,321 INFO snative.SwiftNativeFileSystemStore: mv jzswift/zero._COPYING_ swift://jzswift.rackspace/zero $ bin/hadoop fs -touchz swift://jzswift.rackspace/touchz $ bin/hadoop fs -ls swift://jzswift.rackspace/ Found 3 items -rw-rw-rw- 1 4 2017-03-07 13:36 swift://jzswift.rackspace/abc drwxrwxrwx - 0 2017-03-07 13:28 swift://jzswift.rackspace/touchz drwxrwxrwx - 0 2017-03-07 13:32 swift://jzswift.rackspace/zero {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14149) Incorrect location of service provider configuration file on Azure Data lake Filesystem
[ https://issues.apache.org/jira/browse/HADOOP-14149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-14149. - Resolution: Duplicate Thank you [~jinhyukch...@gmail.com] for reporting the issue and submitting a PR. However, HADOOP-14123 already detected the issue and planned to remove this file per the discussion in HADOOP-14132. > Incorrect location of service provider configuration file on Azure Data lake > Filesystem > --- > > Key: HADOOP-14149 > URL: https://issues.apache.org/jira/browse/HADOOP-14149 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Affects Versions: 3.0.0-alpha2 >Reporter: Jin Hyuk Chang > > Currently, the provider configuration file is in wrong location -- should be > under services folder -- and ADL file system cannot be loaded without > registering manually into the configuration. > https://docs.oracle.com/javase/tutorial/ext/basics/spi.html#register-service-providers -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-13606) swift FS to add a service load metadata file
[ https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge reopened HADOOP-13606: - Assignee: John Zhuge (was: Steve Loughran) > swift FS to add a service load metadata file > > > Key: HADOOP-13606 > URL: https://issues.apache.org/jira/browse/HADOOP-13606 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/swift >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: John Zhuge > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13606-branch-2-001.patch > > > add a metadata file giving the FS impl of swift; remove the entry from > core-default.xml -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14131) kms.sh create erroneous dir
John Zhuge created HADOOP-14131: --- Summary: kms.sh create erroneous dir Key: HADOOP-14131 URL: https://issues.apache.org/jira/browse/HADOOP-14131 Project: Hadoop Common Issue Type: Bug Components: kms Affects Versions: 2.9.0 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor {{kms.sh start}} creates a literal {{$\{kms.log.dir\}}} directory under the current directory. Obviously the system property {{kms.log.dir}} is not set correctly, so log4j fails to substitute the variable. HADOOP-14083 introduced the issue by mistakenly moving {{kms.log.dir}} from a {{-D}} option to the {{catalina.properties}} file. The same goes for other properties that are not used only by Tomcat; they should still be set via a {{-D}} option. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14123) Make AdlFileSystem a service provider for FileSystem
John Zhuge created HADOOP-14123: --- Summary: Make AdlFileSystem a service provider for FileSystem Key: HADOOP-14123 URL: https://issues.apache.org/jira/browse/HADOOP-14123 Project: Hadoop Common Issue Type: Improvement Components: fs/adl Affects Versions: 3.0.0-alpha3 Reporter: John Zhuge Assignee: John Zhuge Add a provider-configuration file giving the FS impl of {{AdlFileSystem}}; remove the entry from core-default.xml -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
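For context, a small sketch of the mechanism being adopted here: {{FileSystem}} implementations listed in {{META-INF/services/org.apache.hadoop.fs.FileSystem}} files are discovered through {{java.util.ServiceLoader}}, so no {{fs.<scheme>.impl}} entry is needed in {{core-default.xml}}:
{code}
import java.util.ServiceLoader;
import org.apache.hadoop.fs.FileSystem;

public class FsServiceLoaderSketch {
  public static void main(String[] args) {
    // Discover FileSystem providers declared in META-INF/services files on the classpath.
    ServiceLoader<FileSystem> loader = ServiceLoader.load(FileSystem.class);
    for (FileSystem fs : loader) {
      System.out.println(fs.getClass().getName());
    }
    // Note: if a listed class is missing from the classpath, iteration throws
    // ServiceConfigurationError, which is presumably part of why the HADOOP-14132
    // discussion keeps optional filesystems out of these files.
  }
}
{code}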
[jira] [Created] (HADOOP-14122) Add ADLS to hadoop-cloud-storage-project
John Zhuge created HADOOP-14122: --- Summary: Add ADLS to hadoop-cloud-storage-project Key: HADOOP-14122 URL: https://issues.apache.org/jira/browse/HADOOP-14122 Project: Hadoop Common Issue Type: Improvement Components: fs/adl Affects Versions: 3.0.0-alpha3 Reporter: John Zhuge Add hadoop-azure-datalake to hadoop-cloud-storage-project. HADOOP-13687 did include hadoop-azure-datalake at one point. [~cnauroth], could you comment? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14092) Typo in hadoop-aws index.md
John Zhuge created HADOOP-14092: --- Summary: Typo in hadoop-aws index.md Key: HADOOP-14092 URL: https://issues.apache.org/jira/browse/HADOOP-14092 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 3.0.0-alpha3 Reporter: John Zhuge Priority: Trivial In section {{Testing against different regions}}, {{contract-tests.xml}} should be {{contract-test-options.xml}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-14060) KMS /logs servlet should have access control
[ https://issues.apache.org/jira/browse/HADOOP-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge reopened HADOOP-14060: - > KMS /logs servlet should have access control > > > Key: HADOOP-14060 > URL: https://issues.apache.org/jira/browse/HADOOP-14060 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 3.0.0-alpha3 >Reporter: John Zhuge >Assignee: John Zhuge > > HADOOP-14047 makes KMS call {{HttpServer2#setACL}}. Access control works fine > for /conf, /jmx, /logLevel, and /stacks, but not for /logs. > The code in {{AdminAuthorizedServlet#doGet}} for /logs and > {{ConfServlet#doGet}} for /conf is quite similar. This makes me believe that > /logs should be subject to the same access control as intended by the original > developer. > IMHO this could either be my misconfiguration or there is a bug somewhere in > {{HttpServer2}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14083) Old SSL clients should work with KMS
John Zhuge created HADOOP-14083: --- Summary: Old SSL clients should work with KMS Key: HADOOP-14083 URL: https://issues.apache.org/jira/browse/HADOOP-14083 Project: Hadoop Common Issue Type: Improvement Components: kms Affects Versions: 2.8.0, 2.7.4, 2.6.6 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor HADOOP-13812 upgraded Tomcat to 6.0.48 which filters weak ciphers. Old SSL clients such as curl stop working. The symptom is {{NSS error -12286}} when running {{curl -v}}. Instead of forcing the SSL clients to upgrade, we can configure Tomcat to explicitly allow enough weak ciphers so that old SSL clients can work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14073) Document default HttpServer2 servlets
John Zhuge created HADOOP-14073: --- Summary: Document default HttpServer2 servlets Key: HADOOP-14073 URL: https://issues.apache.org/jira/browse/HADOOP-14073 Project: Hadoop Common Issue Type: Improvement Components: documentation Affects Versions: 3.0.0-alpha3 Reporter: John Zhuge Priority: Minor Since many components (NN Web UI, YARN AHS, KMS, HttpFS, etc.) now use HttpServer2 which provides default servlets /conf, /jmx, /logLevel, /stacks, /logs, and /static, it'd be nice to have an independent markdown doc to describe authentication and authorization of these servlets. Related components can just link to this markdown doc. Good summary of HTTP servlets by [~yuanbo] for HADOOP-13119: https://issues.apache.org/jira/browse/HADOOP-13119?focusedCommentId=15635132&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15635132. I also made a poor attempt in https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm#L1086-L1129. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
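To illustrate what such a doc would cover, these are the kinds of requests the default servlets answer (host and port are placeholders; query parameters are from memory, so double-check them against the servlet code):
{noformat}
# JMX metrics, optionally filtered by an MBean query
curl 'http://nn.example.com:9870/jmx?qry=Hadoop:*'
# Effective configuration
curl 'http://nn.example.com:9870/conf'
# Read or change a logger's level
curl 'http://nn.example.com:9870/logLevel?log=org.apache.hadoop&level=DEBUG'
{noformat}
The doc would then spell out how HTTP authentication and the administrator ACL apply to each of them.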
[jira] [Created] (HADOOP-14060) KMS /logs servlet should have access control
John Zhuge created HADOOP-14060: --- Summary: KMS /logs servlet should have access control Key: HADOOP-14060 URL: https://issues.apache.org/jira/browse/HADOOP-14060 Project: Hadoop Common Issue Type: Bug Components: kms Affects Versions: 3.0.0-alpha3 Reporter: John Zhuge Assignee: John Zhuge HADOOP-14047 makes KMS call {{HttpServer2#setACL}}. Access control works fine for /conf, /jmx, /logLevel, and /stacks, but not for /logs. The code in {{AdminAuthorizedServlet#doGet}} for /logs and {{ConfServlet#doGet}} for /conf are quite similar. This makes me believe that /logs should be subject to the same access control as intended by the original developer. IMHO this could either be my misconfiguration or there is a bug somewhere in {{HttpServer2}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14047) Require admin to access KMS instrumentation servlets
John Zhuge created HADOOP-14047: --- Summary: Require admin to access KMS instrumentation servlets Key: HADOOP-14047 URL: https://issues.apache.org/jira/browse/HADOOP-14047 Project: Hadoop Common Issue Type: Bug Components: kms Affects Versions: 3.0.0-alpha3 Reporter: John Zhuge Assignee: John Zhuge Priority: Minor Discovered during HDFS-10860 review. To require admin to access KMS instrumentation servlets, {{HttpServer2#setACL}} must be called. Add configuration property {{hadoop.httpfs.http.administrators}}, similar to {{dfs.cluster.administrators}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
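The end state is presumably an ACL property in the service's site file, in the same user-and-group list format as {{dfs.cluster.administrators}}. A sketch using the property name from this report; the final name and config file may well differ (a KMS-specific name such as {{hadoop.kms.http.administrators}} would be the obvious analogue):
{noformat}
<property>
  <name>hadoop.httpfs.http.administrators</name>
  <!-- comma-separated users, then a space, then comma-separated groups; "*" grants access to everyone -->
  <value>alice,bob admins</value>
</property>
{noformat}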
[jira] [Created] (HADOOP-14030) PreCommit TestKDiag failure
John Zhuge created HADOOP-14030: --- Summary: PreCommit TestKDiag failure Key: HADOOP-14030 URL: https://issues.apache.org/jira/browse/HADOOP-14030 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: 3.0.0-alpha3 Reporter: John Zhuge https://builds.apache.org/job/PreCommit-HADOOP-Build/11523/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt {noformat} Tests run: 13, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 2.175 sec <<< FAILURE! - in org.apache.hadoop.security.TestKDiag testKeytabAndPrincipal(org.apache.hadoop.security.TestKDiag) Time elapsed: 0.05 sec <<< ERROR! org.apache.hadoop.security.KerberosAuthException: Login failure for user: f...@example.com from keytab /testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab javax.security.auth.login.LoginException: Unable to obtain password from user at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:897) at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760) at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755) at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680) at javax.security.auth.login.LoginContext.login(LoginContext.java:587) at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1355) at org.apache.hadoop.security.KDiag.loginFromKeytab(KDiag.java:630) at org.apache.hadoop.security.KDiag.execute(KDiag.java:396) at org.apache.hadoop.security.KDiag.run(KDiag.java:236) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.security.KDiag.exec(KDiag.java:1047) at org.apache.hadoop.security.TestKDiag.kdiag(TestKDiag.java:119) at org.apache.hadoop.security.TestKDiag.testKeytabAndPrincipal(TestKDiag.java:162) testFileOutput(org.apache.hadoop.security.TestKDiag) Time elapsed: 0.033 sec <<< ERROR! 
org.apache.hadoop.security.KerberosAuthException: Login failure for user: f...@example.com from keytab /testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab javax.security.auth.login.LoginException: Unable to obtain password from user at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:897) at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760) at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755) at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680) at javax.security.auth.login.LoginContext.login(LoginContext.java:587) at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1355) at org.apache.hadoop.security.KDiag.loginFromKeytab(KDiag.java:630) at org.apache.hadoop.security.KDiag.execute(KDiag.java:396) at org.apache.hadoop.security.KDiag.run(KDiag.java:236) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.security.KDiag.exec(KDiag.java:1047) at org.apache.hadoop.security.TestKDiag.kdiag(TestKDiag.java:119) at org.apache.hadoop.security.TestKDiag.testFileOutput(TestKDiag.java:186) testLoadResource(org.apache.hadoop.security.Tes
[jira] [Created] (HADOOP-14018) hadoop-client-api-3.0.0-alpha2.jar misses LICENSE file
John Zhuge created HADOOP-14018: --- Summary: hadoop-client-api-3.0.0-alpha2.jar misses LICENSE file Key: HADOOP-14018 URL: https://issues.apache.org/jira/browse/HADOOP-14018 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Priority: Minor In the 3.0.0-alpha2-RC0, hadoop-client-api-3.0.0-alpha2.jar is missing the LICENSE file, but it does have {{META-INF/NOTICE}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
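For anyone verifying a release candidate, a quick way to check the claim (jar location is a placeholder):
{noformat}
unzip -l hadoop-client-api-3.0.0-alpha2.jar | grep -iE 'license|notice'
{noformat}
If only {{META-INF/NOTICE}} shows up in the listing, the LICENSE file is indeed missing from the artifact.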
[jira] [Created] (HADOOP-14017) Integrate ADLS ACL
John Zhuge created HADOOP-14017: --- Summary: Integrate ADLS ACL Key: HADOOP-14017 URL: https://issues.apache.org/jira/browse/HADOOP-14017 Project: Hadoop Common Issue Type: Bug Components: fs/adl Affects Versions: 3.0.0-alpha3 Reporter: John Zhuge Assignee: John Zhuge Track the effort to integrate ADLS ACL, which is modeled after HDFS ACL. Both are based on POSIX ACL. Of course this will not go far without AuthN integration of some sort. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
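Once integrated, the expectation would be that the standard FsShell ACL commands work against an adl:// path, along these lines (store name, path, and ACL spec are invented for illustration):
{noformat}
hdfs dfs -setfacl -m user:alice:r-x adl://mystore.azuredatalakestore.net/data
hdfs dfs -getfacl adl://mystore.azuredatalakestore.net/data
{noformat}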
[jira] [Created] (HADOOP-13992) Fix HttpServer2#loadSSLConfiguration
John Zhuge created HADOOP-13992: --- Summary: Fix HttpServer2#loadSSLConfiguration Key: HADOOP-13992 URL: https://issues.apache.org/jira/browse/HADOOP-13992 Project: Hadoop Common Issue Type: Bug Components: kms, security Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Assignee: John Zhuge HADOOP-13597 added {{HttpServer2#loadSSLConfiguration}} that deviated from the existing way of loading SSL configuration. See these methods: * DFSUtil#loadSslConfiguration * WebAppUtils#loadSslConfiguration * SSLFactory#readSSLConfiguration Fix {{HttpServer2#loadSSLConfiguration}} and related code in {{KMSWebServer}} and {{MiniKMS}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13990) Document KMS use of CredentialProvider API
John Zhuge created HADOOP-13990: --- Summary: Document KMS use of CredentialProvider API Key: HADOOP-13990 URL: https://issues.apache.org/jira/browse/HADOOP-13990 Project: Hadoop Common Issue Type: Improvement Components: documentation, kms Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Assignee: John Zhuge Priority: Trivial HADOOP-13597 actually enabled support for the Credential Provider API. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13987) Enhance SSLFactory support for Credential Provider
John Zhuge created HADOOP-13987: --- Summary: Enhance SSLFactory support for Credential Provider Key: HADOOP-13987 URL: https://issues.apache.org/jira/browse/HADOOP-13987 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 2.6.0 Reporter: John Zhuge Assignee: John Zhuge Testing CredentialProvider with KMS: I populated the credentials file and added "hadoop.security.credential.provider.path" to core-site.xml, but "hadoop key list" failed due to an incorrect password. Once I also added "hadoop.security.credential.provider.path" to ssl-client.xml, "hadoop key list" worked! -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
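For reference, the workaround described above amounts to something like the following; the paths and alias name are invented for illustration, and the enhancement requested here would presumably let SSLFactory pick up the provider path from the core configuration so the duplicate entry in ssl-client.xml is no longer needed:
{noformat}
# store the truststore password in a JCEKS credential store
hadoop credential create ssl.client.truststore.password \
  -provider jceks://file/etc/hadoop/conf/ssl.jceks

# ssl-client.xml: point SSLFactory at the same provider
<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>jceks://file/etc/hadoop/conf/ssl.jceks</value>
</property>
{noformat}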
[jira] [Created] (HADOOP-13983) Print better error when accessing a non-existent store
John Zhuge created HADOOP-13983: --- Summary: Print better error when accessing a non-existent store Key: HADOOP-13983 URL: https://issues.apache.org/jira/browse/HADOOP-13983 Project: Hadoop Common Issue Type: Bug Components: fs/adl Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge Priority: Minor -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13982) Print better error message when accessing a store without permission
John Zhuge created HADOOP-13982: --- Summary: Print better error message when accessing a store without permission Key: HADOOP-13982 URL: https://issues.apache.org/jira/browse/HADOOP-13982 Project: Hadoop Common Issue Type: Bug Components: fs/adl Affects Versions: 3.0.0-alpha2 Reporter: John Zhuge The error message when accessing a store without permission is not user friendly: {noformat} $ hdfs dfs -ls adl://STORE.azuredatalakestore.net/ ls: Operation GETFILESTATUS failed with HTTP403 : null {noformat} Store {{STORE}} exists but Hadoop is configured with an SPI that does not have access to the store. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org