[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388824#comment-15388824 ]

Klaus Ma commented on HADOOP-13397:
-----------------------------------

I think a Dockerfile template is OK, but we should also provide guidance to users on how to run YARN in Docker. And as I mentioned on the dev list, there is also an issue I'd like to confirm with you: in hdfs-site.xml, "dfs.namenode.datanode.registration.ip-hostname-check" is set to false, but it seems the master uses the host IP instead of the container IP when connecting back to the datanode.

> Add dockerfile for Hadoop
> -------------------------
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Klaus Ma
>
> For now, there is no community-maintained Dockerfile in Hadoop; most Docker
> images are provided by vendors, e.g.
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2. HortonWorks' sequenceiq image: https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR's mapr-sandbox-base: https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> This JIRA proposes to provide a community-maintained Dockerfile in Hadoop,
> with the following requirements:
> 1. Separate Docker images for master & agents, e.g. resource manager & node manager
> 2. Default configuration that starts the master & agents without manual configuration
> 3. Start Hadoop processes in non-daemon mode
> Here's my dockerfile to start master/agent:
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread: http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
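For reference, the setting discussed above lives in hdfs-site.xml; a minimal fragment might look like the following (the value shown is illustrative, to be set per deployment):

```xml
<!-- hdfs-site.xml: relax the namenode's check that a registering datanode's
     hostname resolves to the IP the connection came from. This is commonly
     needed when datanodes run inside containers. -->
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
```

Note that, as the comment observes, this only relaxes registration; it does not by itself make the namenode connect back on the container IP.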
[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates wrong values
[ https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shen Yinjie updated HADOOP-13405:
---------------------------------
Description:
The description for "fs.s3a.acl.default" indicates its values are "private,public-read". When the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is executed, it fails with:
{{-ls: No enum constant com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
while in the Amazon SDK:
{code}
public enum CannedAccessControlList {
    Private("private"),
    PublicRead("public-read"),
    PublicReadWrite("public-read-write"),
    AuthenticatedRead("authenticated-read"),
    LogDeliveryWrite("log-delivery-write"),
    BucketOwnerRead("bucket-owner-read"),
    BucketOwnerFullControl("bucket-owner-full-control");
{code}
So the values should be the enum constant names, e.g. "Private", "PublicRead"...
Attached a simple patch.

was:
The description for "fs.s3a.acl.default" indicates its values are "private,public-read". When the value is set to public-read, it fails with:
{{No enum constant com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
while in the Amazon SDK:
{code}
public enum CannedAccessControlList {
    Private("private"),
    PublicRead("public-read"),
    PublicReadWrite("public-read-write"),
    AuthenticatedRead("authenticated-read"),
    LogDeliveryWrite("log-delivery-write"),
    BucketOwnerRead("bucket-owner-read"),
    BucketOwnerFullControl("bucket-owner-full-control");
{code}
So the values should be the enum constant names, e.g. "Private", "PublicRead"...
Attached a simple patch.
> doc for “fs.s3a.acl.default” indicates wrong values
> ---------------------------------------------------
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 2.8.0, 3.0.0-alpha2
> Reporter: Shen Yinjie
> Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13405.patch
>
> The description for "fs.s3a.acl.default" indicates its values are "private,public-read".
> When the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is executed, it fails with:
> {{-ls: No enum constant com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the Amazon SDK:
> {code}
> public enum CannedAccessControlList {
>     Private("private"),
>     PublicRead("public-read"),
>     PublicReadWrite("public-read-write"),
>     AuthenticatedRead("authenticated-read"),
>     LogDeliveryWrite("log-delivery-write"),
>     BucketOwnerRead("bucket-owner-read"),
>     BucketOwnerFullControl("bucket-owner-full-control");
> {code}
> So the values should be the enum constant names, e.g. "Private", "PublicRead"...
> Attached a simple patch.
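The failure above can be reproduced without the AWS SDK: {{Enum.valueOf}} matches the enum *constant* name (e.g. PublicRead), not the wire string the constant wraps (e.g. "public-read"). A self-contained sketch, using a local stand-in enum that mirrors the SDK's CannedAccessControlList rather than the real class:

```java
// Demonstrates why "public-read" is rejected as a value for fs.s3a.acl.default
// while "PublicRead" is accepted: valueOf() resolves constant names only.
public class CannedAclDemo {
    // Hypothetical stand-in for com.amazonaws.services.s3.model.CannedAccessControlList,
    // abbreviated to three constants for the example.
    enum CannedAccessControlList {
        Private("private"),
        PublicRead("public-read"),
        PublicReadWrite("public-read-write");

        private final String headerValue;
        CannedAccessControlList(String headerValue) { this.headerValue = headerValue; }
        @Override public String toString() { return headerValue; }
    }

    public static void main(String[] args) {
        // The documented value "public-read" is not a constant name and throws:
        try {
            CannedAccessControlList.valueOf("public-read");
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            System.out.println("No enum constant for \"public-read\"");
        }
        // The constant name "PublicRead" resolves, and wraps the wire string:
        CannedAccessControlList acl = CannedAccessControlList.valueOf("PublicRead");
        System.out.println("PublicRead maps to header value: " + acl); // public-read
    }
}
```

This is exactly the mismatch the attached patch addresses: the documentation should list the constant names, not the S3 header strings.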
[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates wrong values
[ https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shen Yinjie updated HADOOP-13405:
---------------------------------
Status: Patch Available (was: Open)

> doc for “fs.s3a.acl.default” indicates wrong values
> ---------------------------------------------------
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 2.8.0, 3.0.0-alpha2
> Reporter: Shen Yinjie
> Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13405.patch
[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates wrong values
[ https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shen Yinjie updated HADOOP-13405:
---------------------------------
Attachment: HADOOP-13405.patch

> doc for “fs.s3a.acl.default” indicates wrong values
> ---------------------------------------------------
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 2.8.0, 3.0.0-alpha2
> Reporter: Shen Yinjie
> Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13405.patch
[jira] [Created] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates wrong values
Shen Yinjie created HADOOP-13405:
--------------------------------

Summary: doc for “fs.s3a.acl.default” indicates wrong values
Key: HADOOP-13405
URL: https://issues.apache.org/jira/browse/HADOOP-13405
Project: Hadoop Common
Issue Type: Bug
Components: fs/s3
Affects Versions: 2.8.0, 3.0.0-alpha2
Reporter: Shen Yinjie
Priority: Minor
Fix For: 3.0.0-alpha2

The description for "fs.s3a.acl.default" indicates its values are "private,public-read". When the value is set to public-read, it fails with:
{{No enum constant com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
while in the Amazon SDK:
{code}
public enum CannedAccessControlList {
    Private("private"),
    PublicRead("public-read"),
    PublicReadWrite("public-read-write"),
    AuthenticatedRead("authenticated-read"),
    LogDeliveryWrite("log-delivery-write"),
    BucketOwnerRead("bucket-owner-read"),
    BucketOwnerFullControl("bucket-owner-full-control");
{code}
So the values should be the enum constant names, e.g. "Private", "PublicRead"...
Attached a simple patch.
[jira] [Commented] (HADOOP-13404) RPC call hangs when server side CPU overloaded
[ https://issues.apache.org/jira/browse/HADOOP-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388747#comment-15388747 ]

Peter Shi commented on HADOOP-13404:
------------------------------------

I think there are two solutions:
1) Add a ping response in the RPC server and check for it on the client side. This requires both client-side and server-side modification, which may introduce compatibility issues.
2) Add a thread that scans the calls pending on the connection and delivers a timeout exception to a call's response if it has not received a response for a long time. This is a client-side-only solution.

> RPC call hangs when server side CPU overloaded
> ----------------------------------------------
>
> Key: HADOOP-13404
> URL: https://issues.apache.org/jira/browse/HADOOP-13404
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Peter Shi
>
> In our reliability test, we injected a fault into the namenode (e.g. 100% CPU consumption). After fault injection, all requests on existing connections hang forever instead of timing out; new connections fail over to the other namenode in an HA deployment.
> There is no timeout mechanism for calls on an established connection.
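Option 2) above can be sketched as follows. This is a minimal illustration, not Hadoop's actual ipc.Client internals; the class and field names are hypothetical. A single scheduled scanner walks the calls pending on a connection and fails any call that has waited longer than the timeout, so the caller unblocks even if the server never responds:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.*;

// Client-side-only timeout: periodically scan pending calls and complete
// any overdue call exceptionally instead of letting it hang forever.
public class CallTimeoutScanner {
    static class Call {
        final long startMillis = System.currentTimeMillis();
        final CompletableFuture<Object> response = new CompletableFuture<>();
    }

    private final ConcurrentHashMap<Integer, Call> pendingCalls = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scanner = Executors.newSingleThreadScheduledExecutor();

    public CallTimeoutScanner(long timeoutMillis, long scanIntervalMillis) {
        scanner.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            Iterator<Map.Entry<Integer, Call>> it = pendingCalls.entrySet().iterator();
            while (it.hasNext()) {
                Call call = it.next().getValue();
                if (now - call.startMillis >= timeoutMillis) {
                    // Fail the call locally; the blocked caller sees a timeout.
                    call.response.completeExceptionally(
                        new TimeoutException("no response from server"));
                    it.remove();
                }
            }
        }, scanIntervalMillis, scanIntervalMillis, TimeUnit.MILLISECONDS);
    }

    // Called when a request is sent; a real client would remove the entry
    // and complete the future when the response arrives.
    public Call register(int callId) {
        Call call = new Call();
        pendingCalls.put(callId, call);
        return call;
    }

    public void shutdown() { scanner.shutdownNow(); }
}
```

As noted above, the appeal of this variant is that it needs no wire-protocol change, so it avoids the compatibility concerns of option 1).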
[jira] [Created] (HADOOP-13404) RPC call hangs when server side CPU overloaded
Peter Shi created HADOOP-13404:
-------------------------------

Summary: RPC call hangs when server side CPU overloaded
Key: HADOOP-13404
URL: https://issues.apache.org/jira/browse/HADOOP-13404
Project: Hadoop Common
Issue Type: Bug
Reporter: Peter Shi

In our reliability test, we injected a fault into the namenode (e.g. 100% CPU consumption). After fault injection, all requests on existing connections hang forever instead of timing out; new connections fail over to the other namenode in an HA deployment.

There is no timeout mechanism for calls on an established connection.
[jira] [Commented] (HADOOP-13041) Enhancement CoderUtil test code
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388680#comment-15388680 ]

Kai Sasaki commented on HADOOP-13041:
-------------------------------------

[~drankye] I rebased on current trunk. Could you check again, please?

> Enhancement CoderUtil test code
> -------------------------------
>
> Key: HADOOP-13041
> URL: https://issues.apache.org/jira/browse/HADOOP-13041
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Kai Sasaki
> Assignee: Kai Sasaki
> Attachments: HADOOP-13041.01.patch, HADOOP-13041.02.patch, HADOOP-13041.03.patch
>
> Add missing tests for {{CoderUtil}}.
[jira] [Commented] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements
[ https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388654#comment-15388654 ]

Subramanyam Pattipaka commented on HADOOP-13403:
------------------------------------------------

[~cnauroth], I have submitted an initial patch for review. HADOOP-13208 seems to be a generic implementation; I am still going through the details. My changes are specific to the Native Azure FileSystem. I am using the flat listing option provided by the Azure Storage client, which returns all files and directories.

> AzureNativeFileSystem rename/delete performance improvements
> ------------------------------------------------------------
>
> Key: HADOOP-13403
> URL: https://issues.apache.org/jira/browse/HADOOP-13403
> Project: Hadoop Common
> Issue Type: Bug
> Components: azure
> Reporter: Subramanyam Pattipaka
> Attachments: HADOOP-13403-001.patch
>
> WASB Performance Improvements
>
> Problem
> -------
> Azure Native File System operations like rename/delete on source directories that contain a large number of directories and/or files experience performance issues. Possible reasons:
> a) We first list all files under the source directory hierarchically. This is a serial operation.
> b) After collecting the entire list of files under a folder, we delete or rename the files one by one, serially.
> c) There is no logging information available for these costly operations, even in DEBUG mode, which makes it difficult to understand WASB performance issues.
>
> Proposal
> --------
> Step 1: Rename and delete operations will generate a list of all files under the source folder. We need to use the Azure flat listing option to get the list with a single request to the Azure store. We have introduced the config fs.azure.flatlist.enable to enable this option. The default value is 'false', which means flat listing is disabled.
> Step 2: Create thread pools and threads dynamically based on user configuration. These thread pools will be deleted after the operation is over.
> We are introducing two new configs:
> a) fs.azure.rename.threads: number of rename threads. The default value is 0, which means no threading.
> b) fs.azure.delete.threads: number of delete threads. The default value is 0, which means no threading.
> We have provided debug log information on the number of threads not used for the operation, which can be useful for tuning.
> Failure scenarios: If we fail to create the thread pool for any reason (for example, trying to create it with a very large thread count such as 100), we fall back to the serial operation.
> Step 3: Blob operations can be done in parallel using multiple threads, each executing the following snippet:
> {code}
> while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
>     FileMetadata file = files[currentIndex];
>     Rename/delete(file);
> }
> {code}
> The above strategy depends on the fact that all files are stored in a final array and each thread claims the next index in a synchronized way. The advantage of this strategy is that even if the user configures a large number of unusable threads, we always ensure that the work does not get serialized due to lagging threads.
> We are logging the following information, which can be useful for tuning the number of threads:
> a) Number of unusable threads
> b) Time taken by each thread
> c) Number of files processed by each thread
> d) Total time taken for the operation
> Failure scenarios: Failure to queue a thread execution request shouldn't be an issue as long as at least one thread completes execution successfully. If we couldn't schedule even one thread, we take the serial path. Exceptions raised while executing threads are still considered regular exceptions and are returned to the client as operation failures. Exceptions raised while stopping threads and deleting the thread pool can be ignored if the operation on all files completed without any issue.
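The claim-next-index loop in Step 3 can be sketched as a runnable example. The names and the "delete" body are illustrative stand-ins, not the actual WASB implementation: each worker claims the next index from a shared AtomicInteger, so a slow or never-scheduled thread can only reduce parallelism, never serialize the remaining work behind it.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Work-claiming parallel delete over a final array of file names.
public class ParallelDelete {
    public static Set<String> deleteAll(String[] files, int threads) throws InterruptedException {
        AtomicInteger fileIndex = new AtomicInteger(0);
        Set<String> deleted = ConcurrentHashMap.newKeySet();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                int currentIndex;
                // getAndIncrement() hands each index to exactly one thread.
                while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
                    deleted.add(files[currentIndex]); // stand-in for delete(file)
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return deleted;
    }

    public static void main(String[] args) throws InterruptedException {
        String[] files = new String[100];
        for (int i = 0; i < files.length; i++) files[i] = "file-" + i;
        System.out.println(deleteAll(files, 8).size()); // prints 100
    }
}
```

Because AtomicInteger.getAndIncrement is atomic, every index is processed exactly once regardless of how many of the configured threads actually get scheduled, which matches the "unusable threads" observation above.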
[jira] [Updated] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements
[ https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Subramanyam Pattipaka updated HADOOP-13403:
-------------------------------------------
Attachment: HADOOP-13403-001.patch

> AzureNativeFileSystem rename/delete performance improvements
> ------------------------------------------------------------
>
> Key: HADOOP-13403
> URL: https://issues.apache.org/jira/browse/HADOOP-13403
> Project: Hadoop Common
> Issue Type: Bug
> Components: azure
> Reporter: Subramanyam Pattipaka
> Attachments: HADOOP-13403-001.patch
[jira] [Commented] (HADOOP-13382) remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects
[ https://issues.apache.org/jira/browse/HADOOP-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388626#comment-15388626 ]

Hudson commented on HADOOP-13382:
---------------------------------

SUCCESS: Integrated in Hadoop-trunk-Commit #10133 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10133/])
HADOOP-13382. Remove unneeded commons-httpclient dependencies from POM (mfoley: rev 12aa184479675d6c9bd36fd8451f605ee9505b47)
* hadoop-tools/hadoop-openstack/pom.xml
* hadoop-project/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml

> remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects
> -----------------------------------------------------------------------------------------
>
> Key: HADOOP-13382
> URL: https://issues.apache.org/jira/browse/HADOOP-13382
> Project: Hadoop Common
> Issue Type: Improvement
> Components: build
> Affects Versions: 2.8.0
> Reporter: Matt Foley
> Assignee: Matt Foley
> Attachments: HADOOP-13382-branch-2.000.patch, HADOOP-13382-branch-2.8.000.patch, HADOOP-13382.000.patch
>
> In branch-2.8 and later, the patches for various child and related bugs listed in HADOOP-10105, most recently including HADOOP-11613, HADOOP-12710, HADOOP-12711, HADOOP-12552, and HDFS-10623, eliminate all use of "commons-httpclient" from Hadoop and its sub-projects (except for hadoop-tools/hadoop-openstack; see HADOOP-11614).
> However, after incorporating these patches, "commons-httpclient" is still listed as a dependency in these POM files:
> * hadoop-project/pom.xml
> * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
> We wish to remove these, but since commons-httpclient is still used in many files in hadoop-tools/hadoop-openstack, we'll need to _add_ the dependency to
> * hadoop-tools/hadoop-openstack/pom.xml
> (We'll add a note to HADOOP-11614 to undo this when commons-httpclient is removed from hadoop-openstack.)
> In 2.8, this was mostly done by HADOOP-12552, but the version info formerly inherited from hadoop-project/pom.xml also needs to be added, so that is in the branch-2.8 version of the patch.
[jira] [Commented] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements
[ https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388594#comment-15388594 ]

Chris Nauroth commented on HADOOP-13403:
----------------------------------------

Hello [~pattipaka]. This sounds interesting. I won't be certain until I see the code changes, but some of it (particularly the "flat listing" option) sounds similar to optimizations done for S3A listings in HADOOP-13208. Cc'ing [~ste...@apache.org] who authored that patch, just FYI.

> AzureNativeFileSystem rename/delete performance improvements
> ------------------------------------------------------------
>
> Key: HADOOP-13403
> URL: https://issues.apache.org/jira/browse/HADOOP-13403
> Project: Hadoop Common
> Issue Type: Bug
> Components: azure
> Reporter: Subramanyam Pattipaka
[jira] [Updated] (HADOOP-13382) remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects
[ https://issues.apache.org/jira/browse/HADOOP-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Foley updated HADOOP-13382:
--------------------------------
Resolution: Fixed
Status: Resolved (was: Patch Available)

Thanks, [~cnauroth] and [~steve_l]. Committed as:
* trunk - 12aa184479675d6c9bd
* branch-2 - ea10e1384ff65e27521
* branch-2.8 - c96cb3fd48925b3eb2c

> remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects
> -----------------------------------------------------------------------------------------
>
> Key: HADOOP-13382
> URL: https://issues.apache.org/jira/browse/HADOOP-13382
> Project: Hadoop Common
> Issue Type: Improvement
> Components: build
> Affects Versions: 2.8.0
> Reporter: Matt Foley
> Assignee: Matt Foley
> Attachments: HADOOP-13382-branch-2.000.patch, HADOOP-13382-branch-2.8.000.patch, HADOOP-13382.000.patch
[jira] [Created] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements
Subramanyam Pattipaka created HADOOP-13403:
-------------------------------------------

Summary: AzureNativeFileSystem rename/delete performance improvements
Key: HADOOP-13403
URL: https://issues.apache.org/jira/browse/HADOOP-13403
Project: Hadoop Common
Issue Type: Bug
Components: azure
Reporter: Subramanyam Pattipaka

WASB Performance Improvements

Problem
-------
Azure Native File System operations like rename/delete on source directories that contain a large number of directories and/or files experience performance issues. Possible reasons:
a) We first list all files under the source directory hierarchically. This is a serial operation.
b) After collecting the entire list of files under a folder, we delete or rename the files one by one, serially.
c) There is no logging information available for these costly operations, even in DEBUG mode, which makes it difficult to understand WASB performance issues.

Proposal
--------
Step 1: Rename and delete operations will generate a list of all files under the source folder. We need to use the Azure flat listing option to get the list with a single request to the Azure store. We have introduced the config fs.azure.flatlist.enable to enable this option. The default value is 'false', which means flat listing is disabled.

Step 2: Create thread pools and threads dynamically based on user configuration. These thread pools will be deleted after the operation is over. We are introducing two new configs:
a) fs.azure.rename.threads: number of rename threads. The default value is 0, which means no threading.
b) fs.azure.delete.threads: number of delete threads. The default value is 0, which means no threading.
We have provided debug log information on the number of threads not used for the operation, which can be useful for tuning.

Failure scenarios: If we fail to create the thread pool for any reason (for example, trying to create it with a very large thread count such as 100), we fall back to the serial operation.

Step 3: Blob operations can be done in parallel using multiple threads, each executing the following snippet:

while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
    FileMetadata file = files[currentIndex];
    Rename/delete(file);
}

The above strategy depends on the fact that all files are stored in a final array and each thread claims the next index in a synchronized way. The advantage of this strategy is that even if the user configures a large number of unusable threads, we always ensure that the work does not get serialized due to lagging threads.

We are logging the following information, which can be useful for tuning the number of threads:
a) Number of unusable threads
b) Time taken by each thread
c) Number of files processed by each thread
d) Total time taken for the operation

Failure scenarios: Failure to queue a thread execution request shouldn't be an issue as long as at least one thread completes execution successfully. If we couldn't schedule even one thread, we take the serial path. Exceptions raised while executing threads are still considered regular exceptions and are returned to the client as operation failures. Exceptions raised while stopping threads and deleting the thread pool can be ignored if the operation on all files completed without any issue.
[jira] [Commented] (HADOOP-13402) S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS.
[ https://issues.apache.org/jira/browse/HADOOP-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388550#comment-15388550 ] Chris Nauroth commented on HADOOP-13402: Another special case of this is renaming to root. Even if the behavior described above was fixed, an operation like rename("/d1/d2/f1", "/") would fail due to this logic at the top of {{S3AFileSystem#innerRename}}: {code} if (srcKey.isEmpty() || dstKey.isEmpty()) { LOG.debug("rename: source {} or dest {}, is empty", srcKey, dstKey); return false; } {code} I think we can cover both cases within scope of this issue. > S3A should allow renaming to a pre-existing destination directory to move the > source path under that directory, similar to HDFS. > > > Key: HADOOP-13402 > URL: https://issues.apache.org/jira/browse/HADOOP-13402 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Priority: Minor > > In HDFS, a rename to a destination path that is a pre-existing directory is > interpreted as moving the source path relative to that pre-existing > directory. In S3A, this operation currently fails (does nothing and returns > {{false}}), unless that destination directory is empty. This issue proposes > to change S3A to allow this behavior, so that it more closely matches the > semantics of HDFS and other file systems. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12928) Update netty to 3.10.5.Final to sync with zookeeper
[ https://issues.apache.org/jira/browse/HADOOP-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388540#comment-15388540 ] Lei (Eddy) Xu commented on HADOOP-12928: Hi, [~ozawa] Zookeeper recently changed netty to {{3.10.5.Final}} as well: https://issues.apache.org/jira/browse/ZOOKEEPER-2450. > Update netty to 3.10.5.Final to sync with zookeeper > --- > > Key: HADOOP-12928 > URL: https://issues.apache.org/jira/browse/HADOOP-12928 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.7.2 >Reporter: Hendy Irawan >Assignee: Lei (Eddy) Xu > Attachments: HADOOP-12928-branch-2.00.patch, HADOOP-12928.01.patch, > HADOOP-12928.02.patch, HDFS-12928.00.patch > > > Update netty to 3.7.1.Final because hadoop-client 2.7.2 depends on zookeeper > 3.4.6 which depends on netty 3.7.x. Related to HADOOP-12927 > Pull request: https://github.com/apache/hadoop/pull/85 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13402) S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS.
[ https://issues.apache.org/jira/browse/HADOOP-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388537#comment-15388537 ] Chris Nauroth commented on HADOOP-13402: Also relevant is the override of {{testRenameDirIntoExistingDir}} in {{TestS3AContractRename}}. S3A is the only file system that provides a special case override of that in its contract tests. > S3A should allow renaming to a pre-existing destination directory to move the > source path under that directory, similar to HDFS. > > > Key: HADOOP-13402 > URL: https://issues.apache.org/jira/browse/HADOOP-13402 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Priority: Minor > > In HDFS, a rename to a destination path that is a pre-existing directory is > interpreted as moving the source path relative to that pre-existing > directory. In S3A, this operation currently fails (does nothing and returns > {{false}}), unless that destination directory is empty. This issue proposes > to change S3A to allow this behavior, so that it more closely matches the > semantics of HDFS and other file systems. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13402) S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS.
[ https://issues.apache.org/jira/browse/HADOOP-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13402: --- Reporter: Rajesh Balamohan (was: Chris Nauroth) Thank you to [~rajesh.balamohan] for identifying this during some Hive testing. > S3A should allow renaming to a pre-existing destination directory to move the > source path under that directory, similar to HDFS. > > > Key: HADOOP-13402 > URL: https://issues.apache.org/jira/browse/HADOOP-13402 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Priority: Minor > > In HDFS, a rename to a destination path that is a pre-existing directory is > interpreted as moving the source path relative to that pre-existing > directory. In S3A, this operation currently fails (does nothing and returns > {{false}}), unless that destination directory is empty. This issue proposes > to change S3A to allow this behavior, so that it more closely matches the > semantics of HDFS and other file systems. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13402) S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS.
[ https://issues.apache.org/jira/browse/HADOOP-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388532#comment-15388532 ] Chris Nauroth commented on HADOOP-13402: For example, assuming pre-existing directory structure /d1/d2/f1, a call to rename("/d1/d2/f1", "/d1") results in moving f1 to absolute path /d1/f1. In S3A, this call fails because of this logic in {{S3AFileSystem#rename}}: {code} if (dstStatus.isDirectory() && !dstStatus.isEmptyDirectory()) { return false; } {code} That logic was introduced in HADOOP-10714. It sought to improve on the logic from the original HADOOP-10400 contribution by more closely matching old behavior of S3 and S3N. However, we still have this difference from the semantics of HDFS (and others). Note that this difference in behavior only occurs when the destination is a non-empty pre-existing directory. It works fine if the destination specifies the full path. Taking my example above, rename("/d1/d2/f1", "/d1") has the problem, but rename("/d1/d2/f1", "/d1/f1") works fine. Applications can use that as a workaround until we patch this. > S3A should allow renaming to a pre-existing destination directory to move the > source path under that directory, similar to HDFS. > > > Key: HADOOP-13402 > URL: https://issues.apache.org/jira/browse/HADOOP-13402 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Priority: Minor > > In HDFS, a rename to a destination path that is a pre-existing directory is > interpreted as moving the source path relative to that pre-existing > directory. In S3A, this operation currently fails (does nothing and returns > {{false}}), unless that destination directory is empty. This issue proposes > to change S3A to allow this behavior, so that it more closely matches the > semantics of HDFS and other file systems. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
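The workaround mentioned in the comments above — passing the fully specified destination path instead of the destination directory — can be sketched as below. This is a hypothetical helper for callers, not part of the S3A API; it also handles the rename-to-root case noted earlier.

```java
public class RenameWorkaroundSketch {
    /**
     * Appends the source's final path component to the destination
     * directory, so rename can always be called with a full target path.
     * E.g. ("/d1/d2/f1", "/d1") yields "/d1/f1"; ("/d1/d2/f1", "/")
     * yields "/f1".
     */
    public static String fullDestination(String src, String dstDir) {
        String name = src.substring(src.lastIndexOf('/') + 1);
        return dstDir.endsWith("/") ? dstDir + name : dstDir + "/" + name;
    }

    public static void main(String[] args) {
        // Instead of rename("/d1/d2/f1", "/d1"), which S3A currently
        // rejects, call rename with the fully specified target path.
        System.out.println(fullDestination("/d1/d2/f1", "/d1"));
    }
}
```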
[jira] [Created] (HADOOP-13402) S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS.
Chris Nauroth created HADOOP-13402: -- Summary: S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS. Key: HADOOP-13402 URL: https://issues.apache.org/jira/browse/HADOOP-13402 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Chris Nauroth Priority: Minor In HDFS, a rename to a destination path that is a pre-existing directory is interpreted as moving the source path relative to that pre-existing directory. In S3A, this operation currently fails (does nothing and returns {{false}}), unless that destination directory is empty. This issue proposes to change S3A to allow this behavior, so that it more closely matches the semantics of HDFS and other file systems. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13401) usability improvements of ApplicationClassLoader
[ https://issues.apache.org/jira/browse/HADOOP-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388515#comment-15388515 ] Vrushali C commented on HADOOP-13401: - [~sjlee0] just noticed that #2 has been done in HADOOP-11211 > usability improvements of ApplicationClassLoader > > > Key: HADOOP-13401 > URL: https://issues.apache.org/jira/browse/HADOOP-13401 > Project: Hadoop Common > Issue Type: Sub-task > Components: util >Reporter: Sangjin Lee > > Miscellaneous usability improvements for {{ApplicationClassLoader}}: > - Improve the system class override mechanism: today the override is a > wholesale replacement of the default; enable modifying the default > - Improve handling of addition and subtraction of system classes: today it is > sensitive to order > - other miscellaneous improvements that make using {{ApplicationClassLoader}} > easier -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388513#comment-15388513 ] Hudson commented on HADOOP-13240: - ABORTED: Integrated in Hadoop-trunk-Commit #10132 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10132/]) HADOOP-13240. TestAclCommands.testSetfaclValidations fail. Contributed (cnauroth: rev 43cf6b101dacd96bacfd199826b717f6946109af) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 2.4.1, 2.7.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, > HADOOP-13240.003.patch, HADOOP-13240.004.patch > > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! 
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13399) deprecate the Configuration classloader
[ https://issues.apache.org/jira/browse/HADOOP-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388511#comment-15388511 ] Sean Busbey commented on HADOOP-13399: -- Deprecate in branch-2 and remove in one of the 3.0 alphas? > deprecate the Configuration classloader > --- > > Key: HADOOP-13399 > URL: https://issues.apache.org/jira/browse/HADOOP-13399 > Project: Hadoop Common > Issue Type: Sub-task > Components: util >Reporter: Sangjin Lee >Assignee: Sangjin Lee >Priority: Critical > > Today, anyone can simply call {{Configuration.setClassLoader()}} to set the > configuration classloader to any arbitrary classloader. This classloader is > then used to get a class or a resource through {{Configuration}} > ({{getClass()}} and {{getResource()}}). > In essence, the {{Configuration}} classloader is effectively a globally > shared classloader without contract. This is one step worse than TCCL in that > regard. > I propose to remove/deprecate {{setClassLoader()}} and {{getClassLoader()}} > and simply use TCCL (and then the classloader that loaded the > {{Configuration}} class) to load classes and resources. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
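The lookup order proposed in HADOOP-13399 — try the thread context classloader (TCCL) first, then fall back to the classloader that loaded the anchoring class — can be sketched as follows. The names here are illustrative, not Hadoop's actual API.

```java
public class TcclLookupSketch {
    /**
     * Resolves a classloader without any globally settable state:
     * prefer the TCCL, fall back to the loader of the given class.
     */
    public static ClassLoader resolveClassLoader(Class<?> anchor) {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        // The TCCL may be null (e.g. threads attached from native code).
        return (cl != null) ? cl : anchor.getClassLoader();
    }

    public static void main(String[] args) throws ClassNotFoundException {
        ClassLoader cl = resolveClassLoader(TcclLookupSketch.class);
        // Load a class by name through the resolved loader.
        Class<?> c = Class.forName("java.lang.String", true, cl);
        System.out.println(c.getName());
    }
}
```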
[jira] [Commented] (HADOOP-11487) FileNotFound on distcp to s3n/s3a due to creation inconsistency
[ https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388499#comment-15388499 ] $iddhe$h Divekar commented on HADOOP-11487: --- Np, will start watching it. Thanks for the help. > FileNotFound on distcp to s3n/s3a due to creation inconsistency > > > Key: HADOOP-11487 > URL: https://issues.apache.org/jira/browse/HADOOP-11487 > Project: Hadoop Common > Issue Type: Bug > Components: fs, fs/s3 >Affects Versions: 2.7.2 >Reporter: Paulo Motta > > I'm trying to copy a large amount of files from HDFS to S3 via distcp and I'm > getting the following exception: > {code:java} > 2015-01-16 20:53:18,187 ERROR [main] > org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying > hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz > java.io.FileNotFoundException: No such file or directory > 's3n://s3-bucket/file.gz' > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > 2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.FileNotFoundException: No such file or > directory 's3n://s3-bucket/file.gz' > at > 
org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > {code} > However, when I try hadoop fs -ls s3n://s3-bucket/file.gz the file is there. > So probably due to Amazon's S3 eventual consistency the job failure. > In my opinion, in order to fix this problem NativeS3FileSystem.getFileStatus > must use fs.s3.maxRetries property in order to avoid failures like this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
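The retry idea at the end of the report above — re-attempting the metadata lookup a bounded number of times to ride out S3's eventual consistency — can be sketched as follows. The interfaces are stand-ins, not the actual NativeS3FileSystem code, and the simulated lookup exists only to exercise the loop.

```java
import java.io.FileNotFoundException;

public class RetryLookupSketch {
    interface Lookup<T> {
        T get() throws FileNotFoundException;
    }

    /**
     * Retries a lookup up to maxRetries extra times, sleeping between
     * attempts; rethrows the last FileNotFoundException on exhaustion.
     */
    public static <T> T withRetries(Lookup<T> lookup, int maxRetries, long sleepMillis)
            throws FileNotFoundException, InterruptedException {
        FileNotFoundException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return lookup.get();
            } catch (FileNotFoundException e) {
                last = e;            // not visible yet; wait and retry
                Thread.sleep(sleepMillis);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a key that becomes visible on the third lookup.
        int[] calls = {0};
        String status = withRetries(() -> {
            if (++calls[0] < 3) {
                throw new FileNotFoundException("not yet visible");
            }
            return "FOUND";
        }, 5, 1L);
        System.out.println(status + " after " + calls[0] + " attempts");
    }
}
```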
[jira] [Commented] (HADOOP-13401) usability improvements of ApplicationClassLoader
[ https://issues.apache.org/jira/browse/HADOOP-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388505#comment-15388505 ] Vrushali C commented on HADOOP-13401: - Will take a stab at the first two points noted shortly > usability improvements of ApplicationClassLoader > > > Key: HADOOP-13401 > URL: https://issues.apache.org/jira/browse/HADOOP-13401 > Project: Hadoop Common > Issue Type: Sub-task > Components: util >Reporter: Sangjin Lee > > Miscellaneous usability improvements for {{ApplicationClassLoader}}: > - Improve the system class override mechanism: today the override is a > wholesale replacement of the default; enable modifying the default > - Improve handling of addition and subtraction of system classes: today it is > sensitive to order > - other miscellaneous improvements that make using {{ApplicationClassLoader}} > easier -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11487) FileNotFound on distcp to s3n/s3a due to creation inconsistency
[ https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388497#comment-15388497 ] Chris Nauroth commented on HADOOP-11487: bq. Is patch for HADOOP-13345 available for general use ? No, that's just a prototype right now. It's still under development. I can't recommend running it. > FileNotFound on distcp to s3n/s3a due to creation inconsistency > > > Key: HADOOP-11487 > URL: https://issues.apache.org/jira/browse/HADOOP-11487 > Project: Hadoop Common > Issue Type: Bug > Components: fs, fs/s3 >Affects Versions: 2.7.2 >Reporter: Paulo Motta > > I'm trying to copy a large amount of files from HDFS to S3 via distcp and I'm > getting the following exception: > {code:java} > 2015-01-16 20:53:18,187 ERROR [main] > org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying > hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz > java.io.FileNotFoundException: No such file or directory > 's3n://s3-bucket/file.gz' > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > 2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.FileNotFoundException: No such 
file or > directory 's3n://s3-bucket/file.gz' > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > {code} > However, when I try hadoop fs -ls s3n://s3-bucket/file.gz the file is there. > So probably due to Amazon's S3 eventual consistency the job failure. > In my opinion, in order to fix this problem NativeS3FileSystem.getFileStatus > must use fs.s3.maxRetries property in order to avoid failures like this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388496#comment-15388496 ] John Zhuge commented on HADOOP-13240: - Thanks [~jojochuang] and [~cnauroth] for the review and commit. > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 2.4.1, 2.7.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, > HADOOP-13240.003.patch, HADOOP-13240.004.patch > > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! 
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11487) FileNotFound on distcp to s3n/s3a due to creation inconsistency
[ https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388474#comment-15388474 ] $iddhe$h Divekar commented on HADOOP-11487: --- Cool thanks, will take a look. Is patch for HADOOP-13345 available for general use ? > FileNotFound on distcp to s3n/s3a due to creation inconsistency > > > Key: HADOOP-11487 > URL: https://issues.apache.org/jira/browse/HADOOP-11487 > Project: Hadoop Common > Issue Type: Bug > Components: fs, fs/s3 >Affects Versions: 2.7.2 >Reporter: Paulo Motta > > I'm trying to copy a large amount of files from HDFS to S3 via distcp and I'm > getting the following exception: > {code:java} > 2015-01-16 20:53:18,187 ERROR [main] > org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying > hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz > java.io.FileNotFoundException: No such file or directory > 's3n://s3-bucket/file.gz' > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > 2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.FileNotFoundException: No such file or > directory 's3n://s3-bucket/file.gz' > at > 
org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > {code} > However, when I try hadoop fs -ls s3n://s3-bucket/file.gz the file is there. > So probably due to Amazon's S3 eventual consistency the job failure. > In my opinion, in order to fix this problem NativeS3FileSystem.getFileStatus > must use fs.s3.maxRetries property in order to avoid failures like this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13240: --- Component/s: test > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 2.4.1, 2.7.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, > HADOOP-13240.003.patch, HADOOP-13240.004.patch > > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! 
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator
[ https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388453#comment-15388453 ] Hadoop QA commented on HADOOP-13207: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-13207 does not apply to branch-2. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819472/HADOOP-13207-branch-2-016.patch | | JIRA Issue | HADOOP-13207 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10060/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Specify FileSystem listStatus, listFiles and RemoteIterator > --- > > Key: HADOOP-13207 > URL: https://issues.apache.org/jira/browse/HADOOP-13207 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13207-branch-2-001.patch, > HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, > HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, > HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, > HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, > HADOOP-13207-branch-2-010.patch, HADOOP-13207-branch-2-013.patch, > HADOOP-13207-branch-2-014.patch, HADOOP-13207-branch-2-015.patch, > HADOOP-13207-branch-2-016.patch > > > The many `listStatus`, `listLocatedStatus` and `listFiles` operations have > not been completely covered in the FS specification. 
There's lots of implicit > use of {{listStatus()}} path, but no coverage or tests of the others. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13240: --- Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) I have committed this to trunk, branch-2 and branch-2.8. [~jzhuge], thank you for the patch. [~jojochuang], thank you for your review comments. > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.4.1, 2.7.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, > HADOOP-13240.003.patch, HADOOP-13240.004.patch > > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! 
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > I notice from > HADOOP-10277 that the > hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed; should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java be > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13401) usability improvements of ApplicationClassLoader
[ https://issues.apache.org/jira/browse/HADOOP-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388438#comment-15388438 ] Sangjin Lee commented on HADOOP-13401: -- I think I'll include that work in HADOOP-13398. It needs to be addressed as a whole. > usability improvements of ApplicationClassLoader > > > Key: HADOOP-13401 > URL: https://issues.apache.org/jira/browse/HADOOP-13401 > Project: Hadoop Common > Issue Type: Sub-task > Components: util >Reporter: Sangjin Lee > > Miscellaneous usability improvements for {{ApplicationClassLoader}}: > - Improve the system class override mechanism: today the override is a > wholesale replacement of the default; enable modifying the default > - Improve handling of addition and subtraction of system classes: today it is > sensitive to order > - other miscellaneous improvements that make using {{ApplicationClassLoader}} > easier -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator
[ https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13207: Attachment: HADOOP-13207-branch-2-016.patch Patch 016; rebased the work onto branch-2 with HADOOP-12009 cherry-picked in. Note that this patch loosens the permissions of a couple of the glob classes. That's because downstream of this hadoop-common patch I've been working on a fast-s3a globber. No performance improvements there yet, but it needed access to the globber helper classes. > Specify FileSystem listStatus, listFiles and RemoteIterator > --- > > Key: HADOOP-13207 > URL: https://issues.apache.org/jira/browse/HADOOP-13207 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13207-branch-2-001.patch, > HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, > HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, > HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, > HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, > HADOOP-13207-branch-2-010.patch, HADOOP-13207-branch-2-013.patch, > HADOOP-13207-branch-2-014.patch, HADOOP-13207-branch-2-015.patch, > HADOOP-13207-branch-2-016.patch > > > The many `listStatus`, `listLocatedStatus` and `listFiles` operations have > not been completely covered in the FS specification. There's lots of implicit > use of {{listStatus()}} path, but no coverage or tests of the others. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator
[ https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13207: Status: Patch Available (was: Open) > Specify FileSystem listStatus, listFiles and RemoteIterator > --- > > Key: HADOOP-13207 > URL: https://issues.apache.org/jira/browse/HADOOP-13207 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13207-branch-2-001.patch, > HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, > HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, > HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, > HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, > HADOOP-13207-branch-2-010.patch, HADOOP-13207-branch-2-013.patch, > HADOOP-13207-branch-2-014.patch, HADOOP-13207-branch-2-015.patch, > HADOOP-13207-branch-2-016.patch > > > The many `listStatus`, `listLocatedStatus` and `listFiles` operations have > not been completely covered in the FS specification. There's lots of implicit > use of {{listStatus()}} path, but no coverage or tests of the others. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator
[ https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13207: Status: Open (was: Patch Available) > Specify FileSystem listStatus, listFiles and RemoteIterator > --- > > Key: HADOOP-13207 > URL: https://issues.apache.org/jira/browse/HADOOP-13207 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13207-branch-2-001.patch, > HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, > HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, > HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, > HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, > HADOOP-13207-branch-2-010.patch, HADOOP-13207-branch-2-013.patch, > HADOOP-13207-branch-2-014.patch, HADOOP-13207-branch-2-015.patch > > > The many `listStatus`, `listLocatedStatus` and `listFiles` operations have > not been completely covered in the FS specification. There's lots of implicit > use of {{listStatus()}} path, but no coverage or tests of the others. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11487) FileNotFound on distcp to s3n/s3a due to creation inconsistency
[ https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388426#comment-15388426 ] Chris Nauroth commented on HADOOP-11487: bq. Does listStatus falls outside above consistency ? Yes, it does. {{FileSystem#listStatus}} maps to an operation listing the keys in an S3 bucket. For that listing operation, the consistency model you quoted does not apply. Instead, it follows an eventual consistency model. There may be propagation delays between creating a key and that key becoming visible in listings. There are more details on this behavior in the AWS S3 consistency model doc: http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel > FileNotFound on distcp to s3n/s3a due to creation inconsistency > > > Key: HADOOP-11487 > URL: https://issues.apache.org/jira/browse/HADOOP-11487 > Project: Hadoop Common > Issue Type: Bug > Components: fs, fs/s3 >Affects Versions: 2.7.2 >Reporter: Paulo Motta > > I'm trying to copy a large amount of files from HDFS to S3 via distcp and I'm > getting the following exception: > {code:java} > 2015-01-16 20:53:18,187 ERROR [main] > org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying > hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz > java.io.FileNotFoundException: No such file or directory > 's3n://s3-bucket/file.gz' > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at 
java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > 2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.FileNotFoundException: No such file or > directory 's3n://s3-bucket/file.gz' > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > {code} > However, when I try hadoop fs -ls s3n://s3-bucket/file.gz the file is there. > So the job failure is probably due to Amazon's S3 eventual consistency. > In my opinion, in order to fix this problem NativeS3FileSystem.getFileStatus > must use the fs.s3.maxRetries property to avoid failures like this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
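The bounded-retry approach suggested in the description can be sketched generically. This is an illustrative retry loop for eventually consistent lookups, not the actual NativeS3FileSystem or fs.s3.maxRetries implementation; the class name, retry count, and delay are made up for the example:

```java
import java.util.concurrent.Callable;

/** Illustrative bounded retry loop for eventually consistent lookups. */
public class EventualConsistencyRetry {

    /**
     * Invokes the lookup until it returns non-null or attempts are exhausted.
     * Returns null if the object never became visible within the retry budget.
     */
    public static <T> T retryUntilVisible(Callable<T> lookup, int maxRetries,
                                          long sleepMillis) throws Exception {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            T result = lookup.call();
            if (result != null) {
                return result;          // key became visible
            }
            Thread.sleep(sleepMillis);  // wait out the propagation delay
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a key that only becomes visible on the third lookup.
        final int[] calls = {0};
        String status = retryUntilVisible(
            () -> ++calls[0] >= 3 ? "FileStatus(file.gz)" : null, 5, 1L);
        System.out.println(status + " after " + calls[0] + " lookups");
    }
}
```

A real fix would wire the retry count and delay to configuration rather than hard-coding them, which is what the reporter is asking for with fs.s3.maxRetries.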
[jira] [Commented] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator
[ https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388420#comment-15388420 ] Hadoop QA commented on HADOOP-13207: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HADOOP-13207 does not apply to branch-2. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819468/HADOOP-13207-branch-2-015.patch | | JIRA Issue | HADOOP-13207 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10059/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Specify FileSystem listStatus, listFiles and RemoteIterator > --- > > Key: HADOOP-13207 > URL: https://issues.apache.org/jira/browse/HADOOP-13207 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13207-branch-2-001.patch, > HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, > HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, > HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, > HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, > HADOOP-13207-branch-2-010.patch, HADOOP-13207-branch-2-013.patch, > HADOOP-13207-branch-2-014.patch, HADOOP-13207-branch-2-015.patch > > > The many `listStatus`, `listLocatedStatus` and `listFiles` operations have > not been completely covered in the FS specification. 
There's lots of implicit > use of {{listStatus()}} path, but no coverage or tests of the others. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus
[ https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12009: Fix Version/s: 2.8.0 Component/s: fs documentation I've just backported this to 2.8+ > Clarify FileSystem.listStatus() sorting order & fix > FileSystemContractBaseTest:testListStatus > -- > > Key: HADOOP-12009 > URL: https://issues.apache.org/jira/browse/HADOOP-12009 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, fs, test >Reporter: Jakob Homan >Assignee: J.Andreina >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-12009-003.patch, HADOOP-12009-004.patch, > HADOOP-12009.1.patch > > > FileSystem.listStatus does not guarantee that implementations will return > sorted entries: > {code} /** >* List the statuses of the files/directories in the given path if the path > is >* a directory. >* >* @param f given path >* @return the statuses of the files/directories in the given patch >* @throws FileNotFoundException when the path does not exist; >* IOException see specific implementation >*/ > public abstract FileStatus[] listStatus(Path f) throws > FileNotFoundException, > IOException;{code} > However, FileSystemContractBaseTest, expects the elements to come back sorted: > {code}Path[] testDirs = { path("/test/hadoop/a"), > path("/test/hadoop/b"), > path("/test/hadoop/c/1"), }; > > // ... > paths = fs.listStatus(path("/test/hadoop")); > assertEquals(3, paths.length); > assertEquals(path("/test/hadoop/a"), paths[0].getPath()); > assertEquals(path("/test/hadoop/b"), paths[1].getPath()); > assertEquals(path("/test/hadoop/c"), paths[2].getPath());{code} > We should pass this test as long as all the paths are there, regardless of > their ordering. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
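The fix the description asks for amounts to comparing listing results order-independently, since listStatus makes no ordering guarantee. A minimal sketch of that idea — this is not the actual FileSystemContractBaseTest code, and the paths are illustrative:

```java
import java.util.Arrays;

/** Sketch: compare listing results without depending on filesystem ordering. */
public class ListingOrderCheck {

    /** Returns a sorted copy of the listed paths, imposing a deterministic order. */
    public static String[] sortedPaths(String[] listed) {
        String[] copy = Arrays.copyOf(listed, listed.length);
        Arrays.sort(copy);  // sort before asserting, instead of trusting FS order
        return copy;
    }

    public static void main(String[] args) {
        // A filesystem may legally return these entries in any order.
        String[] listed = {"/test/hadoop/c", "/test/hadoop/a", "/test/hadoop/b"};
        String[] expected = {"/test/hadoop/a", "/test/hadoop/b", "/test/hadoop/c"};
        System.out.println(Arrays.equals(sortedPaths(listed), expected));
    }
}
```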
[jira] [Commented] (HADOOP-11487) FileNotFound on distcp to s3n/s3a due to creation inconsistency
[ https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388417#comment-15388417 ] $iddhe$h Divekar commented on HADOOP-11487: --- Hi Chris, Thanks for replying. As per the AWS forum, all of the S3 regions now support read-after-write consistency for new objects added to Amazon S3. https://forums.aws.amazon.com/ann.jspa?annID=3112 Does listStatus fall outside the above consistency model? For Hadoop 2.7 we started using s3a per Spark's recommendations, but after moving to s3a we saw a 3x performance degradation, hence moved back to s3n. When will the patch be available for general use? > FileNotFound on distcp to s3n/s3a due to creation inconsistency > > > Key: HADOOP-11487 > URL: https://issues.apache.org/jira/browse/HADOOP-11487 > Project: Hadoop Common > Issue Type: Bug > Components: fs, fs/s3 >Affects Versions: 2.7.2 >Reporter: Paulo Motta > > I'm trying to copy a large amount of files from HDFS to S3 via distcp and I'm > getting the following exception: > {code:java} > 2015-01-16 20:53:18,187 ERROR [main] > org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying > hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz > java.io.FileNotFoundException: No such file or directory > 's3n://s3-bucket/file.gz' > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422)
> at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > 2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.FileNotFoundException: No such file or > directory 's3n://s3-bucket/file.gz' > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > {code} > However, when I try hadoop fs -ls s3n://s3-bucket/file.gz the file is there. > So the job failure is probably due to Amazon's S3 eventual consistency. > In my opinion, in order to fix this problem NativeS3FileSystem.getFileStatus > must use the fs.s3.maxRetries property to avoid failures like this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388406#comment-15388406 ] Dima Spivak commented on HADOOP-13397: -- Just as an FYI, we've got some work that's nearly done in HBASE-12721 that supports starting distributed HBase clusters on a single host using multiple Docker containers. We also get Hadoop working in this as a prerequisite to getting HBase up, so it might be worth taking a look to see whether we can link up somewhere. > Add dockerfile for Hadoop > - > > Key: HADOOP-13397 > URL: https://issues.apache.org/jira/browse/HADOOP-13397 > Project: Hadoop Common > Issue Type: Bug >Reporter: Klaus Ma > > For now, there's no community version Dockerfile in Hadoop; most of docker > images are provided by vendors, e.g. > 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/ > 2. From HortonWorks sequenceiq: > https://hub.docker.com/r/sequenceiq/hadoop-docker/ > 3. MapR provides the mapr-sandbox-base: > https://hub.docker.com/r/maprtech/mapr-sandbox-base/ > The proposal of this JIRA is to provide a community version Dockerfile in > Hadoop, and here are some requirements: > 1. Separate docker images for master & agents, e.g. resource manager & node > manager > 2. Default configuration to start master & agent instead of configuring > manually > 3. Start the Hadoop processes as non-daemons > Here's my dockerfile to start master/agent: > https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn > I'd like to contribute it after polishing :). > Email Thread : > http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13396) Add json format audit logging to KMS
[ https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388389#comment-15388389 ] Sean Busbey commented on HADOOP-13396: -- If it's pluggable with multiple format types, then that sounds fine by me so long as the default is a plain text greppable option as Allen mentions. I'd be happy to make an Avro one to provide a third implementation example for the pluggable API. > Add json format audit logging to KMS > > > Key: HADOOP-13396 > URL: https://issues.apache.org/jira/browse/HADOOP-13396 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13396) Add json format audit logging to KMS
[ https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388381#comment-15388381 ] Allen Wittenauer commented on HADOOP-13396: --- bq. The requirement for this comes from a post-processing tool, which accepts only json format logs. Why can't we spit out fixed field and then use a tool to convert that to JSON so that this other mystery tool can process it? bq. However, adding a plugin to the KMS audit Is the intent to output multiple format types then? bq. one can easily extend the current logging with the formats needed, while maintaining backwards-compatibility. This isn't really true in practice. See, e.g., http://linuxjedi.co.uk/posts/2014/Oct/31/why-json-is-bad-for-applications/ . Lots of other articles on this topic elsewhere. > Add json format audit logging to KMS > > > Key: HADOOP-13396 > URL: https://issues.apache.org/jira/browse/HADOOP-13396 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator
[ https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13207: Attachment: HADOOP-13207-branch-2-015.patch Patch 015: Patch 014 diffed properly > Specify FileSystem listStatus, listFiles and RemoteIterator > --- > > Key: HADOOP-13207 > URL: https://issues.apache.org/jira/browse/HADOOP-13207 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13207-branch-2-001.patch, > HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, > HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, > HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, > HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, > HADOOP-13207-branch-2-010.patch, HADOOP-13207-branch-2-013.patch, > HADOOP-13207-branch-2-014.patch, HADOOP-13207-branch-2-015.patch > > > The many `listStatus`, `listLocatedStatus` and `listFiles` operations have > not been completely covered in the FS specification. There's lots of implicit > use of {{listStatus()}} path, but no coverage or tests of the others. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator
[ https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13207: Status: Patch Available (was: Open) > Specify FileSystem listStatus, listFiles and RemoteIterator > --- > > Key: HADOOP-13207 > URL: https://issues.apache.org/jira/browse/HADOOP-13207 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13207-branch-2-001.patch, > HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, > HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, > HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, > HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, > HADOOP-13207-branch-2-010.patch, HADOOP-13207-branch-2-013.patch, > HADOOP-13207-branch-2-014.patch, HADOOP-13207-branch-2-015.patch > > > The many `listStatus`, `listLocatedStatus` and `listFiles` operations have > not been completely covered in the FS specification. There's lots of implicit > use of {{listStatus()}} path, but no coverage or tests of the others. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator
[ https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13207: Status: Open (was: Patch Available) Mistakenly used {{git diff branch-2..HEAD}} instead of {{git diff branch-2...HEAD}}. Thanks to Chris for spotting it. > Specify FileSystem listStatus, listFiles and RemoteIterator > --- > > Key: HADOOP-13207 > URL: https://issues.apache.org/jira/browse/HADOOP-13207 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13207-branch-2-001.patch, > HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, > HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, > HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, > HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, > HADOOP-13207-branch-2-010.patch, HADOOP-13207-branch-2-013.patch, > HADOOP-13207-branch-2-014.patch > > > The many `listStatus`, `listLocatedStatus` and `listFiles` operations have > not been completely covered in the FS specification. There's lots of implicit > use of {{listStatus()}} path, but no coverage or tests of the others. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13401) usability improvements of ApplicationClassLoader
[ https://issues.apache.org/jira/browse/HADOOP-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388370#comment-15388370 ] Vrushali C commented on HADOOP-13401: - Noting from discussion on HADOOP-13070 - existing ApplicationClassLoader implementation doesn't cover ClassLoader.getResources() > usability improvements of ApplicationClassLoader > > > Key: HADOOP-13401 > URL: https://issues.apache.org/jira/browse/HADOOP-13401 > Project: Hadoop Common > Issue Type: Sub-task > Components: util >Reporter: Sangjin Lee > > Miscellaneous usability improvements for {{ApplicationClassLoader}}: > - Improve the system class override mechanism: today the override is a > wholesale replacement of the default; enable modifying the default > - Improve handling of addition and subtraction of system classes: today it is > sensitive to order > - other miscellaneous improvements that make using {{ApplicationClassLoader}} > easier -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388351#comment-15388351 ] Steve Loughran commented on HADOOP-12981: - +1 > Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and > core-default.xml > --- > > Key: HADOOP-12981 > URL: https://issues.apache.org/jira/browse/HADOOP-12981 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, tools >Affects Versions: 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: aws > Attachments: HADOOP-12981.001.patch > > > It seems all properties defined in {{S3NativeFileSystemConfigKeys}} are not > used. Those properties are prefixed by {{s3native}}, and the current s3native > properties are all prefixed by {{fs.s3n}}, so this is likely not used > currently. Additionally, core-default.xml has the description of these unused > properties: > {noformat} > > > s3native.stream-buffer-size > 4096 > The size of buffer to stream files. > The size of this buffer should probably be a multiple of hardware > page size (4096 on Intel x86), and it determines how much data is > buffered during read and write operations. > > > s3native.bytes-per-checksum > 512 > The number of bytes per checksum. Must not be larger than > s3native.stream-buffer-size > > > s3native.client-write-packet-size > 65536 > Packet size for clients to write > > > s3native.blocksize > 67108864 > Block size > > > s3native.replication > 3 > Replication factor > > {noformat} > I think they should be removed (or deprecated) to avoid confusion if these > properties are defunct. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13396) Add json format audit logging to KMS
[ https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388338#comment-15388338 ] Xiao Chen commented on HADOOP-13396: Thanks Sean and Allen for the comments. The requirement for this comes from a post-processing tool, which accepts only JSON-format logs. I'm aware there's an option to let that tool do the conversion from the current text-format audit log to JSON itself. However, adding a plugin to the KMS audit also feels reasonable to me. With the plugin logger, one can easily extend the current logging with the formats needed, while maintaining backwards compatibility. I'm working on the patch and will post it soon to illustrate the idea. > Add json format audit logging to KMS > > > Key: HADOOP-13396 > URL: https://issues.apache.org/jira/browse/HADOOP-13396 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator
[ https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13207: Status: Patch Available (was: Open) > Specify FileSystem listStatus, listFiles and RemoteIterator > --- > > Key: HADOOP-13207 > URL: https://issues.apache.org/jira/browse/HADOOP-13207 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13207-branch-2-001.patch, > HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, > HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, > HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, > HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, > HADOOP-13207-branch-2-010.patch, HADOOP-13207-branch-2-013.patch, > HADOOP-13207-branch-2-014.patch > > > The many `listStatus`, `listLocatedStatus` and `listFiles` operations have > not been completely covered in the FS specification. There's lots of implicit > use of {{listStatus()}} path, but no coverage or tests of the others. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator
[ https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13207: Attachment: HADOOP-13207-branch-2-014.patch Patch 014; all uses of \n dealt with; proofread applied, and another "it's" -> "its" fixed > Specify FileSystem listStatus, listFiles and RemoteIterator > --- > > Key: HADOOP-13207 > URL: https://issues.apache.org/jira/browse/HADOOP-13207 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13207-branch-2-001.patch, > HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, > HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, > HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, > HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, > HADOOP-13207-branch-2-010.patch, HADOOP-13207-branch-2-013.patch, > HADOOP-13207-branch-2-014.patch > > > The many `listStatus`, `listLocatedStatus` and `listFiles` operations have > not been completely covered in the FS specification. There's lots of implicit > use of {{listStatus()}} path, but no coverage or tests of the others. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator
[ https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13207: Status: Open (was: Patch Available) > Specify FileSystem listStatus, listFiles and RemoteIterator > --- > > Key: HADOOP-13207 > URL: https://issues.apache.org/jira/browse/HADOOP-13207 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13207-branch-2-001.patch, > HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, > HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, > HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, > HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, > HADOOP-13207-branch-2-010.patch, HADOOP-13207-branch-2-013.patch > > > The many `listStatus`, `listLocatedStatus` and `listFiles` operations have > not been completely covered in the FS specification. There's lots of implicit > use of {{listStatus()}} path, but no coverage or tests of the others. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13396) Add json format audit logging to KMS
[ https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388316#comment-15388316 ] Allen Wittenauer commented on HADOOP-13396: --- Binary formats are pretty useless without tools written to actually process them. So to me, the #1 requirement here is that the standard sysadmin toolkit needs to be usable here, e.g., grep. Being a fixed field format was one of the absolute keys to success of the HDFS audit log and one of the reasons why people still use it over other solutions like the weirdo notification thing. With that in mind, if JSON is really wanted, then it needs to get printed on a single line. > Add json format audit logging to KMS > > > Key: HADOOP-13396 > URL: https://issues.apache.org/jira/browse/HADOOP-13396 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
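The single-line constraint above can be sketched as follows: one JSON object per audit event, with no pretty-printing and no embedded newlines, so grep and awk keep working. This is a hypothetical illustration only; the field names and `auditLine` helper are assumptions, not the actual KMS audit format, and a real implementation would also need to escape quotes in field values.

```java
// Hypothetical sketch: a KMS-style audit event rendered as one JSON object
// per line. Field names are illustrative assumptions, not the KMS format.
// Note: values are not escaped here; a real logger must escape embedded quotes.
public class JsonAuditSketch {
    public static String auditLine(long timestamp, String user, String op,
                                   String key, boolean allowed) {
        // No pretty-printing and no '\n': one event stays on one greppable line.
        return String.format(
            "{\"timestamp\":%d,\"user\":\"%s\",\"op\":\"%s\",\"key\":\"%s\",\"allowed\":%b}",
            timestamp, user, op, key, allowed);
    }

    public static void main(String[] args) {
        System.out.println(auditLine(System.currentTimeMillis(),
            "alice", "DECRYPT_EEK", "key1", true));
    }
}
```

With this shape, `grep '"user":"alice"' kms-audit.log` works exactly as it would against the fixed-field text format.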
[jira] [Commented] (HADOOP-11487) FileNotFound on distcp to s3n/s3a due to creation inconsistency
[ https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388286#comment-15388286 ] Chris Nauroth commented on HADOOP-11487: Hello [~$iddhe$h]. The stack trace indicates a problem during a {{FileSystem#listStatus}} call. The listing calls against S3 are subject to eventual consistency. The goals of the S3Guard project, tracked in issue HADOOP-13345, would help address this scenario. However, please note that this effort is targeted to the S3A file system, which is where our ongoing development effort on Hadoop S3 integration is happening. (Your stack trace indicates you are currently using S3N.) > FileNotFound on distcp to s3n/s3a due to creation inconsistency > > > Key: HADOOP-11487 > URL: https://issues.apache.org/jira/browse/HADOOP-11487 > Project: Hadoop Common > Issue Type: Bug > Components: fs, fs/s3 >Affects Versions: 2.7.2 >Reporter: Paulo Motta > > I'm trying to copy a large amount of files from HDFS to S3 via distcp and I'm > getting the following exception: > {code:java} > 2015-01-16 20:53:18,187 ERROR [main] > org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying > hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz > java.io.FileNotFoundException: No such file or directory > 's3n://s3-bucket/file.gz' > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at 
javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > 2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.FileNotFoundException: No such file or > directory 's3n://s3-bucket/file.gz' > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445) > at > org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > {code} > However, when I try hadoop fs -ls s3n://s3-bucket/file.gz the file is there. > So the job failure is probably due to Amazon's S3 eventual consistency. > In my opinion, in order to fix this problem NativeS3FileSystem.getFileStatus > must use the fs.s3.maxRetries property to avoid failures like this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
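The fs.s3.maxRetries suggestion in the report above amounts to bounding how many times an eventually-consistent lookup is retried before giving up. A minimal sketch of that idea, with the caveat that `withRetries` is a hypothetical helper for illustration, not the actual NativeS3FileSystem code:

```java
// Hedged sketch of a bounded retry around an eventually-consistent lookup.
// After a write, an S3 listing or getFileStatus may briefly report the object
// as missing; retrying a few times with a small backoff papers over the window.
import java.io.FileNotFoundException;
import java.util.concurrent.Callable;

public class EventualConsistencyRetry {
    public static <T> T withRetries(Callable<T> lookup, int maxRetries, long sleepMs)
            throws Exception {
        FileNotFoundException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return lookup.call();
            } catch (FileNotFoundException e) {
                last = e; // the S3 listing may simply not have caught up yet
                if (attempt < maxRetries) {
                    Thread.sleep(sleepMs); // back off before asking again
                }
            }
        }
        throw last; // still missing after maxRetries: give up with the last error
    }
}
```

Note that retries only hide the consistency window for read-after-write; they cannot guarantee a consistent listing, which is why the S3Guard effort mentioned above exists.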
[jira] [Commented] (HADOOP-13396) Add json format audit logging to KMS
[ https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388285#comment-15388285 ] Sean Busbey commented on HADOOP-13396: -- if we want a structured audit log, why not something more compact and precise like Avro? > Add json format audit logging to KMS > > > Key: HADOOP-13396 > URL: https://issues.apache.org/jira/browse/HADOOP-13396 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13401) usability improvements of ApplicationClassLoader
Sangjin Lee created HADOOP-13401: Summary: usability improvements of ApplicationClassLoader Key: HADOOP-13401 URL: https://issues.apache.org/jira/browse/HADOOP-13401 Project: Hadoop Common Issue Type: Sub-task Components: util Reporter: Sangjin Lee Miscellaneous usability improvements for {{ApplicationClassLoader}}: - Improve the system class override mechanism: today the override is a wholesale replacement of the default; enable modifying the default - Improve handling of addition and subtraction of system classes: today it is sensitive to order - other miscellaneous improvements that make using {{ApplicationClassLoader}} easier -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
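The order-sensitivity point above can be made concrete. In the current mechanism, whether {{org.apache.hadoop.example.UserClass}} is treated as a system class can depend on whether the exclusion entry appears before or after the broader inclusion. One hedged sketch of an order-insensitive alternative (my illustration, not Hadoop's actual implementation) is to resolve by the longest matching prefix, reusing the informal convention that a leading '-' marks an exclusion:

```java
// Hypothetical sketch of order-insensitive system-class matching: instead of
// taking the first entry that matches, pick the longest matching prefix, so
// "-org.apache.hadoop.example." (an exclusion) beats "org.apache.hadoop."
// no matter where it appears in the list.
import java.util.Arrays;
import java.util.List;

public class SystemClassMatcher {
    private final List<String> entries; // a leading '-' marks an exclusion

    public SystemClassMatcher(String... entries) {
        this.entries = Arrays.asList(entries);
    }

    public boolean isSystemClass(String className) {
        String bestEntry = null;
        int bestLen = -1;
        for (String entry : entries) {
            String prefix = entry.startsWith("-") ? entry.substring(1) : entry;
            // Longest matching prefix wins, so list order no longer matters.
            if (className.startsWith(prefix) && prefix.length() > bestLen) {
                bestLen = prefix.length();
                bestEntry = entry;
            }
        }
        return bestEntry != null && !bestEntry.startsWith("-");
    }
}
```

Under this rule, reversing the entry order produces identical classifications, which addresses the second bullet in the description.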
[jira] [Created] (HADOOP-13400) update the ApplicationClassLoader implementation in line with latest Java ClassLoader implementation
Sangjin Lee created HADOOP-13400: Summary: update the ApplicationClassLoader implementation in line with latest Java ClassLoader implementation Key: HADOOP-13400 URL: https://issues.apache.org/jira/browse/HADOOP-13400 Project: Hadoop Common Issue Type: Sub-task Components: util Reporter: Sangjin Lee The current {{ApplicationClassLoader}} implementation is aged, and does not reflect the latest java {{ClassLoader}} implementation. One example is the use of the fine-grained classloading lock. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13399) deprecate the Configuration classloader
Sangjin Lee created HADOOP-13399: Summary: deprecate the Configuration classloader Key: HADOOP-13399 URL: https://issues.apache.org/jira/browse/HADOOP-13399 Project: Hadoop Common Issue Type: Sub-task Components: util Reporter: Sangjin Lee Assignee: Sangjin Lee Priority: Critical Today, anyone can simply call {{Configuration.setClassLoader()}} to set the configuration classloader to any arbitrary classloader. This classloader is then used to get a class or a resource through {{Configuration}} ({{getClass()}} and {{getResource()}}). In essence, the {{Configuration}} classloader is effectively a globally shared classloader without contract. This is one step worse than TCCL in that regard. I propose to remove/deprecate {{setClassLoader()}} and {{getClassLoader()}} and simply use TCCL (and then the classloader that loaded the {{Configuration}} class) to load classes and resources. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
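The proposed lookup order in the description above is small enough to sketch directly: try the thread context classloader (TCCL) first, and fall back to the classloader that loaded the anchor class ({{Configuration}} itself, in Hadoop's case). The `ClassLoaderResolver` name is mine, for illustration only:

```java
// Minimal sketch of the proposed replacement for Configuration's mutable
// classloader: resolve via TCCL first, then fall back to the loader that
// loaded the anchor class. No setClassLoader()-style global mutation needed.
public class ClassLoaderResolver {
    public static ClassLoader resolve(Class<?> anchor) {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        // TCCL may legitimately be null (e.g. in some embedded containers);
        // fall back to the loader that defined the anchor class.
        return (cl != null) ? cl : anchor.getClassLoader();
    }
}
```

Because the result is derived per-call rather than stored, no caller can silently swap the classloader out from under every other user of the same `Configuration` instance, which is the "globally shared classloader without contract" problem described above.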
[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388249#comment-15388249 ] Timothy St. Clair commented on HADOOP-13397: I'm good with just "Dockerfile templates" in upstream: start simple and expand; from there, downstream providers can work out their own logistics. > Add dockerfile for Hadoop > - > > Key: HADOOP-13397 > URL: https://issues.apache.org/jira/browse/HADOOP-13397 > Project: Hadoop Common > Issue Type: Bug >Reporter: Klaus Ma > > For now, there's no community version Dockerfile in Hadoop; most Docker > images are provided by vendors, e.g. > 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/ > 2. From HortonWorks sequenceiq: > https://hub.docker.com/r/sequenceiq/hadoop-docker/ > 3. MapR provides the mapr-sandbox-base: > https://hub.docker.com/r/maprtech/mapr-sandbox-base/ > The proposal of this JIRA is to provide a community version Dockerfile in > Hadoop, and here are some requirements: > 1. Separate Docker images for master & agents, e.g. resource manager & node > manager > 2. Default configuration to start master & agent instead of configuring > manually > 3. Start Hadoop processes in non-daemon mode > Here's my dockerfile to start master/agent: https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn > I'd like to contribute it after polishing :). > Email Thread : > http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13398) prevent user classes from loading classes in the parent classpath with ApplicationClassLoader
[ https://issues.apache.org/jira/browse/HADOOP-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated HADOOP-13398: - Description: Today, a user class is able to trigger loading a class from Hadoop's dependencies, with or without the use of {{ApplicationClassLoader}}, and it creates an implicit dependence from users' code on Hadoop's dependencies, and as a result dependency conflicts. We should modify {{ApplicationClassLoader}} to prevent a user class from loading a class from the parent classpath. This should also cover resource loading (and as a corollary {{ServiceLoader}}). was: Today, a user class is able to trigger loading a class from Hadoop's dependencies, with or without the use of {{ApplicationClassLoader}}, and it creates an implicit dependence from users' code on Hadoop's dependencies, and as a result dependency conflicts. We should modify {{ApplicationClassLoader}} to prevent a user class from loading a class from the parent classpath. > prevent user classes from loading classes in the parent classpath with > ApplicationClassLoader > - > > Key: HADOOP-13398 > URL: https://issues.apache.org/jira/browse/HADOOP-13398 > Project: Hadoop Common > Issue Type: Sub-task > Components: util >Reporter: Sangjin Lee >Assignee: Sangjin Lee >Priority: Critical > > Today, a user class is able to trigger loading a class from Hadoop's > dependencies, with or without the use of {{ApplicationClassLoader}}, and it > creates an implicit dependence from users' code on Hadoop's dependencies, and > as a result dependency conflicts. > We should modify {{ApplicationClassLoader}} to prevent a user class from > loading a class from the parent classpath. > This should also cover resource loading (and as a corollary > {{ServiceLoader}}). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
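The modification described above can be sketched as a child-first classloader that refuses to delegate user classes to the parent (Hadoop) classpath. This is a hedged illustration only: `isSystemClass` here is a simplified prefix check, not Hadoop's real system-class mechanism, and the class name `ChildFirstClassLoader` is mine, not {{ApplicationClassLoader}}:

```java
// Hedged sketch: delegate to the parent only for system classes; resolve
// user classes strictly against the user classpath (the URLs), so user code
// can no longer reach into Hadoop's dependencies.
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Arrays;
import java.util.List;

public class ChildFirstClassLoader extends URLClassLoader {
    // Simplified stand-in for the configurable system-class list.
    private static final List<String> SYSTEM_PREFIXES =
        Arrays.asList("java.", "javax.", "org.apache.hadoop.");

    public ChildFirstClassLoader(URL[] userClasspath, ClassLoader parent) {
        super(userClasspath, parent);
    }

    private static boolean isSystemClass(String name) {
        return SYSTEM_PREFIXES.stream().anyMatch(name::startsWith);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        if (isSystemClass(name)) {
            // System classes still delegate to the parent, as today.
            return super.loadClass(name, resolve);
        }
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                // Searches ONLY the user classpath URLs; throws
                // ClassNotFoundException instead of falling back to the parent.
                c = findClass(name);
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
```

A full version would apply the same cut-off to `getResource`/`getResources` so that `ServiceLoader` lookups from user code also stop at the user classpath, per the updated description.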
[jira] [Created] (HADOOP-13398) prevent user classes from loading classes in the parent classpath with ApplicationClassLoader
Sangjin Lee created HADOOP-13398: Summary: prevent user classes from loading classes in the parent classpath with ApplicationClassLoader Key: HADOOP-13398 URL: https://issues.apache.org/jira/browse/HADOOP-13398 Project: Hadoop Common Issue Type: Sub-task Components: util Reporter: Sangjin Lee Assignee: Sangjin Lee Priority: Critical Today, a user class is able to trigger loading a class from Hadoop's dependencies, with or without the use of {{ApplicationClassLoader}}, and it creates an implicit dependence from users' code on Hadoop's dependencies, and as a result dependency conflicts. We should modify {{ApplicationClassLoader}} to prevent a user class from loading a class from the parent classpath. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-11487) FileNotFound on distcp to s3n/s3a due to creation inconsistency
[ https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388227#comment-15388227 ] $iddhe$h Divekar edited comment on HADOOP-11487 at 7/21/16 7:03 PM: Hi, We are processing data on US west and still seeing consistency issue. As per forums US west should not be having consistency issue but we are doing update of a table. Not sure if 'read-after-write' consistency will take care of 'read-after-update' consistency also. Will 9565 help us here. Below is the back trace of the issue we are seeing when we write some tables in parquet format from Apache Spark to S3n. org.apache.spark.SparkException: Job aborted. at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:154) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:106) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:106) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:106) at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58) at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56) at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55) at 
org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55) at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:189) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:239) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:221) at com.foo.vAnalytics.xyz_load$.main(xyz_load.scala:130) at com.foo.vAnalytics.xyz_load.main(xyz_load.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:104) at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:95) at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47) at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:38) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:236) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) at 
org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.runSubtask(LocalContainerLauncher.java:317) at org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.run(LocalContainerLauncher.java:232) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.FileNotFoundException: File s3n://foo-hive/warehouse/fooabcxyz0719/_temporary/0/task_201607210010_0005_m_41 does not exist. at org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:506) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:360) at
[jira] [Commented] (HADOOP-11487) FileNotFound on distcp to s3n/s3a due to creation inconsistency
[ https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388227#comment-15388227 ] $iddhe$h Divekar commented on HADOOP-11487: --- Hi, We are processing data on US west and still seeing consistency issue. As per forums US west should not be having consistency issue but we are doing update of a table. Not sure if 'read-after-write' consistency will take care of 'read-after-update' consistency also. Will 9565 help us here. Below is the back trace of the issue we are seeing. org.apache.spark.SparkException: Job aborted. at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:154) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:106) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:106) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:106) at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58) at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56) at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55) at 
org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:189) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:239) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:221) at com.foo.vAnalytics.xyz_load$.main(xyz_load.scala:130) at com.foo.vAnalytics.xyz_load.main(xyz_load.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:104) at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:95) at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47) at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:38) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:236) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) at org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.runSubtask(LocalContainerLauncher.java:317) at 
org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.run(LocalContainerLauncher.java:232) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.FileNotFoundException: File s3n://foo-hive/warehouse/fooabcxyz0719/_temporary/0/task_201607210010_0005_m_41 does not exist. at org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:506) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:360) at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:310) at
[jira] [Updated] (HADOOP-13397) Add dockerfile for Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13397: -- Description: For now, there's no community version Dockerfile in Hadoop; most of docker images are provided by vendor, e.g. 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/ 2. From HortonWorks sequenceiq: https://hub.docker.com/r/sequenceiq/hadoop-docker/ 3. MapR provides the mapr-sandbox-base: https://hub.docker.com/r/maprtech/mapr-sandbox-base/ The proposal of this JIRA is to provide a community version Dockerfile in Hadoop, and here's some requirement: 1. Seperated docker image for master & agents, e.g. resource manager & node manager 2. Default configuration to start master & agent instead of configurating manually 3. Start Hadoop process as no-daemon Here's my dockerfile to start master/agent: https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn I'd like to contribute it after polishing :). Email Thread : http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E was: For now, there's no community version Dockerfile in Hadoop; most of docker images are provided by vendor, e.g. 1. Official image from Cloudera is the quickstart image: https://hub.docker.com/r/cloudera/quickstart/ 2. From HortonWorks sequenceiq: https://hub.docker.com/r/sequenceiq/hadoop-docker/ 3. MapR provides the mapr-sandbox-base: https://hub.docker.com/r/maprtech/mapr-sandbox-base/ The proposal of this JIRA is to provide a community version Dockerfile in Hadoop, and here's some requirement: 1. Seperated docker image for master & agents, e.g. resource manager & node manager 2. Default configuration to start master & agent instead of configurating manually 3. 
Start Hadoop processes in non-daemon mode Here's my dockerfile to start master/agent: https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn I'd like to contribute it after polishing :). Email Thread : http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E > Add dockerfile for Hadoop > - > > Key: HADOOP-13397 > URL: https://issues.apache.org/jira/browse/HADOOP-13397 > Project: Hadoop Common > Issue Type: Bug >Reporter: Klaus Ma > > For now, there's no community version Dockerfile in Hadoop; most Docker > images are provided by vendors, e.g. > 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/ > 2. From HortonWorks sequenceiq: > https://hub.docker.com/r/sequenceiq/hadoop-docker/ > 3. MapR provides the mapr-sandbox-base: > https://hub.docker.com/r/maprtech/mapr-sandbox-base/ > The proposal of this JIRA is to provide a community version Dockerfile in > Hadoop, and here are some requirements: > 1. Separate Docker images for master & agents, e.g. resource manager & node > manager > 2. Default configuration to start master & agent instead of configuring > manually > 3. Start Hadoop processes in non-daemon mode > Here's my dockerfile to start master/agent: > https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn > I'd like to contribute it after polishing :). > Email Thread : > http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388200#comment-15388200 ] Allen Wittenauer commented on HADOOP-13397: --- A couple of things: a) I, and I know others as well, have some rather large licensing questions around Docker images. They effectively act as a binary distribution, and it is very much against ASF rules to distribute GPL and other Category X components. It makes me extremely uncomfortable to move forward without some clarification from legal. (Yes, I know other ASF projects are publishing images on Docker Hub. Hopefully that means there is a JIRA issue in the LEGAL project to point to.) This is a blocking issue that really needs to get clarified before further time investment. b) I'm going to change the description in this issue from "Official image from Cloudera" to "Cloudera's image". Cloudera can't make an "official image" for Apache Hadoop, so let's clear up any potential confusion before it starts. c) Is this actually useful in reality? The vast majority of Apache Hadoop deployments add a wide variety of additional components on top of Apache Hadoop, to the point that even a base image seems like it wouldn't be particularly usable without downstream conflict resolution. It may be useful to make Dockerfile templates, but full-blown images? Hmm... I'm going to need some convincing. d) While working with the existing Dockerfile and porting it over to support the ASF PowerPC build machines (HADOOP-13329), we learned that we're going to need a separate Dockerfile per hardware platform. We made that mistake with start-build-env.sh (which we'll fix as part of HADOOP-13329), but we should avoid it here. (We've gotten some poking from the ARM64 folks as well.) 
e) This is going to hit upon the larger issue of distributed configuration management, which is going to be extremely tricky to make consumable, never mind what types of configurations are actually supported: security? persistent storage? Then there are client configs--which, it's worthwhile pointing out, not even the vendor tools handle particularly well. f) I think a much more attainable goal to start is making a single Dockerfile that runs all of the Apache Hadoop daemons as a single node configuration. That's a highly desirable thing to have for a variety of reasons. If there is still heavy interest in breaking it apart, it gives a base working example before proceeding further to tease out the various daemons. > Add dockerfile for Hadoop > - > > Key: HADOOP-13397 > URL: https://issues.apache.org/jira/browse/HADOOP-13397 > Project: Hadoop Common > Issue Type: Bug >Reporter: Klaus Ma > > For now, there's no community version Dockerfile in Hadoop; most of docker > images are provided by vendor, e.g. > 1. Official image from Cloudera is the quickstart image: > https://hub.docker.com/r/cloudera/quickstart/ > 2. From HortonWorks sequenceiq: > https://hub.docker.com/r/sequenceiq/hadoop-docker/ > 3. MapR provides the mapr-sandbox-base: > https://hub.docker.com/r/maprtech/mapr-sandbox-base/ > The proposal of this JIRA is to provide a community version Dockerfile in > Hadoop, and here's some requirement: > 1. Seperated docker image for master & agents, e.g. resource manager & node > manager > 2. Default configuration to start master & agent instead of configurating > manually > 3. Start Hadoop process as no-daemon > Here's my dockerfile to start master/agent: > https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn > I'd like to contribute it after polishing :). 
> Email Thread : > http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13240: Target Version/s: 2.8.0 (was: 3.0.0-alpha2) > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.4.1, 2.7.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, > HADOOP-13240.003.patch, HADOOP-13240.004.patch > > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! 
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388111#comment-15388111 ] John Zhuge commented on HADOOP-13240: - I switched before finding a nice solution in {{TemporaryFolder}} JUnit rule for this case. Target it 2.8.0 now. > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.4.1, 2.7.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, > HADOOP-13240.003.patch, HADOOP-13240.004.patch > > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! 
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
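The {{TemporaryFolder}} JUnit rule mentioned above automates a common pattern: give each test its own scratch directory and remove it afterwards. As a hedged illustration (not the actual HADOOP-13240 test code; class and method names here are made up), the same pattern can be sketched with only the JDK:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class TempDirPattern {
    // Sketch of what JUnit 4's TemporaryFolder rule automates: create an
    // isolated per-test scratch directory, then clean it up afterwards,
    // instead of pointing tests at shared, pre-existing paths.
    public static boolean runInScratchDir() throws IOException {
        Path dir = Files.createTempDirectory("test-scratch");
        try {
            Path f = Files.createFile(dir.resolve("data.txt"));
            return Files.exists(f);
        } finally {
            // TemporaryFolder performs this recursive cleanup automatically
            // after each test method.
            try (Stream<Path> entries = Files.walk(dir)) {
                entries.sorted(Comparator.reverseOrder())
                       .forEach(p -> p.toFile().delete());
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(runInScratchDir()); // the scratch dir is gone afterwards
    }
}
```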
[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388095#comment-15388095 ] Chris Nauroth commented on HADOOP-13240: +1 for patch 004. This one works fine for branch-2.8 too. [~jzhuge], I see you switched the target version to 3.0.0-alpha2. Now that we have patch compatible with branch-2.8, do you mind if I target this to 2.8.0? Let me know if there is any reason you don't want it there. > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.4.1, 2.7.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, > HADOOP-13240.003.patch, HADOOP-13240.004.patch > > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! 
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13380) TestBasicDiskValidator should not write data to /tmp
[ https://issues.apache.org/jira/browse/HADOOP-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated HADOOP-13380: -- Status: Patch Available (was: Open) > TestBasicDiskValidator should not write data to /tmp > > > Key: HADOOP-13380 > URL: https://issues.apache.org/jira/browse/HADOOP-13380 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.9.0 >Reporter: Lei (Eddy) Xu >Assignee: Yufei Gu >Priority: Minor > Attachments: HADOOP-13380.001.patch > > > In {{TestBasicDiskValidator}}, the following code is confusing > {code} >File localDir = File.createTempFile("test", "tmp"); > try { >if (isDir) { >// reuse the file path generated by File#createTempFile to create a dir > localDir.delete(); >localDir.mkdir(); > } > {code} > Btw, as suggested in https://wiki.apache.org/hadoop/CodeReviewChecklist, unit > test should not write data into {{/tmp}}: > bq. * unit tests do not write any temporary files to /tmp (instead, the tests > should write to the location specified by the test.build.data system property) > Finally, should use {{Files}} in these file creation / deletion, so that any > error can be thrown as {{IOE}}. > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
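The checklist advice quoted above (write under the location given by the {{test.build.data}} system property, and prefer {{Files}} so failures surface as {{IOE}}) can be sketched as follows. This is an illustrative helper, not the actual patch; the class name and the {{target/test-dir}} fallback are assumptions:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TestDirHelper {
    // Resolve the test scratch location from test.build.data rather than
    // writing to /tmp; "target/test-dir" is an illustrative fallback only.
    public static Path createTestDir(String prefix) throws IOException {
        Path base = Paths.get(System.getProperty("test.build.data", "target/test-dir"));
        Files.createDirectories(base);
        // java.nio.file.Files throws IOException on failure, unlike
        // File#mkdir / File#delete, which silently return false.
        return Files.createTempDirectory(base, prefix);
    }

    public static void main(String[] args) throws IOException {
        Path d = createTestDir("disk-validator");
        System.out.println(Files.isDirectory(d));
    }
}
```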
[jira] [Updated] (HADOOP-13380) TestBasicDiskValidator should not write data to /tmp
[ https://issues.apache.org/jira/browse/HADOOP-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated HADOOP-13380: -- Attachment: HADOOP-13380.001.patch > TestBasicDiskValidator should not write data to /tmp > > > Key: HADOOP-13380 > URL: https://issues.apache.org/jira/browse/HADOOP-13380 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.9.0 >Reporter: Lei (Eddy) Xu >Assignee: Yufei Gu >Priority: Minor > Attachments: HADOOP-13380.001.patch > > > In {{TestBasicDiskValidator}}, the following code is confusing > {code} >File localDir = File.createTempFile("test", "tmp"); > try { >if (isDir) { >// reuse the file path generated by File#createTempFile to create a dir > localDir.delete(); >localDir.mkdir(); > } > {code} > Btw, as suggested in https://wiki.apache.org/hadoop/CodeReviewChecklist, unit > test should not write data into {{/tmp}}: > bq. * unit tests do not write any temporary files to /tmp (instead, the tests > should write to the location specified by the test.build.data system property) > Finally, should use {{Files}} in these file creation / deletion, so that any > error can be thrown as {{IOE}}. > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13070) classloading isolation improvements for cleaner and stricter dependencies
[ https://issues.apache.org/jira/browse/HADOOP-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388072#comment-15388072 ] Sangjin Lee commented on HADOOP-13070: -- One other aspect that needs to be addressed (that hasn't been spelled out) is the resource loading. The POC here doesn't cover the resource loading. The call patterns for resource loading are a bit more varied as there are 3 distinct entry points: - {{ClassLoader.getResource()}} - {{ClassLoader.getResourceAsStream()}} - {{ClassLoader.getResources()}} I also find that the existing {{ApplicationClassLoader}} implementation doesn't cover {{ClassLoader.getResources()}}. :) > classloading isolation improvements for cleaner and stricter dependencies > - > > Key: HADOOP-13070 > URL: https://issues.apache.org/jira/browse/HADOOP-13070 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Reporter: Sangjin Lee >Assignee: Sangjin Lee >Priority: Critical > Attachments: HADOOP-13070.poc.01.patch, Test.java, TestDriver.java, > classloading-improvements-ideas-v.3.pdf, classloading-improvements-ideas.pdf, > classloading-improvements-ideas.v.2.pdf, lib.jar > > > Related to HADOOP-11656, we would like to make a number of improvements in > terms of classloading isolation so that user-code can run safely without > worrying about dependency collisions with the Hadoop dependencies. > By the same token, it should raised the quality of the user code and its > specified classpath so that users get clear signals if they specify incorrect > classpaths. > This will contain a proposal that will include several improvements some of > which may not be backward compatible. As such, it should be targeted to the > next major revision of Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
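The three entry points listed in the comment above can be sketched in an isolating classloader. This is a minimal illustration, not Hadoop's {{ApplicationClassLoader}}; the class name and the pass-through "policy" are assumptions. The point it demonstrates is that {{getResourceAsStream()}} can be defined in terms of {{getResource()}}, but {{getResources()}} must be overridden separately, which is the gap observed in the existing implementation:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Enumeration;

public class IsolatingClassLoader extends ClassLoader {
    public IsolatingClassLoader(ClassLoader parent) {
        super(parent);
    }

    // Entry point 1: single-resource lookup. A real isolating loader would
    // apply its visibility policy here; this sketch simply delegates.
    @Override
    public URL getResource(String name) {
        return super.getResource(name);
    }

    // Entry point 2: defined in terms of getResource(), so the policy
    // above is applied automatically.
    @Override
    public InputStream getResourceAsStream(String name) {
        URL url = getResource(name);
        try {
            return url == null ? null : url.openStream();
        } catch (IOException e) {
            return null;
        }
    }

    // Entry point 3: enumerating all matches. If this is NOT overridden,
    // callers reach the default implementation and bypass the policy
    // entirely -- the hole noted in the comment above.
    @Override
    public Enumeration<URL> getResources(String name) throws IOException {
        return super.getResources(name);
    }
}
```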
[jira] [Commented] (HADOOP-13041) Enhancement CoderUtil test code
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388052#comment-15388052 ] Hadoop QA commented on HADOOP-13041: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 59s{color} | {color:green} 
the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 32s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819368/HADOOP-13041.03.patch | | JIRA Issue | HADOOP-13041 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 2665ec2d006a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 557a245 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10057/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10057/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Enhancement CoderUtil test code > --- > > Key: HADOOP-13041 > URL: https://issues.apache.org/jira/browse/HADOOP-13041 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-13041.01.patch, HADOOP-13041.02.patch, > HADOOP-13041.03.patch > > > Enhancement missing test for {{CoderUtil}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail:
[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387983#comment-15387983 ] John Zhuge commented on HADOOP-13240: - Timed out in unit test org.apache.hadoop.http.TestHttpServerLifecycle. Unrelated. > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.4.1, 2.7.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, > HADOOP-13240.003.patch, HADOOP-13240.004.patch > > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! 
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13041) Enhancement CoderUtil test code
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13041: Attachment: HADOOP-13041.03.patch > Enhancement CoderUtil test code > --- > > Key: HADOOP-13041 > URL: https://issues.apache.org/jira/browse/HADOOP-13041 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-13041.01.patch, HADOOP-13041.02.patch, > HADOOP-13041.03.patch > > > Enhancement missing test for {{CoderUtil}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387855#comment-15387855 ] Timothy St. Clair commented on HADOOP-13397: It would be ideal if apache/projects could publish artifacts, including Docker images, to hub as part of their release process. > Add dockerfile for Hadoop > - > > Key: HADOOP-13397 > URL: https://issues.apache.org/jira/browse/HADOOP-13397 > Project: Hadoop Common > Issue Type: Bug >Reporter: Klaus Ma > > For now, there's no community version Dockerfile in Hadoop; most of docker > images are provided by vendor, e.g. > 1. Official image from Cloudera is the quickstart image: > https://hub.docker.com/r/cloudera/quickstart/ > 2. From HortonWorks sequenceiq: > https://hub.docker.com/r/sequenceiq/hadoop-docker/ > 3. MapR provides the mapr-sandbox-base: > https://hub.docker.com/r/maprtech/mapr-sandbox-base/ > The proposal of this JIRA is to provide a community version Dockerfile in > Hadoop, and here's some requirement: > 1. Seperated docker image for master & agents, e.g. resource manager & node > manager > 2. Default configuration to start master & agent instead of configurating > manually > 3. Start Hadoop process as no-daemon > Here's my dockerfile to start master/agent: > https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn > I'd like to contribute it after polishing :). > Email Thread : > http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-11149) Increase the timeout of TestZKFailoverController
[ https://issues.apache.org/jira/browse/HADOOP-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated HADOOP-11149: Fix Version/s: 2.7.4 Thanks, Steve! I committed this to branch-2.7 as well. > Increase the timeout of TestZKFailoverController > > > Key: HADOOP-11149 > URL: https://issues.apache.org/jira/browse/HADOOP-11149 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 2.8.0, 3.0.0-alpha1 > Environment: Jenkins >Reporter: Rajat Jain >Assignee: Steve Loughran > Fix For: 2.8.0, 2.7.4 > > Attachments: HADOOP-11149-001.patch, HADOOP-11149-002.patch > > > {code} > Running org.apache.hadoop.ha.TestZKFailoverController > Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.875 sec > <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController > testGracefulFailover(org.apache.hadoop.ha.TestZKFailoverController) Time > elapsed: 25.045 sec <<< ERROR! > java.lang.Exception: test timed out after 25000 milliseconds > at java.lang.Object.wait(Native Method) > at > org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:467) > at > org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:657) > at > org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:61) > at > org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:602) > at > org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:599) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1621) > at > org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:599) > at > org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94) > at > 
org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailover(TestZKFailoverController.java:448) > Results : > Tests in error: > TestZKFailoverController.testGracefulFailover:448->Object.wait:-2 » test > time... > {code} > Running on centos6.5 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
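The failure above is JUnit's per-test timeout firing while the failover blocks in {{Object.wait}}; the fix raises that limit. As a hedged sketch of the underlying deadline pattern (plain JDK, not the actual test or JUnit's {{@Test(timeout = ...)}} machinery; names are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    // Run a potentially blocking operation under a deadline, the way a
    // JUnit test timeout bounds a test that may hang in Object.wait().
    static String runWithDeadline(Callable<String> op, long millis) throws Exception {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        try {
            return exec.submit(op).get(millis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // A too-tight deadline turns a slow-but-correct operation
            // into a failure, which is why the test timeout was raised.
            return "timed out";
        } finally {
            exec.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runWithDeadline(() -> "done", 1000));
        System.out.println(runWithDeadline(() -> {
            Thread.sleep(5000);
            return "done";
        }, 100));
    }
}
```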
[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed
[ https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387801#comment-15387801 ] Hadoop QA commented on HADOOP-7363: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 57s{color} | {color:green} 
root generated 0 new + 708 unchanged - 1 fixed = 708 total (was 709) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 20s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 37m 55s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819308/HADOOP-7363.03.patch | | JIRA Issue | HADOOP-7363 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ccdde068dda8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 557a245 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10056/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10056/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestRawLocalFileSystemContract is needed > > > Key: HADOOP-7363 > URL: https://issues.apache.org/jira/browse/HADOOP-7363 > Project: Hadoop Common > Issue Type: Test > Components: fs >Affects Versions: 3.0.0-alpha2 >Reporter: Matt Foley >Assignee: Andras Bokor > Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, > HADOOP-7363.03.patch > > > FileSystemContractBaseTest is supposed to be run with each concrete >
[jira] [Commented] (HADOOP-13329) Dockerfile doesn't work on Linux/ppc
[ https://issues.apache.org/jira/browse/HADOOP-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387774#comment-15387774 ] Allen Wittenauer commented on HADOOP-13329: --- I'm using a modified version of this Dockerfile for the PPC build on Jenkins. One big thing: I opted not to do the LevelDB hack because I was under the impression that YARN is supposed to treat that requirement as optional. It's causing massive test failures, but let's see what the community does with it. > Dockerfile doesn't work on Linux/ppc > > > Key: HADOOP-13329 > URL: https://issues.apache.org/jira/browse/HADOOP-13329 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Amir Sanjar > Attachments: HADOOP-13329.2.patch, HADOOP-13329.3.patch, > HADOOP-13329.patch > > > We need to rework how the Dockerfile is built to support both Linux/x86 and > Linux/PowerPC. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed
[ https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387773#comment-15387773 ] Andras Bokor commented on HADOOP-7363: -- Can somebody review my patch? I got my +1 from Hadoop QA. > TestRawLocalFileSystemContract is needed > > > Key: HADOOP-7363 > URL: https://issues.apache.org/jira/browse/HADOOP-7363 > Project: Hadoop Common > Issue Type: Test > Components: fs >Affects Versions: 3.0.0-alpha2 >Reporter: Matt Foley >Assignee: Andras Bokor > Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, > HADOOP-7363.03.patch > > > FileSystemContractBaseTest is supposed to be run with each concrete > FileSystem implementation to ensure adherence to the "contract" for > FileSystem behavior. However, currently only HDFS and S3 do so. > RawLocalFileSystem, at least, needs to be added. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13041) Enhancement CoderUtil test code
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387737#comment-15387737 ] Hadoop QA commented on HADOOP-13041: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 40s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 0s{color} | {color:red} root in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 0s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 22s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 39s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 47s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819312/HADOOP-13041.02.patch | | JIRA Issue | HADOOP-13041 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 17346cd7bbfd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 557a245 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/10055/artifact/patchprocess/patch-mvninstall-hadoop-common-project_hadoop-common.txt | | compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/10055/artifact/patchprocess/patch-compile-root.txt | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/10055/artifact/patchprocess/patch-compile-root.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10055/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/10055/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10055/artifact/patchprocess/patch-findbugs-hadoop-common-project_hadoop-common.txt | | unit |
[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed
[ https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387740#comment-15387740 ] Hadoop QA commented on HADOOP-7363: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 1s{color} | {color:green} 
root generated 0 new + 708 unchanged - 1 fixed = 708 total (was 709) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 38s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 57s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819308/HADOOP-7363.03.patch | | JIRA Issue | HADOOP-7363 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 73aba80d7dd7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 557a245 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10054/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10054/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestRawLocalFileSystemContract is needed > > > Key: HADOOP-7363 > URL: https://issues.apache.org/jira/browse/HADOOP-7363 > Project: Hadoop Common > Issue Type: Test > Components: fs >Affects Versions: 3.0.0-alpha2 >Reporter: Matt Foley >Assignee: Andras Bokor > Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, > HADOOP-7363.03.patch > > > FileSystemContractBaseTest is supposed to be run with each concrete >
[jira] [Commented] (HADOOP-13061) Refactor erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387701#comment-15387701 ] Kai Zheng commented on HADOOP-13061: Reassigned this to Kai per some discussion. Hi [~lewuathe], to get an idea of how to do the refactoring, please review how HADOOP-13010 did it. > Refactor erasure coders > --- > > Key: HADOOP-13061 > URL: https://issues.apache.org/jira/browse/HADOOP-13061 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Kai Sasaki > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13061) Refactor erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-13061: --- Assignee: Kai Sasaki (was: Kai Zheng) > Refactor erasure coders > --- > > Key: HADOOP-13061 > URL: https://issues.apache.org/jira/browse/HADOOP-13061 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Kai Sasaki > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387695#comment-15387695 ] Kai Zheng commented on HADOOP-13200: Great, Kai! Let's discuss it in that issue. I will assign it to you. > Seeking a better approach allowing to customize and configure erasure coders > > > Key: HADOOP-13200 > URL: https://issues.apache.org/jira/browse/HADOOP-13200 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Kai Zheng > > This is a follow-on task for HADOOP-13010 as discussed over there. There may > be a better approach to customizing and configuring erasure coders > than the current raw coder factory, as [~cmccabe] suggested. Will copy > the relevant comments here to continue the discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-13394) Swift should have proper HttpClient dependencies
[ https://issues.apache.org/jira/browse/HADOOP-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved HADOOP-13394. - Resolution: Duplicate Closing as a duplicate of HADOOP-11614. > Swift should have proper HttpClient dependencies > > > Key: HADOOP-13394 > URL: https://issues.apache.org/jira/browse/HADOOP-13394 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ted Yu > > In hadoop-tools/hadoop-openstack/pom.xml : > {code} > > commons-httpclient > commons-httpclient > compile > > {code} > The dependency should be migrated to the httpclient artifact from org.apache.httpcomponents -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
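For reference, the migration the description asks for would, under the usual Maven conventions, look something like the sketch below in hadoop-tools/hadoop-openstack/pom.xml. The version number is a placeholder, not taken from the issue; the accompanying source-level API changes are what HADOOP-11614 tracks.

```xml
<!-- Before: the legacy Commons HttpClient 3.x artifact -->
<dependency>
  <groupId>commons-httpclient</groupId>
  <artifactId>commons-httpclient</artifactId>
  <scope>compile</scope>
</dependency>

<!-- After: the maintained Apache HttpComponents client.
     Version shown is illustrative; in Hadoop it would normally be
     inherited from dependencyManagement in hadoop-project/pom.xml. -->
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.5.2</version>
  <scope>compile</scope>
</dependency>
```

Swapping the coordinates alone is not enough, since the two libraries have incompatible packages (org.apache.commons.httpclient vs. org.apache.http.client), which is why the resolver treated this as a duplicate of the larger migration issue.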
[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387607#comment-15387607 ] Kai Sasaki commented on HADOOP-13200: - [~drankye] Sure, I can work on HADOOP-13061 and make an initial patch based on the above discussion. Though customizability and configuration are the main topic of this JIRA, the HADOOP-13061 refactoring also needs to consider that point, I think. I will list the main refactoring points in HADOOP-13061. Please give me some feedback if necessary. Thanks! > Seeking a better approach allowing to customize and configure erasure coders > > > Key: HADOOP-13200 > URL: https://issues.apache.org/jira/browse/HADOOP-13200 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Kai Zheng > > This is a follow-on task for HADOOP-13010 as discussed over there. There may > be a better approach to customizing and configuring erasure coders > than the current raw coder factory, as [~cmccabe] suggested. Will copy > the relevant comments here to continue the discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13041) Enhancement CoderUtil test code
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13041: Attachment: HADOOP-13041.02.patch > Enhancement CoderUtil test code > --- > > Key: HADOOP-13041 > URL: https://issues.apache.org/jira/browse/HADOOP-13041 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-13041.01.patch, HADOOP-13041.02.patch > > > Enhancement missing test for {{CoderUtil}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13394) Swift should have proper HttpClient dependencies
[ https://issues.apache.org/jira/browse/HADOOP-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387575#comment-15387575 ] Steve Loughran commented on HADOOP-13394: - Isn't this HADOOP-11614? Because it's a lot more than just changing the POM. > Swift should have proper HttpClient dependencies > > > Key: HADOOP-13394 > URL: https://issues.apache.org/jira/browse/HADOOP-13394 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ted Yu > > In hadoop-tools/hadoop-openstack/pom.xml : > {code} > > commons-httpclient > commons-httpclient > compile > > {code} > The dependency should be migrated to the httpclient artifact from org.apache.httpcomponents -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13382) remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects
[ https://issues.apache.org/jira/browse/HADOOP-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387566#comment-15387566 ] Steve Loughran commented on HADOOP-13382: - +1 > remove unneeded commons-httpclient dependencies from POM files in Hadoop and > sub-projects > - > > Key: HADOOP-13382 > URL: https://issues.apache.org/jira/browse/HADOOP-13382 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.8.0 >Reporter: Matt Foley >Assignee: Matt Foley > Attachments: HADOOP-13382-branch-2.000.patch, > HADOOP-13382-branch-2.8.000.patch, HADOOP-13382.000.patch > > > In branch-2.8 and later, the patches for various child and related bugs > listed in HADOOP-10105, most recently including HADOOP-11613, HADOOP-12710, > HADOOP-12711, HADOOP-12552, and HDFS-10623, eliminate all use of > "commons-httpclient" from Hadoop and its sub-projects (except for > hadoop-tools/hadoop-openstack; see HADOOP-11614). > However, after incorporating these patches, "commons-httpclient" is still > listed as a dependency in these POM files: > * hadoop-project/pom.xml > * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml > We wish to remove these, but since commons-httpclient is still used in many > files in hadoop-tools/hadoop-openstack, we'll need to _add_ the dependency to > * hadoop-tools/hadoop-openstack/pom.xml > (We'll add a note to HADOOP-11614 to undo this when commons-httpclient is > removed from hadoop-openstack.) > In 2.8, this was mostly done by HADOOP-12552, but the version info formerly > inherited from hadoop-project/pom.xml also needs to be added, so that is in > the branch-2.8 version of the patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-7363) TestRawLocalFileSystemContract is needed
[ https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-7363: - Attachment: HADOOP-7363.03.patch Uploading patch 03 to eliminate Hadoop QA warnings. > TestRawLocalFileSystemContract is needed > > > Key: HADOOP-7363 > URL: https://issues.apache.org/jira/browse/HADOOP-7363 > Project: Hadoop Common > Issue Type: Test > Components: fs >Affects Versions: 3.0.0-alpha2 >Reporter: Matt Foley >Assignee: Andras Bokor > Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, > HADOOP-7363.03.patch > > > FileSystemContractBaseTest is supposed to be run with each concrete > FileSystem implementation to ensure adherence to the "contract" for > FileSystem behavior. However, currently only HDFS and S3 do so. > RawLocalFileSystem, at least, needs to be added. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13332) Remove jackson 1.9.13 and switch all jackson code to 2.x code line
[ https://issues.apache.org/jira/browse/HADOOP-13332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387492#comment-15387492 ] Hadoop QA commented on HADOOP-13332: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 16 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 5m 24s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 14m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 3s{color} | {color:green} trunk passed {color} | | 
{color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 11s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 50s{color} | {color:orange} root: The patch generated 5 new + 1569 unchanged - 6 fixed = 1574 total (was 1575) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 6m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 26s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 17m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} hadoop-maven-plugins in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green}
[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed
[ https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387483#comment-15387483 ] Hadoop QA commented on HADOOP-7363: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 56s{color} | {color:red} root 
generated 1 new + 708 unchanged - 1 fixed = 709 total (was 709) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 3 new + 35 unchanged - 0 fixed = 38 total (was 35) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 13s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819295/HADOOP-7363.02.patch | | JIRA Issue | HADOOP-7363 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b65402f33702 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 557a245 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/10053/artifact/patchprocess/diff-compile-javac-root.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10053/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10053/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/10053/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10053/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestRawLocalFileSystemContract is needed > > >
[jira] [Created] (HADOOP-13397) Add dockerfile for Hadoop
Klaus Ma created HADOOP-13397: - Summary: Add dockerfile for Hadoop Key: HADOOP-13397 URL: https://issues.apache.org/jira/browse/HADOOP-13397 Project: Hadoop Common Issue Type: Bug Reporter: Klaus Ma For now, there's no community version Dockerfile in Hadoop; most Docker images are provided by vendors, e.g. 1. Official image from Cloudera is the quickstart image: https://hub.docker.com/r/cloudera/quickstart/ 2. From HortonWorks sequenceiq: https://hub.docker.com/r/sequenceiq/hadoop-docker/ 3. MapR provides the mapr-sandbox-base: https://hub.docker.com/r/maprtech/mapr-sandbox-base/ The proposal of this JIRA is to provide a community version Dockerfile in Hadoop, and here are some requirements: 1. Separate Docker images for master & agents, e.g. resource manager & node manager 2. Default configuration to start master & agent instead of configuring manually 3. Start Hadoop processes in non-daemon (foreground) mode Here's my dockerfile to start master/agent: https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn I'd like to contribute it after polishing :). Email Thread : http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
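The three requirements above (role-specific images, baked-in default configuration, foreground start) can be sketched as a Dockerfile per role. This is only an illustrative sketch under stated assumptions: the base image, Hadoop version, and the copied yarn-site.xml are placeholders, not the proposed community Dockerfile.

```dockerfile
# Illustrative sketch only -- base image, version, and config file are assumptions.
FROM openjdk:8-jdk
ENV HADOOP_VERSION=2.7.3 \
    HADOOP_HOME=/opt/hadoop
RUN curl -fsSL "https://archive.apache.org/dist/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz" \
      | tar -xz -C /opt \
    && mv /opt/hadoop-${HADOOP_VERSION} ${HADOOP_HOME}
# Requirement 2: ship a default yarn-site.xml instead of configuring by hand.
COPY yarn-site.xml ${HADOOP_HOME}/etc/hadoop/yarn-site.xml
# Requirement 1: one image per role -- this image runs only the ResourceManager;
# an agent image would differ only in this final command.
# Requirement 3: `yarn resourcemanager` stays in the foreground (no daemon),
# so Docker supervises the process directly.
CMD ["/opt/hadoop/bin/yarn", "resourcemanager"]
```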
[jira] [Commented] (HADOOP-13395) Enhance TestKMSAudit
[ https://issues.apache.org/jira/browse/HADOOP-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387433#comment-15387433 ] Hadoop QA commented on HADOOP-13395: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 58s{color} | {color:green} 
the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 13s{color} | {color:orange} hadoop-common-project/hadoop-kms: The patch generated 10 new + 5 unchanged - 2 fixed = 15 total (was 7) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 8s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819279/HADOOP-13395.01.patch | | JIRA Issue | HADOOP-13395 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 87d78d056336 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 557a245 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10052/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-kms.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10052/testReport/ | | modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10052/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Enhance TestKMSAudit > > > Key: HADOOP-13395 > URL: https://issues.apache.org/jira/browse/HADOOP-13395 > Project: Hadoop Common > Issue Type: Test > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor >
[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387432#comment-15387432 ] Hadoop QA commented on HADOOP-13240: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 37s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 34 unchanged - 11 fixed = 34 total (was 45) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 38s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 58m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819281/HADOOP-13240.004.patch | | JIRA Issue | HADOOP-13240 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 04af89dc8ba6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 557a245 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10051/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10051/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10051/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug >Affects
[jira] [Comment Edited] (HADOOP-7363) TestRawLocalFileSystemContract is needed
[ https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387426#comment-15387426 ] Andras Bokor edited comment on HADOOP-7363 at 7/21/16 9:26 AM: --- My patch does not apply since HADOOP-12709. I am attaching [^HADOOP-7363.02.patch] for rebase. was (Author: boky01): Attach [^HADOOP-7363.02.patch] for rebase. > TestRawLocalFileSystemContract is needed > > > Key: HADOOP-7363 > URL: https://issues.apache.org/jira/browse/HADOOP-7363 > Project: Hadoop Common > Issue Type: Test > Components: fs >Affects Versions: 3.0.0-alpha2 >Reporter: Matt Foley >Assignee: Andras Bokor > Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch > > > FileSystemContractBaseTest is supposed to be run with each concrete > FileSystem implementation to ensure adherence to the "contract" for > FileSystem behavior. However, currently only HDFS and S3 do so. > RawLocalFileSystem, at least, needs to be added. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-7363) TestRawLocalFileSystemContract is needed
[ https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-7363: - Attachment: HADOOP-7363.02.patch Attach [^HADOOP-7363.02.patch] for rebase. > TestRawLocalFileSystemContract is needed > > > Key: HADOOP-7363 > URL: https://issues.apache.org/jira/browse/HADOOP-7363 > Project: Hadoop Common > Issue Type: Test > Components: fs >Affects Versions: 3.0.0-alpha2 >Reporter: Matt Foley >Assignee: Andras Bokor > Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch > > > FileSystemContractBaseTest is supposed to be run with each concrete > FileSystem implementation to ensure adherence to the "contract" for > FileSystem behavior. However, currently only HDFS and S3 do so. > RawLocalFileSystem, at least, needs to be added. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
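The contract-test pattern described in the issue — one abstract base class that encodes the expected behaviour, and one small subclass per concrete implementation supplying the instance under test — can be sketched in plain Java. The names below are illustrative stand-ins, not Hadoop's actual FileSystemContractBaseTest API:

```java
import java.util.HashMap;
import java.util.Map;

// Abstract "contract": the expected behaviour is written once here.
abstract class StoreContractBase {
    /** Each concrete implementation under test overrides this factory. */
    protected abstract Map<String, String> createStore();

    /** One contract check shared by every implementation. */
    final boolean putThenGetRoundTrips() {
        Map<String, String> store = createStore();
        store.put("k", "v");
        return "v".equals(store.get("k"));
    }
}

// Analogue of TestRawLocalFileSystemContract: bind the base to one impl.
class HashMapStoreContract extends StoreContractBase {
    @Override
    protected Map<String, String> createStore() {
        return new HashMap<>();
    }
}

public class ContractPatternDemo {
    public static void main(String[] args) {
        // The shared check runs against the concrete implementation.
        System.out.println(new HashMapStoreContract().putThenGetRoundTrips());
    }
}
```

The point of the pattern is that adding coverage for a new implementation (as this issue asks for RawLocalFileSystem) costs only a small subclass, not a new test suite.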
[jira] [Commented] (HADOOP-13041) Enhancement CoderUtil test code
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387372#comment-15387372 ] Kai Sasaki commented on HADOOP-13041: - [~drankye] Sure I'll do that. > Enhancement CoderUtil test code > --- > > Key: HADOOP-13041 > URL: https://issues.apache.org/jira/browse/HADOOP-13041 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-13041.01.patch > > > Enhancement missing test for {{CoderUtil}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13041) Enhancement CoderUtil test code
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387360#comment-15387360 ] Hadoop QA commented on HADOOP-13041: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 47s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 7s{color} | {color:red} root in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 7s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 27s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 52s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 28s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 57s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 32m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12799766/HADOOP-13041.01.patch | | JIRA Issue | HADOOP-13041 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 301a4819e3ab 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 557a245 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/10050/artifact/patchprocess/patch-mvninstall-hadoop-common-project_hadoop-common.txt | | compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/10050/artifact/patchprocess/patch-compile-root.txt | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/10050/artifact/patchprocess/patch-compile-root.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10050/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/10050/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10050/artifact/patchprocess/patch-findbugs-hadoop-common-project_hadoop-common.txt | | unit |
[jira] [Updated] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13240: Attachment: HADOOP-13240.004.patch Thanks for the review [~jojochuang]! You brought up a valid concern that different test cases could share the same file on the local file system. In Patch 004 I use the JUnit {{TemporaryFolder}} rule to solve the problem. The rule also cleans up the temp folder after the test run. Wish we could design a similar mechanism for when the file system under test is a Hadoop file system, e.g., on MiniDFSCluster. Patch 004: * Incorporate review comments > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.4.1, 2.7.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, > HADOOP-13240.003.patch, HADOOP-13240.004.patch > > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! 
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
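The {{TemporaryFolder}} rule mentioned in the Patch 004 comment above isolates tests by giving each one its own scratch directory and deleting it after the run. A minimal stdlib sketch of the same isolation idea (not the JUnit rule itself; names are illustrative):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempFolderDemo {
    /** Create a per-test directory; generated names never collide. */
    static Path newTestDir(String testName) {
        try {
            return Files.createTempDirectory(testName + "-");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path a = newTestDir("testSetfaclValidations");
        Path b = newTestDir("testSetfaclValidations");
        // Two "tests" no longer share the same path on the local file system.
        System.out.println(!a.equals(b) && Files.isDirectory(a));
        // Clean up afterwards, as the rule would after each test.
        Files.delete(a);
        Files.delete(b);
    }
}
```

The JUnit rule adds the lifecycle wiring (creation before each test, recursive deletion after), which is why the comment above notes it "also cleans up the temp folder after the test run".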
[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387320#comment-15387320 ] Kai Zheng commented on HADOOP-13200: Kai, it would be great if you could share some of the workload. Would you like to work on HADOOP-13061 to do the refactoring for the erasure coders? If yes, I can transfer it to you. Thanks. > Seeking a better approach allowing to customize and configure erasure coders > > > Key: HADOOP-13200 > URL: https://issues.apache.org/jira/browse/HADOOP-13200 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Kai Zheng > > This is a follow-on task for HADOOP-13010 as discussed over there. There may > be some better approach allowing to customize and configure erasure coders > than the current raw coder factory, as [~cmccabe] suggested. Will copy > the relevant comments here to continue the discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org