Re: Unassigned Hadoop jiras with patch available
I was told the filter is private. I am sorry. This one should be good:

https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2CYARN%2CMAPREDUCE%2CHDDS%2CSUBMARINE)%20AND%20status%20%3D%20%22Patch%20Available%22%20AND%20assignee%20%3D%20EMPTY%20ORDER%20BY%20created%20DESC%2C%20updated%20DESC

On Wed, Jul 31, 2019 at 3:02 PM Wei-Chiu Chuang wrote:

> I am using this jira filter to find jiras with a patch available but
> unassigned.
>
> https://issues.apache.org/jira/issues/?filter=12346814=project%20in%20(HADOOP%2C%20HDFS%2CYARN%2CMAPREDUCE%2CHDDS%2CSUBMARINE)%20AND%20status%20%3D%20%22Patch%20Available%22%20AND%20assignee%20%3D%20EMPTY%20ORDER%20BY%20created%20DESC%2C%20updated%20DESC
>
> In most cases, these jiras are unassigned because the contributors who
> posted the patch are first-timers and do not have the contributor role
> in JIRA. It's very common for those folks to get overlooked.
>
> Hadoop PMC members, if you have JIRA administrator permission, please
> help grant contributor access to these contributors. You help keep the
> project friendly to newcomers.
>
> You can do so by going to JIRA --> (upper right, click the gear next to
> your profile avatar) --> Projects --> click on the project (say Hadoop
> HDFS) --> Roles --> View Project Roles --> Add users to a role --> add
> to the Contributor list, or if the Contributor list is full, add to the
> Contributor1 list.
>
> Or you can go to
> https://issues.apache.org/jira/plugins/servlet/project-config/HDFS/roles
> to add contributor access for HDFS. The same goes for Hadoop Common and
> other sub-projects.
Unassigned Hadoop jiras with patch available
I am using this jira filter to find jiras with a patch available but unassigned.

https://issues.apache.org/jira/issues/?filter=12346814=project%20in%20(HADOOP%2C%20HDFS%2CYARN%2CMAPREDUCE%2CHDDS%2CSUBMARINE)%20AND%20status%20%3D%20%22Patch%20Available%22%20AND%20assignee%20%3D%20EMPTY%20ORDER%20BY%20created%20DESC%2C%20updated%20DESC

In most cases, these jiras are unassigned because the contributors who posted the patch are first-timers and do not have the contributor role in JIRA. It's very common for those folks to get overlooked.

Hadoop PMC members, if you have JIRA administrator permission, please help grant contributor access to these contributors. You help keep the project friendly to newcomers.

You can do so by going to JIRA --> (upper right, click the gear next to your profile avatar) --> Projects --> click on the project (say Hadoop HDFS) --> Roles --> View Project Roles --> Add users to a role --> add to the Contributor list, or if the Contributor list is full, add to the Contributor1 list.

Or you can go to https://issues.apache.org/jira/plugins/servlet/project-config/HDFS/roles to add contributor access for HDFS. The same goes for Hadoop Common and other sub-projects.
[jira] [Created] (HADOOP-16482) S3A doesn't actually verify paths have the correct authority
Steve Loughran created HADOOP-16482:
---------------------------------------

             Summary: S3A doesn't actually verify paths have the correct authority
                 Key: HADOOP-16482
                 URL: https://issues.apache.org/jira/browse/HADOOP-16482
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.1.0
            Reporter: Steve Loughran

Probably been around a *long* time, but we've never noticed, assuming that {{Path.makeQualified(uri, workingDir)}} did the right thing.

You can provide any s3a URI to an S3 command and it'll get mapped to the current bucket without any validation that the authorities are equal. Oops.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
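The missing validation is a plain authority comparison. A minimal sketch of the check described above (illustrative Java, not the actual S3AFileSystem code; the class and method names are made up):

```java
import java.net.URI;

public class AuthorityCheck {
    // Reject any path whose authority differs from the filesystem's own
    // bucket; paths without an authority are treated as relative to it.
    static void checkPath(URI fsUri, URI path) {
        String fsAuth = fsUri.getAuthority();
        String pathAuth = path.getAuthority();
        if (pathAuth != null && !pathAuth.equals(fsAuth)) {
            throw new IllegalArgumentException(
                "Wrong authority " + pathAuth + "; expected " + fsAuth);
        }
    }

    public static void main(String[] args) {
        URI fs = URI.create("s3a://bucket-a/");
        checkPath(fs, URI.create("s3a://bucket-a/data/file"));  // accepted
        try {
            checkPath(fs, URI.create("s3a://bucket-b/data/file"));
            System.out.println("cross-bucket path silently accepted");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The bug report says the second case is today silently remapped to the current bucket instead of being rejected.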
[jira] [Created] (HADOOP-16481) ITestS3GuardDDBRootOperations.test_300_MetastorePrune needs to set region
Steve Loughran created HADOOP-16481:
---------------------------------------

             Summary: ITestS3GuardDDBRootOperations.test_300_MetastorePrune needs to set region
                 Key: HADOOP-16481
                 URL: https://issues.apache.org/jira/browse/HADOOP-16481
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3, test
    Affects Versions: 3.3.0
            Reporter: Steve Loughran
            Assignee: Steve Loughran

The new test {{ITestS3GuardDDBRootOperations.test_300_MetastorePrune}} fails if you don't explicitly set the region:

{code}
[ERROR] test_300_MetastorePrune(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations)  Time elapsed: 0.845 s  <<< ERROR!
org.apache.hadoop.util.ExitUtil$ExitException: No region found from -region flag, config, or S3 bucket
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations.test_300_MetastorePrune(ITestS3GuardDDBRootOperations.java:186)
{code}

It should be picked up from the test filesystem.
[jira] [Created] (HADOOP-16480) S3 Select Exceptions are not being converted to IOEs
Steve Loughran created HADOOP-16480:
---------------------------------------

             Summary: S3 Select Exceptions are not being converted to IOEs
                 Key: HADOOP-16480
                 URL: https://issues.apache.org/jira/browse/HADOOP-16480
             Project: Hadoop Common
          Issue Type: Sub-task
            Reporter: Steve Loughran

A network outage seems to have raised a SelectObjectContentEventException; it's not been translated to an IOE.

Issue: recoverable or not? A normal input stream would try to recover by re-opening at the current position, but to restart after a seek you'd have to repeat the entire streaming. For now, fail.
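The translation being asked for is the usual wrap-in-IOException pattern. A minimal sketch (the exception class below is a stand-in for the AWS SDK's SelectObjectContentEventException, and the helper is illustrative, not S3A's real translateException):

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class SelectTranslateSketch {
    // Stand-in for the SDK's SelectObjectContentEventException, which is
    // a RuntimeException and so escapes normal IOException handling.
    static class SdkEventException extends RuntimeException {
        SdkEventException(String m) { super(m); }
    }

    // Run a read, converting SDK runtime exceptions into IOEs so callers
    // of the stream see ordinary IO failure semantics.
    static int readTranslated(Callable<Integer> read) throws IOException {
        try {
            return read.call();
        } catch (SdkEventException e) {
            // preserve the original exception as the cause
            throw new IOException("S3 Select stream failure: " + e.getMessage(), e);
        } catch (Exception e) {
            throw new IOException(e);
        }
    }

    public static void main(String[] args) {
        try {
            readTranslated(() -> { throw new SdkEventException("connection reset"); });
        } catch (IOException e) {
            System.out.println("translated: " + e.getMessage());
        }
    }
}
```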
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1214/

[Jul 30, 2019 11:39:48 AM] (stevel) HADOOP-15910. Fix Javadoc for LdapAuthenticationHandler#ENABLE_START_TLS
[Jul 30, 2019 11:47:55 AM] (stevel) HADOOP-16469. Update committers.md
[Jul 30, 2019 4:28:26 PM] (github) HDDS-1872. Fix entry clean up from openKeyTable during complete MPU.
[Jul 30, 2019 4:47:39 PM] (ayushsaxena) HDFS-14677. TestDataNodeHotSwapVolumes#testAddVolumesConcurrently fails
[Jul 30, 2019 5:23:23 PM] (tmarq) HADOOP-16460: ABFS: fix for Sever Name Indication (SNI)
[Jul 30, 2019 6:22:45 PM] (weichiu) HADOOP-16452. Increase ipc.maximum.data.length default from 64MB to
[Jul 30, 2019 6:58:36 PM] (ericp) YARN-9596: QueueMetrics has incorrect metrics when labelled partitions
[Jul 30, 2019 8:41:16 PM] (xyao) HDDS-1834. parent directories not found in secure setup due to ACL
[Jul 30, 2019 8:45:27 PM] (inigoiri) HDFS-14449. Expose total number of DT in JMX for Namenode. Contributed
[Jul 30, 2019 10:42:55 PM] (xkrogen) HDFS-13783. Add an option to the Balancer to make it run as a
[Jul 30, 2019 11:01:17 PM] (weichiu) HDFS-14034. Support getQuotaUsage API in WebHDFS. Contributed by Chao
[Jul 30, 2019 11:50:06 PM] (weichiu) HDFS-14419. Avoid repeated calls to the listOpenFiles function.
[Jul 30, 2019 11:52:42 PM] (weichiu) HDFS-14569. Result of crypto -listZones is not formatted properly.
-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
       Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
       Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
       org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]

    FindBugs :

       module:hadoop-tools/hadoop-aws
       Inconsistent synchronization of org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.ttlTimeProvider; locked 75% of time At LocalMetadataStore.java:[line 623]

    Failed junit tests :

       hadoop.ipc.TestIPC
       hadoop.hdfs.server.balancer.TestBalancer
       hadoop.hdfs.server.datanode.TestLargeBlockReport
       hadoop.hdfs.tools.TestDFSZKFailoverController
       hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap
       hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
       hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
       hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1214/artifact/out/diff-compile-cc-root.txt [4.0K]
   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1214/artifact/out/diff-compile-javac-root.txt [332K]
   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1214/artifact/out/diff-checkstyle-root.txt [17M]
   hadolint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1214/artifact/out/diff-patch-hadolint.txt [4.0K]
   pathlen:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1214/artifact/out/pathlen.txt [12K]
   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1214/artifact/out/diff-patch-pylint.txt [216K]
   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1214/artifact/out/diff-patch-shellcheck.txt [20K]
   shelldocs:
[jira] [Created] (HADOOP-16479) FileStatus.getModificationTime returns localized time instead of UTC
Joan Sala Reixach created HADOOP-16479:
------------------------------------------

             Summary: FileStatus.getModificationTime returns localized time instead of UTC
                 Key: HADOOP-16479
                 URL: https://issues.apache.org/jira/browse/HADOOP-16479
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs/azure
    Affects Versions: 3.2.0
            Reporter: Joan Sala Reixach
         Attachments: image-2019-07-31-18-21-53-023.png, image-2019-07-31-18-23-37-349.png

As per the javadoc, FileStatus.getModificationTime() should return the time in UTC, but it returns the time in the JVM timezone. The issue originates in AzureBlobFileSystemStore.getFileStatus() itself, since parseLastModifiedTime() returns a wrong date.

I have created a file in Azure Data Lake Gen2, and when I look at it through the Azure Explorer it shows the correct modification time, but the method returns a time 2 hours earlier (I am in CET = UTC+2).

Azure Explorer last modified time: !image-2019-07-31-18-21-53-023.png|width=460,height=45!

AbfsClient parseLastModifiedTime: !image-2019-07-31-18-23-37-349.png|width=459,height=284!

It shows 15:21 CEST as utcDate, when it should be 15:21 UTC, which results in the 2-hour loss. DateFormat.parse uses a localized calendar to parse dates, which might be the source of the issue.
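The localized-calendar behaviour is easy to reproduce outside ABFS. A minimal sketch (not the ABFS code; the timestamp and pattern are made up for illustration) showing how parsing a zone-less timestamp with a zone-dependent formatter shifts the epoch value, and how pinning the formatter to UTC avoids it:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class UtcParseDemo {
    public static void main(String[] args) throws Exception {
        // A timestamp with no explicit zone, as a formatter sees it when
        // the pattern doesn't consume the zone field of the header.
        String stamp = "31 Jul 2019 15:21:00";
        String pattern = "dd MMM yyyy HH:mm:ss";

        // Buggy variant: the formatter uses a zone-dependent calendar
        // (pinned to Europe/Paris here to simulate a CET/CEST JVM).
        SimpleDateFormat local = new SimpleDateFormat(pattern, Locale.US);
        local.setTimeZone(TimeZone.getTimeZone("Europe/Paris"));
        Date asLocal = local.parse(stamp);

        // Fixed variant: pin the formatter's time zone to UTC.
        SimpleDateFormat utc = new SimpleDateFormat(pattern, Locale.US);
        utc.setTimeZone(TimeZone.getTimeZone("UTC"));
        Date asUtc = utc.parse(stamp);

        // The two epoch values differ by the zone offset: 2h in CEST,
        // matching the "2 hour loss" reported above.
        long diffHours = (asLocal.getTime() - asUtc.getTime()) / 3600000;
        System.out.println(diffHours); // prints -2
    }
}
```

If the real Last-Modified header carries an explicit zone such as "GMT", having the pattern consume that zone field would also avoid the shift.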
[jira] [Created] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller
Steve Loughran created HADOOP-16478:
---------------------------------------

             Summary: S3Guard bucket-info fails if the bucket location is denied to the caller
                 Key: HADOOP-16478
                 URL: https://issues.apache.org/jira/browse/HADOOP-16478
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.2.0
            Reporter: Steve Loughran

If you call "hadoop s3guard bucket-info" on a bucket and you don't have permission to list the bucket location, then you get a stack trace, with all other diagnostics being missing.

Preferred: catch the exception, warn that the location is unknown, and only log it at debug.
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/

[Jul 30, 2019 6:36:56 PM] (weichiu) HDFS-14464. Remove unnecessary log message from DFSInputStream.
[Jul 30, 2019 8:31:02 PM] (ericp) YARN-9596: QueueMetrics has incorrect metrics when labelled partitions

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
       hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
       Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Failed junit tests :

       hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
       hadoop.registry.secure.TestSecureLogins
       hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
       hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]
   javac:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]
   cc:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt [4.0K]
   javac:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt [308K]
   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/diff-checkstyle-root.txt [16M]
   hadolint:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/diff-patch-hadolint.txt [4.0K]
   pathlen:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/pathlen.txt [12K]
   pylint:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/diff-patch-pylint.txt [24K]
   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/diff-patch-shellcheck.txt [72K]
   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/diff-patch-shelldocs.txt [8.0K]
   whitespace:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/whitespace-eol.txt [12M]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/whitespace-tabs.txt [1.2M]
   xml:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/xml.txt [12K]
   findbugs:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]
   javadoc:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt [1.1M]
   unit:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [156K]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/399/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [320K]
[jira] [Created] (HADOOP-16477) S3 delegation token tests fail if fs.s3a.encryption.key set
Steve Loughran created HADOOP-16477:
---------------------------------------

             Summary: S3 delegation token tests fail if fs.s3a.encryption.key set
                 Key: HADOOP-16477
                 URL: https://issues.apache.org/jira/browse/HADOOP-16477
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3, test
    Affects Versions: 3.3.0
            Reporter: Steve Loughran

If you set an s3a encryption key, the Session and Role DelegationToken tests fail. The test setup needs to unset that key for both the config and the bucket.
[jira] [Created] (HADOOP-16476) Intermittent failure of ITestS3GuardConcurrentOps#testConcurrentTableCreations
Gabor Bota created HADOOP-16476:
-----------------------------------

             Summary: Intermittent failure of ITestS3GuardConcurrentOps#testConcurrentTableCreations
                 Key: HADOOP-16476
                 URL: https://issues.apache.org/jira/browse/HADOOP-16476
             Project: Hadoop Common
          Issue Type: Sub-task
            Reporter: Gabor Bota

The test is failing intermittently. One possible solution would be to wait (retry) more, because the table will be deleted eventually - it's not there after the whole test run.

{noformat}
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 142.471 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps
[ERROR] testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)  Time elapsed: 142.286 s  <<< ERROR!
java.lang.IllegalArgumentException: Table s3guard.test.testConcurrentTableCreations-1265635747 is not deleted.
	at com.amazonaws.services.dynamodbv2.document.Table.waitForDelete(Table.java:505)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.deleteTable(ITestS3GuardConcurrentOps.java:87)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:178)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.waiters.WaiterTimedOutException: Reached maximum attempts without transitioning to the desired state
	at com.amazonaws.waiters.WaiterExecution.pollResource(WaiterExecution.java:86)
	at com.amazonaws.waiters.WaiterImpl.run(WaiterImpl.java:88)
	at com.amazonaws.services.dynamodbv2.document.Table.waitForDelete(Table.java:502)
	... 16 more
{noformat}
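The "wait (retry) more" idea is a plain poll loop around the table-existence probe. A generic sketch (a hypothetical helper, not Hadoop's retry code or the SDK's waiter; in the real test it would wrap a check that the DynamoDB table is gone):

```java
import java.util.concurrent.Callable;

public class RetryUntil {
    // Poll a condition with a fixed sleep until it passes or the attempt
    // budget is exhausted; returns whether the condition ever passed.
    static boolean retry(Callable<Boolean> check, int attempts, long sleepMs)
            throws Exception {
        for (int i = 0; i < attempts; i++) {
            if (check.call()) {
                return true;
            }
            Thread.sleep(sleepMs);
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated "table is deleted" probe that succeeds on the 3rd poll,
        // standing in for an eventually-consistent DynamoDB delete.
        boolean ok = retry(() -> ++calls[0] >= 3, 5, 10);
        System.out.println(ok + " after " + calls[0] + " calls");
    }
}
```

A larger attempt budget here corresponds to letting the test wait longer than the SDK waiter's default before declaring the table undeleted.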
[jira] [Resolved] (HADOOP-16464) S3Guard in auth mode doesn't raise AccessDeniedException on read of 0-byte file
[ https://issues.apache.org/jira/browse/HADOOP-16464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-16464.
-------------------------------------
    Resolution: Won't Fix

I don't see this being fixable. The whole point of auth mode is that we don't need to go near the store, and for a zero-byte file we can just return -1, always.

Closing as a WONTFIX; if people find this in the wild then we can point them at this JIRA.

> S3Guard in auth mode doesn't raise AccessDeniedException on read of 0-byte
> file
> ---
>
> Key: HADOOP-16464
> URL: https://issues.apache.org/jira/browse/HADOOP-16464
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.0
> Reporter: Steve Loughran
> Priority: Minor
>
> This falls out of auth mode knowing the length of a file and skipping any S3
> checks: it is not an error to read a 0-byte file, regardless of the
> readability of the file.
> * there's no check in open()
> * and read() just returns -1.
> I don't see that this is fixable, or that it merits fixing. Maybe just note
> it in the release notes.
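The behaviour being closed as WONTFIX can be shown in a few lines. This is an illustrative sketch of the auth-mode short-circuit described above; the interfaces are hypothetical stand-ins, not Hadoop's MetadataStore or S3 client APIs:

```java
import java.io.IOException;
import java.io.InputStream;

public class AuthModeOpenSketch {
    // Hypothetical stand-ins for the S3Guard metadata store and S3 client.
    interface MetadataStore { long length(String path); }
    interface S3Client { InputStream get(String path) throws IOException; }

    // In auth mode the metadata store is trusted: a zero-byte file is
    // answered locally, so a permission error on the object is never seen.
    static InputStream open(MetadataStore ms, S3Client s3, String path)
            throws IOException {
        if (ms.length(path) == 0) {
            // empty stream: read() returns EOF immediately, no S3 call
            return new InputStream() {
                @Override public int read() { return -1; }
            };
        }
        return s3.get(path);
    }

    public static void main(String[] args) throws IOException {
        MetadataStore ms = path -> 0L;  // store says: zero bytes
        S3Client s3 = path -> { throw new IOException("Access Denied"); };
        // No AccessDenied surfaces, despite the client denying every GET:
        int b = open(ms, s3, "s3a://bucket/empty").read();
        System.out.println(b);  // prints -1 (EOF)
    }
}
```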
Re: Aug Hadoop Community Meetup in China
Hi 俊平:

My WeChat ID is chenrui_momo, and my phone number is +86 18092542752. I will send you the topic info when we get in touch. Thank you.

Sheng Liu wrote on Wed, Jul 31, 2019 at 4:00 PM:

> Hi, all.
>
> Any update on this Meetup arrangement? Have we decided the presentation
> slots during the Meetup, and the exact meeting place/time?
>
> Thanks
> Liu sheng
>
> Rui Chen wrote on Wed, Jul 24, 2019 at 10:27 AM:
>
> > Hi Junping
> >
> > My team and I work on helping more open source projects run on ARM
> > servers, focusing on big data projects at this stage: Hadoop, Spark,
> > Flink and so on. I would like to share a topic about it with the local
> > community, including current progress, issues faced and future plans.
> >
> > Thank you for holding the meetup, and glad to see Apache folks there :)
> >
> > RuiChen
> >
> > https://openlabtesting.org/
> > https://github.com/theopenlab
> >
> > Weiwei Yang wrote on Tue, Jul 23, 2019 at 5:43 PM:
> >
> > > Hi Junping
> > >
> > > Thanks. I would like to get a slot to talk about our new open source
> > > project: YuniKorn.
> > >
> > > Thanks
> > > Weiwei
> > >
> > > On Jul 23, 2019, 5:08 PM +0800, 俊平堵 wrote:
> > > > Thanks for these positive feedbacks! The local community has voted;
> > > > the date and location will be 8/10, Beijing. So please book your
> > > > time ahead if you are interested in joining.
> > > > I have gathered a few topics, and some candidate places for hosting
> > > > this meetup. If you would like to propose more topics, please
> > > > nominate them here or ping me before this weekend (7/28, CST time).
> > > > Will update here when I have more to share. Thanks!
> > > >
> > > > Thanks,
> > > > Junping
> > > >
> > > > 俊平堵 wrote on Thu, Jul 18, 2019 at 3:28 PM:
> > > > > Hi, all!
> > > > > I am glad to let you know that we are organizing a Hadoop
> > > > > Contributors Meetup in China in Aug.
> > > > >
> > > > > This could be the first hadoop community meetup in China, and
> > > > > many attendees are expected to come from big data pioneers such
> > > > > as Cloudera, Tencent, Alibaba, Xiaomi, Didi, JD, Meituan,
> > > > > Toutiao, Sina, etc.
> > > > >
> > > > > We're still working out the details, such as dates, contents and
> > > > > locations. Here is a quick survey:
> > > > > https://www.surveymonkey.com/r/Y99RT3W
> > > > > where you can vote for your preferred dates and locations if you
> > > > > would like to attend. The survey will end July 21, 12PM China
> > > > > Standard Time, and the results will go public the next day.
> > > > >
> > > > > Also, please feel free to reach out to me if you have a topic to
> > > > > propose for the meetup. Will send out an update later with more
> > > > > details when I get more to share. Thanks!
> > > > >
> > > > > Cheers,
> > > > > Junping
Re: Aug Hadoop Community Meetup in China
Hi, all.

Any update on this Meetup arrangement? Have we decided the presentation slots during the Meetup, and the exact meeting place/time?

Thanks
Liu sheng

Rui Chen wrote on Wed, Jul 24, 2019 at 10:27 AM:

> Hi Junping
>
> My team and I work on helping more open source projects run on ARM
> servers, focusing on big data projects at this stage: Hadoop, Spark,
> Flink and so on. I would like to share a topic about it with the local
> community, including current progress, issues faced and future plans.
>
> Thank you for holding the meetup, and glad to see Apache folks there :)
>
> RuiChen
>
> https://openlabtesting.org/
> https://github.com/theopenlab
>
> Weiwei Yang wrote on Tue, Jul 23, 2019 at 5:43 PM:
>
> > Hi Junping
> >
> > Thanks. I would like to get a slot to talk about our new open source
> > project: YuniKorn.
> >
> > Thanks
> > Weiwei
> >
> > On Jul 23, 2019, 5:08 PM +0800, 俊平堵 wrote:
> > > Thanks for these positive feedbacks! The local community has voted;
> > > the date and location will be 8/10, Beijing. So please book your
> > > time ahead if you are interested in joining.
> > > I have gathered a few topics, and some candidate places for hosting
> > > this meetup. If you would like to propose more topics, please
> > > nominate them here or ping me before this weekend (7/28, CST time).
> > > Will update here when I have more to share. Thanks!
> > >
> > > Thanks,
> > > Junping
> > >
> > > 俊平堵 wrote on Thu, Jul 18, 2019 at 3:28 PM:
> > > > Hi, all!
> > > > I am glad to let you know that we are organizing a Hadoop
> > > > Contributors Meetup in China in Aug.
> > > >
> > > > This could be the first hadoop community meetup in China, and many
> > > > attendees are expected to come from big data pioneers such as
> > > > Cloudera, Tencent, Alibaba, Xiaomi, Didi, JD, Meituan, Toutiao,
> > > > Sina, etc.
> > > >
> > > > We're still working out the details, such as dates, contents and
> > > > locations. Here is a quick survey:
> > > > https://www.surveymonkey.com/r/Y99RT3W
> > > > where you can vote for your preferred dates and locations if you
> > > > would like to attend. The survey will end July 21, 12PM China
> > > > Standard Time, and the results will go public the next day.
> > > >
> > > > Also, please feel free to reach out to me if you have a topic to
> > > > propose for the meetup. Will send out an update later with more
> > > > details when I get more to share. Thanks!
> > > >
> > > > Cheers,
> > > > Junping