Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Thanks Andrew! +1 (binding)

* Verified signatures and checksums
* Built from source on CentOS 7.2 with OpenJDK 1.8.0_131 with -Pnative
* Deployed a pseudo-distributed cluster and ran some example jobs
* The change log and the release notes look good.

Regards,
Akira

On 2017/06/30 11:40, Andrew Wang wrote:
> Hi all,
>
> As always, thanks to the many, many contributors who helped with this
> release! I've prepared an RC0 for 3.0.0-alpha4:
>
> http://home.apache.org/~wang/3.0.0-alpha4-RC0/
>
> The standard 5-day vote would run until midnight on Tuesday, July 4th.
> Given that July 4th is a holiday in the US, I expect this vote might have
> to be extended, but I'd like to close the vote relatively soon after.
>
> I've done my traditional testing of a pseudo-distributed cluster with a
> single task pi job, which was successful.
>
> Normally my testing would end there, but I'm slightly more confident this
> time. At Cloudera, we've successfully packaged and deployed a snapshot
> from a few days ago, and run basic smoke tests. Some bugs found from this
> include HDFS-11956, which fixes backwards compat with Hadoop 2 clients,
> and the revert of HDFS-11696, which broke NN QJM HA setup.
>
> Vijay is working on a test run with a fuller test suite (the results of
> which we can hopefully post soon).
>
> My +1 to start,
>
> Best,
> Andrew

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
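The "verified signatures and checksums" step reported by several voters boils down to recomputing a digest and comparing it against the published one. A minimal, self-contained illustration of the checksum half (a stand-in file is used here; in practice you would download the tarballs and their published .md5/.asc files from the RC directory):

```shell
# Stand-in for a release artifact (in practice: the hadoop-3.0.0-alpha4
# source/binary tarballs fetched from the RC0 URL).
printf 'release bits' > artifact.tar.gz

# The release manager publishes digests alongside the artifacts.
# Recompute and compare; md5sum -c prints "artifact.tar.gz: OK" on a match.
md5sum artifact.tar.gz > artifact.tar.gz.md5
md5sum -c artifact.tar.gz.md5
```

Signature verification is the analogous step with GPG, e.g. `gpg --verify artifact.tar.gz.asc artifact.tar.gz` after importing the release manager's public key from the project KEYS file.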
RE: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
+1 (non-binding)

- Verified checksums and signatures
- Built from source and installed an HA cluster
- Ran basic shell operations
- Ran sample jobs
- Verified the HttpFSServerWebServer

-Brahma Reddy Battula

-----Original Message-----
From: Andrew Wang [mailto:andrew.w...@cloudera.com]
Sent: 30 June 2017 10:41
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Thanks Andrew! - Deployed binary artifacts in a pseudo-distributed cluster (MacOS Sierra, Java 1.8.0_91) - Ran pi job - Clicked around the web UIs - Tried log aggregation - Played a bit with HDFS - Tried yarn top +1 (binding) - Robert On Thu, Jul 6, 2017 at 4:21 PM, Hanisha Koneru wrote: > Thanks for the hard work Andrew! > > - Built from source on Mac OS X 10.11.6 with Java 1.8.0_91 > - Built from source on CentOS Linux 7.3.161, with Java 1.8.0_92, with and > without native > - Deployed a 10 node cluster on docker containers > - Tested basic dfs operations > - Tested basic erasure coding (adding files, recovering corrupted > files) > - Tested some dfsadmin operations : report, triggerblockreport > > +1 (non-binding) > > > > Thanks, > Hanisha > > > > > > > > > On 7/6/17, 3:57 PM, "Lei Xu" wrote: > > >+1 (binding) > > > >Ran the following tests: > >* Deploy a pesudo cluster using tar ball, run pi. > >* Verified MD5 of tar balls for both src and dist. > >* Build src tarball with -Pdist,tar > > > >Thanks Andrew for the efforts! > > > >On Thu, Jul 6, 2017 at 3:44 PM, Andrew Wang > wrote: > >> Thanks all for the votes so far! > >> > >> I think we're still at a single binding +1 from myself, so I'll leave > this > >> vote open until we reach the minimum threshold of 3. I'm still hoping to > >> can push the release out before the weekend. > >> > >> On Thu, Jul 6, 2017 at 2:58 PM, Vijaya Krishna Kalluru Subbarao < > >> vij...@cloudera.com> wrote: > >> > >>> Ran Smokes and BVTs covering basic sanity testing(10+ tests ran) for > all > >>> these components: > >>> > >>>- Mapreduce(compression, archives, pipes, JHS), > >>>- Avro(AvroMapreduce, HadoopAvro, HiveAvro, SqoopAvro), > >>>- HBase(Balancer, compression, ImportExport, Snapshots, Schema > >>>change), > >>>- Oozie(Hive, Pig, Spark), > >>>- Pig(PigAvro, PigParquet, PigCompression), > >>>- Search(SolrCtlBasic, SolrRequestForwading, SolrSSLConfiguration). > >>> > >>> +1 non-binding. 
> >>> > >>> Regards, > >>> Vijay > >>> > >>> On Thu, Jul 6, 2017 at 2:39 PM, Eric Badger > >>> > wrote: > >>> > - Verified all checksums signatures > - Built from src on macOS 10.12.5 with Java 1.8.0u65 > - Deployed single node pseudo cluster > - Successfully ran sleep and pi jobs > - Navigated the various UIs > > +1 (non-binding) > > Thanks, > > Eric > > On Thursday, July 6, 2017 3:31 PM, Aaron Fabbri > wrote: > > > > Thanks for the hard work on this! +1 (non-binding) > > - Built from source tarball on OS X w/ Java 1.8.0_45. > - Deployed mini/pseudo cluster. > - Ran grep and wordcount examples. > - Poked around ResourceManager and JobHistory UIs. > - Ran all s3a integration tests in US West 2. > > > > On Thu, Jul 6, 2017 at 10:20 AM, Xiao Chen wrote: > > > Thanks Andrew! > > +1 (non-binding) > > > >- Verified md5's, checked tarball sizes are reasonable > >- Built source tarball and deployed a pseudo-distributed cluster > with > >hdfs/kms > >- Tested basic hdfs/kms operations > >- Sanity checked webuis/logs > > > > > > -Xiao > > > > On Wed, Jul 5, 2017 at 10:33 PM, John Zhuge > wrote: > > > > > +1 (non-binding) > > > > > > > > >- Verified checksums and signatures of the tarballs > > >- Built source with native, Java 1.8.0_131 on Mac OS X 10.12.5 > > >- Cloud connectors: > > > - A few S3A integration tests > > > - A few ADL live unit tests > > >- Deployed both binary and built source to a pseudo cluster, > passed > > the > > >following sanity tests in insecure, SSL, and SSL+Kerberos mode: > > > - HDFS basic and ACL > > > - DistCp basic > > > - WordCount (skipped in Kerberos mode) > > > - KMS and HttpFS basic > > > > > > Thanks Andrew for the great effort! > > > > > > On Wed, Jul 5, 2017 at 1:33 PM, Eric Payne < > erichadoo...@yahoo.com. > > > invalid> > > > wrote: > > > > > > > Thanks Andrew. > > > > I downloaded the source, built it, and installed it onto a > pseudo > > > > distributed 4-node cluster. 
> > > > I ran mapred and streaming test cases, including sleep and wordcount.
> > > > +1 (non-binding)
> > > > -Eric
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Thanks for the hard work Andrew!

- Built from source on Mac OS X 10.11.6 with Java 1.8.0_91
- Built from source on CentOS Linux 7.3.161, with Java 1.8.0_92, with and without native
- Deployed a 10-node cluster on docker containers
- Tested basic dfs operations
- Tested basic erasure coding (adding files, recovering corrupted files)
- Tested some dfsadmin operations: report, triggerblockreport

+1 (non-binding)

Thanks,
Hanisha

On 7/6/17, 3:57 PM, "Lei Xu" wrote:
>+1 (binding)
>
>Ran the following tests:
>* Deploy a pseudo cluster using the tarball, run pi.
>* Verified MD5 of tarballs for both src and dist.
>* Build src tarball with -Pdist,tar
>
>Thanks Andrew for the efforts!
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
+1 (binding)

Ran the following tests:
* Deploy a pseudo cluster using the tarball, run pi.
* Verified MD5 of tarballs for both src and dist.
* Build src tarball with -Pdist,tar

Thanks Andrew for the efforts!

On Thu, Jul 6, 2017 at 3:44 PM, Andrew Wang wrote:
> Thanks all for the votes so far!
>
> I think we're still at a single binding +1 from myself, so I'll leave this
> vote open until we reach the minimum threshold of 3. I'm still hoping we
> can push the release out before the weekend.
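The build-tarball-deploy-pi sequence several voters describe can be sketched roughly as follows (a hedged sketch, not the official procedure: it assumes the standard Maven dist profile and the pseudo-distributed configuration from the Hadoop single-node setup docs is already in place):

```shell
# Build the distribution tarball from source, as in the vote reports above.
mvn package -Pdist,tar -DskipTests -Dtar

# Unpack the result and start a single-node pseudo-distributed cluster.
# Assumes etc/hadoop/core-site.xml and hdfs-site.xml are configured for
# one node, per the Hadoop single-node cluster documentation.
tar xzf hadoop-dist/target/hadoop-3.0.0-alpha4.tar.gz
cd hadoop-3.0.0-alpha4
bin/hdfs namenode -format
sbin/start-dfs.sh
sbin/start-yarn.sh

# Smoke test: the single-task pi job mentioned throughout the thread.
bin/hadoop jar \
  share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-alpha4.jar pi 1 1
```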
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Thanks all for the votes so far!

I think we're still at a single binding +1 from myself, so I'll leave this
vote open until we reach the minimum threshold of 3. I'm still hoping we can
push the release out before the weekend.

On Thu, Jul 6, 2017 at 2:58 PM, Vijaya Krishna Kalluru Subbarao <
vij...@cloudera.com> wrote:
> Ran Smokes and BVTs covering basic sanity testing (10+ tests ran) for all
> these components:
>
>    - Mapreduce (compression, archives, pipes, JHS)
>    - Avro (AvroMapreduce, HadoopAvro, HiveAvro, SqoopAvro)
>    - HBase (Balancer, compression, ImportExport, Snapshots, Schema change)
>    - Oozie (Hive, Pig, Spark)
>    - Pig (PigAvro, PigParquet, PigCompression)
>    - Search (SolrCtlBasic, SolrRequestForwading, SolrSSLConfiguration)
>
> +1 non-binding.
>
> Regards,
> Vijay
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
- Verified all checksums and signatures
- Built from src on macOS 10.12.5 with Java 1.8.0u65
- Deployed single node pseudo cluster
- Successfully ran sleep and pi jobs
- Navigated the various UIs

+1 (non-binding)

Thanks,

Eric

On Thursday, July 6, 2017 3:31 PM, Aaron Fabbri wrote:
> Thanks for the hard work on this! +1 (non-binding)
>
> - Built from source tarball on OS X w/ Java 1.8.0_45.
> - Deployed mini/pseudo cluster.
> - Ran grep and wordcount examples.
> - Poked around ResourceManager and JobHistory UIs.
> - Ran all s3a integration tests in US West 2.
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Thanks for the hard work on this! +1 (non-binding)

- Built from source tarball on OS X w/ Java 1.8.0_45.
- Deployed mini/pseudo cluster.
- Ran grep and wordcount examples.
- Poked around ResourceManager and JobHistory UIs.
- Ran all s3a integration tests in US West 2.

On Thu, Jul 6, 2017 at 10:20 AM, Xiao Chen wrote:
> Thanks Andrew!
> +1 (non-binding)
>
>    - Verified md5's, checked tarball sizes are reasonable
>    - Built source tarball and deployed a pseudo-distributed cluster with
>    hdfs/kms
>    - Tested basic hdfs/kms operations
>    - Sanity checked webuis/logs
>
> -Xiao
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/

[Jul 5, 2017 6:10:57 PM] (liuml07) HDFS-12089. Fix ambiguous NN retry log message in WebHDFS. Contributed
[Jul 5, 2017 6:16:56 PM] (jzhuge) HADOOP-14608. KMS JMX servlet path not backwards compatible. Contributed

-1 overall

The following subsystems voted -1:
    compile mvninstall unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    Failed junit tests:
        hadoop.ha.TestZKFailoverControllerStress
        hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.hdfs.server.namenode.ha.TestHASafeMode
        hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
        hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
        hadoop.yarn.server.timeline.TestRollingLevelDB
        hadoop.yarn.server.timeline.TestTimelineDataManager
        hadoop.yarn.server.timeline.TestLeveldbTimelineStore
        hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
        hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
        hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
        hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.client.api.impl.TestNMClient
        hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
        hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
        hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
        hadoop.yarn.applications.distributedshell.TestDistributedShell
        hadoop.mapred.TestShuffleHandler
        hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService
        hadoop.yarn.sls.nodemanager.TestNMSimulator

    Timed out junit tests:
        org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
        org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands
        org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
        org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA

mvninstall:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-mvninstall-root.txt [616K]

compile:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-compile-root.txt [20K]

cc:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-compile-root.txt [20K]

javac:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-compile-root.txt [20K]

unit:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-assemblies.txt [4.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [152K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [792K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [56K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [68K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [76K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [324K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt [16K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt [28K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/
[jira] [Created] (HDFS-12097) libhdfs++: add Clang build and tests to the CI system
Anatoli Shein created HDFS-12097:
------------------------------------

             Summary: libhdfs++: add Clang build and tests to the CI system
                 Key: HDFS-12097
                 URL: https://issues.apache.org/jira/browse/HDFS-12097
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Anatoli Shein

For better portability, we should add a Clang build and tests of the libhdfs++ library to the CI system. To accomplish this, the Dockerfile will need to be updated with the environment setup, and the maven files should be updated to build libhdfs++ using Clang and then run the tests.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Thanks Andrew! +1 (non-binding)

- Verified md5's, checked tarball sizes are reasonable
- Built source tarball and deployed a pseudo-distributed cluster with hdfs/kms
- Tested basic hdfs/kms operations
- Sanity checked webuis/logs

-Xiao

On Wed, Jul 5, 2017 at 10:33 PM, John Zhuge wrote:

> +1 (non-binding)
>
>    - Verified checksums and signatures of the tarballs
>    - Built source with native, Java 1.8.0_131 on Mac OS X 10.12.5
>    - Cloud connectors:
>       - A few S3A integration tests
>       - A few ADL live unit tests
>    - Deployed both binary and built source to a pseudo cluster, passed the
>    following sanity tests in insecure, SSL, and SSL+Kerberos mode:
>       - HDFS basic and ACL
>       - DistCp basic
>       - WordCount (skipped in Kerberos mode)
>       - KMS and HttpFS basic
>
> Thanks Andrew for the great effort!
>
> On Wed, Jul 5, 2017 at 1:33 PM, Eric Payne invalid> wrote:
>
> > Thanks Andrew.
> > I downloaded the source, built it, and installed it onto a pseudo
> > distributed 4-node cluster.
> >
> > I ran mapred and streaming test cases, including sleep and wordcount.
> > +1 (non-binding)
> > -Eric
> >
> > From: Andrew Wang
> > To: "common-...@hadoop.apache.org" ; "hdfs-dev@hadoop.apache.org" ;
> > "mapreduce-...@hadoop.apache.org" ; "yarn-...@hadoop.apache.org"
> > Sent: Thursday, June 29, 2017 9:41 PM
> > Subject: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
> >
> > Hi all,
> >
> > As always, thanks to the many, many contributors who helped with this
> > release! I've prepared an RC0 for 3.0.0-alpha4:
> >
> > http://home.apache.org/~wang/3.0.0-alpha4-RC0/
> >
> > The standard 5-day vote would run until midnight on Tuesday, July 4th.
> > Given that July 4th is a holiday in the US, I expect this vote might have
> > to be extended, but I'd like to close the vote relatively soon after.
> >
> > I've done my traditional testing of a pseudo-distributed cluster with a
> > single task pi job, which was successful.
> >
> > Normally my testing would end there, but I'm slightly more confident this
> > time. At Cloudera, we've successfully packaged and deployed a snapshot from
> > a few days ago, and run basic smoke tests. Some bugs found from this
> > include HDFS-11956, which fixes backwards compat with Hadoop 2 clients, and
> > the revert of HDFS-11696, which broke NN QJM HA setup.
> >
> > Vijay is working on a test run with a fuller test suite (the results of
> > which we can hopefully post soon).
> >
> > My +1 to start,
> >
> > Best,
> > Andrew
>
> --
> John
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/456/

[Jul 5, 2017 10:35:18 AM] (vinayakumarb) HADOOP-13414. Hide Jetty Server version header in HTTP responses.
[Jul 5, 2017 6:10:57 PM] (liuml07) HDFS-12089. Fix ambiguous NN retry log message in WebHDFS. Contributed
[Jul 5, 2017 6:16:56 PM] (jzhuge) HADOOP-14608. KMS JMX servlet path not backwards compatible. Contributed

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

   FindBugs :

       module:hadoop-hdfs-project/hadoop-hdfs-client
       Possible exposure of partially initialized object in org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At DFSClient.java:[line 2888]
       org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) makes inefficient use of keySet iterator instead of entrySet iterator At SlowDiskReports.java:[line 105]

       module:hadoop-hdfs-project/hadoop-hdfs
       Possible null pointer dereference in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to return value of called method Dereferenced at JournalNode.java:[line 302]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String) unconditionally sets the field clusterId At HdfsServerConstants.java:[line 193]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int) unconditionally sets the field force At HdfsServerConstants.java:[line 217]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean) unconditionally sets the field isForceFormat At HdfsServerConstants.java:[line 229]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean) unconditionally sets the field isInteractiveFormat At HdfsServerConstants.java:[line 237]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, int, HardLink, boolean, File, List) due to return value of called method Dereferenced at DataStorage.java:[line 1339]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String, long) due to return value of called method Dereferenced at NNStorageRetentionManager.java:[line 258]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, BasicFileAttributes) due to return value of called method Dereferenced at NNUpgradeUtil.java:[line 133]
       Useless condition: argv.length >= 1 at this point At DFSAdmin.java:[line 2085]
       Useless condition: numBlocks == -1 at this point At ImageLoaderCurrent.java:[line 727]

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
       Useless object stored in variable removedNullContainers of method org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) At NodeStatusUpdaterImpl.java:[line 642]
       org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() makes inefficient use of keySet iterator instead of entrySet iterator At NodeStatusUpdaterImpl.java:[line 719]
       Hard coded reference to an absolute pathname in org.apache.hadoop.y
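Two of the FindBugs findings above flag "inefficient use of keySet iterator instead of entrySet iterator" (in SlowDiskReports.equals and NodeStatusUpdaterImpl). A minimal illustration of the pattern and the fix follows; this is toy code, not the flagged classes:

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetDemo {
    // Flagged pattern: iterating keys forces an extra get() hash lookup per entry.
    static long sumViaKeySet(Map<String, Long> m) {
        long sum = 0;
        for (String k : m.keySet()) {
            sum += m.get(k);          // second lookup for every key
        }
        return sum;
    }

    // What FindBugs suggests: iterate entries and read key and value directly.
    static long sumViaEntrySet(Map<String, Long> m) {
        long sum = 0;
        for (Map.Entry<String, Long> e : m.entrySet()) {
            sum += e.getValue();      // no extra lookup
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Long> m = new HashMap<>();
        m.put("disk1", 10L);
        m.put("disk2", 32L);
        // Same result, fewer hash lookups on the entrySet path.
        System.out.println(sumViaKeySet(m) == sumViaEntrySet(m));  // prints true
    }
}
```

The results are identical; the entrySet form simply avoids one hash lookup per iteration, which matters for hot paths like equals().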
[jira] [Resolved] (HDFS-12096) Ozone: Bucket versioning design document
[ https://issues.apache.org/jira/browse/HDFS-12096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang resolved HDFS-12096.
--------------------------------
    Resolution: Duplicate

> Ozone: Bucket versioning design document
> ----------------------------------------
>
>                 Key: HDFS-12096
>                 URL: https://issues.apache.org/jira/browse/HDFS-12096
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>         Attachments: Ozone Bucket Versioning v1.pdf
>
>
> This JIRA is opened for the discussion of the bucket versioning design.
> Bucket versioning is the ability to hold multiple versions of objects for a
> key in a bucket.
[jira] [Created] (HDFS-12096) Ozone: Bucket versioning design document
Weiwei Yang created HDFS-12096:
----------------------------------

             Summary: Ozone: Bucket versioning design document
                 Key: HDFS-12096
                 URL: https://issues.apache.org/jira/browse/HDFS-12096
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ozone
            Reporter: Weiwei Yang
            Assignee: Weiwei Yang

This JIRA is opened for the discussion of the bucket versioning design. Bucket versioning is the ability to hold multiple versions of objects for a key in a bucket.
[jira] [Created] (HDFS-12095) Block Storage: Fix cblock unit test failures in TestLocalBlockCache, TestCBlockReadWrite and TestBufferManager
Mukul Kumar Singh created HDFS-12095:
----------------------------------------

             Summary: Block Storage: Fix cblock unit test failures in TestLocalBlockCache, TestCBlockReadWrite and TestBufferManager
                 Key: HDFS-12095
                 URL: https://issues.apache.org/jira/browse/HDFS-12095
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ozone
    Affects Versions: HDFS-7240
            Reporter: Mukul Kumar Singh
            Assignee: Mukul Kumar Singh
             Fix For: HDFS-7240

Cblock tests TestLocalBlockCache, TestCBlockReadWrite and TestBufferManager are failing quite frequently. Please find the analysis for each of these failures below.

*TestLocalBlockCache* - Times out in the Jenkins run; however, it passes successfully in local runs.

*TestCBlockReadWrite* - This test fails because of an invalid assertion: there can be time-triggered flushes which are triggered and finished, hence the number of completed flushes need not be '1'.

{code}
java.lang.AssertionError: expected:<1> but was:<13>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.cblock.TestCBlockReadWrite.testContainerWrites(TestCBlockReadWrite.java:245)
{code}

*TestBufferManager* - This assertion failed because there might be flushes still in progress. A wait needs to be introduced to ensure that the entire dirty buffer is flushed.

{code}
java.lang.AssertionError: expected:<16384> but was:<15976>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.cblock.TestBufferManager.testMultipleBuffersFlush(TestBufferManager.java:346)
{code}
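For the TestBufferManager case, the wait described above is more robust as a bounded polling loop than as a fixed sleep, since the flusher's timing varies between runs. A minimal sketch, with a hypothetical helper rather than the actual cblock test code:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

public class FlushWaitSketch {
    // Poll a condition until it holds or the timeout expires, instead of
    // sleeping for a fixed interval and hoping the flusher has caught up.
    static boolean waitFor(BooleanSupplier cond, long timeoutMs) throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (System.nanoTime() < deadline) {
            if (cond.getAsBoolean()) return true;
            Thread.sleep(10);
        }
        return cond.getAsBoolean();
    }

    public static void main(String[] args) throws Exception {
        // Simulated background flusher: dirty bytes drain over time.
        AtomicInteger dirtyBytes = new AtomicInteger(16384);
        Thread flusher = new Thread(() -> {
            while (dirtyBytes.get() > 0) {
                dirtyBytes.addAndGet(-4096);
                try { Thread.sleep(20); } catch (InterruptedException e) { return; }
            }
        });
        flusher.start();

        // Robust assertion: wait for the buffer to drain rather than
        // asserting an exact byte count right after a fixed sleep.
        boolean drained = waitFor(() -> dirtyBytes.get() == 0, 5000);
        flusher.join();
        System.out.println("drained=" + drained);  // prints drained=true
    }
}
```

The same idea covers the TestCBlockReadWrite assertion: checking "at least one flush completed" (or waiting for a condition) tolerates extra time-triggered flushes, whereas asserting an exact count of 1 does not.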
[jira] [Created] (HDFS-12094) Log torrent when none isa-l EC is used.
LiXin Ge created HDFS-12094:
-------------------------------

             Summary: Log torrent when none isa-l EC is used.
                 Key: HDFS-12094
                 URL: https://issues.apache.org/jira/browse/HDFS-12094
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: erasure-coding
    Affects Versions: 3.0.0-beta1
            Reporter: LiXin Ge
            Assignee: LiXin Ge
            Priority: Minor

My Hadoop is built without ISA-L support. After the EC policy is enabled, whenever I get/put a directory which contains many files, the warnings below spam the screen. This is unfriendly and depresses performance. Since we have come to the beta version now, these logs should be reduced: a one-time warning log instead of an exception may be much better.

{quote}
2017-07-06 15:42:41,398 WARN erasurecode.CodecUtil: Failed to create raw erasure encoder xor_native, fallback to next codec if possible
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.io.erasurecode.rawcoder.NativeXORRawEncoder
	at org.apache.hadoop.io.erasurecode.rawcoder.NativeXORRawErasureCoderFactory.createEncoder(NativeXORRawErasureCoderFactory.java:35)
	at org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoderWithFallback(CodecUtil.java:177)
	at org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoder(CodecUtil.java:129)
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.<init>(DFSStripedOutputStream.java:302)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:309)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1216)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1195)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1133)
	...
	at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
	at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
	at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
Caused by: java.lang.RuntimeException: libhadoop was built without ISA-L support
	at org.apache.hadoop.io.erasurecode.ErasureCodeNative.checkNativeCodeLoaded(ErasureCodeNative.java:69)
	at org.apache.hadoop.io.erasurecode.rawcoder.NativeXORRawDecoder.<init>(NativeXORRawDecoder.java:33)
	... 25 more
{quote}
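The one-time warning proposed above can be sketched with an atomic flag that flips exactly once per process. The class and method names below are illustrative, not the actual CodecUtil code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class FallbackWarning {
    // Flipped exactly once, even if many streams hit the fallback concurrently.
    private static final AtomicBoolean WARNED = new AtomicBoolean(false);

    // Returns true only on the call that actually emits the warning.
    static boolean warnOnFallback(String codec) {
        // compareAndSet succeeds for exactly one caller; later calls are suppressed.
        if (WARNED.compareAndSet(false, true)) {
            System.err.println("WARN: failed to create raw erasure encoder "
                + codec + "; libhadoop was built without ISA-L support, "
                + "falling back to a pure-Java coder (further warnings suppressed)");
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulate many files being written with an EC policy enabled.
        int logged = 0;
        for (int i = 0; i < 1000; i++) {
            if (warnOnFallback("xor_native")) logged++;
        }
        System.out.println("warnings logged: " + logged);  // prints warnings logged: 1
    }
}
```

With a guard like this, a thousand get/put operations produce a single warning line instead of a thousand stack traces.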
[jira] [Created] (HDFS-12093) [READ] Share remoteFS between ProvidedReplica instances.
Ewan Higgs created HDFS-12093:
---------------------------------

             Summary: [READ] Share remoteFS between ProvidedReplica instances.
                 Key: HDFS-12093
                 URL: https://issues.apache.org/jira/browse/HDFS-12093
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Ewan Higgs

When a Datanode comes online using Provided storage, it fills the {{ReplicaMap}} with the known replicas. With Provided Storage, this includes {{ProvidedReplica}} instances. Each of these objects, in its constructor, will construct a FileSystem using the Service Provider. This can result in contacting the remote file system and checking that the credentials are correct and that the data is there. For large systems this is a prohibitively expensive operation to perform per replica. Instead, the {{ProvidedVolumeImpl}} should own the reference to the {{remoteFS}} and should share it with the {{ProvidedReplica}} objects on their creation.
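The ownership change proposed in HDFS-12093 can be sketched as follows. The classes here are illustrative stand-ins, not the real ProvidedVolumeImpl/ProvidedReplica API; a constructor-call counter stands in for the expensive remote connection:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedRemoteFsSketch {
    // Counts how many times the "remote file system" is constructed.
    static final AtomicInteger CONNECTIONS = new AtomicInteger();

    // Stand-in for an expensive handle: constructing it simulates
    // authenticating against and contacting the remote store.
    static class RemoteFs {
        RemoteFs() { CONNECTIONS.incrementAndGet(); }
    }

    // The volume owns the single RemoteFs and hands the same reference to
    // every replica it creates, instead of each replica building its own.
    static class Volume {
        private final RemoteFs remoteFs = new RemoteFs();
        Replica newReplica(long blockId) { return new Replica(blockId, remoteFs); }
    }

    static class Replica {
        final long blockId;
        final RemoteFs fs;   // shared, injected by the volume
        Replica(long blockId, RemoteFs fs) { this.blockId = blockId; this.fs = fs; }
    }

    public static void main(String[] args) {
        Volume vol = new Volume();
        List<Replica> replicaMap = new ArrayList<>();
        for (long b = 0; b < 100_000; b++) {
            replicaMap.add(vol.newReplica(b));
        }
        // One remote connection for 100k replicas, not one per replica.
        System.out.println("connections: " + CONNECTIONS.get());  // prints connections: 1
    }
}
```

The design choice is simply dependency injection: the per-replica constructor takes the shared handle as an argument, so replica creation becomes cheap and the remote store is contacted once per volume.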