*+1 binding.*

reviewed the binaries, the source, and the artifacts in the staging maven
repository in downstream builds. all good.

*## test run*

checked out the asf github repo at commit 6da346a358c into a location
already set up with aws and azure test credentials.

ran the hadoop-aws tests with -Dparallel-tests -DtestsThreadCount=6
-Dmarkers=delete -Dscale, and the hadoop-azure tests against azure cardiff
with -Dparallel-tests=abfs -DtestsThreadCount=6. full commands are sketched
below.
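for anyone reproducing this, the invocations were presumably along these
lines; the mvn verify goal and module directories are my assumptions, the
flags are from the actual run, and credentials come from the pre-existing
test setup:
```
# hadoop-aws integration tests: parallel, delete markers, scale tests on
cd hadoop-tools/hadoop-aws
mvn verify -Dparallel-tests -DtestsThreadCount=6 -Dmarkers=delete -Dscale

# hadoop-azure abfs integration tests
cd ../hadoop-azure
mvn verify -Dparallel-tests=abfs -DtestsThreadCount=6
```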

all happy



*## binary*
downloaded KEYS and imported it, adding your key to my keyring (also signed
it and updated the key servers)
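roughly, with the KEYS file from the URL in the vote mail; the fingerprint
is the one from the signature check below:
```
curl -O https://downloads.apache.org/hadoop/common/KEYS
gpg2 --import KEYS
# sign the release key and push the signature back to the key servers
gpg2 --sign-key DE7FA241EB298D027C97B2A1D8F1A97BE51ECA98
gpg2 --send-keys DE7FA241EB298D027C97B2A1D8F1A97BE51ECA98
```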

downloaded the rc tarball and verified it
```
> gpg2 --verify hadoop-3.3.2.tar.gz.asc hadoop-3.3.2.tar.gz
gpg: Signature made Sat Jan 15 23:41:10 2022 GMT
gpg:                using RSA key DE7FA241EB298D027C97B2A1D8F1A97BE51ECA98
gpg: Good signature from "Chao Sun (CODE SIGNING KEY) <sunc...@apache.org>" [full]


> cat hadoop-3.3.2.tar.gz.sha512
SHA512 (hadoop-3.3.2.tar.gz) =
cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d

> shasum -a 512 hadoop-3.3.2.tar.gz
cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d
 hadoop-3.3.2.tar.gz
```


*## cloudstore against staged artifacts*
```
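# remove any locally built 3.3.2 artifacts first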
cd ~/.m2/repository/org/apache/hadoop
find . -name \*3.3.2\* -print | xargs rm -r
```
this ensures no local 3.3.2 builds have tainted the repository.

in cloudstore, ran a maven build without tests
```
mci -Pextra -Phadoop-3.3.2 -Psnapshots-and-staging
```
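mci here is a personal shell alias, not a maven goal; presumably something
like:
```
alias mci='mvn clean install -DskipTests'
```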
this fetches all the hadoop artifacts from asf staging

```
Downloading from ASF Staging:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
Downloaded from ASF Staging:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
(11 kB at 20 kB/s)
```
there are no tests there, but it did audit the download process. FWIW, that
project has switched to logback, so I now have all the hadoop imports
excluding slf4j and log4j; anything else takes too much effort right now.
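if you want to confirm the exclusions took, something like this works (not
part of the original run):
```
# list any slf4j/log4j artifacts still on the classpath
mvn dependency:tree -Dincludes=org.slf4j,log4j
```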

build works.

tested storediag against abfs and s3a; all happy.
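these follow the same pattern as the gcs run below; the bucket/container
names here are placeholders:
```
bin/hadoop jar $CLOUDSTORE storediag abfs://container@account.dfs.core.windows.net/
bin/hadoop jar $CLOUDSTORE storediag s3a://example-bucket/
```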




*## google GCS against staged artifacts*

gcs is now java 11 only, so I had to switch JVMs here.
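switching is just a matter of pointing JAVA_HOME at a JDK 11 install before
building; the path here is illustrative:
```
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk  # or $(/usr/libexec/java_home -v 11) on macOS
export PATH="$JAVA_HOME/bin:$PATH"
mvn -version   # confirm maven now runs on java 11
```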

had to add a snapshots-and-staging profile, after which I could build and
test with

```
 -Dhadoop.three.version=3.3.2 -Psnapshots-and-staging
```
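i.e. presumably something like this from the connector module; the clean
verify goals are my assumption, only the flags above are from the actual
run:
```
mvn clean verify -Dhadoop.three.version=3.3.2 -Psnapshots-and-staging
```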
two test failures were related to auth setup: the tests expected specific
exceptions to be raised, but things failed differently:
```
[ERROR] Failures:
[ERROR]
GoogleHadoopFileSystemTest.eagerInitialization_fails_withInvalidCredentialsConfiguration:122
unexpected exception type thrown; expected:<java.io.FileNotFoundException>
but was:<java.lang.IllegalArgumentException>
[ERROR]
GoogleHadoopFileSystemTest.lazyInitialization_deleteCall_fails_withInvalidCredentialsConfiguration:100
value of: throwable.getMessage()
expected: Failed to create GCS FS
but was : A JSON key file may not be specified at the same time as
credentials via configuration.

```

I'm not worried here.

ran cloudstore's diagnostics against gcs.

Nice to see they are now collecting IOStatistics on their input streams. We
really need to get these statistics collected through the parquet/orc libs
and then up through the query engines.

```
> bin/hadoop jar $CLOUDSTORE storediag gs://stevel-london/

...
2022-01-20 17:52:47,447 [main] INFO  diag.StoreDiag
(StoreDurationInfo.java:<init>(56)) - Starting: Reading a file
gs://stevel-london/dir-9cbfc774-76ff-49c0-b216-d7800369c3e1/file
input stream summary: org.apache.hadoop.fs.FSDataInputStream@6cfd9a54:
com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream@78c1372d{counters=((stream_read_close_operations=1)
(stream_read_seek_backward_operations=0) (stream_read_total_bytes=7)
(stream_read_bytes=7) (stream_read_exceptions=0)
(stream_read_seek_operations=0) (stream_read_seek_bytes_skipped=0)
(stream_read_operations=3) (stream_read_bytes_backwards_on_seek=0)
(stream_read_seek_forward_operations=0)
(stream_read_operations_incomplete=1));
gauges=();
minimums=();
maximums=();
means=();
}
...
```

*## source*

once I'd done the builds and tests which fetched from staging, I did a
local build and test.

repeated the download/validation of the source tarball, then unzip/untar.

built with java 11.
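roughly, assuming the standard source tarball name:
```
gpg2 --verify hadoop-3.3.2-src.tar.gz.asc hadoop-3.3.2-src.tar.gz
shasum -a 512 hadoop-3.3.2-src.tar.gz   # compare against the published .sha512
tar -xzf hadoop-3.3.2-src.tar.gz
cd hadoop-3.3.2-src
mvn clean install -DskipTests
```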

I've not done the test run there, because that directory tree doesn't have
the credentials, and this morning's run was good.

altogether then: very happy. tests good, downstream libraries building and
linking.

On Wed, 19 Jan 2022 at 17:50, Chao Sun <sunc...@apache.org> wrote:

> Hi all,
>
> I've put together Hadoop 3.3.2 RC2 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1332
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> I've done the following tests and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC2 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>
