+1 (non-binding)

I tested it in a deployment with 24 nodes across 8 subclusters,
running a few jobs that read and write data through HDFS Router-based
federation.
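
For context, "RBF as the default filesystem" (mentioned below) means
pointing fs.defaultFS at the Router RPC endpoint instead of a single
NameNode. A minimal client-side sketch; the hostname is made up, and
8888 is the Router's default RPC port:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class RbfClientSetup {
      public static void main(String[] args) throws IOException {
        // Resolve every path through the Router (and its mount table)
        // instead of a single NameNode.
        Configuration conf = new Configuration();
        conf.set(FileSystem.FS_DEFAULT_NAME_KEY,
            "hdfs://router.example.com:8888");
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Default FS: " + fs.getUri());
      }
    }
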
However, jobs failed to run with RBF as the default filesystem:
after MAPREDUCE-6954, job submission invokes setErasureCodingPolicy,
which the Router does not implement yet.
I filed HDFS-12919 to track this, but I don't think it's a blocker.
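
Until HDFS-12919 lands, one possible client-side workaround is to
treat the EC call as best-effort. This is only a sketch: the method
name and call site are mine, not the actual MapReduce code, and the
"replication" policy name plus the exceptions caught are assumptions
about how the failure surfaces:

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.ipc.RemoteException;

    public final class EcGuard {
      // Try to pin a directory to plain replication (what
      // MAPREDUCE-6954 does for the staging dir), but tolerate
      // backends that have not implemented the EC RPCs yet.
      static void disableErasureCodingIfSupported(
          DistributedFileSystem fs, Path dir) throws IOException {
        try {
          fs.setErasureCodingPolicy(dir, "replication");
        } catch (UnsupportedOperationException | RemoteException e) {
          // Deliberately broad for this sketch: a Router without
          // HDFS-12919 rejects the RPC, so fall back to whatever
          // layout the directory already has.
        }
      }
    }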

Thanks,
Inigo

On Tue, Dec 12, 2017 at 2:43 PM, Elek, Marton <h...@anzix.net> wrote:

> +1 (non-binding)
>
>  * Built from the source tarball (Arch Linux) / verified signature
>  * Deployed to a Kubernetes cluster (10/10 datanode/nodemanager pods)
>  * Enabled EC on an HDFS directory (hdfs CLI)
>  * Started example YARN jobs (pi/teragen)
>  * Checked YARN UI/UI2
>
> Thanks for all the efforts.
>
> Marton
>
>
>
> On 12/08/2017 09:31 PM, Andrew Wang wrote:
>
>> Hi all,
>>
>> Let me start, as always, by thanking all the contributors to this
>> release, especially those who jumped on the issues found in RC0.
>>
>> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
>> fixed JIRAs since the previous 3.0.0-beta1 release.
>>
>> You can find the artifacts here:
>>
>> http://home.apache.org/~wang/3.0.0-RC1/
>>
>> I've done the traditional testing of building from the source tarball and
>> running a Pi job on a single-node cluster. I also verified that the shaded
>> jars are not empty.
>>
>> I found one issue where create-release (probably due to the mvn deploy
>> change) didn't sign the artifacts, but I fixed that by calling mvn one
>> more time. The staged artifacts are available here:
>>
>> https://repository.apache.org/content/repositories/orgapachehadoop-1075/
>>
>> The vote will run the standard 5 days, closing on Dec 13th at 12:31pm
>> Pacific. My +1 to start.
>>
>> Best,
>> Andrew
>>
>>
