Re: [Hadoop-3.3 Release update]- branch-3.3 has created

2020-04-29 Thread Akira Ajisaka
Hi Surendra,

Updated the version to 3.3.1-SNAPSHOT in branch-3.3.

Thanks,
Akira

On Wed, Apr 29, 2020 at 4:22 PM Surendra Singh Lilhore <
surendralilh...@gmail.com> wrote:

> Hi Brahma,
>
> Why are the branch-3.3 and branch-3.3.0 pom versions the same?
>
> In branch-3.3, the pom version should be 3.3.1.
>
> Please correct me if I am wrong.
>
> -Surendra
>
>
> On Sat, 25 Apr, 2020, 9:33 am Mingliang Liu,  wrote:
>
> > Brahma,
> >
> > What about https://issues.apache.org/jira/browse/HADOOP-17007?
> >
> > Thanks,
> >
> > On Fri, Apr 24, 2020 at 11:07 AM Brahma Reddy Battula  >
> > wrote:
> >
> > > Ok. Done. Branch created.
> > >
> > > Following blockers are pending, will closely track this.
> > >
> > > https://issues.apache.org/jira/browse/HDFS-15287 ( Open: Under
> > discussion
> > > )
> > > https://issues.apache.org/jira/browse/YARN-10194 ( Patch Available)
> > > https://issues.apache.org/jira/browse/HDFS-15286 ( Patch Available)
> > > https://issues.apache.org/jira/browse/YARN-9898 ( Patch Available)
> > >
> > >
> > > On Fri, Apr 24, 2020 at 7:42 PM Wei-Chiu Chuang
> > >  wrote:
> > >
> > > > +1 we should have the branch ASAP.
> > > >
> > > > On Wed, Apr 22, 2020 at 11:07 PM Akira Ajisaka 
> > > > wrote:
> > > >
> > > > > > Since blockers are not closed, I didn't cut the branch because
> > > > > multiple branches might confuse people or somebody might miss a commit.
> > > > >
> > > > > The current situation is already confusing. The 3.3.1 version
> already
> > > > > exists in JIRA, so some committers wrongly commit non-critical
> issues
> > > to
> > > > > branch-3.3 and set the fix version to 3.3.1.
> > > > > I think now we should cut branch-3.3.0 and freeze source code
> except
> > > the
> > > > > blockers.
> > > > >
> > > > > -Akira
> > > > >
> > > > > On Tue, Apr 21, 2020 at 3:05 PM Brahma Reddy Battula <
> > > bra...@apache.org>
> > > > > wrote:
> > > > >
> > > > >> Sure, I will do that.
> > > > >>
> > > > >> Since blockers are not closed, I didn't cut the branch because
> > > > >> multiple branches might confuse people or somebody might miss a
> > > > >> commit. Shall I wait till this weekend to create it?
> > > > >>
> > > > >> On Mon, Apr 20, 2020 at 11:57 AM Akira Ajisaka <
> aajis...@apache.org
> > >
> > > > >> wrote:
> > > > >>
> > > > >>> Hi Brahma,
> > > > >>>
> > > > >>> Thank you for preparing the release.
> > > > >>> Could you cut branch-3.3.0? I would like to backport some fixes
> for
> > > > >>> 3.3.1 and not for 3.3.0.
> > > > >>>
> > > > >>> Thanks and regards,
> > > > >>> Akira
> > > > >>>
> > > > >>> On Fri, Apr 17, 2020 at 11:11 AM Brahma Reddy Battula <
> > > > bra...@apache.org>
> > > > >>> wrote:
> > > > >>>
> > > >  Hi All,
> > > > 
> > > >  we are down to two blocker issues now (YARN-10194 and YARN-9848),
> > > >  which are in Patch Available state. Hopefully we can get the RC out soon.
> > > > 
> > > >  thanks to @Prabhu Joseph, @masakate, @akira,
> > > >  and @Wei-Chiu Chuang and others for helping
> > > >  resolve the blockers.
> > > > 
> > > > 
> > > > 
> > > >  On Tue, Apr 14, 2020 at 10:49 PM Brahma Reddy Battula <
> > > >  bra...@apache.org> wrote:
> > > > 
> > > > >
> > > > > @Prabhu Joseph 
> > > > > >>> Have committed the YARN blocker YARN-10219 to trunk and
> > > > > cherry-picked to branch-3.3. Right now, there are two blocker
> > > Jiras -
> > > > > YARN-10233 and HADOOP-16982
> > > > > which i will help to review and commit. Thanks.
> > > > >
> > > > > Looks like you committed YARN-10219. Noted YARN-10233 and HADOOP-16982
> > > > > as blockers. (We have shipped many releases without YARN-10233; it's
> > > > > not newly introduced.) Thanks
> > > > >
> > > > > @Vinod Kumar Vavilapalli  ,@adam
> Antal,
> > > > >
> > > > > I noted YARN-9848 as a blocker as you mentioned above.
> > > > >
> > > > > @All,
> > > > >
> > > > > Currently following four blockers are pending for 3.3.0 RC.
> > > > >
> > > > > HADOOP-16963, YARN-10233, HADOOP-16982 and YARN-9848.
> > > > >
> > > > >
> > > > >
> > > > > On Tue, Apr 14, 2020 at 8:11 PM Vinod Kumar Vavilapalli <
> > > > > vino...@apache.org> wrote:
> > > > >
> > > > >> Looks like a really bad bug to me.
> > > > >>
> > > > >> +1 for revert and +1 for making that a 3.3.0 blocker. I think we
> > > > >> should also revert it in a 3.2 maintenance release too.
> > > > >>
> > > > >> Thanks
> > > > >> +Vinod
> > > > >>
> > > > >> > On Apr 14, 2020, at 5:03 PM, Adam Antal <
> > > adam.an...@cloudera.com
> > > > .INVALID>
> > > > >> wrote:
> > > > >> >
> > > > >> > Hi everyone,
> > > > >> >
> > > > >> > Sorry for coming a bit late with this, but there's also one jira
> > > > >> > that can have a potential impact on clusters and we should talk about it.
> > > > >> >
> > > > >

[jira] [Created] (YARN-10252) Allow adjusting vCore weight in CPU cgroup strict mode

2020-04-29 Thread Zbigniew Baranowski (Jira)
Zbigniew Baranowski created YARN-10252:
--

 Summary: Allow adjusting vCore weight in CPU cgroup strict mode
 Key: YARN-10252
 URL: https://issues.apache.org/jira/browse/YARN-10252
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 3.2.1
Reporter: Zbigniew Baranowski
 Attachments: YARN.patch

Currently, with CPU cgroup strict mode enabled on the NodeManager, when CPU 
resources are overcommitted (e.g. 8 vCores on a 4-core machine), the total amount of 
CPU time that a container gets for each requested vCore is automatically 
downscaled with the formula: vCoreCPUTime = totalPhysicalCoresOnNM / 
coresConfiguredForNM. So a container will be throttled on CPU even if there 
are spare cores available on the NM (e.g. with 8 vCores available on a 4-core machine, 
a container that asked for 2 cores will effectively be allowed to use only one 
physical core). The same happens if the CPU resource cap is enabled (via 
yarn.nodemanager.resource.percentage-physical-cpu-limit); in this case, 
totalCoresOnNode (= coresOnNode * percentage-physical-cpu-limit) is scaled down 
by the cap. So, for example, if the cap is 80%, a container that asked for 2 
cores will be allowed to use at most the equivalent of 1.6 physical cores, 
regardless of the current NM load.
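As a quick sanity check of the formula above, here is a minimal sketch of the downscaling arithmetic (the class and method names are illustrative, not NodeManager internals):

```java
public class StrictCpuShare {
    /**
     * Fraction of one physical core granted per requested vCore in
     * cgroup strict mode, per the formula in the description above.
     */
    static double cpuTimePerVcore(int physicalCores, int configuredVcores,
                                  double cpuLimitPct) {
        double usable = physicalCores * cpuLimitPct / 100.0;
        return usable / configuredVcores;
    }

    public static void main(String[] args) {
        // 8 vCores on a 4-core machine: a 2-vCore container gets 1 core.
        System.out.println(2 * cpuTimePerVcore(4, 8, 100)); // 1.0
        // 80% cap, 4 vCores on 4 cores: a 2-vCore container gets 1.6 cores.
        System.out.println(2 * cpuTimePerVcore(4, 4, 80));  // 1.6
    }
}
```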

Both of the aforementioned situations may lead to underuse of available resources. In 
some cases, an administrator may want to overcommit resources because applications 
statically over-allocate resources without fully using them; slowing all 
containers down is not the intention there. 
Therefore it would be very useful if administrators had control over how vCores 
are mapped to CPU time on NodeManagers in strict mode when CPU resources are 
overcommitted and/or physical-cpu-limit is enabled.
This could potentially be done with a parameter like 
yarn.nodemanager.resource.strict-vcore-weight that controls the vCore-to-pCore 
time mapping. E.g. a value of 1 means a one-to-one mapping, 1.2 means that a single 
vCore can use up to 120% of a physical core (this can be handy for 
pysparkers), and -1 (default) disables the feature and keeps auto-scaling.
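A sketch of how the proposed knob could behave, assuming the semantics described above (the parameter name and the -1 default are taken from the proposal; this is not actual NodeManager code):

```java
public class VcoreWeight {
    /**
     * Physical-core share for a container under the proposed
     * strict-vcore-weight parameter. A negative weight (the proposed
     * default, -1) keeps today's auto-downscaling; a positive weight
     * maps each vCore to `weight` physical cores.
     */
    static double cpuShare(int requestedVcores, int physicalCores,
                           int configuredVcores, double weight) {
        if (weight < 0) {
            return requestedVcores * (double) physicalCores / configuredVcores;
        }
        return requestedVcores * weight;
    }

    public static void main(String[] args) {
        System.out.println(cpuShare(2, 4, 8, -1));  // 1.0 (current behaviour)
        System.out.println(cpuShare(2, 4, 8, 1.0)); // 2.0 (one-to-one)
        System.out.println(cpuShare(2, 4, 8, 1.2)); // 2.4 (120% per vCore)
    }
}
```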



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-04-29 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1484/

[Apr 28, 2020 4:53:28 PM] (snemeth) YARN-10215. Endpoint for obtaining direct 
URL for the logs. Contributed
[Apr 28, 2020 11:14:55 PM] (github) HADOOP-17010. Add queue capacity support 
for FairCallQueue (#1977)




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
 
   org.apache.hadoop.yarn.server.webapp.WebServiceClient.sslFactory should 
be package protected At WebServiceClient.java: At WebServiceClient.java:[line 
42] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 

Failed junit tests :

   hadoop.metrics2.source.TestJvmMetrics 
   hadoop.io.compress.snappy.TestSnappyCompressorDecompressor 
   hadoop.io.compress.TestCompressorDecompressor 
   hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider 
   hadoop.hdfs.TestAclsEndToEnd 
   hadoop.hdfs.TestRead 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup 
   hadoop.hdfs.server.blockmanagement.TestPendingReconstruction 
   hadoop.hdfs.TestDecommissionWithStriped 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshot 
   hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup 
   hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy 
   hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.TestRefreshCallQueue 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.namenode.TestFSEditLogLoader 
   hadoop.hdfs.TestErasureCodingPolicyWithSnapshot 
   hadoop.hdfs.tools.TestECAdmin 
   hadoop.hdfs.server.namenode.ha.TestEditLogTailer 
   hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.tools.TestViewFSStoragePolicyCommands 
   hadoop.hdfs.server.namenode.TestFileTruncate 
   hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer 
   hadoop.hdfs.server.namenode.TestAddStripedBlocks 
   hadoop.hdfs.TestDFSInputStreamBlockLocations 
   
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary 
   hadoop.hdfs.server.namenode.TestFSImageWithSnapshot 
   hadoop.hdfs.server.namenode.TestFSImage 
   hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer 
   hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits 
   hadoop.hdfs.TestMultiThreadedHflush 
   hadoop.hdfs.TestFileCorruption 
   hadoop.hdfs.server.namenode.TestStripedINodeFile 
   hadoop.hdfs.server.namenode.TestLargeDirectoryDelete 
   hadoop.hdfs.TestErasureCodingExerciseAPIs 
   hadoop.hdfs.server.namenode.TestQuotaByStorageType 
   
hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile 
   hadoop.hdfs.server.namenode.TestINodeFile 
   hadoop.hdfs.TestDatanodeReport 
   hadoop.hdfs.TestReadStripedFileWithDNFailure 
   hadoop.hdfs.TestReplication 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.server.

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-04-29 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/670/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 515] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 383] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 389] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
 unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl 
At DefaultMetricsFactory.java:[line 49] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unconditionally sets the field miniClusterMode At 
DefaultMetricsSystem.java:miniClusterMode At DefaultMetricsSystem.java:[line 
92] 
   Useless object stored in variable seqOs of method 
org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier,
 AbstractDelegationTokenSecretManager$DelegationTokenInformation, boolean) At 
ZKDelegationTokenSecretManager.java:seqOs of method 
org.apache.hadoop.

Re: [NOTICE] Removal of protobuf classes from Hadoop Token's public APIs' signature

2020-04-29 Thread Steve Loughran
Okay.

I am not going to be a purist and say "what were they doing, using our
private APIs?" because, as we all know, with things like UGI tagged @private
there's been no way to get things done without getting into the
private stuff.

But why did we do the protobuf changes? So that we could update our private
copy of protobuf without breaking every single downstream application. The
great protobuf upgrade to 2.5 is not something we wanted to repeat. When
was that? Before hadoop-2.2 shipped? I certainly remember a couple of weeks
where absolutely nothing would build whatsoever, not until every downstream
project had upgraded to the same version of the library.

If you ever want to see an upgrade which makes a guava update seem a minor
detail, protobuf upgrades are it. Hence the shading.
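For reference, that shading is done by package relocation; an illustrative maven-shade-plugin fragment for the mechanism (the plugin coordinates and the thirdparty pattern are the standard ones; versions and surrounding build config omitted):

```xml
<!-- Relocates com.google.protobuf into Hadoop's thirdparty namespace
     so the bundled protobuf can be upgraded without clashing with a
     downstream application's own protobuf. Illustrative only. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hadoop.thirdparty.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```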

HBase
=

it looks like HBase has been using deep internal stuff. That is,
"unfortunate". I think in that world we have to look and say is there
something specific we can do here to help HBase in a way we could also
backport. They shouldn't need those IPC internals.

Tez & Tokens
=

I didn't know Tez was using those protobuf APIs internally. That is,
"unfortunate".

What is key is this: without us moving those methods, things like Spark
wouldn't work. And they weren't even using the methods, just trying to work
with Token for job submission.

All Tez should need is a byte array serialization of a token. Given Token
is also Writable, that could be done via WritableUtils in a way which will
also work with older releases.
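A minimal sketch of that byte-array round trip. This uses a toy stand-in for the token's fields; the real code would call Token.write(DataOutput)/readFields(DataInput) (or the WritableUtils helpers) so the bytes match Hadoop's actual Writable wire format, which this toy deliberately does not reproduce:

```java
import java.io.*;

public class TokenBytes {
    // Serialize the four token fields (identifier, password, kind,
    // service) as one length-prefixed byte array. Illustrative format,
    // NOT Hadoop's Writable encoding.
    static byte[] toBytes(byte[] id, byte[] pw, String kind, String service) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeInt(id.length);
            out.write(id);
            out.writeInt(pw.length);
            out.write(pw);
            out.writeUTF(kind);
            out.writeUTF(service);
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e); // in-memory streams don't throw
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] blob = toBytes(new byte[]{1, 2}, new byte[]{3},
                              "JOB_TOKEN", "nm:0");
        // Read the fields back out of the byte array.
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(blob));
        byte[] id = new byte[in.readInt()]; in.readFully(id);
        byte[] pw = new byte[in.readInt()]; in.readFully(pw);
        System.out.println(in.readUTF() + " " + in.readUTF()); // JOB_TOKEN nm:0
    }
}
```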

Ozone
=

When these were part of/in-sync with the hadoop build there wouldn't have
been problems. Now there are. Again, they're going in deep, but here
clearly to simulate some behaviour. Any way to do that differently?

Ratis
=

No idea.

On Wed, 29 Apr 2020 at 07:12, Wei-Chiu Chuang 
wrote:

> Most of the problems are downstream applications using Hadoop's private
> APIs.
>
> Tez:
>
> 17:08:38 2020/04/16 00:08:38 INFO: [ERROR] COMPILATION ERROR :
> 17:08:38 2020/04/16 00:08:38 INFO: [INFO]
> -
> 17:08:38 2020/04/16 00:08:38 INFO: [ERROR]
>
> /grid/0/jenkins/workspace/workspace/CDH-CANARY-parallel-centos7/SOURCES/tez/tez-plugins/tez-aux-services/src/main/java/org/apache/tez/auxservices/ShuffleHandler.java:[757,45]
> incompatible types: com.google.protobuf.ByteString cannot be converted
> to org.apache.hadoop.thirdparty.protobuf.ByteString
> 17:08:38 2020/04/16 00:08:38 INFO: [INFO] 1 error
>
>
> Tez keeps track of job tokens internally.
> The change would look like this:
>
> private void recordJobShuffleInfo(JobID jobId, String user,
> Token jobToken) throws IOException {
>   if (stateDb != null) {
> TokenProto tokenProto = ProtobufHelper.protoFromToken(jobToken);
> /*TokenProto tokenProto = TokenProto.newBuilder()
> .setIdentifier(ByteString.copyFrom(jobToken.getIdentifier()))
> .setPassword(ByteString.copyFrom(jobToken.getPassword()))
> .setKind(jobToken.getKind().toString())
> .setService(jobToken.getService().toString())
> .build();*/
> JobShuffleInfoProto proto = JobShuffleInfoProto.newBuilder()
> .setUser(user).setJobToken(tokenProto).build();
> try {
>   stateDb.put(bytes(jobId.toString()), proto.toByteArray());
> } catch (DBException e) {
>   throw new IOException("Error storing " + jobId, e);
> }
>   }
>   addJobToken(jobId, user, jobToken);
> }
>
>
> HBase:
>
>1. HBASE-23833 
> (this
>is recently fixed in the master branch)
>2.
>
>   [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile
> (default-compile) on project hbase-server: Compilation failure:
> Compilation failure:
>   [ERROR]
> /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[361,44]
> cannot access org.apache.hadoop.thirdparty.protobuf.MessageOrBuilder
>   [ERROR]   class file for
> org.apache.hadoop.thirdparty.protobuf.MessageOrBuilder not found
>   [ERROR]
> /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[362,14]
> cannot access org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3
>   [ERROR]   class file for
> org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3 not found
>   [ERROR]
> /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[366,16]
> cannot access org.apache.hadoop.thirdparty.protobuf.ByteString
>   [ERROR]   class file for
> org.apache.hadoop.thirdparty.protobuf.ByteString not found
>   [ERROR]
> /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[375,12]
> cann

Re: [Hadoop-3.3 Release update]- branch-3.3 has created

2020-04-29 Thread Surendra Singh Lilhore
Hi Brahma,

Why are the branch-3.3 and branch-3.3.0 pom versions the same?

In branch-3.3, the pom version should be 3.3.1.

Please correct me if I am wrong.

-Surendra


On Sat, 25 Apr, 2020, 9:33 am Mingliang Liu,  wrote:

> Brahma,
>
> What about https://issues.apache.org/jira/browse/HADOOP-17007?
>
> Thanks,
>
> On Fri, Apr 24, 2020 at 11:07 AM Brahma Reddy Battula 
> wrote:
>
> > Ok. Done. Branch created.
> >
> > Following blockers are pending, will closely track this.
> >
> > https://issues.apache.org/jira/browse/HDFS-15287 ( Open: Under
> discussion
> > )
> > https://issues.apache.org/jira/browse/YARN-10194 ( Patch Available)
> > https://issues.apache.org/jira/browse/HDFS-15286 ( Patch Available)
> > https://issues.apache.org/jira/browse/YARN-9898 ( Patch Available)
> >
> >
> > On Fri, Apr 24, 2020 at 7:42 PM Wei-Chiu Chuang
> >  wrote:
> >
> > > +1 we should have the branch ASAP.
> > >
> > > On Wed, Apr 22, 2020 at 11:07 PM Akira Ajisaka 
> > > wrote:
> > >
> > > > > Since blockers are not closed, I didn't cut the branch because
> > > > multiple branches might confuse people or somebody might miss a commit.
> > > >
> > > > The current situation is already confusing. The 3.3.1 version already
> > > > exists in JIRA, so some committers wrongly commit non-critical issues
> > to
> > > > branch-3.3 and set the fix version to 3.3.1.
> > > > I think now we should cut branch-3.3.0 and freeze source code except
> > the
> > > > blockers.
> > > >
> > > > -Akira
> > > >
> > > > On Tue, Apr 21, 2020 at 3:05 PM Brahma Reddy Battula <
> > bra...@apache.org>
> > > > wrote:
> > > >
> > > >> Sure, I will do that.
> > > >>
> > > >> Since blockers are not closed, I didn't cut the branch because
> > > >> multiple branches might confuse people or somebody might miss a
> > > >> commit. Shall I wait till this weekend to create it?
> > > >>
> > > >> On Mon, Apr 20, 2020 at 11:57 AM Akira Ajisaka  >
> > > >> wrote:
> > > >>
> > > >>> Hi Brahma,
> > > >>>
> > > >>> Thank you for preparing the release.
> > > >>> Could you cut branch-3.3.0? I would like to backport some fixes for
> > > >>> 3.3.1 and not for 3.3.0.
> > > >>>
> > > >>> Thanks and regards,
> > > >>> Akira
> > > >>>
> > > >>> On Fri, Apr 17, 2020 at 11:11 AM Brahma Reddy Battula <
> > > bra...@apache.org>
> > > >>> wrote:
> > > >>>
> > >  Hi All,
> > > 
> > >  we are down to two blocker issues now (YARN-10194 and YARN-9848),
> > >  which are in Patch Available state. Hopefully we can get the RC out soon.
> > > 
> > >  thanks to @Prabhu Joseph, @masakate, @akira,
> > >  and @Wei-Chiu Chuang and others for helping
> > >  resolve the blockers.
> > > 
> > > 
> > > 
> > >  On Tue, Apr 14, 2020 at 10:49 PM Brahma Reddy Battula <
> > >  bra...@apache.org> wrote:
> > > 
> > > >
> > > > @Prabhu Joseph 
> > > > >>> Have committed the YARN blocker YARN-10219 to trunk and
> > > > cherry-picked to branch-3.3. Right now, there are two blocker
> > Jiras -
> > > > YARN-10233 and HADOOP-16982
> > > > which i will help to review and commit. Thanks.
> > > >
> > > > Looks like you committed YARN-10219. Noted YARN-10233 and HADOOP-16982
> > > > as blockers. (We have shipped many releases without YARN-10233; it's
> > > > not newly introduced.) Thanks
> > > >
> > > > @Vinod Kumar Vavilapalli  ,@adam Antal,
> > > >
> > > > I noted YARN-9848 as a blocker as you mentioned above.
> > > >
> > > > @All,
> > > >
> > > > Currently following four blockers are pending for 3.3.0 RC.
> > > >
> > > > HADOOP-16963, YARN-10233, HADOOP-16982 and YARN-9848.
> > > >
> > > >
> > > >
> > > > On Tue, Apr 14, 2020 at 8:11 PM Vinod Kumar Vavilapalli <
> > > > vino...@apache.org> wrote:
> > > >
> > > >> Looks like a really bad bug to me.
> > > >>
> > > >> +1 for revert and +1 for making that a 3.3.0 blocker. I think we
> > > >> should also revert it in a 3.2 maintenance release too.
> > > >>
> > > >> Thanks
> > > >> +Vinod
> > > >>
> > > >> > On Apr 14, 2020, at 5:03 PM, Adam Antal <
> > adam.an...@cloudera.com
> > > .INVALID>
> > > >> wrote:
> > > >> >
> > > >> > Hi everyone,
> > > >> >
> > > >> > Sorry for coming a bit late with this, but there's also one jira
> > > >> > that can have a potential impact on clusters and we should talk about it.
> > > >> >
> > > >> > Steven Rand found this problem earlier and commented to
> > > >> > https://issues.apache.org/jira/browse/YARN-4946.
> > > >> > The bug has an impact on the RM state store: the RM does not delete
> > > >> > apps - see more details in his comment here:
> > > >> >
> > > >>
> > >
> >
> https://issues.apache.org/jira/browse/YARN-4946?focusedCommentId=16898599&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16898599
> > > >> > .
> > > >> > (