[jira] [Created] (HADOOP-16738) Best Big data hadoop training in pune

2019-11-28 Thread surbhi nahta (Jira)
surbhi nahta created HADOOP-16738:
-

 Summary: Best Big data hadoop training in pune
 Key: HADOOP-16738
 URL: https://issues.apache.org/jira/browse/HADOOP-16738
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: surbhi nahta


h1. What software and skills should every data scientist know (including R, 
MATLAB, and Hadoop)? And what are some resources for learning Hadoop?






[jira] [Resolved] (HADOOP-16737) Best Big data hadoop training in pune

2019-11-28 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HADOOP-16737.
---
Resolution: Invalid

Can you please not spam JIRA? This is not the place to do marketing. Please 
stop raising this; several of us have told you multiple times. 

> Best Big data hadoop training in pune
> -
>
> Key: HADOOP-16737
> URL: https://issues.apache.org/jira/browse/HADOOP-16737
> Project: Hadoop Common
>  Issue Type: New Feature
> Environment: Here at SevenMentor, we've got industry-standard Big 
> Data Hadoop courses in Pune created by IT professionals. The coaching we 
> provide is 100% practical. With the coaching we supply 100+ assignments, 
> POCs, and real-time projects.
>Reporter: surbhi nahta
>Priority: Minor
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> At SevenMentor, we are always striving to achieve value for our candidates. 
> We provide the Best Big Data Hadoop Training in Pune, which includes all 
> recent tools and technologies. Any candidate from an IT background or with a 
> basic understanding of programming can enroll in this program. Freshers or 
> experienced candidates can join this course to understand Hadoop analytics 
> and development practically.
> Big Data is data that cannot be processed by traditional database systems. 
> Big data consists of data in structured (rows and columns), semi-structured 
> (e.g. XML records), and unstructured (e.g. text records, Twitter comments) 
> formats. The Hadoop framework is made up of a storage layer called the 
> Hadoop Distributed File System (HDFS) and a processing component called the 
> MapReduce programming model.
>  






[jira] [Created] (HADOOP-16737) Best Big data hadoop training in pune

2019-11-28 Thread surbhi nahta (Jira)
surbhi nahta created HADOOP-16737:
-

 Summary: Best Big data hadoop training in pune
 Key: HADOOP-16737
 URL: https://issues.apache.org/jira/browse/HADOOP-16737
 Project: Hadoop Common
  Issue Type: New Feature
 Environment: Here at SevenMentor, we've got industry-standard Big Data 
Hadoop courses in Pune created by IT professionals. The coaching we provide is 
100% practical. With the coaching we supply 100+ assignments, POCs, and 
real-time projects.
Reporter: surbhi nahta


At SevenMentor, we are always striving to achieve value for our candidates. We 
provide the Best Big Data Hadoop Training in Pune, which includes all recent 
tools and technologies. Any candidate from an IT background or with a basic 
understanding of programming can enroll in this program. Freshers or 
experienced candidates can join this course to understand Hadoop analytics and 
development practically.

Big Data is data that cannot be processed by traditional database systems. 
Big data consists of data in structured (rows and columns), semi-structured 
(e.g. XML records), and unstructured (e.g. text records, Twitter comments) 
formats. The Hadoop framework is made up of a storage layer called the Hadoop 
Distributed File System (HDFS) and a processing component called the MapReduce 
programming model.

 






[jira] [Resolved] (HADOOP-16736) Best Big data hadoop training in pune

2019-11-28 Thread Larry McCay (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay resolved HADOOP-16736.
--
Resolution: Invalid

This is spam - resolving as Invalid.

> Best Big data hadoop training in pune
> -
>
> Key: HADOOP-16736
> URL: https://issues.apache.org/jira/browse/HADOOP-16736
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: surbhi nahta
>Priority: Major
>
> At SevenMentor, we are always striving to achieve value for our candidates. 
> We provide the *Best Big Data Hadoop Training in Pune*, which includes all 
> recent technologies and tools. Any candidate from an IT background or with 
> basic knowledge of programming can enroll in this course. Freshers or 
> experienced candidates can join this course to understand Hadoop analytics 
> and development practically. Big Data is data that cannot be processed by 
> traditional database systems. Big data consists of data in structured (rows 
> and columns), semi-structured (e.g. XML records), and unstructured (e.g. 
> text records, Twitter comments) formats. Hadoop is a software framework for 
> writing and running distributed applications that process large amounts of 
> data. The Hadoop framework consists of a storage layer known as the Hadoop 
> Distributed File System (HDFS) and a processing part known as the MapReduce 
> programming model. 






[jira] [Created] (HADOOP-16736) Best Big data hadoop training in pune

2019-11-28 Thread surbhi nahta (Jira)
surbhi nahta created HADOOP-16736:
-

 Summary: Best Big data hadoop training in pune
 Key: HADOOP-16736
 URL: https://issues.apache.org/jira/browse/HADOOP-16736
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: surbhi nahta


At SevenMentor, we are always striving to achieve value for our candidates. We 
provide the *Best Big Data Hadoop Training in Pune*, which includes all recent 
technologies and tools. Any candidate from an IT background or with basic 
knowledge of programming can enroll in this course. Freshers or experienced 
candidates can join this course to understand Hadoop analytics and development 
practically. Big Data is data that cannot be processed by traditional database 
systems. Big data consists of data in structured (rows and columns), 
semi-structured (e.g. XML records), and unstructured (e.g. text records, 
Twitter comments) formats. Hadoop is a software framework for writing and 
running distributed applications that process large amounts of data. The 
Hadoop framework consists of a storage layer known as the Hadoop Distributed 
File System (HDFS) and a processing part known as the MapReduce programming 
model. 
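
Since the description invokes the MapReduce programming model, a minimal 
sketch of what that model looks like in practice may help readers: the 
classic word-count mapper, which emits (word, 1) pairs for the framework to 
group by key and a reducer to sum. This is generic boilerplate, not code from 
any project mentioned in this thread.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
      private static final IntWritable ONE = new IntWritable(1);
      private final Text word = new Text();

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        // One input line arrives per call; emit (token, 1) for each token.
        StringTokenizer it = new StringTokenizer(value.toString());
        while (it.hasMoreTokens()) {
          word.set(it.nextToken());
          context.write(word, ONE);
        }
      }
    }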






[jira] [Created] (HADOOP-16735) Make it clearer in config default that EnvironmentVariableCredentialsProvider supports AWS_SESSION_TOKEN

2019-11-28 Thread Mingliang Liu (Jira)
Mingliang Liu created HADOOP-16735:
--

 Summary: Make it clearer in config default that 
EnvironmentVariableCredentialsProvider supports AWS_SESSION_TOKEN
 Key: HADOOP-16735
 URL: https://issues.apache.org/jira/browse/HADOOP-16735
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, fs/s3
Reporter: Mingliang Liu
Assignee: Mingliang Liu


In the great doc {{hadoop-aws/tools/hadoop-aws/index.html}}, users can find 
that authenticating via the AWS environment variables supports session tokens. 
However, the config description in core-default.xml does not make this clear.
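
To show the behaviour being documented, here is a minimal sketch, assuming 
the standard AWS SDK v1 environment variables and a placeholder bucket name: 
with EnvironmentVariableCredentialsProvider in the provider chain, exporting 
AWS_SESSION_TOKEN alongside the access key pair is enough to authenticate 
with session credentials.

    // Expects the standard AWS SDK environment variables to be exported:
    //   AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
    // The bucket name below is a placeholder.
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class EnvSessionCredentialsDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Pin the chain to the environment-variable provider so the session
        // token is unambiguously the credential source under test.
        conf.set("fs.s3a.aws.credentials.provider",
            "com.amazonaws.auth.EnvironmentVariableCredentialsProvider");
        FileSystem fs =
            FileSystem.get(URI.create("s3a://example-bucket/"), conf);
        for (FileStatus st : fs.listStatus(new Path("/"))) {
          System.out.println(st.getPath());
        }
      }
    }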






[jira] [Created] (HADOOP-16734) Backport HADOOP-16455- "ABFS: Implement FileSystem.access() method" to branch-2

2019-11-28 Thread Bilahari T H (Jira)
Bilahari T H created HADOOP-16734:
-

 Summary: Backport HADOOP-16455- "ABFS: Implement 
FileSystem.access() method" to branch-2
 Key: HADOOP-16734
 URL: https://issues.apache.org/jira/browse/HADOOP-16734
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/azure
Affects Versions: 2.0
Reporter: Bilahari T H
Assignee: Bilahari T H
 Fix For: 2.11.0


Backport https://issues.apache.org/jira/browse/HADOOP-16455
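
For context, a hedged sketch of what the backported method gives callers on 
branch-2 (the path and permission below are placeholders): FileSystem.access() 
lets a client ask whether the current user holds a permission without 
attempting the operation itself.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.security.AccessControlException;

    public class AccessCheckDemo {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/user/example/data"); // placeholder path
        try {
          // Returns normally when access is allowed; throws
          // AccessControlException (or FileNotFoundException) otherwise.
          fs.access(p, FsAction.READ);
          System.out.println("read access granted");
        } catch (AccessControlException e) {
          System.out.println("read access denied: " + e.getMessage());
        }
      }
    }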






Re: Some updates for Hadoop on ARM and next steps

2019-11-28 Thread Vinayakumar B
Also note that nothing gets deployed back to the Maven central repo from this
job, so there is no interference with other jobs and nodes.

-Vinay


Re: Some updates for Hadoop on ARM and next steps

2019-11-28 Thread Vinayakumar B
Hi all,

As a starter..
Created a simple mvn based job (not yetus and docker as current qbt on
trunk on x86) in

https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/7/console

Right now using the manual tools installed and the previous workarounds
mentioned related to various third-party dependencies related ARM
architecture. This will be triggered daily once automatically.

Going forward we can make sure that yetus with docker works fine in this
node as well and configure similar to x86 qbt run.

-Vinay

On Thu, 28 Nov 2019, 7:30 am Zhenyu Zheng, 
wrote:

> Thanks for the reply, Chris, and I really appreciate all the things you
> have done to make our node work. I'm sending this mail to announce that the
> node is ready, and I hope someone from the Hadoop project could help us set
> up some new jobs/builds. I totally understand your role and opinion; I'm
> not asking you to add jobs for Hadoop, I'm just trying to make clear what
> we are looking for.
>
> As Chris mentioned in previous email interactions, there are 3 kinds of CI
> nodes available in the CI system. The 1st and 2nd types have to use the
> current infra management tools to install the tools and software required
> for the system, and those management tools are currently not ready for the
> ARM platform. The 3rd kind of CI node is what we have ready now: we manually
> install all the required tools and software and maintain them in line with
> infra's other nodes. We will try to make the infra management tools usable
> on the ARM platform so the nodes can become type 2 or type 1.
>
> As for jobs/builds, a periodic job like
> https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-trunk-Commit/
> seems to be the most suitable for what we are looking for at the current
> step. We still have some errors and failures (15 errors in Hadoop-YARN, 4
> failures and 2 errors in Hadoop-HDFS, 23 failures in Hadoop-MapReduce,
> which is quite a small number compared to the total number of tests, and
> failures/errors in the same sub-project seem to be caused by the same
> problem) that our team will work on, so we want to propose 4 different jobs
> similar to the mechanism used in Hadoop-trunk-Commit: an SCM-triggered
> periodic job testing the build and UTs for each sub-project:
> Hadoop-YARN-trunk-Commit-Aarch64, Hadoop-HDFS-trunk-Commit-Aarch64,
> Hadoop-MapReduce-trunk-Commit-Aarch64 and Hadoop-Common-trunk-Commit-Aarch64,
> so each sub-project can be tracked separately. We can also start one by one,
> of course.
>
> Hope this clears up any misunderstanding.
>
> BR,
>
> On Wed, Nov 27, 2019 at 10:28 PM Chris Thistlethwaite 
> wrote:
>
> > If anyone would like to follow along in JIRA, here's the ticket
> > https://issues.apache.org/jira/browse/INFRA-19369. I've been updating
> > that ticket with any issues. arm-poc has been moved to a node in
> Singapore
> > and will need to be tested again with builds.
> >
> > I'm going to mention again that someone from Hadoop should be changing
> > these builds in order to run against arm-poc. In my reply below, I
> thought
> > that the project knew about the ARM nodes and was involved with setting
> up
> > new builds, which is why I said I'd be willing to make simple changes for
> > testing. However I don't want to change things without the knowledge of
> the
> > project. The builds themselves are created by the project, not Infra,
> which
> > means I have no idea which build should run against ARM vs any other CPU.
> >
> > -Chris T.
> > #asfinfra
> >
> > On 11/22/19 9:28 AM, Chris Thistlethwaite wrote:
> >
> > In order to run builds against arm-poc, someone (me included) will need
> > to change a build config to only use that label. The node itself isn't
> > fully built out like our other ASF nodes, because it's ARM and we don't
> > have all the packaged tools built for that architecture; it will likely
> > take some time to fix issues.
> >
> >
> > -Chris T.
> > #asfinfra
> >
> > On 11/22/19 3:46 AM, bo zhaobo wrote:
> >
> > Thanks. It would be great if a project could use the ARM test worker to
> > do specific testing on ARM.
> >
> > Also, I think it's better to let @Chris Thistlethwaite <
> chr...@apache.org> know about this email. Could you please give some
> > advice? Thank you.
> >
> > BR
> >
> > ZhaoBo
> >
> >
> >
> >
> > Zhenyu Zheng wrote on Fri, 22 Nov 2019 at 4:32 PM:
> >
> >> Hi Hadoop,
> >>
> >>
> >>
> >> First off, I want to say thanks to Wei-Chiu for having me on last week's
> >> Hadoop community sync to introduce our ideas on ARM support for Hadoop,
> >> and also to all the attendees for listening and providing suggestions.
> >>
> >>
> >>
> >> I want to provide some update on the status

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-11-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1334/

[Nov 27, 2019 2:57:20 AM] (yqlin) HDFS-14649. Add suspect probe for 
DeadNodeDetector. Contributed by
[Nov 27, 2019 8:57:27 AM] (prabhujoseph) MAPREDUCE-7240. Fix Invalid event: 
TA_TOO_MANY_FETCH_FAILURE at
[Nov 27, 2019 3:56:38 PM] (stevel) HADOOP-16455. ABFS: Implement 
FileSystem.access() method.
[Nov 27, 2019 11:10:21 PM] (dazhou) HADOOP-16660. ABFS: Make RetryCount in 
ExponentialRetryPolicy




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
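
   The two WorkerId findings describe the same equals() contract violation:
   assuming the argument's type and skipping the null check. A minimal sketch
   of the shape FindBugs expects (the id field is hypothetical, not the
   actual mawo class layout):

    public final class WorkerId {
      private final String id; // hypothetical field for illustration

      public WorkerId(String id) { this.id = id; }

      @Override
      public boolean equals(Object other) {
        if (this == other) { return true; }
        // instanceof is false for null, which covers the missing null check.
        if (!(other instanceof WorkerId)) { return false; }
        return id.equals(((WorkerId) other).id);
      }

      @Override
      public int hashCode() { return id.hashCode(); }
    }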

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 
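
   The "reliance on default encoding" findings flag new String(byte[]), whose
   result depends on the JVM's platform charset. A small sketch of the
   platform-independent form (the method and data are illustrative only):

    import java.nio.charset.StandardCharsets;

    public class ExplicitCharsetDemo {
      static String decode(byte[] raw) {
        // Pinning the charset removes the platform dependence FindBugs flags.
        return new String(raw, StandardCharsets.UTF_8);
      }

      public static void main(String[] args) {
        System.out.println(decode("hello".getBytes(StandardCharsets.UTF_8)));
      }
    }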

Failed junit tests :

   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.hdfs.tools.TestDFSZKFailoverController 
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter 
   hadoop.yarn.server.webproxy.TestWebAppProxyServlet 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps 
   hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.ser

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-11-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
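
   The boxed/reboxed warning refers to a round trip in which a wrapper value
   is unboxed for a computation and the result is immediately boxed into a
   new wrapper. An illustrative reduction of the pattern (not the
   ColumnRWHelper code itself):

    public class ReboxDemo {
      public static void main(String[] args) {
        Double boxed = 42.0;
        // Flagged shape: unbox for the cast, then autobox into a new Long.
        Long flagged = (long) (double) boxed;
        // Keeping the intermediate primitive avoids the extra allocation.
        long primitive = (long) (double) boxed;
        System.out.println(flagged + " " + primitive);
      }
    }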

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [164K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [320K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/519/artifact/out/patch-unit-hadoop-yarn-project_hadoop

[jira] [Resolved] (HADOOP-16733) Best Big data hadoop training in pune

2019-11-28 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HADOOP-16733.
---
Resolution: Invalid

Closing as Invalid, JIRA isn’t the place for marketing!!!

> Best Big data hadoop training in pune
> -
>
> Key: HADOOP-16733
> URL: https://issues.apache.org/jira/browse/HADOOP-16733
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: surbhi nahta
>Priority: Minor
>  Labels: data, hadoop, pune
>
> At SevenMentor, we are always striving to achieve value for our candidates. 
> We provide the Best Big Data Hadoop Training in Pune, which includes all 
> recent technologies and tools. Any candidate from an IT background or with 
> basic knowledge of programming can enroll in this course. Freshers or 
> experienced candidates can join this course to understand Hadoop analytics 
> and development practically.






[jira] [Created] (HADOOP-16733) Best Big data hadoop training in pune

2019-11-28 Thread surbhi nahta (Jira)
surbhi nahta created HADOOP-16733:
-

 Summary: Best Big data hadoop training in pune
 Key: HADOOP-16733
 URL: https://issues.apache.org/jira/browse/HADOOP-16733
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: surbhi nahta


At SevenMentor, we are always striving to achieve value for our candidates. We 
provide the [Best Big Data Hadoop Training in 
Pune|https://www.sevenmentor.com/training/big-data-hadoop-training-institute-in-pune.php],
 which includes all recent technologies and tools. Any candidate from an IT 
background or with basic knowledge of programming can enroll in this course. 
Freshers or experienced candidates can join this course to understand Hadoop 
analytics and development practically.






[jira] [Created] (HADOOP-16732) S3Guard to support encrypted DynamoDB table

2019-11-28 Thread Mingliang Liu (Jira)
Mingliang Liu created HADOOP-16732:
--

 Summary: S3Guard to support encrypted DynamoDB table
 Key: HADOOP-16732
 URL: https://issues.apache.org/jira/browse/HADOOP-16732
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Mingliang Liu


S3Guard does not yet support [encrypted DynamoDB 
tables|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/encryption.tutorial.html].
 We can provide an option to enable an encrypted DynamoDB table so that data 
at rest is encrypted. S3Guard data in DynamoDB is usually not sensitive, since 
it mirrors the S3 namespace, but sometimes even this is a concern. By default 
the option would not be enabled.
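
A sketch of how the option might surface to users, assuming a boolean switch 
on the S3Guard DynamoDB table configuration; the sse property name below 
follows the existing fs.s3a.s3guard.ddb.* convention but is illustrative of 
the proposal, not a committed API.

    import org.apache.hadoop.conf.Configuration;

    public class S3GuardSseSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Existing S3Guard setting: use the DynamoDB metadata store.
        conf.set("fs.s3a.metadatastore.impl",
            "org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore");
        // Proposed (illustrative) switch: create the table with server-side
        // encryption enabled; off by default, as the description suggests.
        conf.setBoolean("fs.s3a.s3guard.ddb.table.sse.enabled", true);
        System.out.println(conf.get("fs.s3a.s3guard.ddb.table.sse.enabled"));
      }
    }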


