[jira] [Resolved] (HADOOP-17905) Modify Text.ensureCapacity() to efficiently max out the backing array size

2021-09-29 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HADOOP-17905.
-
Fix Version/s: 3.3.2
   Resolution: Fixed

> Modify Text.ensureCapacity() to efficiently max out the backing array size
> --
>
> Key: HADOOP-17905
> URL: https://issues.apache.org/jira/browse/HADOOP-17905
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.2
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> This is a continuation of HADOOP-17901.
> Right now we use a factor of 1.5x to grow the byte array when it becomes 
> full. However, once the size passes a certain point, the increment is only 
> (current size + length). This can cause performance issues if the textual 
> data we intend to store is larger than that point.
> Instead, let's grow the array directly to the maximum. Based on different 
> sources, a safe choice seems to be Integer.MAX_VALUE - 8 (see ArrayList, 
> AbstractCollection, HashTable, etc.).
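The growth policy described above can be sketched as follows. This is a hedged illustration, not the actual Hadoop `Text` implementation: `CapacityDemo`, `newCapacity`, and `MAX_ARRAY_SIZE` are hypothetical names, with `MAX_ARRAY_SIZE` mirroring the safe limit used by `ArrayList` and similar JDK collections.

```java
// Sketch of the capacity-growth policy discussed in HADOOP-17905.
// Not the real org.apache.hadoop.io.Text code; names are illustrative.
public class CapacityDemo {
    // Safe maximum array size used by several JDK collections.
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    // Return a capacity of at least `required`, growing by ~1.5x,
    // and jumping straight to MAX_ARRAY_SIZE once 1.5x would exceed it
    // (instead of creeping up in small increments).
    static int newCapacity(int current, int required) {
        if (required < 0 || required > MAX_ARRAY_SIZE) {
            throw new OutOfMemoryError("Required array size too large");
        }
        // Compute 1.5x in long arithmetic to avoid int overflow.
        long grown = (long) current + (current >> 1);
        if (grown > MAX_ARRAY_SIZE) {
            return MAX_ARRAY_SIZE; // max out the backing array
        }
        return (int) Math.max(grown, required);
    }

    public static void main(String[] args) {
        System.out.println(newCapacity(16, 20));                       // 24 (1.5x growth)
        System.out.println(newCapacity(2_000_000_000, 2_000_000_100)); // 2147483639 (maxed out)
    }
}
```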



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Hadoop 3.1.x EOL

2021-06-03 Thread Bharat Viswanadham
+1

Thanks,
Bharat


On Thu, Jun 3, 2021 at 1:00 PM Viraj Jasani  wrote:

> +1 (non-binding)
>
> On Thu, 3 Jun 2021 at 12:21 PM, Wei-Chiu Chuang 
> wrote:
>
> > +1
> >
> > On Thu, Jun 3, 2021 at 2:14 PM Akira Ajisaka 
> wrote:
> >
> > > Dear Hadoop developers,
> > >
> > > Given the feedback from the discussion thread [1], I'd like to start
> > > an official vote
> > > thread for the community to vote and start the 3.1 EOL process.
> > >
> > > What this entails:
> > >
> > > (1) an official announcement that no further regular Hadoop 3.1.x
> > releases
> > > will be made after 3.1.4.
> > > (2) resolve JIRAs that specifically target 3.1.5 as won't fix.
> > >
> > > This vote will run for 7 days and conclude by June 10th, 16:00 JST [2].
> > >
> > > Committers are eligible to cast binding votes. Non-committers are
> > welcomed
> > > to cast non-binding votes.
> > >
> > > Here is my vote, +1
> > >
> > > [1] https://s.apache.org/w9ilb
> > > [2]
> > >
> >
> https://www.timeanddate.com/worldclock/fixedtime.html?msg=4=20210610T16=248
> > >
> > > Regards,
> > > Akira
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> > >
> >
>


Re: [VOTE] Moving Ozone to a separated Apache project

2020-09-29 Thread Bharat Viswanadham
+1
Thank You @Elek, Marton  for driving this.


Thanks,
Bharat


On Mon, Sep 28, 2020 at 10:54 AM Vivek Ratnavel 
wrote:

> +1 for moving Ozone to a separate Top-Level Apache Project.
>
> Thanks,
> Vivek Subramanian
>
> On Mon, Sep 28, 2020 at 8:30 AM Hanisha Koneru
> 
> wrote:
>
> > +1
> >
> > Thanks,
> > Hanisha
> >
> > > On Sep 27, 2020, at 11:48 PM, Akira Ajisaka 
> wrote:
> > >
> > > +1
> > >
> > > Thanks,
> > > Akira
> > >
> > > On Fri, Sep 25, 2020 at 3:00 PM Elek, Marton  wrote:
> > >>
> > >> Hi all,
> > >>
> > >> Thank you for all the feedback and requests,
> > >>
> > >> As we discussed in the previous thread(s) [1], Ozone is proposed to
> > >> become a separate Apache Top Level Project (TLP).
> > >>
> > >> The proposal with all the details, motivation and history is here:
> > >>
> > >>
> >
> https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Hadoop+subproject+to+Apache+TLP+proposal
> > >>
> > >> This vote runs for 7 days and will conclude on the 2nd of October,
> > >> 6 AM GMT.
> > >>
> > >> Thanks,
> > >> Marton Elek
> > >>
> > >> [1]:
> > >>
> >
> https://lists.apache.org/thread.html/rc6c79463330b3e993e24a564c6817aca1d290f186a1206c43ff0436a%40%3Chdfs-dev.hadoop.apache.org%3E
> > >>
> > >> -
> > >> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
>  > yarn-dev-unsubscr...@hadoop.apache.org>
> > >> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> > 
> > >>
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > 
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > 
> >
>


[jira] [Created] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HADOOP-17245:
---

 Summary: Add RootedOzFS AbstractFileSystem to core-default.xml
 Key: HADOOP-17245
 URL: https://issues.apache.org/jira/browse/HADOOP-17245
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


When "ofs" is the default file system and a MapReduce job is run, YarnClient 
fails with the exception below.
Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
scheme: ofs
at 
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
at 
org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
at java.security.AccessController.doPrivileged(Native Method)






Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Bharat Viswanadham
+1 (binding)

*  Built from the source tarball.
*  Verified the checksums and signatures.
*  Verified basic Ozone file system (o3fs) and S3 operations via the AWS S3
CLI on the non-secure OM HA cluster.
*  Verified ozone shell commands via the CLI on the non-secure OM HA cluster.
*  Verified basic Ozone file system and S3 operations via the AWS S3 CLI on
the secure OM HA cluster.
*  Verified ozone shell commands via the CLI on the secure OM HA cluster.

Thanks, Sammi for driving the release.

Regards,
Bharat


On Mon, Aug 31, 2020 at 10:23 AM Xiaoyu Yao 
wrote:

> +1 (binding)
>
> * Verify the checksums and signatures.
> * Verify basic Ozone file system and S3 operations via CLI in secure docker
> compose environment
> * Run MR examples and teragen/terasort with ozone secure enabled.
> * Verify EN/CN document rendering with hugo serve
>
> Thanks Sammi for driving the release.
>
> Regards,
> Xiaoyu
>
> On Mon, Aug 31, 2020 at 8:55 AM Shashikant Banerjee
>  wrote:
>
> > +1(binding)
> >
> > 1.Verified checksums
> > 2.Verified signatures
> > 3.Verified the output of `ozone version`
> > 4.Tried creating volume and bucket, write and read key, by Ozone shell
> > 5.Verified basic Ozone Filesystem operations
> >
> > Thank you very much Sammi for putting up the release together.
> >
> > Thanks
> > Shashi
> >
> > On Mon, Aug 31, 2020 at 4:35 PM Elek, Marton  wrote:
> >
> > > +1 (binding)
> > >
> > >
> > > 1. verified signatures
> > >
> > > 2. verified checksums
> > >
> > > 3. verified the output of `ozone version` (includes the good git
> > revision)
> > >
> > > 4. verified that the source package matches the git tag
> > >
> > > 5. verified source can be used to build Ozone without previous state
> > > (docker run -v ... -it maven ... --> built from the source with zero
> > > local maven cache during 16 minutes --> did on a sever at this time)
> > >
> > > 6. Verified Ozone can be used from binary package (cd compose/ozone &&
> > > test.sh --> all tests were passed)
> > >
> > > 7. Verified documentation is included in SCM UI
> > >
> > > 8. Deployed to Kubernetes and executed Teragen on Yarn [1]
> > >
> > > 9. Deployed to Kubernetes and executed Spark (3.0) Word count (local
> > > executor) [2]
> > >
> > > 10. Deployed to Kubernetes and executed Flink Word count [3]
> > >
> > > 11. Deployed to Kubernetes and executed Nifi
> > >
> > > Thanks very much, Sammi, for driving this release...
> > > Marton
> > >
> > > ps:  NiFi setup requires some more testing. Counters were not updated
> > > on the UI, and in some cases I saw DirNotFound exceptions when I used
> > > master. But during the last test with -rc1 it worked well.
> > >
> > > [1]: https://github.com/elek/ozone-perf-env/tree/master/teragen-ozone
> > >
> > > [2]: https://github.com/elek/ozone-perf-env/tree/master/spark-ozone
> > >
> > > [3]: https://github.com/elek/ozone-perf-env/tree/master/flink-ozone
> > >
> > >
> > > On 8/25/20 4:01 PM, Sammi Chen wrote:
> > > > RC1 artifacts are at:
> > > > https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> > > > 
> > > >
> > > > Maven artifacts are staged at:
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1278
> > > > <
> > https://repository.apache.org/content/repositories/orgapachehadoop-1277
> > > >
> > > >
> > > > The public key used for signing the artifacts can be found at:
> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > >
> > > > The RC1 tag in github is at:
> > > > https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> > > >  >
> > > >
> > > > Change log of RC1, add
> > > > 1. HDDS-4063. Fix InstallSnapshot in OM HA
> > > > 2. HDDS-4139. Update version number in upgrade tests.
> > > > 3. HDDS-4144, Update version info in hadoop client dependency readme
> > > >
> > > > *The vote will run for 7 days, ending on Aug 31st 2020 at 11:59 pm
> > PST.*
> > > >
> > > > Thanks,
> > > > Sammi Chen
> > > >
> > >
> > > -
> > > To unsubscribe, e-mail: ozone-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: ozone-dev-h...@hadoop.apache.org
> > >
> > >
> >
>


Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2

2020-03-21 Thread Bharat Viswanadham
I am seeing this issue when running hdfs commands on hadoop 27
docker-compose. I see the same test failing when running the smoke test.


$ docker exec -it c7fe17804044 bash

bash-4.4$ hdfs dfs -put /opt/hadoop/NOTICE.txt o3fs://bucket1.vol1/kk

2020-03-22 04:40:14 WARN  NativeCodeLoader:60 - Unable to load
native-hadoop library for your platform... using builtin-java classes where
applicable

2020-03-22 04:40:15 INFO  MetricsConfig:118 - Loaded properties from
hadoop-metrics2.properties

2020-03-22 04:40:16 INFO  MetricsSystemImpl:374 - Scheduled Metric snapshot
period at 10 second(s).

2020-03-22 04:40:16 INFO  MetricsSystemImpl:191 - XceiverClientMetrics
metrics system started

-put: Fatal internal error

java.lang.NullPointerException: client is null

at java.util.Objects.requireNonNull(Objects.java:228)

at
org.apache.hadoop.hdds.scm.XceiverClientRatis.getClient(XceiverClientRatis.java:201)

at
org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:227)

at
org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:305)

at
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:315)

at
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:599)

at
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:452)

at
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlush(BlockOutputStream.java:463)

at
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:486)

at
org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:144)

at
org.apache.hadoop.ozone.client.io.KeyOutputStream.handleStreamAction(KeyOutputStream.java:481)

at
org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:455)

at
org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:508)

at
org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:56)

at
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)

at
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)

at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:62)

at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:120)

at
org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)

at
org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)

at
org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)

at
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)

at
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)

at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)

at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)

at
org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)

at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)

at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)

at
org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)

at
org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)

at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)

at org.apache.hadoop.fs.shell.Command.run(Command.java:165)

at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)

at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)

at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)

at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)


The same command works fine when using ozone fs.

 docker exec -it fe5d39cf6eed bash

bash-4.2$ ozone fs -put /opt/hadoop/NOTICE.txt o3fs://bucket1.vol1/kk

2020-03-22 04:41:10,999 [main] INFO impl.MetricsConfig: Loaded properties
from hadoop-metrics2.properties

2020-03-22 04:41:11,123 [main] INFO impl.MetricsSystemImpl: Scheduled
Metric snapshot period at 10 second(s).

2020-03-22 04:41:11,127 [main] INFO impl.MetricsSystemImpl:
XceiverClientMetrics metrics system started

bash-4.2$ ozone fs -ls o3fs://bucket1.vol1/

Found 1 items

-rw-rw-rw-   3 hadoop hadoop  17540 2020-03-22 04:41
o3fs://bucket1.vol1/kk


- Built from the source tarball.
- Verified md5 and sha256 checksums.
- Ran smoke tests; found the one issue above.
- Deployed to a 5-node docker cluster using the ozone compose definition (OM +
SCM + 3 Datanodes), and ran basic ozone shell and fs commands.

Thank You, Dinesh for driving the release.


Thanks,
Bharat




On Sat, Mar 21, 2020 at 8:48 PM Arpit Agarwal 
wrote:

> +1 binding.
>
> - Verified hashes and signatures
> - Built from source
> - Deployed to 5 node cluster
> - 

Re: [VOTE] EOL Hadoop branch-2.8

2020-03-03 Thread Bharat Viswanadham
+1

Thanks,
Bharat

On Tue, Mar 3, 2020 at 7:46 PM Zhankun Tang  wrote:

> Thanks, Wei-Chiu. +1.
>
> BR,
> Zhankun
>
> On Wed, 4 Mar 2020 at 08:03, Wilfred Spiegelenburg
>  wrote:
>
> > +1
> >
> > Wilfred
> >
> > > On 3 Mar 2020, at 05:48, Wei-Chiu Chuang  wrote:
> > >
> > > I am sorry I forgot to start a VOTE thread.
> > >
> > > This is the "official" vote thread to mark branch-2.8 End of Life. This
> > > is based on the following thread and the tracking jira (HADOOP-16880).
> > >
> > > This vote will run for 7 days and conclude on March 9th (Mon) 11am PST.
> > >
> > > Please feel free to share your thoughts.
> > >
> > > Thanks,
> > > Weichiu
> > >
> > > On Mon, Feb 24, 2020 at 10:28 AM Wei-Chiu Chuang  >
> > > wrote:
> > >
> > >> Looking at the EOL policy wiki:
> > >>
> >
> https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches
> > >>
> > >> The Hadoop community can still elect to make security updates for
> > >> EOL'ed releases.
> > >>
> > >> I think the EOL is to give downstream applications (such as HBase)
> > >> more clarity on which Hadoop release lines are still active.
> > >> Additionally, I don't think it is sustainable to maintain 6 concurrent
> > >> release lines in this big project, which is why I wanted to start this
> > >> discussion.
> > >>
> > >> Thoughts?
> > >>
> > >> On Mon, Feb 24, 2020 at 10:22 AM Sunil Govindan 
> > wrote:
> > >>
> > >>> Hi Wei-Chiu
> > >>>
> > >>> Extremely sorry for the late reply here.
> > >>> Could you please help add more clarity on what will happen to
> > >>> branch-2.8 when we call it EOL.
> > >>> Does this mean that no more releases will come out of this branch, or
> > >>> are there some additional guidelines?
> > >>>
> > >>> - Sunil
> > >>>
> > >>>
> > >>> On Mon, Feb 24, 2020 at 11:47 PM Wei-Chiu Chuang
> > >>>  wrote:
> > >>>
> >  This thread has been running for 7 days and no -1.
> > 
> >  Don't think we've established a formal EOL process, but to publicize
> > the
> >  EOL, I am going to file a jira, update the wiki and post the
> > >>> announcement
> >  to general@ and user@
> > 
> >  On Wed, Feb 19, 2020 at 1:40 PM Dinesh Chitlangia <
> > >>> dineshc@gmail.com>
> >  wrote:
> > 
> > > Thanks Wei-Chiu for initiating this.
> > >
> > > +1 for 2.8 EOL.
> > >
> > > On Tue, Feb 18, 2020 at 10:48 PM Akira Ajisaka <
> aajis...@apache.org>
> > > wrote:
> > >
> > >> Thanks Wei-Chiu for starting the discussion,
> > >>
> > >> +1 for the EoL.
> > >>
> > >> -Akira
> > >>
> > >> On Tue, Feb 18, 2020 at 4:59 PM Ayush Saxena 
> >  wrote:
> > >>
> > >>> Thanx Wei-Chiu for initiating this
> > >>> +1 for marking 2.8 EOL
> > >>>
> > >>> -Ayush
> > >>>
> >  On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang <
> > >>> weic...@apache.org>
> > >> wrote:
> > 
> >  The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th
> >  2018.
> > 
> >  It's been 17 months since the release and the community by and
> >  large
> > >> have
> >  moved up to 2.9/2.10/3.x.
> > 
> >  With Hadoop 3.3.0 over the horizon, is it time to start the EOL
> > >>> discussion
> >  and reduce the number of active branches?
> > >>>
> > >>>
> > >>> -
> > >>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > >>> For additional commands, e-mail:
> > >>> common-dev-h...@hadoop.apache.org
> > >>>
> > >>>
> > >>
> > >
> > 
> > >>>
> > >>
> >
> > Wilfred Spiegelenburg
> > Staff Software Engineer
> >  
> >
>


Re: [ANNOUNCE] New Apache Hadoop Committer - Stephen O'Donnell

2020-03-03 Thread Bharat Viswanadham
Congratulations Stephen!

Thanks,
Bharat


On Tue, Mar 3, 2020 at 12:12 PM Wei-Chiu Chuang  wrote:

> In bcc: general@
>
> It's my pleasure to announce that Stephen O'Donnell has been elected as a
> committer on the Apache Hadoop project, recognizing his continued
> contributions to the project.
>
> Please join me in congratulating him.
>
> Hearty Congratulations & Welcome aboard Stephen!
>
> Wei-Chiu Chuang
> (On behalf of the Hadoop PMC)
>


Re: [DISCUSS] Ozone 0.4.2 release

2019-12-07 Thread Bharat Viswanadham
+1

Thanks,
Bharat


On Sat, Dec 7, 2019 at 1:18 PM Giovanni Matteo Fumarola <
giovanni.fumar...@gmail.com> wrote:

> +1
>
> Thanks for starting this.
>
> On Sat, Dec 7, 2019 at 1:13 PM Jitendra Pandey
>  wrote:
>
> > +1
> >
> >
> > > On Dec 7, 2019, at 9:13 AM, Arpit Agarwal
> 
> > wrote:
> > >
> > > +1
> > >
> > >
> > >
> > >> On Dec 6, 2019, at 5:25 PM, Dinesh Chitlangia 
> > wrote:
> > >>
> > >> All,
> > >> Since the Apache Hadoop Ozone 0.4.1 release, we have had significant
> > >> bug fixes for performance & stability.
> > >>
> > >> With that in mind, a 0.4.2 release would be good to consolidate all
> > >> those fixes.
> > >>
> > >> Please share your thoughts.
> > >>
> > >>
> > >> Thanks,
> > >> Dinesh Chitlangia
> > >
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
> >
>


Re: [DISCUSS] Remove Ozone and Submarine from Hadoop repo

2019-10-24 Thread Bharat Viswanadham
+1

Thanks,
Bharat

On Thu, Oct 24, 2019 at 10:35 PM Jitendra Pandey
 wrote:

> +1
>
> On Thu, Oct 24, 2019 at 6:42 PM Ayush Saxena  wrote:
>
> > Thanx Akira for putting this up.
> > +1, makes sense to remove them.
> >
> > -Ayush
> >
> > > On 25-Oct-2019, at 6:55 AM, Dinesh Chitlangia <
> dchitlan...@cloudera.com.invalid>
> > wrote:
> > >
> > > +1 and Anu's approach of creating a tag makes sense.
> > >
> > > Dinesh
> > >
> > >
> > >
> > >
> > >> On Thu, Oct 24, 2019 at 9:24 PM Sunil Govindan 
> > wrote:
> > >>
> > >> +1 on this to remove staleness.
> > >>
> > >> - Sunil
> > >>
> > >> On Thu, Oct 24, 2019 at 12:51 PM Akira Ajisaka 
> > >> wrote:
> > >>
> > >>> Hi folks,
> > >>>
> > >>> Both Ozone and Apache Submarine have separate repositories.
> > >>> Can we remove these modules from hadoop-trunk?
> > >>>
> > >>> Regards,
> > >>> Akira
> > >>>
> > >>
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
> >
>


Re: [DISCUSS] Enable github security notifications to all Hadoop committers

2019-10-24 Thread Bharat Viswanadham
+1 (binding). I am interested in receiving these notifications.


Thanks,
Bharat



On Wed, Oct 23, 2019 at 11:38 PM Wei-Chiu Chuang  wrote:

> Hi,
> I raised INFRA-19327 to enable github security notifications. How do people
> feel about enabling this notification for all committers? I already receive
> hundreds of incoming emails from various Hadoop aliases each day, so I don't
> mind adding a few more.
>
> If there is no good consensus, then we'll have to let committers opt in
> individually.
>
> Thanks,
> Weichiu
>


[jira] [Resolved] (HADOOP-16373) Fix typo in FileSystemShell#test documentation

2019-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HADOOP-16373.
-
   Resolution: Fixed
Fix Version/s: 0.5.0

> Fix typo in FileSystemShell#test documentation
> --
>
> Key: HADOOP-16373
> URL: https://issues.apache.org/jira/browse/HADOOP-16373
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1, 3.0.0, 3.2.0, 2.9.2, 3.1.2
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Trivial
> Fix For: 0.5.0
>
>
> Typo in the description of option -d
> https://hadoop.apache.org/docs/r3.1.2/hadoop-project-dist/hadoop-common/FileSystemShell.html#test
> {code:java}
> test
> Usage: hadoop fs -test -[defsz] URI
> Options:
> -d: f the path is a directory, return 0.
> -e: if the path exists, return 0.
> -f: if the path is a file, return 0.
> -s: if the path is not empty, return 0.
> -z: if the file is zero length, return 0.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Created] (HADOOP-16372) Fix typo in DFSUtil getHttpPolicy method

2019-06-13 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-16372:
---

 Summary: Fix typo in DFSUtil getHttpPolicy method
 Key: HADOOP-16372
 URL: https://issues.apache.org/jira/browse/HADOOP-16372
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham


[https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java#L1479]

 

 






[jira] [Resolved] (HADOOP-16302) Fix typo on Hadoop Site Help dropdown menu

2019-05-07 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HADOOP-16302.
-
Resolution: Fixed

> Fix typo on Hadoop Site Help dropdown menu
> --
>
> Key: HADOOP-16302
> URL: https://issues.apache.org/jira/browse/HADOOP-16302
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Affects Versions: asf-site
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: Screen Shot 2019-05-07 at 11.57.01 PM.png
>
>
> On hadoop.apache.org the Help tab on top menu bar has Sponsorship spelt as 
> Sponsorshop.
> This jira aims to fix this typo.






[jira] [Resolved] (HADOOP-16198) Upgrade Jackson-databind version to 2.9.8

2019-03-18 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HADOOP-16198.
-
Resolution: Duplicate

> Upgrade Jackson-databind version to 2.9.8
> -
>
> Key: HADOOP-16198
> URL: https://issues.apache.org/jira/browse/HADOOP-16198
> Project: Hadoop Common
>  Issue Type: Bug
>    Reporter: Bharat Viswanadham
>    Assignee: Bharat Viswanadham
>Priority: Major
>
> Jackson-databind 2.9.8 has a few fixes which are important to include.






[jira] [Created] (HADOOP-16198) Upgrade Jackson-databind version to 2.9.8

2019-03-18 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-16198:
---

 Summary: Upgrade Jackson-databind version to 2.9.8
 Key: HADOOP-16198
 URL: https://issues.apache.org/jira/browse/HADOOP-16198
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Jackson-databind is affected by the CVEs below, which are being reported by 
customers.

CVE-2018-14719
CVE-2018-14720
CVE-2018-14721
CVE-2018-1000873
CVE-2018-7489
CVE-2018-19362
CVE-2017-15095
CVE-2018-19361
CVE-2017-7525
CVE-2018-19360
CVE-2017-17485
CVE-2018-5968
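An upgrade along these lines would typically pin the version in the build's dependencyManagement. The snippet below is a generic Maven illustration, not the actual Hadoop patch:

```xml
<!-- Hypothetical Maven snippet; pins jackson-databind to the fixed version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.8</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```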






Re: [VOTE] Propose to start new Hadoop sub project "submarine"

2019-02-01 Thread Bharat Viswanadham
+1

Thanks,
Bharat


On 2/1/19, 3:12 PM, "Anu Engineer"  wrote:

+1
--Anu


On 2/1/19, 3:02 PM, "Jonathan Hung"  wrote:

+1. Thanks Wangda.

Jonathan Hung


On Fri, Feb 1, 2019 at 2:25 PM Dinesh Chitlangia <
dchitlan...@hortonworks.com> wrote:

> +1 (non binding), thanks Wangda for organizing this.
>
> Regards,
> Dinesh
>
>
>
> On 2/1/19, 5:24 PM, "Wangda Tan"  wrote:
>
> Hi all,
>
> According to the positive feedback from the thread [1]:
>
> This is a vote thread to start a new subproject named "hadoop-submarine"
> which follows the release process already established for Ozone.
>
> The vote runs for the usual 7 days, ending Feb 8th 5 PM PDT.
>
> Thanks,
> Wangda Tan
>
> [1]
>
> 
https://lists.apache.org/thread.html/f864461eb188bd12859d51b0098ec38942c4429aae7e4d001a633d96@%3Cyarn-dev.hadoop.apache.org%3E
>
>
>





Re: [DISCUSS] Making submarine to different release model like Ozone

2019-02-01 Thread Bharat Viswanadham
Thank You Wangda for driving this discussion.
+1 for a separate release for submarine.
Having own release cadence will help iterate the project to grow at a faster 
pace and also get the new features in hand to the users, and get their feedback 
quickly.


Thanks,
Bharat




On 2/1/19, 10:54 AM, "Ajay Kumar"  wrote:

+1, thanks for driving this. With the rise of use cases running ML alongside
traditional applications, this will be of great help.

Thanks,
Ajay   

On 2/1/19, 10:49 AM, "Suma Shivaprasad"  
wrote:

+1. Thanks for bringing this up Wangda.

Makes sense to have Submarine follow its own release cadence given the 
good
momentum/adoption so far. Also, making it run with older versions of 
Hadoop
would drive higher adoption.

Suma

On Fri, Feb 1, 2019 at 9:40 AM Eric Yang  wrote:

> Submarine is an application built for the YARN framework, but it does not
> have a strong dependency on YARN development.  For this kind of project, it
> would be best to enter Apache Incubator cycles to create a new community.
> Apache Commons is the only project other than the Incubator that has
> independent release cycles.  The collection is large, and the project goal
> is ambitious.  No one really knows which component works with which in
> Apache Commons.  Hadoop is a much more focused project on a distributed
> computing framework, not an incubation sandbox.  For alignment with Hadoop
> goals, we want to prevent the Hadoop project from being overloaded while
> allowing good ideas to be carried forward in the Apache Incubator.  Putting
> on my Apache Member hat, my vote is -1 on allowing more independent
> subproject release cycles in the Hadoop project that do not align with
> Hadoop project goals.
>
> The Apache Incubator process is highly recommended for Submarine:
> https://incubator.apache.org/policy/process.html  This allows Submarine to
> develop for older versions of Hadoop, like Spark works with multiple
> versions of Hadoop.
>
> Regards,
> Eric
>
> On 1/31/19, 10:51 PM, "Weiwei Yang"  wrote:
>
> Thanks for proposing this Wangda, my +1 as well.
> It is amazing to see the progress made in Submarine last year; the
> community grows fast and is quite collaborative. I can see the reasons to
> release it faster in its own cycle. At the same time, the Ozone way
> works very well.
>
> —
> Weiwei
> On Feb 1, 2019, 10:49 AM +0800, Xun Liu , wrote:
> > +1
> >
> > Hello everyone,
> >
> > I am Xun Liu, the head of the machine learning team at Netease
> Research Institute. I quite agree with Wangda.
> >
> > Our team is very grateful for getting Submarine machine learning
> engine from the community.
> > We are heavy users of Submarine.
> > Because Submarine fits into the direction of our big data team's Hadoop
> > technology stack, it avoids the need to invest manpower in learning
> > other container scheduling systems.
> > The important thing is that we can use a common YARN cluster to run
> > machine learning, which makes server resource utilization more
> > efficient and has saved us a lot of human and material resources over
> > the past years.
> >
> > Our team has finished testing and deploying Submarine and will provide
> > the service to our e-commerce department (http://www.kaola.com/) shortly.
> >
> > We also plan to provide the Submarine engine in our existing YARN
> > cluster in the next six months.
> > Because we have a lot of product departments need to use machine
> learning services,
> > for example:
> > 1) Game department (http://game.163.com/) needs AI battle 
training,
> > 2) News department (http://www.163.com) needs news 
recommendation,
> > 3) Mailbox department (http://www.163.com) requires anti-spam 
and
> illegal detection,
> > 4) Music department (https://music.163.com/) requires music
> recommendation,
> > 5) Education department (http://www.youdao.com) requires voice
> recognition,
> > 6) Massive Open Online Courses (https://open.163.com/) requires
> multilingual translation and so on.
> >
> > If Submarine can be released independently like Ozone, it will 
help
> us quickly get the latest features and improvements, and it will be 
great
> helpful to our 

Re: [VOTE] - HDDS-4 Branch merge

2019-01-11 Thread Bharat Viswanadham
+1 (binding)


Thanks,
Bharat


On 1/11/19, 11:04 AM, "Hanisha Koneru"  wrote:

+1 (binding)

Thanks,
Hanisha









On 1/11/19, 7:40 AM, "Anu Engineer"  wrote:

>Since I have not heard any concerns, I will start a VOTE thread now.
>This vote will run for 7 days and will end on Jan/18/2019 @ 8:00 AM PST.
>
>I will start with my vote, +1 (Binding)
>
>Thanks
>Anu
>
>
>-- Forwarded message -
>From: Anu Engineer 
>Date: Mon, Jan 7, 2019 at 5:10 PM
>Subject: [Discuss] - HDDS-4 Branch merge
>To: , 
>
>
>Hi All,
>
>I would like to propose a merge of HDDS-4 branch to the Hadoop trunk.
>HDDS-4 branch implements the security work for HDDS and Ozone.
>
>HDDS-4 branch contains the following features:
>- Hadoop Kerberos and Tokens support
>- A Certificate infrastructure used by Ozone and HDDS.
>- Audit Logging and parsing support (Spread across trunk and HDDS-4)
>- S3 Security Support - AWS Signature Support.
>- Apache Ranger Support for Ozone
>
>I will follow up with a formal vote later this week if I hear no
>objections. AFAIK, the changes are isolated to HDDS/Ozone and should not
>impact any other Hadoop project.
>
>Thanks
>Anu

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org




Re: [DISCUSS] Move to gitbox

2018-12-11 Thread Bharat Viswanadham
+1

Thanks,
Bharat


On 12/11/18, 9:07 PM, "Brahma Reddy Battula"  wrote:

+1



On Sat, Dec 8, 2018 at 1:26 PM, Akira Ajisaka  wrote:

> Hi all,
>
> Apache Hadoop git repository is in git-wip-us server and it will be
> decommissioned.
> If there are no objection, I'll file a JIRA ticket with INFRA to
> migrate to https://gitbox.apache.org/ and update documentation.
>
> According to ASF infra team, the timeframe is as follows:
>
> > - December 9th 2018 -> January 9th 2019: Voluntary (coordinated)
> relocation
> > - January 9th -> February 6th: Mandated (coordinated) relocation
> > - February 7th: All remaining repositories are mass migrated.
> > This timeline may change to accommodate various scenarios.
>
> If we got consensus by January 9th, I can file a ticket with INFRA and
> migrate it.
> Even if we cannot got consensus, the repository will be migrated by
> February 7th.
>
> Regards,
> Akira
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
> --



--Brahma Reddy Battula




Re: [VOTE] Release Apache Hadoop Ozone 0.3.0-alpha (RC1)

2018-11-19 Thread Bharat Viswanadham
+1 (non-binding)
- Built from source.
- Deployed a 3 node docker cluster and verified S3 commands.
- Verified Virtual host style url format using CURL calls.
- Verified bucket browser UI.
- Verified UI of OM and SCM

  Thank You Marton for putting the release together.

Thanks,
Bharat


On 11/19/18, 9:04 AM, "Mukul Kumar Singh"  wrote:

+1.
Thanks for putting the release together, Marton.

- Verified signatures and checksum
- Built from source
- Deployed Ozone on a physical cluster and ran Freon.


On 11/19/18, 7:27 PM, "Lokesh Jain"  wrote:

+1 (non-binding)

- Verified signatures and checksum
- Built from source
- Ran smoke tests

Thanks Marton for putting the release together!

- Lokesh
> On 16-Nov-2018, at 6:45 PM, Shashikant Banerjee 
 wrote:
> 
> +1 (non-binding).
> 
>  - Verified signatures
>  - Verified checksums
>  - Checked LICENSE/NOTICE files
>  - Built from source
>  - Ran smoke tests.
> 
> Thanks Marton for putting up the release together.
> 
> Thanks
> Shashi
> 
> On 11/14/18, 10:44 PM, "Elek, Marton"  wrote:
> 
>Hi all,
> 
>I've created the second release candidate (RC1) for Apache Hadoop 
Ozone
>0.3.0-alpha including one more fix on top of the previous RC0 
(HDDS-854)
> 
>This is the second release of Apache Hadoop Ozone. Notable changes 
since
>the first release:
> 
>* A new S3 compatible rest server is added. Ozone can be used from 
any
>S3 compatible tools (HDDS-434)
>* Ozone Hadoop file system URL prefix is renamed from o3:// to 
o3fs://
>(HDDS-651)
>* Extensive testing and stability improvements of OzoneFs.
>* Spark, YARN and Hive support and stability improvements.
>* Improved Pipeline handling and recovery.
>* Separated/dedicated classpath definitions for all the Ozone
>components. (HDDS-447)
> 
>The RC artifacts are available from:
>https://home.apache.org/~elek/ozone-0.3.0-alpha-rc1/
> 
>The RC tag in git is: ozone-0.3.0-alpha-RC1 (ebbf459e6a6)
> 
>Please try it out, vote, or just give us feedback.
> 
>The vote will run for 5 days, ending on November 19, 2018 18:00 
UTC.
> 
> 
>Thank you very much,
>Marton
> 
>
PS:
> 
>The easiest way to try it out is:
> 
>1. Download the binary artifact
>2. Read the docs from ./docs/index.html
>3. TLDR; cd compose/ozone && docker-compose up -d
>4. open localhost:9874 or localhost:9876
> 
> 
> 
>The easiest way to try it out from the source:
> 
>1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
>-DskipShade -am -pl :hadoop-ozone-dist
>2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha && docker-compose 
up -d
> 
> 
> 
>The easiest way to test basic functionality (with acceptance 
tests):
> 
>1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
>-DskipShade -am -pl :hadoop-ozone-dist
>2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha/smoketest
>3. ./test.sh
> 
>
-
>To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> 
> 
> 
> 
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> 










[jira] [Created] (HADOOP-15924) Hadoop aws cannot be used with shaded jars

2018-11-12 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-15924:
---

 Summary: Hadoop aws cannot be used with shaded jars
 Key: HADOOP-15924
 URL: https://issues.apache.org/jira/browse/HADOOP-15924
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham


The issue is that hadoop-aws cannot be used with the shaded jars.

The recommended client-side jars for Hadoop 3 are the shaded 
hadoop-client-api/hadoop-client-runtime jars.
They shade guava etc., so a class like SemaphoredDelegatingExecutor refers to 
the shaded guava classes.

hadoop-aws has the S3AFileSystem implementation, which refers to 
SemaphoredDelegatingExecutor with the unshaded guava ListeningExecutorService 
in its constructor. When S3AFileSystem is created, it uses the 
hadoop-client-api jar and finds SemaphoredDelegatingExecutor, but not the 
right constructor, because in the client-api jar the 
SemaphoredDelegatingExecutor constructor takes the shaded guava 
ListeningExecutorService.

So essentially none of the aws/azure/adl Hadoop FS implementations will work 
with the shaded Hadoop client runtime jars.

This Jira is created to track the work required to make hadoop-aws work with 
the Hadoop shaded client jars.

One possible solution is to make hadoop-aws depend on the Hadoop shaded jars; 
that way the mismatch does not arise. Currently, hadoop-aws depends on 
aws-sdk-bundle, and all other remaining jars are provided dependencies.
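The failure described above comes down to the fact that a parameter's type is part of a constructor's signature. A minimal, self-contained sketch of the effect (the interface and class names below are stand-ins invented for illustration, not the real Hadoop or Guava types):

```java
// Hypothetical stand-in types -- NOT the real Hadoop/Guava classes.
public class ShadingMismatch {
    interface ListeningService {}            // plays the unshaded Guava type
    interface ShadedListeningService {}      // plays the relocated (shaded) copy

    static class SemaphoredExecutor {
        // In the "shaded" jar, the only constructor takes the shaded type.
        SemaphoredExecutor(ShadedListeningService s) {}
    }

    // Code compiled against the unshaded parameter type looks up a
    // constructor that the shaded jar simply does not declare.
    static boolean hasUnshadedConstructor() {
        try {
            SemaphoredExecutor.class.getDeclaredConstructor(ListeningService.class);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Prints false: once the argument type is relocated, the "same"
        // constructor is invisible to callers compiled against the original.
        System.out.println(hasUnshadedConstructor());
    }
}
```

This is the same mechanism, reduced to reflection, by which hadoop-aws fails to link against the shaded client at runtime.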

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




Re: Jenkins build machines are down

2018-10-31 Thread Bharat Viswanadham
There is already an infra ticket opened for this.
https://issues.apache.org/jira/browse/INFRA-17188

Comment from this Infra ticket:
Please read the builds@ list, they are getting a disk upgrade and will be back 
when complete.

Thank You @Marton Elek for providing this information.

Thanks,
Bharat


On 10/31/18, 11:58 AM, "Arpit Agarwal"  wrote:

A number of the Jenkins build machines appear to be down with different 
error messages.

https://builds.apache.org/label/Hadoop/

Does anyone know what is the process to restore them? I assume INFRA cannot 
help as they don’t own these machines.







[jira] [Created] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904

2018-10-24 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-15879:
---

 Summary: Upgrade eclipse jetty version to 9.3.25.v20180904
 Key: HADOOP-15879
 URL: https://issues.apache.org/jira/browse/HADOOP-15879
 Project: Hadoop Common
  Issue Type: Task
Reporter: Bharat Viswanadham









Re: [VOTE] Release Apache Hadoop Ozone 0.2.1-alpha (RC0)

2018-09-25 Thread Bharat Viswanadham
Hi Marton,
Thank You for the first ozone release.
+1 (non-binding)

1. Verified signatures.
2. Built from source.
3. Ran a docker cluster using docker files from ozone tar ball. Tested ozone 
shell commands.
4. Ran ozone-hdfs cluster and verified ozone is started as a plugin when 
datanode boots up.

Thanks,
Bharat




On 9/19/18, 2:49 PM, "Elek, Marton"  wrote:

Hi all,

After the recent discussion about the first Ozone release I've created 
the first release candidate (RC0) for Apache Hadoop Ozone 0.2.1-alpha.

This release is alpha quality: it’s not recommended to use in production 
but we believe that it’s stable enough to try out the feature set and 
collect feedback.

The RC artifacts are available from: 
https://home.apache.org/~elek/ozone-0.2.1-alpha-rc0/

The RC tag in git is: ozone-0.2.1-alpha-RC0 (968082ffa5d)

Please try the release and vote; the vote will run for the usual 5 
working days, ending on September 26, 2018 10pm UTC time.

The easiest way to try it out is:

1. Download the binary artifact
2. Read the docs at ./docs/index.html
3. TLDR; cd compose/ozone && docker-compose up -d


Please try it out, vote, or just give us feedback.

Thank you very much,
Marton

ps: At next week, we will have a BoF session at ApacheCon North Europe, 
Montreal on Monday evening. Please join, if you are interested, or need 
support to try out the package or just have any feedback.








[jira] [Resolved] (HADOOP-15139) [Umbrella] Improvements and fixes for Hadoop shaded client work

2018-09-11 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HADOOP-15139.
-
  Resolution: Fixed
Target Version/s: 3.1.1, 3.2.0  (was: 3.2.0)

> [Umbrella] Improvements and fixes for Hadoop shaded client work 
> 
>
> Key: HADOOP-15139
> URL: https://issues.apache.org/jira/browse/HADOOP-15139
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>    Assignee: Bharat Viswanadham
>Priority: Critical
>
> In HADOOP-11656, we have made great progress in splitting out third-party 
> dependencies from shaded hadoop client jar (hadoop-client-api), put runtime 
> dependencies in hadoop-client-runtime, and have shaded version of 
> hadoop-client-minicluster for test. However, there are still some left work 
> for this feature to be fully completed:
> - We don't have a comprehensive documentation to guide downstream 
> projects/users to use shaded JARs instead of previous JARs
> - We should consider to wrap up hadoop tools (distcp, aws, azure) to have 
> shaded version
> - More issues could be identified when shaded jars are adopted in more test 
> and production environment, like HADOOP-15137.
> Let's have this umbrella JIRA to track all efforts that left to improve 
> hadoop shaded client effort.
> CC [~busbey], [~bharatviswa] and [~vinodkv].






[RESULT][VOTE] Merge ContainerIO branch (HDDS-48) in to trunk

2018-07-06 Thread Bharat Viswanadham
The vote passes with +1s from 6 committers and 2 contributors. There are no 
-1's.
Thanks everyone for voting.

Voting Thread:
https://lists.apache.org/thread.html/b63e4569407b61a333ddf7a646dcf9ca169d2fd2cb2344d113d80697@%3Chdfs-dev.hadoop.apache.org%3E


Thanks,
Bharat


On 6/29/18, 3:14 PM, "Bharat Viswanadham"  wrote:

Fixing subject line of the mail.


Thanks,
Bharat



On 6/29/18, 3:10 PM, "Bharat Viswanadham"  
wrote:

Hi All,

Given the positive response to the discussion thread [1], here is the 
formal vote thread to merge HDDS-48 into trunk.

Summary of code changes:
1. Code changes for this branch are done in the hadoop-hdds subproject 
and hadoop-ozone subproject, there is no impact to hadoop-hdfs.
2. Added support for multiple container types in the datanode code path.
3. Added disk layout logic for the containers to support future 
upgrades.
4. Added support for a volume-choosing policy to distribute containers 
across disks on the datanode.
5. Changed the format of the .container file to a human-readable format 
(yaml)


 The vote will run for 7 days, ending Fri July 6th. I will start this 
vote with my +1.

Thanks,
Bharat

[1] 
https://lists.apache.org/thread.html/79998ebd2c3837913a22097102efd8f41c3b08cb1799c3d3dea4876b@%3Chdfs-dev.hadoop.apache.org%3E







[VOTE] Merge ContainerIO branch (HDDS-48) in to trunk

2018-06-29 Thread Bharat Viswanadham
Fixing subject line of the mail.


Thanks,
Bharat



On 6/29/18, 3:10 PM, "Bharat Viswanadham"  wrote:

Hi All,

Given the positive response to the discussion thread [1], here is the 
formal vote thread to merge HDDS-48 into trunk.

Summary of code changes:
1. Code changes for this branch are done in the hadoop-hdds subproject and 
hadoop-ozone subproject, there is no impact to hadoop-hdfs.
2. Added support for multiple container types in the datanode code path.
3. Added disk layout logic for the containers to support future upgrades.
4. Added support for a volume-choosing policy to distribute containers across 
disks on the datanode.
5. Changed the format of the .container file to a human-readable format 
(yaml)


 The vote will run for 7 days, ending Fri July 6th. I will start this vote 
with my +1.

Thanks,
Bharat

[1] 
https://lists.apache.org/thread.html/79998ebd2c3837913a22097102efd8f41c3b08cb1799c3d3dea4876b@%3Chdfs-dev.hadoop.apache.org%3E





Reg:[VOTE] Merge ContainerIO branch (HDDS-48) in to trunk

2018-06-29 Thread Bharat Viswanadham
Hi All,

Given the positive response to the discussion thread [1], here is the formal 
vote thread to merge HDDS-48 in to trunk.

Summary of code changes:
1. Code changes for this branch are done in the hadoop-hdds subproject and 
hadoop-ozone subproject, there is no impact to hadoop-hdfs.
2. Added support for multiple container types in the datanode code path.
3. Added disk layout logic for the containers to support future upgrades.
4. Added support for a volume-choosing policy to distribute containers across 
disks on the datanode.
5. Changed the format of the .container file to a human-readable format (yaml)


 The vote will run for 7 days, ending Fri July 6th. I will start this vote with 
my +1.

Thanks,
Bharat

[1] 
https://lists.apache.org/thread.html/79998ebd2c3837913a22097102efd8f41c3b08cb1799c3d3dea4876b@%3Chdfs-dev.hadoop.apache.org%3E



Re: [VOTE] Release Apache Hadoop 3.0.1 (RC1)

2018-03-22 Thread Bharat Viswanadham
+1 (non-binding)
1. verified md5 checksum.
2. Built from source and deployed a 5 node cluster.
3. Ran few basic commands of hdfs operations.
4. Deployed a federated cluster (2 namenodes) and ran few hdfs basic commands
5. Checked NN web UI.

Thanks,
Bharat



On 3/22/18, 10:02 AM, "Ajay Kumar"  wrote:

verified signatures and checksums




Re: [VOTE] Release Apache Hadoop 3.1.0 (RC0)

2018-03-22 Thread Bharat Viswanadham
Hi Wangda,
The Maven artifact repository does not have all of the Hadoop jars (it is 
missing many, such as hadoop-hdfs, hadoop-client, etc.):
https://repository.apache.org/content/repositories/orgapachehadoop-1086/


Thanks,
Bharat


On 3/21/18, 11:44 PM, "Wangda Tan"  wrote:

Hi folks,

Thanks to the many who helped with this release since Dec 2017 [1]. We've
created RC0 for Apache Hadoop 3.1.0. The artifacts are available here:

http://people.apache.org/~wangda/hadoop-3.1.0-RC0/

The RC tag in git is release-3.1.0-RC0.

The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1086/

This vote will run 7 days (5 weekdays), ending on Mar 28 at 11:59 pm
Pacific.

3.1.0 contains 727 [2] fixed JIRA issues since 3.0.0. Notable additions
include the first class GPU/FPGA support on YARN, Native services, Support
rich placement constraints in YARN, S3-related enhancements, allow HDFS
block replicas to be provided by an external storage system, etc.

We’d like to use this as a starting release for 3.1.x [1], depending on how
it goes, get it stabilized and potentially use a 3.1.1 in several weeks as
the stable release.

We have done testing with a pseudo cluster and distributed shell job. My +1
to start.

Best,
Wangda/Vinod

[1]

https://lists.apache.org/thread.html/b3fb3b6da8b6357a68513a6dfd104bc9e19e559aedc5ebedb4ca08c8@%3Cyarn-dev.hadoop.apache.org%3E
[2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.0)
AND fixVersion not in (3.0.0, 3.0.0-beta1) AND status = Resolved ORDER BY
fixVersion ASC





[jira] [Created] (HADOOP-15168) Add kdiag and HadoopKerberosName tools to hadoop command

2018-01-11 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-15168:
---

 Summary: Add kdiag and HadoopKerberosName tools to hadoop command
 Key: HADOOP-15168
 URL: https://issues.apache.org/jira/browse/HADOOP-15168
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham









Re: [VOTE] Release Apache Hadoop 2.8.2 (RC1)

2017-10-23 Thread Bharat Viswanadham
+1 (non-binding)

- Built from source
- Created a 3 node Docker Hadoop cluster
- Ran Hdfs commands
- Ran few basic webhdfs operations

Thanks,
Bharat


On 10/23/17, 9:03 AM, "Ajay Kumar"  wrote:

Thanks, Junping Du!

+1 (non-binding)

- Built from source
- Ran hdfs commands
- Ran pi and sample MR test.
- Verified the UI's

Thanks,
Ajay Kumar

On 10/23/17, 8:14 AM, "Shane Kumpf"  wrote:

Thanks, Junping!

+1 (non-binding)

- Verified checksums and signatures
- Deployed a single node cluster on CentOS 7.2 using the binary tgz, 
source
tgz, and git tag
- Ran hdfs commands
- Ran pi and distributed shell using the default and docker runtimes
- Verified Docker docs
- Verified docker runtime can be disabled
- Verified the UI's







[jira] [Created] (HADOOP-14934) Remove Warnings when building ozone

2017-10-05 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-14934:
---

 Summary: Remove Warnings when building ozone
 Key: HADOOP-14934
 URL: https://issues.apache.org/jira/browse/HADOOP-14934
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham


[WARNING]
[WARNING] Some problems were encountered while building the effective model for 
org.apache.hadoop:hadoop-ozone:jar:3.1.0-SNAPSHOT
[WARNING] 'build.plugins.plugin.version' for 
org.apache.maven.plugins:maven-project-info-reports-plugin is missing. @ 
org.apache.hadoop:hadoop-ozone:[unknown-version], 
/Users/aengineer/codereview/hadoop-tools/hadoop-ozone/pom.xml, line 36, column 
15
[WARNING]
[WARNING] Some problems were encountered while building the effective model for 
org.apache.hadoop:hadoop-dist:jar:3.1.0-SNAPSHOT
[WARNING] 'build.plugins.plugin.version' for 
org.apache.maven.plugins:maven-gpg-plugin is missing. @ line 133, column 15
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten 
the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support 
building such malformed projects.
[WARNING]






[jira] [Created] (HADOOP-14929) Cleanup usage of decodecomponent and use QueryStringDecoder from netty

2017-10-04 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-14929:
---

 Summary: Cleanup usage of decodecomponent and use 
QueryStringDecoder from netty
 Key: HADOOP-14929
 URL: https://issues.apache.org/jira/browse/HADOOP-14929
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This is from the review of HADOOP-14910.
There is one other place that uses decodeComponent, in ParameterParser.java, 
lines 147-148:

String cf = decodeComponent(param(CreateFlagParam.NAME), 
StandardCharsets.UTF_8);

Use QueryStringDecoder from netty here too, and clean up decodeComponent; it 
was originally added only to work around a netty issue.
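For context, netty's QueryStringDecoder turns a query string into a map of percent-decoded parameter lists. A rough stdlib-only approximation of that behavior (the class and method names here are invented for the sketch, not Hadoop's):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class QueryDecodeSketch {
    // Percent-decode one component; UTF-8 is always available, so the
    // checked exception cannot really occur.
    static String dec(String s) {
        try {
            return URLDecoder.decode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError(e);
        }
    }

    // Rough stdlib equivalent of what a query-string decoder produces:
    // a name -> list-of-values map of decoded parameters.
    static Map<String, List<String>> decode(String query) {
        Map<String, List<String>> params = new LinkedHashMap<>();
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            String name = eq < 0 ? pair : pair.substring(0, eq);
            String value = eq < 0 ? "" : pair.substring(eq + 1);
            params.computeIfAbsent(dec(name), k -> new ArrayList<>())
                  .add(dec(value));
        }
        return params;
    }

    public static void main(String[] args) {
        // "%2C" decodes to a comma, as in WebHDFS's createflag parameter.
        System.out.println(decode("createflag=CREATE%2COVERWRITE&op=CREATE"));
    }
}
```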






[jira] [Created] (HADOOP-14916) Replace HdfsFileStatus constructors with a builder pattern.

2017-09-29 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-14916:
---

 Summary: Replace HdfsFileStatus constructors with a builder 
pattern.
 Key: HADOOP-14916
 URL: https://issues.apache.org/jira/browse/HADOOP-14916
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham
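The builder pattern the summary proposes can be sketched as follows (a hypothetical, heavily simplified stand-in; the real HdfsFileStatus carries many more fields than shown):

```java
// Illustrative sketch only -- not the actual HdfsFileStatus class.
public class FileStatusSketch {
    private final String path;
    private final long length;
    private final boolean isDir;

    private FileStatusSketch(Builder b) {
        this.path = b.path;
        this.length = b.length;
        this.isDir = b.isDir;
    }

    // A builder replaces a family of telescoping constructors: callers set
    // only the fields they care about, and adding a field later does not
    // break existing call sites.
    static class Builder {
        private String path = "";
        private long length;
        private boolean isDir;

        Builder path(String p) { this.path = p; return this; }
        Builder length(long l) { this.length = l; return this; }
        Builder isDir(boolean d) { this.isDir = d; return this; }

        FileStatusSketch build() { return new FileStatusSketch(this); }
    }

    @Override
    public String toString() {
        return path + " len=" + length + " dir=" + isDir;
    }

    public static void main(String[] args) {
        // prints: /user/data len=42 dir=false
        System.out.println(new Builder().path("/user/data").length(42).build());
    }
}
```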









[jira] [Created] (HADOOP-14915) method name is incorrect in ConfServlet

2017-09-28 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-14915:
---

 Summary: method name is incorrect in ConfServlet
 Key: HADOOP-14915
 URL: https://issues.apache.org/jira/browse/HADOOP-14915
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham
Priority: Minor


The method name is parseAccecptHeader.
Rename it to parseAcceptHeader.






[jira] [Created] (HADOOP-14894) ReflectionUtils should use Time.monotonicNow to measure duration

2017-09-21 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-14894:
---

 Summary: ReflectionUtils should use Time.monotonicNow to measure duration
 Key: HADOOP-14894
 URL: https://issues.apache.org/jira/browse/HADOOP-14894
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Bharat Viswanadham


ReflectionUtils should use Time.monotonicNow to measure duration
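The motivation is that durations should be measured on a monotonic clock, which never jumps backwards, rather than on the wall clock, which can move when NTP adjusts the system time. A small sketch of the idea (to the best of my understanding Hadoop's Time.monotonicNow() is a millisecond view over System.nanoTime(); the class below is illustrative, not Hadoop code):

```java
import java.util.concurrent.TimeUnit;

public class MonotonicTiming {
    // System.nanoTime() is a monotonic clock: it only moves forward, unlike
    // System.currentTimeMillis(), which follows wall-clock adjustments.
    static long monotonicNowMillis() {
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime());
    }

    public static void main(String[] args) throws InterruptedException {
        long start = monotonicNowMillis();
        Thread.sleep(20);   // the "work" being timed
        long elapsedMillis = monotonicNowMillis() - start;
        // A duration taken from a monotonic clock can never be negative,
        // which is exactly why it is preferred for measuring elapsed time.
        System.out.println("elapsed is non-negative: " + (elapsedMillis >= 0));
    }
}
```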






Re: [DISCUSS] official docker image(s) for hadoop

2017-09-13 Thread Bharat Viswanadham
+1 (non-binding)
It would be really nice to have Docker images for trying different features of 
Hadoop (like HA, federation enabled, erasure coding…), which will be helpful 
for both developers and users.


Thanks,
Bharat


On 9/13/17, 11:31 AM, "Eric Badger"  wrote:

+1 definitely think an official Hadoop docker image (possibly 1 per major
or minor release) would be a positive both for contributors and for users
of Hadoop.

Eric

On Wed, Sep 13, 2017 at 1:19 PM, Wangda Tan  wrote:

> +1 to add Hadoop docker image for easier testing / prototyping, it gonna 
be
> super helpful!
>
> Thanks,
> Wangda
>
> On Wed, Sep 13, 2017 at 10:48 AM, Miklos Szegedi <
> miklos.szeg...@cloudera.com> wrote:
>
> > Marton, thank you for working on this. I think Official Docker images 
for
> > Hadoop would be very useful for a lot of reasons. I think that it is
> better
> > to have a coordinated effort with production ready base images with
> > dependent images for prototyping. Does anyone else have an opinion about
> > this?
> >
> > Thank you,
> > Miklos
> >
> > On Fri, Sep 8, 2017 at 5:45 AM, Marton, Elek  wrote:
> >
> > >
> > > TL;DR: I propose to create official hadoop images and upload them to
> the
> > > dockerhub.
> > >
> > > GOAL/SCOPE: I would like to improve the existing documentation with
> > > easy-to-use docker based recipes to start hadoop clusters with various
> > > configuration.
> > >
> > > The images also could be used to test experimental features. For
> example
> > > ozone could be tested easily with these compose file and 
configuration:
> > >
> > > https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6
> > >
> > > Or even the configuration could be included in the compose file:
> > >
> > > https://github.com/elek/hadoop/blob/docker-2.8.0/example/doc
> > > ker-compose.yaml
> > >
> > > I would like to create separated example compose files for federation,
> > ha,
> > > metrics usage, etc. to make it easier to try out and understand the
> > > features.
> > >
> > > CONTEXT: There is an existing Jira https://issues.apache.org/jira
> > > /browse/HADOOP-13397
> > > But it’s about a tool to generate production quality docker images
> > > (multiple types, in a flexible way). If no objections, I will create a
> > > separated issue to create simplified docker images for rapid
> prototyping
> > > and investigating new features. And register the branch to the
> dockerhub
> > to
> > > create the images automatically.
> > >
> > > MY BACKGROUND: I have been working with docker based hadoop/spark
> > > clusters for quite a while and run them successfully in different
> > > environments (kubernetes, docker-swarm, nomad-based scheduling, etc.).
> > > My work is available from here:
> > > https://github.com/flokkr but they could handle more complex use cases
> > > (eg. instrumenting java processes with btrace, or read/reload
> > > configuration from consul).
> > >  And IMHO in the official hadoop documentation it’s better to suggest
> > > using official apache docker images and not external ones (which could
> > > be changed).
> > >
> > > Please let me know if you have any comments.
> > >
> > > Marton
> > >
> > > -
> > > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> > >
> > >
> >
>




[jira] [Created] (HADOOP-14867) Update HDFS Federation Document, for incorrect property name for secondary name node

2017-09-13 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-14867:
---

 Summary: Update HDFS Federation Document, for incorrect property 
name for secondary name node
 Key: HADOOP-14867
 URL: https://issues.apache.org/jira/browse/HADOOP-14867
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham


The HDFS Federation setup documentation has an incorrect property name for the 
secondary namenode HTTP address.

It is mentioned as:

  <property>
    <name>dfs.namenode.secondaryhttp-address.ns1</name>
    <value>snn-host1:http-port</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>nn-host2:rpc-port</value>
  </property>

The actual property should be dfs.namenode.secondary.http-address.ns.

Because of this documentation error, when the document is followed and a user 
tries to set up an HDFS federated cluster, the secondary namenode will not be 
started, and hdfs getconf -secondarynamenodes will throw an exception.







[jira] [Resolved] (HADOOP-14697) Analyze the properties files which need to be included/excluded from hadoop-client-runtime and hadoop-client-minicluster

2017-09-11 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HADOOP-14697.
-
Resolution: Duplicate

This is being taken care of in 
[https://issues.apache.org/jira/browse/HADOOP-14089]. 
Thanks [~busbey] for handling this.

> Analyze the properties files which need to be included/excluded from 
> hadoop-client-runtime and hadoop-client-minicluster
> 
>
> Key: HADOOP-14697
> URL: https://issues.apache.org/jira/browse/HADOOP-14697
> Project: Hadoop Common
>  Issue Type: Sub-task
>    Reporter: Bharat Viswanadham
>
> about.html
> capacity-scheduler.xml
> catalog.cat
> container-log4j.properties
> hdfs-default.xml
> java.policy
> javaee_5.xsd
> javaee_6.xsd
> javaee_web_services_client_1_2.xsd
> javaee_web_services_client_1_3.xsd
> jdtCompilerAdapter.jar
> jsp_2_1.xsd
> jsp_2_2.xsd
> krb5.conf
> log4j.properties
> plugin.properties
> plugin.xml
> web-app_2_5.xsd
> web-app_3_0.xsd
> web-common_3_0.xsd
> xml.xsd
> This issue is raised from [https://issues.apache.org/jira/browse/HADOOP-14685]






[jira] [Created] (HADOOP-14847) Remove Guava Supplier and change to java Supplier in AMRMClient and AMRMClientAsync

2017-09-07 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-14847:
---

 Summary: Remove Guava Supplier and change to java Supplier in 
AMRMClient and AMRMClientAysnc
 Key: HADOOP-14847
 URL: https://issues.apache.org/jira/browse/HADOOP-14847
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Bharat Viswanadham


Remove the Guava library Supplier usage in the user-facing APIs in 
AMRMClient.java and AMRMClientAsync.java.
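The change amounts to switching the parameter type of the user-facing wait/check-style methods from Guava's Supplier to java.util.function.Supplier, so callers no longer need Guava on their classpath. A minimal sketch with an invented method name (waitFor below stands in for the real AMRMClient API, which it does not reproduce):

```java
import java.util.function.Supplier;

public class SupplierMigration {
    // New-style signature: java.util.function.Supplier instead of
    // com.google.common.base.Supplier, so no Guava type leaks into the API.
    static boolean waitFor(Supplier<Boolean> check, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            if (check.get()) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] counter = {0};
        // A plain lambda works directly -- no Guava Supplier wrapper needed.
        boolean done = waitFor(() -> ++counter[0] >= 3, 5);
        System.out.println(done ? "condition met" : "timed out");
        // prints: condition met
    }
}
```

Since Guava's own Supplier now extends the JDK one, the JDK type is the more general choice for a public API.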






[jira] [Created] (HADOOP-14697) Analyze the properties files which need to be included/excluded from hadoop-client-runtime and hadoop-client-minicluster

2017-07-28 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-14697:
---

 Summary: Analyze the properties files which need to be 
included/excluded from hadoop-client-runtime and hadoop-client-minicluster
 Key: HADOOP-14697
 URL: https://issues.apache.org/jira/browse/HADOOP-14697
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Bharat Viswanadham


about.html
capacity-scheduler.xml
catalog.cat
container-log4j.properties
hdfs-default.xml
java.policy
javaee_5.xsd
javaee_6.xsd
javaee_web_services_client_1_2.xsd
javaee_web_services_client_1_3.xsd
jdtCompilerAdapter.jar
jsp_2_1.xsd
jsp_2_2.xsd
krb5.conf
log4j.properties
plugin.properties
plugin.xml
web-app_2_5.xsd
web-app_3_0.xsd
web-common_3_0.xsd
xml.xsd

This issue is raised from [https://issues.apache.org/jira/browse/HADOOP-14685]






[jira] [Created] (HADOOP-14685) Test jars to exclude from hadoop-client-minicluster jar

2017-07-25 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-14685:
---

 Summary: Test jars to exclude from hadoop-client-minicluster jar
 Key: HADOOP-14685
 URL: https://issues.apache.org/jira/browse/HADOOP-14685
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.0.0-beta1
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This jira is to discuss which test jars should be included in or excluded from 
hadoop-client-minicluster.


