Re: builds failing on trunk, "convergence failure"?

2019-02-26 Thread Billie Rinaldi
I've been seeing this for months. I believe Shane saw it for a while as
well, but his instance was fixed by removing his local Maven repo.

Billie

On Tue, Feb 26, 2019 at 6:48 AM Steve Loughran 
wrote:

> Is anyone else seeing mr-client-app failing to build with a convergence
> error? It started for me today on trunk.
>
>
> [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @
> hadoop-mapreduce-client-app ---
> [WARNING]
> Dependency convergence error for org.hamcrest:hamcrest-core:1.1 paths to
> dependency are:
> +-org.apache.hadoop:hadoop-mapreduce-client-app:3.3.0-SNAPSHOT
>   +-com.github.stefanbirkner:system-rules:1.18.0
> +-junit:junit-dep:4.11.20120805.1225
>   +-org.hamcrest:hamcrest-core:1.1
> and
> +-org.apache.hadoop:hadoop-mapreduce-client-app:3.3.0-SNAPSHOT
>   +-junit:junit:4.12
> +-org.hamcrest:hamcrest-core:1.3
>
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
>
>
> It's only a trivial pom fixup, but I'm surprised it's suddenly surfaced. Am
> I the only person seeing it?
>
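
For anyone hitting the same enforcer failure, a quick way to see every path
that pulls in the divergent hamcrest versions is Maven's dependency:tree; a
diagnostic sketch (module path taken from the error message above):

# Show all org.hamcrest paths for the failing module.
cd hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app
mvn dependency:tree -Dincludes=org.hamcrest

# The "trivial pom fixup" is then an <exclusion> on the offending dependency
# (here junit-dep pulled in via system-rules) or a managed hamcrest-core
# version; clearing ~/.m2/repository/org/hamcrest helps if stale metadata
# from the local repo is involved.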


Re: Hadoop-trunk commit jenkins job fails due to wrong version of protoc

2019-08-06 Thread Billie Rinaldi
I reopened INFRA-18244 to find out if there are any ubuntu 14.04 nodes left.

Billie

On Thu, Aug 1, 2019 at 6:30 PM Zhenyu Zheng 
wrote:

> Hi,
>
> I'm quite new to Hadoop and not sure whether this is correct. Checking the
> latest successful build log:
>
> https://builds.apache.org/job/Hadoop-trunk-Commit/lastSuccessfulBuild/consoleFull
>
> it seems the OS version is Ubuntu 14.04, while in the later failing jobs the
> OS version is 18.04:
> https://builds.apache.org/job/Hadoop-trunk-Commit/17020/console
> and according to
> https://packages.ubuntu.com/search?keywords=libprotoc-dev
> 18.04 only provides libprotoc 3.0.
> Could this be a potential problem?
>
> BR,
>
> Kevin
>
>
> On Thu, Aug 1, 2019 at 7:59 PM Szilard Nemeth  >
> wrote:
>
> > Hi,
> >
> > All hadoop-trunk builds are failing because the wrong version of protoc
> > is being used.
> > The version being used is 3.0.0, but we should use 2.5.0 according to
> > the build configuration.
> > This makes hadoop-common fail to build.
> >
> > Here's a failed job as an example:
> > https://builds.apache.org/job/Hadoop-trunk-Commit/17020/console
> >
> > and this is the error message from Maven:
> >
> > [INFO]
> > 
> > [INFO] BUILD FAILURE
> > [INFO]
> > 
> > [INFO] Total time:  22.245 s (Wall Clock)
> > [INFO] Finished at: 2019-08-01T11:12:41Z
> > [INFO]
> > 
> > [ERROR] Failed to execute goal
> > org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc
> > (compile-protoc) on project hadoop-common:
> > org.apache.maven.plugin.MojoExecutionException: protoc version is
> > 'libprotoc 3.0.0', expected version is '2.5.0' -> [Help 1]
> > [ERROR]
> > [ERROR] To see the full stack trace of the errors, re-run Maven with
> > the -e switch.
> > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> > [ERROR]
> > [ERROR] For more information about the errors and possible solutions,
> > please read the following articles:
> > [ERROR] [Help 1]
> > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> > [ERROR]
> > [ERROR] After correcting the problems, you can resume the build with
> > the command
> >
> > [ERROR] mvn  -rf :hadoop-common
> >
> >
> > Could someone please help me identify what exactly is wrong and where
> > we should fix this issue?
> >
> > Thanks a lot,
> > Szilard
> >
>
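
For anyone chasing a failure like this on a build node, a minimal check, as a
sketch (the Docker build environment script ships at the root of the Hadoop
source tree):

# See which protoc the node actually provides; trunk expected exactly 2.5.0 here.
protoc --version

# Ubuntu 18.04's distro packages only offer protobuf 3.x, so either build
# protobuf 2.5.0 from source and put it first on PATH, or build inside the
# pinned Docker environment that ships with the source tree:
./start-build-env.sh
# (then, inside the container)
mvn clean install -DskipTests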


Re: [VOTE] Merge yarn-native-services branch into trunk

2017-09-01 Thread Billie Rinaldi
+1 (non-binding)

On Thu, Aug 31, 2017 at 8:33 PM, Jian He  wrote:

> Hi All,
>
> I would like to call a vote for merging yarn-native-services to trunk. The
> vote will run for 7 days as usual.
>
> At a high level, the following are the key features implemented.
> - YARN-5079[1]. A native YARN framework (ApplicationMaster) to migrate and
> orchestrate existing services onto YARN, either Docker or non-Docker based.
> - YARN-4793[2]. A REST API server for users to deploy a service via a
> simple JSON spec
> - YARN-4757[3]. Extending today's service registry with a simple DNS
> service to enable users to discover services deployed on YARN
> - YARN-6419[4]. UI support for native-services on the new YARN UI
> All these new services are optional, sit outside of the existing system,
> and have no impact on the existing system if disabled.
>
> Special thanks to a team of folks who worked hard towards this: Billie
> Rinaldi, Gour Saha, Vinod Kumar Vavilapalli, Jonathan Maron, Rohith Sharma
> K S, Sunil G, Akhil PB. This effort could not be possible without their
> ideas and hard work.
>
> Thanks,
> Jian
>
> [1] https://issues.apache.org/jira/browse/YARN-5079
> [2] https://issues.apache.org/jira/browse/YARN-4793
> [3] https://issues.apache.org/jira/browse/YARN-4757
> [4] https://issues.apache.org/jira/browse/YARN-6419
>
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Merge yarn-native-services branch into trunk

2017-11-06 Thread Billie Rinaldi
+1 (binding)

On Mon, Oct 30, 2017 at 1:06 PM, Jian He  wrote:

> Hi All,
>
> I would like to restart the vote for merging yarn-native-services to trunk.
> Since last vote, we have been working on several issues in documentation,
> DNS, CLI modifications etc. We believe now the feature is in a much better
> shape.
>
> Some background:
> At a high level, the following are the key features implemented.
> - YARN-5079[1]. A native YARN framework (ApplicationMaster) to orchestrate
> existing services on YARN, either Docker or non-Docker based.
> - YARN-4793[2]. A REST API service embedded in the RM (optional) for users
> to deploy a service via a simple JSON spec
> - YARN-4757[3]. Extending today's service registry with a simple DNS
> service to enable users to discover services deployed on YARN via standard
> DNS lookup
> - YARN-6419[4]. UI support for native-services on the new YARN UI
> All these new services are optional, sit outside of the existing system,
> and have no impact on the existing system if disabled.
>
> Special thanks to a team of folks who worked hard towards this: Billie
> Rinaldi, Gour Saha, Vinod Kumar Vavilapalli, Jonathan Maron, Rohith Sharma
> K S, Sunil G, Akhil PB, Eric Yang. This effort could not be possible
> without their ideas and hard work.
> Also thanks Allen for some review and verifications.
>
> Thanks,
> Jian
>
> [1] https://issues.apache.org/jira/browse/YARN-5079
> [2] https://issues.apache.org/jira/browse/YARN-4793
> [3] https://issues.apache.org/jira/browse/YARN-4757
> [4] https://issues.apache.org/jira/browse/YARN-6419
>
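
For readers unfamiliar with the JSON spec referred to under YARN-4793 above,
here is a rough sketch of the shape such a spec takes; the field names are an
illustrative assumption rather than something taken from this thread, so check
the YARN-4793 documentation for the actual schema.

# Write an example service spec (field names are an assumption for illustration).
cat > sleeper.json <<'EOF'
{
  "name": "sleeper-service",
  "components": [
    {
      "name": "sleeper",
      "number_of_containers": 2,
      "launch_command": "sleep 900000",
      "resource": { "cpus": 1, "memory": "256" }
    }
  ]
}
EOF
# The spec is then submitted to the REST API server from YARN-4793
# (endpoint and CLI invocation omitted here).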


Re: [VOTE] Release Apache Hadoop 3.1.2 - RC1

2019-02-04 Thread Billie Rinaldi
Hey Sunil and Wangda, thanks for the RC. The source tarball has a
patchprocess directory with some Yetus code in it. Also, the file
dev-support/bin/create-release has the following line added:
  export GPG_AGENT_INFO="/home/sunilg/.gnupg/S.gpg-agent:$(pgrep
gpg-agent):1"

I think we are probably due for an overall review of LICENSE and NOTICE. I
saw some idiosyncrasies there but nothing that looked like a blocker.
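
For reference, the kind of checks behind these observations, as a sketch (the
exact artifact file names under the RC directory are assumptions):

# Compute the digest locally and compare it with the published checksum file.
sha512sum hadoop-3.1.2-src.tar.gz
cat hadoop-3.1.2-src.tar.gz.sha512

# Verify the detached signature against the release manager's public key.
gpg --verify hadoop-3.1.2-src.tar.gz.asc hadoop-3.1.2-src.tar.gz

# Look for stray build output, e.g. the patchprocess directory noted above.
tar -tzf hadoop-3.1.2-src.tar.gz | grep -i patchprocess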

On Mon, Jan 28, 2019 at 10:20 PM Sunil G  wrote:

> Hi Folks,
>
> On behalf of Wangda, we have an RC1 for Apache Hadoop 3.1.2.
>
> The artifacts are available here:
> http://home.apache.org/~sunilg/hadoop-3.1.2-RC1/
>
> The RC tag in git is release-3.1.2-RC1:
> https://github.com/apache/hadoop/commits/release-3.1.2-RC1
>
> The maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1215
>
> This vote will run 5 days from now.
>
> 3.1.2 contains 325 [1] fixed JIRA issues since 3.1.1.
>
> We have done testing with a pseudo cluster and distributed shell job.
>
> My +1 to start.
>
> Best,
> Wangda Tan and Sunil Govindan
>
> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.2)
> ORDER BY priority DESC
>


Re: [VOTE] Release Apache Hadoop 3.1.2 - RC1

2019-02-05 Thread Billie Rinaldi
Thanks Sunil, the new source tarball matches the RC tag and its checksum
and signature look good.

Billie

On Tue, Feb 5, 2019 at 10:50 AM Sunil G  wrote:

> Thanks Billie for pointing that out.
> I have updated the source by removing patchprocess and the extra line in
> create-release.
>
> I have also updated the checksum.
>
> @bil...@apache.org   @Wangda Tan 
> please help verify this change once more.
>
> Thanks
> Sunil
>
> On Tue, Feb 5, 2019 at 5:23 AM Billie Rinaldi 
> wrote:
>
>> Hey Sunil and Wangda, thanks for the RC. The source tarball has a
>> patchprocess directory with some Yetus code in it. Also, the file
>> dev-support/bin/create-release has the following line added:
>>   export GPG_AGENT_INFO="/home/sunilg/.gnupg/S.gpg-agent:$(pgrep
>> gpg-agent):1"
>>
>> I think we are probably due for an overall review of LICENSE and NOTICE.
>> I saw some idiosyncrasies there but nothing that looked like a blocker.
>>
>> On Mon, Jan 28, 2019 at 10:20 PM Sunil G  wrote:
>>
>>> Hi Folks,
>>>
>>> On behalf of Wangda, we have an RC1 for Apache Hadoop 3.1.2.
>>>
>>> The artifacts are available here:
>>> http://home.apache.org/~sunilg/hadoop-3.1.2-RC1/
>>>
>>> The RC tag in git is release-3.1.2-RC1:
>>> https://github.com/apache/hadoop/commits/release-3.1.2-RC1
>>>
>>> The maven artifacts are available via repository.apache.org at
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1215
>>>
>>> This vote will run 5 days from now.
>>>
>>> 3.1.2 contains 325 [1] fixed JIRA issues since 3.1.1.
>>>
>>> We have done testing with a pseudo cluster and distributed shell job.
>>>
>>> My +1 to start.
>>>
>>> Best,
>>> Wangda Tan and Sunil Govindan
>>>
>>> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.2)
>>> ORDER BY priority DESC
>>>
>>


Re: [DISCUSS] Hadoop 3.3.1 release

2021-04-19 Thread Billie Rinaldi
I was thinking of backporting HADOOP-16948 for 3.3.1.

Billie
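
For context, a typical backport to the release branch looks roughly like this,
as a sketch (the commit hash is a placeholder; pick the actual trunk commit(s)
for HADOOP-16948):

# Cherry-pick the trunk commit(s) onto the release branch, keeping a
# reference to the original commit with -x.
git checkout branch-3.3
git cherry-pick -x <trunk-commit-hash>

# Re-run the tests of the touched module (the ABFS client lives in
# hadoop-tools/hadoop-azure) before pushing.
mvn -pl hadoop-tools/hadoop-azure test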

On Mon, Apr 19, 2021 at 1:33 AM Wei-Chiu Chuang
 wrote:

> Hello, reviving this thread.
>
> I created a dashboard for Hadoop 3.3.1 release.
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12336122
> Also a jira to track the release work: HADOOP-17647
> 
>
> We are currently at 5 release blockers and 3 critical issues for Hadoop
> 3.3.1. I'll go through each of them and push out the ones that aren't
> really blocking us.
>
> If you believe there are more features/bug fixes we should include in 3.3.1
> (I spent the past few weeks backporting jiras but I'm sure I missed some)
> please shout out.
>
> Meanwhile, I believe we need to release hadoop-thirdparty 1.1.0 too. There
> are a number of tasks to be done there as well. Let's start another thread
> for the hadoop-thirdparty 1.1.0 release.
>
> On Mon, Mar 15, 2021 at 7:04 PM hemanth boyina  >
> wrote:
>
> > Hi Steve and Wei-Chiu
> >
> > Regarding IPv6: a few years back we rebased HADOOP-11890 onto trunk and
> > tried it out with IPv6. We ran into some issues and made the changes
> > required for IPv6 to work. After that, we tested the IPv6 changes
> > rigorously on both IPv4 and IPv6 machines. These changes have been
> > deployed in a production cluster for quite some time now and are in
> > extensive use.
> >
> > I think it's a good time to add this feature.
> >
> > Thanks
> > Hemanth Boyina
> >
> >
> >
> > On Thu, 11 Mar 2021, 10:22 Vinayakumar B, 
> wrote:
> >
> > > Hi David,
> > >
> > > >> Still hoping for help here:
> > >
> > > >> https://issues.apache.org/jira/browse/HDFS-15790
> > >
> > > I will raise a PR for the said solution soon (in a day or two).
> > >
> > > -Vinay
> > >
> > > On Thu, 11 Mar 2021 at 5:39 AM, David  wrote:
> > >
> > > > Hello,
> > > >
> > > > Still hoping for help here:
> > > >
> > > > https://issues.apache.org/jira/browse/HDFS-15790
> > > >
> > > > Looks like it has been worked on, not sure how to best move it
> forward.
> > > >
> > > > On Wed, Mar 10, 2021, 12:21 PM Steve Loughran
> > >  > > > >
> > > > wrote:
> > > >
> > > > > I'm going to argue it's too late to do IPv6 support close to a
> > > > > release, as it's best if it's on developer machines for some time
> > > > > to let all quirks surface. It's not so much IPv6 itself, but do we
> > > > > cause any regressions on IPv4?
> > > > >
> > > > > But: it can/should go into trunk and stabilize there
> > > > >
> > > > > On Thu, 4 Mar 2021 at 03:52, Muralikrishna Dmmkr <
> > > > > muralikrishna.dm...@gmail.com> wrote:
> > > > >
> > > > > > Hi Brahma,
> > > > > >
> > > > > > I missed mentioning the IPv6 feature in my last mail. Support for
> > > > > > IPv6 has been in development since 2015, and we have done a good
> > > > > > amount of testing at our organisation; the feature is stable and
> > > > > > has been used extensively by our customers over the last year. I
> > > > > > think it is a good time to add IPv6 support to 3.3.1.
> > > > > >
> > > > > > https://issues.apache.org/jira/browse/HADOOP-11890
> > > > > >
> > > > > > Thanks
> > > > > > D M Murali Krishna Reddy
> > > > > >
> > > > > > On Wed, Feb 24, 2021 at 9:13 AM Muralikrishna Dmmkr <
> > > > > > muralikrishna.dm...@gmail.com> wrote:
> > > > > >
> > > > > > > Hi Brahma,
> > > > > > >
> > > > > > > Can we have this new feature, "YARN Registry based AM discovery
> > > > > > > with retry and in-flight task persistent via JHS", in the
> > > > > > > upcoming 3.3.1 release? I have also attached a test report in
> > > > > > > the JIRA below.
> > > > > > >
> > > > > > > https://issues.apache.org/jira/browse/MAPREDUCE-6726
> > > > > > >
> > > > > > >
> > > > > > > Thanks,
> > > > > > > D M Murali Krishna Reddy
> > > > > > >
> > > > > > > On Tue, Feb 23, 2021 at 10:11 AM Brahma Reddy Battula <
> > > > > bra...@apache.org
> > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > >> Hi Bilwa,
> > > > > > >>
> > > > > > >> I have commented on the JIRAs you mentioned. Based on the
> > > > > > >> stability we can plan this, but it needs to be merged ASAP.
> > > > > > >>
> > > > > > >>
> > > > > > >>
> > > > > > >> On Fri, Feb 19, 2021 at 5:20 PM bilwa st 
> > > wrote:
> > > > > > >>
> > > > > > >> > Hi Brahma,
> > > > > > >> >
> > > > > > >> > Can we have the features below in the 3.3.1 release? We have
> > > > > > >> > been using these features for a long time. They are stable
> > > > > > >> > and tested in bigger clusters.
> > > > > > >> >
> > > > > > >> > 1. Container reuse -
> > > > > > >> https://issues.apache.org/jira/browse/MAPREDUCE-6749
> > > > > > >> > 2. Speculative attempts should not run on the same node -
> > > > > > >> > https://issues.apache.org/jira/browse/MAPREDUCE-7169
> > > > > > >> >
> > > > > > >> > T

Re: Time to address the Guava version problem

2014-09-24 Thread Billie Rinaldi
The use of an unnecessarily old dependency encourages problems like
HDFS-7040.  The current Guava dependency is a big problem for downstream
apps and I'd really like to see it addressed.

On Tue, Sep 23, 2014 at 2:09 PM, Steve Loughran 
wrote:

> I'm using Curator elsewhere; it does log a lot (as does the ZK client), but
> it solves a lot of problems. It's being adopted more downstream too.
>
> I'm wondering if we can move the code to the extent we know it works with
> Guava 16, with the hadoop core being 16-compatible, but not actually
> migrated to 16.x only. Then hadoop ships with 16 for curator & downstream
> apps, but we say "you can probably roll back to 11 provided you don't use
> features x-y-z".
>
> On 23 September 2014 21:55, Robert Kanter  wrote:
>
> > At the same time, not being able to use Curator will require a lot of
> > extra code, a lot of which we probably already have from the
> > ZKRMStateStore, but it's not available to use in hadoop-auth. We'd need
> > to create our own ZK libraries that Hadoop components can use, but (a)
> > that's going to take a while, and (b) it seems silly to reinvent the
> > wheel when Curator already does all this.
> >
> > I agree that upgrading Guava will be a compatibility problem though...
> >
> > On Tue, Sep 23, 2014 at 9:30 AM, Sandy Ryza 
> > wrote:
> >
> > > If we've broken compatibility in branch-2, that's a bug that we need to
> > > fix. HADOOP-10868 has not yet made it into a release; I don't see it as
> > > a justification for solidifying the breakage.
> > >
> > > -1 to upgrading Guava in branch-2.
> > >
> > > On Tue, Sep 23, 2014 at 3:06 AM, Steve Loughran <
> ste...@hortonworks.com>
> > > wrote:
> > >
> > > > +1 to upgrading Guava. Irrespective of downstream apps, the Hadoop
> > > > source tree is now internally inconsistent.
> > > >
> > > > On 22 September 2014 17:56, Sangjin Lee  wrote:
> > > >
> > > > > I agree that a more robust solution is to have better classloading
> > > > > isolation.
> > > > >
> > > > > Still, IMHO Guava (and possibly protobuf as well) sticks out like a
> > > > > sore thumb. There are just too many issues in trying to support both
> > > > > Guava 11 and Guava 16. Independent of what we may do with the
> > > > > classloading isolation, we should still consider upgrading Guava.
> > > > >
> > > > > My 2 cents.
> > > > >
> > > > > On Sun, Sep 21, 2014 at 3:11 PM, Karthik Kambatla <
> > ka...@cloudera.com>
> > > > > wrote:
> > > > >
> > > > > > Upgrading the Guava version is tricky. While it helps in many
> > > > > > cases, it can break existing applications/deployments. I understand
> > > > > > we do not have a policy for updating dependencies, but still we
> > > > > > should be careful with Guava.
> > > > > >
> > > > > > I would be more inclined towards a more permanent solution to this
> > > > > > problem: how about prioritizing classpath isolation so applications
> > > > > > aren't affected by Hadoop dependency updates at all? I understand
> > > > > > that will also break user applications, but it might be the driving
> > > > > > feature for Hadoop 3.0?
> > > > > >
> > > > > > On Fri, Sep 19, 2014 at 5:13 PM, Sangjin Lee 
> > > wrote:
> > > > > >
> > > > > > > I would also agree on upgrading Guava. Yes, I am aware of the
> > > > > > > potential impact on customers who might rely on Hadoop bringing
> > > > > > > in Guava 11. However, IMHO the balance tipped over to the other
> > > > > > > side a while ago; i.e. I think there are far more people using
> > > > > > > Guava 16 in their code and scrambling to make things work than
> > > > > > > the other way around.
> > > > > > >
> > > > > > > On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran <
> > > > > ste...@hortonworks.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > I know we've been ignoring the Guava version problem, but
> > > > > > > > HADOOP-10868 added a transitive dependency on Guava 16 by way
> > > > > > > > of Curator 2.6.
> > > > > > > >
> > > > > > > > Maven currently forces the build to use Guava 11.0.2, but this
> > > > > > > > is hiding, at compile time, all code paths from Curator which
> > > > > > > > may use classes & methods that aren't there.
> > > > > > > >
> > > > > > > > I need Curator for my own work (2.4.1 & Guava 14.0 was what I'd
> > > > > > > > been using), so I don't think we can go back.
> > > > > > > >
> > > > > > > > HADOOP-11102 covers the problem, but doesn't propose a specific
> > > > > > > > solution. To me, the one that seems most likely to work is:
> > > > > > > > update Guava.
> > > > > > > >
> > > > > > > > -steve
> > > > > > > >
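
A quick way to see exactly where the conflicting Guava versions come from in a
tree like this, as a diagnostic sketch (run from the root of the source tree):

# List every dependency path that pulls in Guava; useful for spotting what
# Curator and other transitive dependencies expect versus the managed 11.0.2.
mvn dependency:tree -Dincludes=com.google.guava:guava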

Re: Git repo ready to use

2014-09-24 Thread Billie Rinaldi
I fixed it.

On Wed, Sep 24, 2014 at 8:00 PM, Ted Yu  wrote:

> Billie found out that Hadoop-Common-2-Commit should be the build that
> publishes artifacts.
>
> Thanks Billie.
>
> On Wed, Sep 24, 2014 at 4:20 PM, Ted Yu  wrote:
>
> > FYI
> >
> > I made some changes to:
> > https://builds.apache.org/view/All/job/Hadoop-branch2
> >
> > because until this morning it was using svn to build.
> >
> > Would 2.6.0-SNAPSHOT maven artifacts be updated after the build ?
> >
> > Cheers
> >
> >
> > On Mon, Sep 15, 2014 at 11:14 AM, Todd Lipcon  wrote:
> >
> >> Hey all,
> >>
> >> For those of you who like to see the entire history of a file going back
> >> to
> >> 2006, I found I had to add a new graft to .git/info/grafts:
> >>
> >> # Project un-split in new writable git repo
> >> a196766ea07775f18ded69bd9e8d239f8cfd3ccc
> >> 928d485e2743115fe37f9d123ce9a635c5afb91a
> >> cd66945f62635f589ff93468e94c0039684a8b6d
> >> 77f628ff5925c25ba2ee4ce14590789eb2e7b85b
> >>
> >> FWIW, my entire file now contains:
> >>
> >> # Project split
> >> 5128a9a453d64bfe1ed978cf9ffed27985eeef36
> >> 6c16dc8cf2b28818c852e95302920a278d07ad0c
> >> 6a3ac690e493c7da45bbf2ae2054768c427fd0e1
> >> 6c16dc8cf2b28818c852e95302920a278d07ad0c
> >> 546d96754ffee3142bcbbf4563c624c053d0ed0d
> >> 6c16dc8cf2b28818c852e95302920a278d07ad0c
> >> 4e569e629a98a4ef5326e5d25a84c7d57b5a8f7a
> >> c78078dd2283e2890018ff0e87d751c86163f99f
> >>
> >> # Project un-split in new writable git repo
> >> a196766ea07775f18ded69bd9e8d239f8cfd3ccc
> >> 928d485e2743115fe37f9d123ce9a635c5afb91a
> >> cd66945f62635f589ff93468e94c0039684a8b6d
> >> 77f628ff5925c25ba2ee4ce14590789eb2e7b85b
> >>
> >> which seems to do a good job for me (not sure if the first few lines are
> >> necessary anymore in the latest world)
> >>
> >> -Todd
> >>
> >>
> >>
> >> On Fri, Sep 12, 2014 at 11:31 AM, Colin McCabe 
> >> wrote:
> >>
> >> > It's an issue with test-patch.sh.  See
> >> > https://issues.apache.org/jira/browse/HADOOP-11084
> >> >
> >> > best,
> >> > Colin
> >> >
> >> > On Mon, Sep 8, 2014 at 3:38 PM, Andrew Wang  >
> >> > wrote:
> >> > > We're still not seeing findbugs results show up on precommit runs. I
> >> > > see that we're archiving "../patchprocess/*", and Ted thinks that
> >> > > since it's not in $WORKSPACE it's not getting picked up. Can we get
> >> > > confirmation of this issue? If so, we could just add "patchprocess"
> >> > > to the toplevel .gitignore.
> >> > >
> >> > > On Thu, Sep 4, 2014 at 8:54 AM, Sangjin Lee 
> wrote:
> >> > >
> >> > >> That's good to know. Thanks.
> >> > >>
> >> > >>
> >> > >> On Wed, Sep 3, 2014 at 11:15 PM, Vinayakumar B <
> >> vinayakum...@apache.org
> >> > >
> >> > >> wrote:
> >> > >>
> >> > >> > I think it's still pointing to the old svn repository, which is
> >> > >> > just read-only now.
> >> > >> >
> >> > >> > You can use latest mirror:
> >> > >> > https://github.com/apache/hadoop
> >> > >> >
> >> > >> > Regards,
> >> > >> > Vinay
> >> > >> > On Sep 4, 2014 11:37 AM, "Sangjin Lee"  wrote:
> >> > >> >
> >> > >> > > It seems like the github mirror at
> >> > >> > > https://github.com/apache/hadoop-common
> >> > >> > > has stopped getting updates as of 8/22. Could this mirror have
> >> > >> > > been broken by the git transition?
> >> > >> > >
> >> > >> > > Thanks,
> >> > >> > > Sangjin
> >> > >> > >
> >> > >> > >
> >> > >> > > On Fri, Aug 29, 2014 at 11:51 AM, Ted Yu 
> >> > wrote:
> >> > >> > >
> >> > >> > > > From
> >> > >> > > > https://builds.apache.org/job/Hadoop-hdfs-trunk/1854/console :
> >> > >> > > >
> >> > >> > > > ERROR: No artifacts found that match the file pattern
> >> > >> > > > "trunk/hadoop-hdfs-project/*/target/*.tar.gz". Configuration
> >> > >> > > > error? ERROR <http://stacktrace.jenkins-ci.org/search?query=ERROR>:
> >> > >> > > > 'trunk/hadoop-hdfs-project/*/target/*.tar.gz' doesn't match
> >> > >> > > > anything, but 'hadoop-hdfs-project/*/target/*.tar.gz' does.
> >> > >> > > > Perhaps that's what you mean?
> >> > >> > > >
> >> > >> > > >
> >> > >> > > > I corrected the path to hdfs tar ball.
> >> > >> > > >
> >> > >> > > >
> >> > >> > > > FYI
> >> > >> > > >
> >> > >> > > >
> >> > >> > > >
> >> > >> > > > On Fri, Aug 29, 2014 at 8:48 AM, Alejandro Abdelnur <
> >> > >> t...@cloudera.com
> >> > >> > >
> >> > >> > > > wrote:
> >> > >> > > >
> >> > >> > > > > It seems we missed updating the HADOOP precommit job to use
> >> > >> > > > > Git; it was still using SVN. I've just updated it.
> >> > >> > > > >
> >> > >> > > > > thx
> >> > >> > > > >
> >> > >> > > > >
> >> > >> > > > > On Thu, Aug 28, 2014 at 9:26 PM, Ted Yu <
> yuzhih...@gmail.com
> >> >
> >> > >> wrote:
> >> > >> > > > >
> >> > >> > > > > > Currently patchprocess/ (contents shown below) is one
> >> > >> > > > > > level higher than ${WORKSPACE}
> >> > >> > > > > >
> >> > >> > > > > > diffJavadocWarnings.txt
> >> > >> > > >  newPatchFindbugsWarningshadoop-hdfs.html
> >> > 
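
Back to Todd's graft note above: a sketch of applying and checking the graft
in a clone of the new writable repo. Each line of .git/info/grafts is
"<commit> <parent...>"; the pairing below assumes the mail wrapped each
original graft line onto two lines, so verify the hashes before relying on it.

# Append the "un-split" grafts (pairing of commit and parent is an assumption
# based on how the mail was wrapped).
echo 'a196766ea07775f18ded69bd9e8d239f8cfd3ccc 928d485e2743115fe37f9d123ce9a635c5afb91a' >> .git/info/grafts
echo 'cd66945f62635f589ff93468e94c0039684a8b6d 77f628ff5925c25ba2ee4ce14590789eb2e7b85b' >> .git/info/grafts

# Confirm the stitched history now reaches back to the project's early commits.
git log --reverse --date=short --format='%ad %h %s' | head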

Re: Git repo ready to use

2014-09-25 Thread Billie Rinaldi
I tried to fix up the rest of the trunk and branch-2 Jenkins builds, too.

On Wed, Sep 24, 2014 at 8:03 PM, Billie Rinaldi 
wrote:

> I fixed it.
>
> On Wed, Sep 24, 2014 at 8:00 PM, Ted Yu  wrote:
>
>> Billie found out that Hadoop-Common-2-Commit should be the build that
>> publishes artifacts.
>>
>> Thanks Billie.
>>
>> On Wed, Sep 24, 2014 at 4:20 PM, Ted Yu  wrote:
>>
>> > FYI
>> >
>> > I made some changes to:
>> > https://builds.apache.org/view/All/job/Hadoop-branch2
>> >
>> > because until this morning it was using svn to build.
>> >
>> > Would 2.6.0-SNAPSHOT maven artifacts be updated after the build ?
>> >
>> > Cheers
>> >
>> >
>> > On Mon, Sep 15, 2014 at 11:14 AM, Todd Lipcon 
>> wrote:
>> >
>> >> Hey all,
>> >>
>> >> For those of you who like to see the entire history of a file going
>> back
>> >> to
>> >> 2006, I found I had to add a new graft to .git/info/grafts:
>> >>
>> >> # Project un-split in new writable git repo
>> >> a196766ea07775f18ded69bd9e8d239f8cfd3ccc
>> >> 928d485e2743115fe37f9d123ce9a635c5afb91a
>> >> cd66945f62635f589ff93468e94c0039684a8b6d
>> >> 77f628ff5925c25ba2ee4ce14590789eb2e7b85b
>> >>
>> >> FWIW, my entire file now contains:
>> >>
>> >> # Project split
>> >> 5128a9a453d64bfe1ed978cf9ffed27985eeef36
>> >> 6c16dc8cf2b28818c852e95302920a278d07ad0c
>> >> 6a3ac690e493c7da45bbf2ae2054768c427fd0e1
>> >> 6c16dc8cf2b28818c852e95302920a278d07ad0c
>> >> 546d96754ffee3142bcbbf4563c624c053d0ed0d
>> >> 6c16dc8cf2b28818c852e95302920a278d07ad0c
>> >> 4e569e629a98a4ef5326e5d25a84c7d57b5a8f7a
>> >> c78078dd2283e2890018ff0e87d751c86163f99f
>> >>
>> >> # Project un-split in new writable git repo
>> >> a196766ea07775f18ded69bd9e8d239f8cfd3ccc
>> >> 928d485e2743115fe37f9d123ce9a635c5afb91a
>> >> cd66945f62635f589ff93468e94c0039684a8b6d
>> >> 77f628ff5925c25ba2ee4ce14590789eb2e7b85b
>> >>
>> >> which seems to do a good job for me (not sure if the first few lines
>> are
>> >> necessary anymore in the latest world)
>> >>
>> >> -Todd
>> >>
>> >>
>> >>
>> >> On Fri, Sep 12, 2014 at 11:31 AM, Colin McCabe > >
>> >> wrote:
>> >>
>> >> > It's an issue with test-patch.sh.  See
>> >> > https://issues.apache.org/jira/browse/HADOOP-11084
>> >> >
>> >> > best,
>> >> > Colin
>> >> >
>> >> > On Mon, Sep 8, 2014 at 3:38 PM, Andrew Wang <
>> andrew.w...@cloudera.com>
>> >> > wrote:
> >> >> > > We're still not seeing findbugs results show up on precommit
> >> >> > > runs. I see that we're archiving "../patchprocess/*", and Ted
> >> >> > > thinks that since it's not in $WORKSPACE it's not getting picked
> >> >> > > up. Can we get confirmation of this issue? If so, we could just
> >> >> > > add "patchprocess" to the toplevel .gitignore.
>> >> > >
>> >> > > On Thu, Sep 4, 2014 at 8:54 AM, Sangjin Lee 
>> wrote:
>> >> > >
>> >> > >> That's good to know. Thanks.
>> >> > >>
>> >> > >>
>> >> > >> On Wed, Sep 3, 2014 at 11:15 PM, Vinayakumar B <
>> >> vinayakum...@apache.org
>> >> > >
>> >> > >> wrote:
>> >> > >>
> >> >> > >> > I think it's still pointing to the old svn repository, which
> >> >> > >> > is just read-only now.
>> >> > >> >
>> >> > >> > You can use latest mirror:
>> >> > >> > https://github.com/apache/hadoop
>> >> > >> >
>> >> > >> > Regards,
>> >> > >> > Vinay
>> >> > >> > On Sep 4, 2014 11:37 AM, "Sangjin Lee" 
>> wrote:
>> >> > >> >
>> >> > >> > > It seems like the github mirror at
>> >> > >> > https://github.com/apache/hadoop-common
>> >> > >&

[jira] [Created] (HADOOP-16948) ABFS: Support single writer dirs

2020-03-30 Thread Billie Rinaldi (Jira)
Billie Rinaldi created HADOOP-16948:
---

 Summary: ABFS: Support single writer dirs
 Key: HADOOP-16948
 URL: https://issues.apache.org/jira/browse/HADOOP-16948
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


This would allow some directories to be configured as single writer 
directories. The ABFS driver would obtain a lease when creating or opening a 
file for writing and would automatically renew the lease and release the lease 
when closing the file.
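
As a usage sketch only: the configuration key name below is an assumption for
illustration (it may differ from what the patch finally uses), and the
account, container, and paths are placeholders.

# Hypothetical key name: directories whose files should have a single writer,
# so the ABFS driver takes out a lease and renews it while the file is open.
hadoop fs -Dfs.azure.infinite-lease.directories=/logs,/staging \
    -put app.log abfs://mycontainer@myaccount.dfs.core.windows.net/logs/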



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16948) ABFS: Support infinite lease dirs

2021-04-20 Thread Billie Rinaldi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi resolved HADOOP-16948.
-
Fix Version/s: 3.4.0
   3.3.1
   Resolution: Fixed

> ABFS: Support infinite lease dirs
> -
>
> Key: HADOOP-16948
> URL: https://issues.apache.org/jira/browse/HADOOP-16948
> Project: Hadoop Common
>  Issue Type: Sub-task
>    Reporter: Billie Rinaldi
>        Assignee: Billie Rinaldi
>Priority: Minor
>  Labels: abfsactive, pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> This would allow some directories to be configured as single writer 
> directories. The ABFS driver would obtain a lease when creating or opening a 
> file for writing and would automatically renew the lease and release the 
> lease when closing the file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org