+ Junping, Sammi

Hi Jonathan,

Many thanks for reporting the issues and sorry for the inconvenience.

1. Shouldn't the build be looking for artifacts in

https://repository.apache.org/content/repositories/releases

rather than

https://repository.apache.org/content/repositories/snapshots
?

2. Not seeing the artifact published here either:

https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project
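
For anyone wanting a quick command-line check, something like the following
should show whether the pom resolves (a rough sketch using the standard
maven-dependency-plugin; I have not verified this exact invocation):

  mvn dependency:get \
    -Dartifact=org.apache.hadoop:hadoop-project:3.0.3:pom \
    -DremoteRepositories=https://repository.apache.org/content/repositories/releases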


Indeed, I did not see 2.9.1 there either, so I have included Sammi Chen.

Hi Junping, would you please share which step in
https://wiki.apache.org/hadoop/HowToRelease
should have published these artifacts?
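
In the meantime, a possible workaround for downstream builds like Tez might be
to declare the ASF releases repository explicitly in the pom.xml. This is only
a sketch and assumes the 3.0.3 artifacts are actually present in that
repository; the repository id below is illustrative:

  <!-- ASF releases repository (id is illustrative) -->
  <repositories>
    <repository>
      <id>apache.releases.https</id>
      <url>https://repository.apache.org/content/repositories/releases</url>
      <releases>
        <enabled>true</enabled>
      </releases>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </repository>
  </repositories>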

Thanks a lot.

--Yongjun

On Fri, Jun 15, 2018 at 10:52 PM, Jonathan Eagles <jeag...@gmail.com> wrote:

> Upgraded Tez dependency to hadoop 3.0.3 and found this issue. Anyone else
> seeing this issue?
>
> [ERROR] Failed to execute goal on project hadoop-shim: Could not resolve
> dependencies for project org.apache.tez:hadoop-shim:jar:0.10.0-SNAPSHOT:
> Failed to collect dependencies at org.apache.hadoop:hadoop-yarn-api:jar:3.0.3:
> Failed to read artifact descriptor for 
> org.apache.hadoop:hadoop-yarn-api:jar:3.0.3:
> Could not find artifact org.apache.hadoop:hadoop-project:pom:3.0.3 in
> apache.snapshots.https (https://repository.apache.org/content/repositories/snapshots)
> -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the
> command
> [ERROR]   mvn <goals> -rf :hadoop-shim
>
> Not seeing the artifact published here as well.
> https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project
>
> On Tue, Jun 12, 2018 at 6:44 PM, Yongjun Zhang <yzh...@cloudera.com>
> wrote:
>
>> Thanks Eric!
>>
>> --Yongjun
>>
>> On Mon, Jun 11, 2018 at 8:05 AM, Eric Payne <erichadoo...@yahoo.com>
>> wrote:
>>
>> > Sorry, Yongjun. My +1 is also binding
>> > +1 (binding)
>> > -Eric Payne
>> >
>> > On Friday, June 1, 2018, 12:25:36 PM CDT, Eric Payne <eric.payne1...@yahoo.com> wrote:
>> >
>> > Thanks a lot, Yongjun, for your hard work on this release.
>> >
>> > +1
>> > - Built from source
>> > - Installed on 6 node pseudo cluster
>> >
>> >
>> > Tested the following in the Capacity Scheduler:
>> > - Verified that running apps in labelled queues restricts tasks to the
>> > labelled nodes.
>> > - Verified that various queue config properties for CS are refreshable
>> > - Verified streaming jobs work as expected
>> > - Verified that user weights work as expected
>> > - Verified that FairOrderingPolicy in a CS queue will evenly assign
>> > resources
>> > - Verified running yarn shell application runs as expected
>> >
>> > On Friday, June 1, 2018, 12:48:26 AM CDT, Yongjun Zhang <yjzhan...@apache.org> wrote:
>> >
>> > Greetings all,
>> >
>> > I've created the first release candidate (RC0) for Apache Hadoop
>> > 3.0.3. This is our next maintenance release to follow up on 3.0.2. It
>> > includes about 249 important fixes and improvements, among which there
>> > are 8 blockers. See
>> > https://issues.apache.org/jira/issues/?filter=12343997
>> >
>> > The RC artifacts are available at:
>> > https://dist.apache.org/repos/dist/dev/hadoop/3.0.3-RC0/
>> >
>> > The maven artifacts are available via
>> > https://repository.apache.org/content/repositories/orgapachehadoop-1126
>> >
>> > Please try the release and vote; the vote will run for the usual 5
>> > working days, ending on 06/07/2018 PST. Would really appreciate your
>> > participation here.
>> >
>> > I bumped into quite a few issues along the way; many thanks to the
>> > people who helped, especially Sammi Chen, Andrew Wang, Junping Du, and
>> > Eddy Xu.
>> >
>> > Thanks,
>> >
>> > --Yongjun
>> >
>>
>
>
