On Mon, Sep 28, 2015 at 12:52 AM, Steve Loughran <ste...@hortonworks.com> wrote:
>
> the jenkins machines are shared across multiple projects; cut the executors 
> to 1/node and then everyone's performance drops, including the 
> time-to-complete of all jenkins patch builds, which is one of the goals.

Hi Steve,

Just to be clear, the proposal wasn't to cut the executors to 1 per
node, but to have multiple Docker containers per node (perhaps 3 or 4)
and run each executor in an isolated container.  At that point,
whatever badness Maven does on the .m2 stops being a problem for
concurrently running jobs.
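As a minimal sketch of what that per-executor isolation could look like: the image name, mount paths, and container names below are purely illustrative (the real ASF Jenkins setup would differ), and the commands are echoed as a dry run rather than actually invoking Docker.

```shell
# Hypothetical sketch only: one container per executor, each mounting its own
# .m2 cache so concurrent Maven builds never race on a shared local repo.
# Image name and paths are made up; commands are printed, not executed.
for i in 1 2 3; do
  echo docker run -d \
    --name "precommit-exec-$i" \
    -v "/var/jenkins/m2-cache-$i:/root/.m2" \
    hadoop-build-env sleep infinity
done
```

Each container then sees a private /root/.m2, so whatever Maven does to its
local repository is invisible to the other executors on the same node.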

I guess I don't feel that strongly about this, but the additional
complexity of the other solutions (like running a "find" command in
.m2, or changing artifactID) seems like a disadvantage compared to
just using multiple containers.  And there may be other race
conditions here that we're not aware of... like a TOCTOU between
checking for a jar in .m2 and downloading it, for example.  The
Dockerized solution skips all those potential failure modes and
complexity.
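For comparison, the unique-version alternative Steve describes below might look roughly like this. The version scheme and the use of Jenkins's BUILD_NUMBER are illustrative assumptions, and the Maven command is echoed as a dry run rather than executed.

```shell
# Hypothetical sketch: stamp each precommit build with a unique -SNAPSHOT
# version so concurrent builds sharing one ~/.m2 install distinct artifacts.
# BUILD_NUMBER comes from Jenkins; defaulted here so the script runs anywhere.
BUILD_NUMBER="${BUILD_NUMBER:-42}"
NEW_VERSION="3.0.0-precommit-${BUILD_NUMBER}-SNAPSHOT"
echo "mvn versions:set -DnewVersion=${NEW_VERSION} -DgenerateBackupPoms=false"
```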

cheers,
Colin


>
> https://builds.apache.org/computer/
>
> Like I said before: I don't think we need one mvn repo per build. All we 
> need is a unique artifact version tag on generated files. Ivy builds do that 
> for you; maven requires the build version in all the POMs to have a 
> -SNAPSHOT tag, which tells it to poll the remote repos for updates every day.
>
> We can build local hadoop releases with whatever version number we desire, 
> simply by using "mvn versions:set" to update the version before the build. 
> Do that and you can share the same repo, with different artifacts generated 
> and referenced on every build. We don't need to play with >1 repo, which can 
> be pretty expensive. A du -h ~/.m2 tells me I have an 11GB local cache.
>
>
>> On 26 Sep 2015, at 06:43, Vinayakumar B <vinayakum...@apache.org> wrote:
>>
>> Thanks Andrew,
>>
>> Maybe we can try reducing it to 1 executor and run that way for some time.
>> I think we also need to check what other jobs (hadoop ecosystem jobs) run
>> on the Hadoop nodes. As HADOOP-11984 and HDFS-9139 are on the way to
>> reducing build time dramatically by enabling parallel tests, HDFS and
>> COMMON precommit builds will not block other builds for long.
>>
>> I don't have access to the jenkins configuration to check this. If I can
>> get access, I can reduce it myself and verify.
>>
>>
>> -Vinay
>>
>> On Sat, Sep 26, 2015 at 7:49 AM, Andrew Wang <andrew.w...@cloudera.com>
>> wrote:
>>
>>> Thanks for checking, Vinay. As a temporary workaround, could we reduce the #
>>> of execs per node to 1? Our build queues are pretty short right now, so I
>>> don't think it would be too bad.
>>>
>>> Best,
>>> Andrew
>>>
>>> On Wed, Sep 23, 2015 at 12:18 PM, Vinayakumar B <vinayakum...@apache.org>
>>> wrote:
>>>
>>>> In case we are going to have a separate repo for each executor:
>>>>
>>>> I have checked: each jenkins node is allocated 2 executors, so we only
>>>> need to create one more replica.
>>>>
>>>> Regards,
>>>> Vinay
>>>>
>>>> On Wed, Sep 23, 2015 at 7:33 PM, Steve Loughran <ste...@hortonworks.com>
>>>> wrote:
>>>>
>>>>>
>>>>>> On 22 Sep 2015, at 16:39, Colin P. McCabe <cmcc...@apache.org>
>>> wrote:
>>>>>>
>>>>>>> ANNOUNCEMENT: new patches which contain hard-coded ports in test runs
>>>>>>> will henceforth be reverted. Jenkins matters more than the 30s of your
>>>>>>> time it takes to use the free port finder methods. Same for any
>>>>>>> hard-coded paths in filesystems.
>>>>>>
>>>>>> +1.  Can you add this to HowToContribute on the wiki?  Or should we
>>>>>> vote on it first?
>>>>>
>>>>> I don't think we need to vote on it: hard-coded ports should be
>>>>> something we veto on patches anyway.
>>>>>
>>>>> In https://issues.apache.org/jira/browse/HADOOP-12143 I propose having
>>>>> a better style guide in the docs.
>>>>>
>>>>>
>>>>>
>>>>
>>>
>
