On Mon, Feb 16, 2015 at 4:16 AM, Felix Meschberger <fmesc...@adobe.com> wrote:
> - private@sling, infra@
>
> Hi
>
> Do we know what causes this? If we don’t, I suggest we stop the builds for
> now and have someone investigate.

+1

IIRC we set up buildbot because Jenkins was not stable at the time, but IMO
it has gotten much better over the last few months.

Robert

>
> Maybe it is some strange indexing configuration in Jackrabbit?
>
> Regards
> Felix
>
>> On 13.02.2015 at 05:04, Mark Thomas <ma...@apache.org> wrote:
>>
>> Sling developers,
>>
>> We have just had a recurrence of the same problem.
>>
>> I will clean this up again this time, but if it happens once more I will
>> simply remove the Sling builds from buildbot.
>>
>> Mark
>>
>> On 22/01/2015 23:03, Mark Thomas wrote:
>>> Sling developers,
>>>
>>> The sling-trunk CI build managed to kill one of the buildbot slaves by
>>> filling this directory with files until the file system ran out of inodes:
>>> /home/buildslave3/slave3/sling-trunk/build/testing/samples/integration-tests/sling/default/jackrabbit/workspaces/default/index
>>>
>>> There were so many files that ls hung for 5+ minutes without any output.
>>>
>>> I have started to clean this up (rm -rf
>>> /home/buildslave3/slave3/sling-trunk), and it looks like it is going to
>>> take at least several hours to complete.
>>>
>>> The next CI build should re-check out sling-trunk, so your CI builds
>>> should be unaffected. However, please could you take a look at the
>>> buildbot configuration for this build and figure out a) why this
>>> happened and b) how to stop it happening again.
>>>
>>> Cheers,
>>>
>>> Mark
>>> on behalf of the ASF Infra team
>>>
>>
>
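
For whoever picks this up: a quick way to confirm inode exhaustion on the
slave and to find the directory producing the files is something along these
lines. This is only a rough sketch: the df/find options assume a GNU/Linux
build slave, and the path is the one from Mark's mail above.

  # Show inode usage per filesystem; IUse% at 100% means no new files can be created
  df -i

  # Count the files in the Jackrabbit index directory Mark identified
  find /home/buildslave3/slave3/sling-trunk/build/testing/samples/integration-tests/sling/default/jackrabbit/workspaces/default/index -type f | wc -l

  # List the directories under the build tree with the most files, to spot other offenders
  find /home/buildslave3/slave3/sling-trunk -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head

If the index really is the culprit, Felix's hunch about the indexing
configuration could be checked in that workspace's workspace.xml (the
SearchIndex section); whether anything there is actually misconfigured is
just a guess at this point.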



-- 
Sent from my (old) computer
