Hi,

I opened up a report for docker: https://github.com/docker/docker/issues/31182

I ran lslocks as root and with several flags, but could not get any
more detailed output. For completeness, here is a (reduced) correct
report with a blocked make:
make            11052  POSIX    0B WRITE* 0     0   0 /dev/null
dockerd         15312 OFDLCK       READ   0     0   0 /dev...

I still think that make should behave better than blocking
indefinitely, and should be able to exit cleanly on a TERM signal (not
the case when using multiple jobs; it seems to end just one of the
blocked threads).

2017-02-18 20:33 GMT+01:00 James Cowgill <jcowg...@debian.org>:
> Hi,
>
> On 18/02/17 08:27, Norbert Lange wrote:
>> Hi,
>>
>> Sorry for messing up the years.
>> lslocks only showed make locking /dev/null, but it appears that
>> the culprit is a running dockerd daemon.
>
> lslocks shouldn't be showing make holding a lock on /dev/null, because
> it only holds the lock for a very short time. It's also not in the list
> you posted below.
>
>> I don't understand why, but with the service disabled a blocked make
>> will suddenly continue.
>
> Both make and docker are locking the same file. Only one will be able to
> obtain it and the other will probably hang (or fail in another way).
>
>> to install the service:
>> echo > /etc/apt/sources.list.d/docker.list 'deb [arch=amd64]
>> https://apt.dockerproject.org/repo/ debian-stretch main'
>> apt-get update; apt-get install docker-engine
>>
>> For completeness, the lslocks output:
>> $ lslocks
>> COMMAND           PID   TYPE   SIZE MODE  M      START        END PATH
> [...]
>> dockerd          3732 OFDLCK        READ  0          0          0 /dev...
>
> You may need to run lslocks as root to get the rest of this path.
> Assuming this is a lock on /dev/null, then this is probably a bug in
> docker rather than make. No-one should be holding long lived locks on
> "global" files like that.
>
>> dockerd          3732  FLOCK   128K WRITE 0          0          0
>> /var/lib/docker/volumes/metadata.db
>
> Thanks,
> James
>
