[valgrind] [Bug 379630] false positive std::mutex problems

2017-05-08 Thread Gregory Nilsson
https://bugs.kde.org/show_bug.cgi?id=379630

--- Comment #1 from Gregory Nilsson  ---
Created attachment 105388
  --> https://bugs.kde.org/attachment.cgi?id=105388&action=edit
log

-- 
You are receiving this mail because:
You are watching all bug changes.

[valgrind] [Bug 379630] false positive std::mutex problems

2017-05-08 Thread Ivo Raisr
https://bugs.kde.org/show_bug.cgi?id=379630

Ivo Raisr  changed:

   What|Removed |Added

 CC||iv...@ivosh.net

--- Comment #2 from Ivo Raisr  ---
I get no violation when the attached example is compiled with gcc 6.3 and run
on my Ubuntu 16.10:

valgrind --tool=helgrind --hg-sanity-flags=01 -q ./mutex
m1=0xffefffc00, m2=0xffefffc28
m2=0xffefffc00, m1=0xffefffc28

What is your Valgrind version and compiler version?

Could you also try your example with DRD instead of Helgrind?

[valgrind] [Bug 379630] false positive std::mutex problems

2017-05-08 Thread Gregory Nilsson
https://bugs.kde.org/show_bug.cgi?id=379630

--- Comment #3 from Gregory Nilsson  ---
I'm running gcc 4.9.2 and valgrind 3.13 SVN; I tried 3.12 as well, with the same
result. No faults are reported when running DRD (attaching log).

I compile my test like this (seen in log.txt):
gcc -g -std=c++11 main.cc -o test -O0 -lstdc++ -lpthread

From the output of valgrind's -v flag I can see REDIR printouts only for
pthread_mutex_lock and pthread_mutex_unlock (not pthread_mutex_init/destroy).
If I remove -lpthread when compiling, helgrind reports no faults, but no REDIRs
for pthread_mutex appear either; not sure if that is a hint as to why you
saw no error?

I've tried setting SHOW_EVENTS to 1 in hg_main.c and still cannot see any calls
to evh__HG_PTHREAD_MUTEX_DESTROY_PRE. That function would call
map_locks_delete(), which removes the mutex from helgrind's internal lock map;
since it is never called, the old mutex is still found when a new mutex is
allocated at the same address.

Does helgrind detect std::mutex going out of scope on your system?

[valgrind] [Bug 379630] false positive std::mutex problems

2017-05-08 Thread Gregory Nilsson
https://bugs.kde.org/show_bug.cgi?id=379630

--- Comment #4 from Gregory Nilsson  ---
Created attachment 105390
  --> https://bugs.kde.org/attachment.cgi?id=105390&action=edit
drd

[valgrind] [Bug 379630] false positive std::mutex problems

2017-05-08 Thread Ivo Raisr
https://bugs.kde.org/show_bug.cgi?id=379630

Ivo Raisr  changed:

   What|Removed |Added

 Ever confirmed|0   |1
 Status|UNCONFIRMED |CONFIRMED

--- Comment #5 from Ivo Raisr  ---
Yes, compilation options make a huge difference.
I can reproduce the problem now.

DRD with '--trace-mutex=yes' reports this:
...
m1=0xffefffc00, m2=0xffefffc28
==23335== [1] mutex_trylock   mutex 0xffefffc00 rc 0 owner 0
==23335== [1] post_mutex_lock mutex 0xffefffc00 rc 0 owner 0
==23335== [1] mutex_trylock   mutex 0xffefffc28 rc 0 owner 0
==23335== [1] post_mutex_lock mutex 0xffefffc28 rc 0 owner 0
==23335== [1] mutex_unlockmutex 0xffefffc28 rc 1
==23335== [1] mutex_unlockmutex 0xffefffc00 rc 1
m2=0xffefffc00, m1=0xffefffc28
==23335== [1] mutex_trylock   mutex 0xffefffc28 rc 0 owner 1
==23335== [1] post_mutex_lock mutex 0xffefffc28 rc 0 owner 1
==23335== [1] mutex_trylock   mutex 0xffefffc00 rc 0 owner 1
==23335== [1] post_mutex_lock mutex 0xffefffc00 rc 0 owner 1
==23335== [1] mutex_unlockmutex 0xffefffc00 rc 1
==23335== [1] mutex_unlockmutex 0xffefffc28 rc 1

And Helgrind built with "SHOW_EVENTS 1" reports this:
m1=0xffefffc10, m2=0xffefffc38
evh__hg_PTHREAD_MUTEX_LOCK_PRE(ctid=1, mutex=0xFFEFFFC10)
evh__HG_PTHREAD_MUTEX_LOCK_POST(ctid=1, mutex=0xFFEFFFC10)
evh__hg_PTHREAD_MUTEX_LOCK_PRE(ctid=1, mutex=0xFFEFFFC38)
evh__HG_PTHREAD_MUTEX_LOCK_POST(ctid=1, mutex=0xFFEFFFC38)
evh__HG_PTHREAD_MUTEX_UNLOCK_PRE(ctid=1, mutex=0xFFEFFFC38)
evh__hg_PTHREAD_MUTEX_UNLOCK_POST(ctid=1, mutex=0xFFEFFFC38)
evh__HG_PTHREAD_MUTEX_UNLOCK_PRE(ctid=1, mutex=0xFFEFFFC10)
evh__hg_PTHREAD_MUTEX_UNLOCK_POST(ctid=1, mutex=0xFFEFFFC10)
m2=0xffefffc10, m1=0xffefffc38
evh__hg_PTHREAD_MUTEX_LOCK_PRE(ctid=1, mutex=0xFFEFFFC38)
evh__HG_PTHREAD_MUTEX_LOCK_POST(ctid=1, mutex=0xFFEFFFC38)
evh__hg_PTHREAD_MUTEX_LOCK_PRE(ctid=1, mutex=0xFFEFFFC10)
evh__HG_PTHREAD_MUTEX_LOCK_POST(ctid=1, mutex=0xFFEFFFC10)

evh__HG_PTHREAD_MUTEX_UNLOCK_PRE(ctid=1, mutex=0xFFEFFFC10)
evh__hg_PTHREAD_MUTEX_UNLOCK_POST(ctid=1, mutex=0xFFEFFFC10)
evh__HG_PTHREAD_MUTEX_UNLOCK_PRE(ctid=1, mutex=0xFFEFFFC38)
evh__hg_PTHREAD_MUTEX_UNLOCK_POST(ctid=1, mutex=0xFFEFFFC38)

So both tools correctly detect the mutex lock and unlock.
DRD does not report a lock order violation, although it could have.

As regards detecting when a mutex is allocated or destroyed: the reproducer
unfortunately invokes neither pthread_mutex_init() nor
pthread_mutex_destroy().
If you disassemble the resulting binary, you'll see calls to
gthread_mutex_lock() and gthread_mutex_unlock(), but no calls to init or
destroy; they are implicit.

At this point I think you need to add annotations to your program for this.
See the Valgrind manual [1] and also these hints [2].

[1] http://valgrind.org/docs/manual/drd-manual.html#drd-manual.C++11
[2] http://valgrind.org/docs/manual/hg-manual.html#hg-manual.effective-use

[valgrind] [Bug 379630] false positive std::mutex problems

2017-05-09 Thread Gregory Nilsson
https://bugs.kde.org/show_bug.cgi?id=379630

--- Comment #6 from Gregory Nilsson  ---
Created attachment 105409
  --> https://bugs.kde.org/attachment.cgi?id=105409&action=edit
test case with mutex wrapper

[valgrind] [Bug 379630] false positive std::mutex problems

2017-05-09 Thread Gregory Nilsson
https://bugs.kde.org/show_bug.cgi?id=379630

--- Comment #7 from Gregory Nilsson  ---
Created attachment 105411
  --> https://bugs.kde.org/attachment.cgi?id=105411&action=edit
log when using wrapper

[valgrind] [Bug 379630] false positive std::mutex problems

2017-05-09 Thread Gregory Nilsson
https://bugs.kde.org/show_bug.cgi?id=379630

--- Comment #8 from Gregory Nilsson  ---
I've created a wrapper for std::mutex (updated test code attached), with
VALGRIND_HG_MUTEX_INIT_POST / VALGRIND_HG_MUTEX_DESTROY_PRE in its
constructor and destructor.
The lock order violation error is now gone (log attached).

Is this how I should do it if I want helgrind to handle std::mutex correctly?

[valgrind] [Bug 379630] false positive std::mutex problems

2023-09-11 Thread Paul Floyd
https://bugs.kde.org/show_bug.cgi?id=379630

Paul Floyd  changed:

   What|Removed |Added

   Assignee|jsew...@acm.org |pjfl...@wanadoo.fr

[valgrind] [Bug 379630] false positive std::mutex problems

2023-09-12 Thread Paul Floyd
https://bugs.kde.org/show_bug.cgi?id=379630

--- Comment #9 from Paul Floyd  ---
There are two ways to initialize a pthread_mutex_t object: call
pthread_mutex_init(), or use the static initializer
PTHREAD_MUTEX_INITIALIZER.

The function way is "nice" from a Valgrind perspective since we can intercept
the function and record the initialization, as well as check that it gets
destroyed.

The static initializer is less nice: Valgrind doesn't see anything, so it can't
do any create/destroy validation.

LLVM's libc++ looks like it uses static initialization:

__libcpp_mutex_t __m_ = _LIBCPP_MUTEX_INITIALIZER;

Not surprising; I expect it's faster and does the same thing.

The libstdc++ code is more complicated and depends on whether
__GTHREAD_MUTEX_INIT is defined. If it is, static initialization is used,
which is the case on POSIX systems.

Other than modifying your C++ standard library or writing a wrapper, there's no
easy fix.

[valgrind] [Bug 379630] false positive std::mutex problems

2023-09-13 Thread Paul Floyd
https://bugs.kde.org/show_bug.cgi?id=379630

--- Comment #10 from Paul Floyd  ---
I still get the same error with GCC 11.2.

Your wrapper looks right to me, as long as the pthread_mutex_t is the very
first thing in the memory layout of the std::mutex. That's the case for GCC's
libstdc++, and I suspect also for LLVM's libc++.

[valgrind] [Bug 379630] false positive std::mutex problems

2023-01-07 Thread Paul Floyd
https://bugs.kde.org/show_bug.cgi?id=379630

Paul Floyd  changed:

   What|Removed |Added

 CC||pjfl...@wanadoo.fr
