Re: [FFmpeg-devel] [PATCH 3/3] avutil/threadmessage: fix condition broadcasting

2015-12-07 Thread Clément Bœsch
On Wed, Dec 02, 2015 at 03:57:31PM +0100, Clément Bœsch wrote:
> From: Clément Bœsch 
> 
> Fix a deadlock under certain conditions. Let's assume we have a queue of 1
> message max, 2 senders, and 1 receiver.
[...]

Pushed with commit message adjusted

-- 
Clément B.


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 3/3] avutil/threadmessage: fix condition broadcasting

2015-12-06 Thread Nicolas George
On quintidi 15 Frimaire, an CCXXIV, Clement Boesch wrote:
> That was before I provided the two-condition version, but OK

Sorry, missed that. Patch looks good to me, but you forgot to update the
commit message.

> I'd like to push the test as well, but it depends on the first patch
> (the flush function). I clarified the use of flush a bit in the doxy. Are
> you still uncomfortable with that one?

I will look at it soon.

Regards,

-- 
  Nicolas George




Re: [FFmpeg-devel] [PATCH 3/3] avutil/threadmessage: fix condition broadcasting

2015-12-05 Thread Clément Bœsch
On Sat, Dec 05, 2015 at 01:19:16PM +0100, Nicolas George wrote:
> On quintidi 15 Frimaire, an CCXXIV, Clement Boesch wrote:
> > ping, I'd like to apply this fix soon
> 
> I have already answered:
> 
> http://ffmpeg.org/pipermail/ffmpeg-devel/2015-December/184285.html
> 

That was before I provided the two-condition version, but OK

> So please go ahead.
> 

I'd like to push the test as well, but it depends on the first patch
(the flush function). I clarified the use of flush a bit in the doxy. Are
you still uncomfortable with that one?

-- 
Clément B.




Re: [FFmpeg-devel] [PATCH 3/3] avutil/threadmessage: fix condition broadcasting

2015-12-05 Thread Nicolas George
On quintidi 15 Frimaire, an CCXXIV, Clement Boesch wrote:
> ping, I'd like to apply this fix soon

I have already answered:

http://ffmpeg.org/pipermail/ffmpeg-devel/2015-December/184285.html

So please go ahead.

Regards,

-- 
  Nicolas George




Re: [FFmpeg-devel] [PATCH 3/3] avutil/threadmessage: fix condition broadcasting

2015-12-05 Thread Clément Bœsch
On Wed, Dec 02, 2015 at 03:57:31PM +0100, Clément Bœsch wrote:
[...]
> This second solution replaces the single condition with two: one to
> notify the senders, and one to notify the receivers. This prevents
> senders from waking up other senders instead of a receiver, and the
> other way around. It also avoids broadcasting to everyone, unlike the
> first solution, and should therefore be more efficient in theory.
> ---
>  libavutil/threadmessage.c | 35 ++++++++++++++++++++++++-----------
>  1 file changed, 24 insertions(+), 11 deletions(-)
> 

ping, I'd like to apply this fix soon

[...]

-- 
Clément B.




[FFmpeg-devel] [PATCH 3/3] avutil/threadmessage: fix condition broadcasting

2015-12-02 Thread Clément Bœsch
From: Clément Bœsch 

Fix a deadlock under certain conditions. Let's assume we have a queue of 1
message max, 2 senders, and 1 receiver.

Scenario (real record obtained with debug added):
[...]
SENDER #0: acquired lock
SENDER #0: queue is full, wait
SENDER #1: acquired lock
SENDER #1: queue is full, wait
RECEIVER: acquired lock
RECEIVER: reading a msg from the queue
RECEIVER: signal the cond
RECEIVER: acquired lock
RECEIVER: queue is empty, wait
SENDER #0: writing a msg the queue
SENDER #0: signal the cond
SENDER #0: acquired lock
SENDER #0: queue is full, wait
SENDER #1: queue is full, wait

Translated:
 - Initially, the queue contains 1/1 message, with 2 senders blocked on
   it, waiting to push another message.
 - Meanwhile, the receiver obtains the lock, reads the message, signals
   the condition and releases the lock. For some reason it is able to
   acquire the lock again before the signal wakes up one of the
   senders. Since it just emptied the queue, the reader waits for the
   queue to fill up again.
 - The signal finally reaches one of the senders, which writes a
   message and then signals the condition. Unfortunately, instead of
   waking up the reader, it actually wakes up the other sender (signal
   = notify the condition for just 1 waiter), which can't push another
   message into the queue because it's full.
 - Meanwhile, the receiver is still waiting: deadlock.

This scenario can be triggered with, for example:
tests/api/api-threadmessage-test 2 1 100 100 999

The fix is simply to make sure to notify everyone when work is done (be
it reading or writing). Another solution would be to use two distinct
conditions.
---
 libavutil/threadmessage.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/libavutil/threadmessage.c b/libavutil/threadmessage.c
index 87ce8dc..ec62c2d 100644
--- a/libavutil/threadmessage.c
+++ b/libavutil/threadmessage.c
@@ -118,7 +118,7 @@ static int av_thread_message_queue_send_locked(AVThreadMessageQueue *mq,
     if (mq->err_send)
         return mq->err_send;
     av_fifo_generic_write(mq->fifo, msg, mq->elsize, NULL);
-    pthread_cond_signal(&mq->cond);
+    pthread_cond_broadcast(&mq->cond);
     return 0;
 }
 
@@ -134,7 +134,7 @@ static int av_thread_message_queue_recv_locked(AVThreadMessageQueue *mq,
     if (av_fifo_size(mq->fifo) < mq->elsize)
         return mq->err_recv;
     av_fifo_generic_read(mq->fifo, msg, mq->elsize, NULL);
-    pthread_cond_signal(&mq->cond);
+    pthread_cond_broadcast(&mq->cond);
     return 0;
 }
 
-- 
2.6.2



[FFmpeg-devel] [PATCH 3/3] avutil/threadmessage: fix condition broadcasting

2015-12-02 Thread Clément Bœsch
From: Clément Bœsch 

Fix a deadlock under certain conditions. Let's assume we have a queue of 1
message max, 2 senders, and 1 receiver.

Scenario (real record obtained with debug added):
[...]
SENDER #0: acquired lock
SENDER #0: queue is full, wait
SENDER #1: acquired lock
SENDER #1: queue is full, wait
RECEIVER: acquired lock
RECEIVER: reading a msg from the queue
RECEIVER: signal the cond
RECEIVER: acquired lock
RECEIVER: queue is empty, wait
SENDER #0: writing a msg the queue
SENDER #0: signal the cond
SENDER #0: acquired lock
SENDER #0: queue is full, wait
SENDER #1: queue is full, wait

Translated:
 - Initially, the queue contains 1/1 message, with 2 senders blocked on
   it, waiting to push another message.
 - Meanwhile, the receiver obtains the lock, reads the message, signals
   the condition and releases the lock. For some reason it is able to
   acquire the lock again before the signal wakes up one of the
   senders. Since it just emptied the queue, the reader waits for the
   queue to fill up again.
 - The signal finally reaches one of the senders, which writes a
   message and then signals the condition. Unfortunately, instead of
   waking up the reader, it actually wakes up the other sender (signal
   = notify the condition for just 1 waiter), which can't push another
   message into the queue because it's full.
 - Meanwhile, the receiver is still waiting: deadlock.

This scenario can be triggered with, for example:
tests/api/api-threadmessage-test 1 2 100 100 1 1000 1000

One working solution is to make av_thread_message_queue_{send,recv}()
call pthread_cond_broadcast() instead of pthread_cond_signal() so that
both senders and receivers are woken up when work is done (be it
reading or writing).

This second solution replaces the single condition with two: one to
notify the senders, and one to notify the receivers. This prevents
senders from waking up other senders instead of a receiver, and the
other way around. It also avoids broadcasting to everyone, unlike the
first solution, and should therefore be more efficient in theory.
---
 libavutil/threadmessage.c | 35 ++++++++++++++++++++++++-----------
 1 file changed, 24 insertions(+), 11 deletions(-)

diff --git a/libavutil/threadmessage.c b/libavutil/threadmessage.c
index 66b5fc6..783855f 100644
--- a/libavutil/threadmessage.c
+++ b/libavutil/threadmessage.c
@@ -36,7 +36,8 @@ struct AVThreadMessageQueue {
 #if HAVE_THREADS
     AVFifoBuffer *fifo;
     pthread_mutex_t lock;
-    pthread_cond_t cond;
+    pthread_cond_t cond_recv;
+    pthread_cond_t cond_send;
     int err_send;
     int err_recv;
     unsigned elsize;
@@ -63,13 +64,20 @@ int av_thread_message_queue_alloc(AVThreadMessageQueue **mq,
         av_free(rmq);
         return AVERROR(ret);
     }
-    if ((ret = pthread_cond_init(&rmq->cond, NULL))) {
+    if ((ret = pthread_cond_init(&rmq->cond_recv, NULL))) {
+        pthread_mutex_destroy(&rmq->lock);
+        av_free(rmq);
+        return AVERROR(ret);
+    }
+    if ((ret = pthread_cond_init(&rmq->cond_send, NULL))) {
+        pthread_cond_destroy(&rmq->cond_recv);
         pthread_mutex_destroy(&rmq->lock);
         av_free(rmq);
         return AVERROR(ret);
     }
     if (!(rmq->fifo = av_fifo_alloc(elsize * nelem))) {
-        pthread_cond_destroy(&rmq->cond);
+        pthread_cond_destroy(&rmq->cond_send);
+        pthread_cond_destroy(&rmq->cond_recv);
         pthread_mutex_destroy(&rmq->lock);
         av_free(rmq);
         return AVERROR(ret);
@@ -99,7 +107,8 @@ void av_thread_message_queue_free(AVThreadMessageQueue **mq)
         av_thread_message_flush(*mq);
         av_freep(&(*mq)->tmp_msg);
         av_fifo_freep(&(*mq)->fifo);
-        pthread_cond_destroy(&(*mq)->cond);
+        pthread_cond_destroy(&(*mq)->cond_send);
+        pthread_cond_destroy(&(*mq)->cond_recv);
         pthread_mutex_destroy(&(*mq)->lock);
         av_freep(mq);
     }
@@ -115,12 +124,13 @@ static int av_thread_message_queue_send_locked(AVThreadMessageQueue *mq,
     while (!mq->err_send && av_fifo_space(mq->fifo) < mq->elsize) {
         if ((flags & AV_THREAD_MESSAGE_NONBLOCK))
             return AVERROR(EAGAIN);
-        pthread_cond_wait(&mq->cond, &mq->lock);
+        pthread_cond_wait(&mq->cond_send, &mq->lock);
     }
     if (mq->err_send)
         return mq->err_send;
     av_fifo_generic_write(mq->fifo, msg, mq->elsize, NULL);
-    pthread_cond_signal(&mq->cond);
+    /* one message is sent, signal one receiver */
+    pthread_cond_signal(&mq->cond_recv);
     return 0;
 }
 
@@ -131,12 +141,13 @@ static int av_thread_message_queue_recv_locked(AVThreadMessageQueue *mq,
     while (!mq->err_recv && av_fifo_size(mq->fifo) < mq->elsize) {
         if ((flags & AV_THREAD_MESSAGE_NONBLOCK))
             return AVERROR(EAGAIN);
-        pthread_cond_wait(&mq->cond, &mq->lock);
+        pthread_cond_wait(&mq->cond_recv, &mq->lock);
     }
     if (av_fifo_size(mq->fifo) < mq->elsize)
         return mq->err_recv;
     av_fifo_generic_read(mq->fifo, msg, mq->elsize, NULL);
-