DVBv5 qos/stats driver implementation

2015-05-05 Thread Jemma Denson

Mauro/Antti,

Patrick and I are currently in the process of bringing an old 
out-of-tree frontend driver into shape for inclusion, and one of the 
issues raised by Mauro was the requirement for the new DVBv5 stats 
method. I've noticed there seem to be two different ways of going about 
this - one is to run the collection and cache-filling process during 
the calls to read_status (as in dib7000p/dib8000p), and the other is to 
poll independently every couple of seconds via schedule_delayed_work 
(as in af9033, rtl2830/2832).
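
For reference, a rough sketch of the delayed-work variant is below; the
mydemod_* names and the 2-second interval are made up for illustration,
not taken from any of those drivers:

#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include "dvb_frontend.h"

struct mydemod_dev {
	struct dvb_frontend fe;
	struct delayed_work stat_work;
};

static void mydemod_stat_work(struct work_struct *work)
{
	struct mydemod_dev *dev =
		container_of(work, struct mydemod_dev, stat_work.work);
	struct dtv_frontend_properties *c = &dev->fe.dtv_property_cache;

	/* read the hardware and fill the DVBv5 cache; the constant
	 * below stands in for a real register read */
	c->strength.len = 1;
	c->strength.stat[0].scale = FE_SCALE_RELATIVE;
	c->strength.stat[0].uvalue = 0x8000;

	/* re-arm ourselves so the poll repeats every 2 seconds */
	schedule_delayed_work(&dev->stat_work, msecs_to_jiffies(2000));
}

The work would be kicked off with INIT_DELAYED_WORK() plus
schedule_delayed_work() in the init() callback and stopped with
cancel_delayed_work_sync() in sleep().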


Is there a reason for the two different approaches? Is it just a 
coding preference, or are there specifics to how these frontends need 
to be implemented?


Thanks,

Jemma.




Re: DVBv5 qos/stats driver implementation

2015-05-05 Thread Mauro Carvalho Chehab
On Tue, 05 May 2015 15:26:07 +0100, Jemma Denson jden...@gmail.com wrote:

> Mauro/Antti,
>
> Patrick and I are currently in the process of bringing an old
> out-of-tree frontend driver into shape for inclusion, and one of the
> issues raised by Mauro was the requirement for the new DVBv5 stats
> method. I've noticed there seem to be two different ways of going
> about this - one is to run the collection and cache-filling process
> during the calls to read_status (as in dib7000p/dib8000p), and the
> other is to poll independently every couple of seconds via
> schedule_delayed_work (as in af9033, rtl2830/2832).
>
> Is there a reason for the two different approaches? Is it just a
> coding preference, or are there specifics to how these frontends need
> to be implemented?

Hi Jemma,

It is basically a coding preference.

The DVB core already has a thread that calls the frontend driver every
3 seconds at most (or when an event occurs). So I don't see any need
for drivers to start another thread to update the status: 3 seconds is
generally good enough for status updates once the frontend is locked,
and events can make the status update earlier during the device lock
phase. Also, if needed, it wouldn't be hard to add core support for
adjusting the kthread delay time.
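
To make that concrete, here is a minimal sketch of filling the cache
from read_status; the mydemod_* names are hypothetical, and the lock
bits and CNR figure are stand-ins, not real hardware values:

#include "dvb_frontend.h"

/* stand-in for a real register read */
static int mydemod_get_lock(struct dvb_frontend *fe, fe_status_t *status)
{
	*status = FE_HAS_SIGNAL | FE_HAS_CARRIER | FE_HAS_VITERBI |
		  FE_HAS_SYNC | FE_HAS_LOCK;
	return 0;
}

static int mydemod_read_status(struct dvb_frontend *fe, fe_status_t *status)
{
	struct dtv_frontend_properties *c = &fe->dtv_property_cache;
	int ret;

	ret = mydemod_get_lock(fe, status);
	if (ret)
		return ret;

	/* the core frontend thread calls this every ~3 seconds, so
	 * refreshing the cache here needs no extra kthread */
	c->cnr.len = 1;
	if (*status & FE_HAS_LOCK) {
		c->cnr.stat[0].scale = FE_SCALE_DECIBEL;
		c->cnr.stat[0].svalue = 18500;	/* 18.5 dB, in 0.001 dB units */
	} else {
		c->cnr.stat[0].scale = FE_SCALE_NOT_AVAILABLE;
	}

	return 0;
}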

The driver may also skip an update if needed. So I don't see much gain
in having a per-driver thread.
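
A minimal sketch of such a skip, assuming the driver keeps an
unsigned long last_stat timestamp in a (hypothetical) private struct:

#include <linux/types.h>
#include <linux/jiffies.h>

struct mydemod_dev {
	unsigned long last_stat;	/* jiffies of the last cache refresh */
};

/* returns true when the (possibly slow) statistics registers are due
 * for a re-read; otherwise userspace keeps seeing the values cached
 * on the previous pass */
static bool mydemod_stats_due(struct mydemod_dev *dev)
{
	if (time_is_after_jiffies(dev->last_stat + 10 * HZ))
		return false;
	dev->last_stat = jiffies;
	return true;
}

read_status() would then return early, leaving the cache untouched,
whenever this says false.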

Having a per-driver thread should work too, although to me it is
overkill to have two status kthreads running (one provided by the core
and another by the driver).

Regards,
Mauro