On Wed, Jan 02, 2002 at 07:08:08PM +0100, Zdenek Kabelac wrote:
> > Yep. I'll send you something on which I see problems.
> I still have the Japanese clip - so there is no need - I'll check it
> with this one later...
ok

> > > the problem with setting right values for  H/V sync signals - and
> > > should be easy to solve if the driver would be written by me...
> > Well, the ATI TVout is written by me so I know this is not possible yet :-)
> From doc for Matrox card it looks like rather simple thing to select image
> borders and I would expect the same for ATI
Unfortunately the ATI TV encoder chips are undocumented (because of Macrovision
blah blah), that's why it ain't so easy.

> > > scheduler (I would say it wakes the thread with 99.9% precision at least
> > > with my mga driver)
> > I should be more precise: ATI handles double buffering in hardware, when
> > you tell it "do yuv conversion/scaling" and set a certain bit, the picture
> > isn't drawn immediately, only on the next refresh. And as the command returns
> 
> aviplay for now doesn't use hw double buffering of the mga_vid device - but
> it works with MGA the same way as you describe for ATI - for double
> buffering you also do not have to wait until the image is flipped.
> 
> But I've modified the driver in such a way that read operation
> on device will wait for the next vsync blank interrupt.
Yep, I'll try to do this for ATI as well, but the main point is to coordinate
this with the other ATI stuff so that there aren't dozens of colliding drivers :-)

> Thus aviplay is only using  read  on this device to get proper timing - it
> has nothing in common with the XFree rendering stuff - it's still fully
> handled inside XFree 
Yup.

> > why it happens to me, aviplay has no idea when the actual drawing happens.
> That is why LINUX needs such a device and I still can't understand
> why it's not there - even the ZX Spectrum has been able to provide such
> a thing...
Yes. Let's write an API; you implement it for mga, I for ati, and then we talk
to the kernel guys. Alan is apparently busy with other stuff or doesn't watch
as many videos as I do :-)

> > I'd like to keep the double-buffering to prevent tearing. So my idea is to
> > write a module that wakes the video renderer thread AFTER the refresh has
> > finished, not when it begins. This will allow the transfer part of
> That's what we have been talking about all the time :)
> - some modular architecture which would provide an interrupt handler and
> eventually some extra add-ons for some video cards
yep.

> - though it would be much better if these were handled by XFree directly
> (ok there are still some 'weird' people who prefer console & framebuffer...
> so there might be some reason to provide such capabilities outside of XFree)
Not necessarily IMHO - do we really need it to be able to do so much?

Here comes the API proposal:

/proc/something/list -> list of cards and short descriptions, one per line,
and enough info for X or generally a userspace app to find out which card it
is currently working with
/proc/something/x/info (x being card #) -> some additional info, like if the
current mode is interlaced
/proc/something/x/freq -> current frequency, so we can automagically find out
if the "24 fps video on PAL TV" situation is occurring. It could either be
measured with the TSC on every vbi, or with simple clock ticks, say "how many vbis
have happened in the last 10 or 100 clock ticks?" (this might cause a short
fscking up upon videomode switch, but doesn't require a Pentium+ CPU, think
about other architectures).
/proc/something/x/eject -> spit out the card out of the case :-)

/dev/something (character device) -> a read blocks until a vbi occurs on one
card, and then spits out a few bytes telling what happened (vbi end, vbi begin,
nothing) on an even/odd frame and on which card. I think we don't need
ioctls.  Or perhaps one
"DO_SOMETHING_SO_THAT_NEITHER_JUDDER_NOR_TEARING_OCCURS", which will do
absolutely nothing :-)

I think this is not very much code (and more importantly VERY LITTLE
card-dependent code), and hence I propose to compile this beast into a single
module. The only problem I see is closed-source drivers, and I couldn't care
less about them. If there aren't even docs on how to detect vbi and interlacing,
screw the card.

mga_vid and ati_vid are currently used as YUV scalers. I think this stuff
belongs in userspace, because XvShmPutImage already does this optimally
(a memcpy to the video card inside the kernel is bad; DMA is already done by
dri, and XvShmPutImage can use userspace-allocated memory as the source, which
IMHO isn't possible from the kernel, so one more memcpy plus context switches
would be necessary). Does it go faster inside the kernel? Hell no, it only
increases latency. Can it be programmed in userspace without X? Yes, and it
ain't so difficult IMHO.

Bye,

Peter Surda (Shurdeek) <[EMAIL PROTECTED]>, ICQ 10236103, +436505122023

--
                Where do you think you're going today?
