Re: [Xpert]How do I make my colors _perfect_?

2002-12-17 Thread Billy Biggs
[EMAIL PROTECTED] ([EMAIL PROTECTED]):

 I need to have accurate color representation on my monitor.  Of course
 monitors and video cards differ in this area, so how do I know a color
 being presented to me on the screen is true to the color of the
 source?
 
 Is there any way to tune the Xserver to display colors accurately on
 a given monitor?

  Colour is a complicated subject. :)  If you are able to use a device
such as a colorimeter, you can write a program to measure the colour
gamut of your monitor.  Using this information, and assuming that your
application outputs colours according to a standard like sRGB, you can
use the gamma table in X to map input colours to the appropriate ones
for your monitor, and do something intelligent with ones outside of the
gamut.

  That is, at least, one approach.  I think the basic answer to your
question is that X lets you set the colour ramps for each of R, G, B
individually (see the 'xgamma' application) and this can be used for
some colour correction work, but not all.  Doing more would require
gamut mapping by the application, and likely access to a colorimeter or
knowing reasonable guesses for the phosphors in your monitor.
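
  Something like this would program the ramps (a minimal sketch, assuming
a 4.x server with libXxf86vm's gamma ramp calls; it loads a plain
power-law curve, where a real calibration would load per-channel curves
derived from the colorimeter measurements):

  #include <math.h>
  #include <X11/Xlib.h>
  #include <X11/extensions/xf86vmode.h>

  int load_gamma( Display *dpy, int screen, double gamma )
  {
      unsigned short ramp[ 256 ];
      int i, size;

      if( !XF86VidModeGetGammaRampSize( dpy, screen, &size ) ||
          size > 256 )
          return 0;
      for( i = 0; i < size; i++ )
          ramp[ i ] = (unsigned short)
              ( 65535.0 * pow( (double) i / ( size - 1 ), 1.0 / gamma ) );
      return XF86VidModeSetGammaRamp( dpy, screen, size,
                                      ramp, ramp, ramp );
  }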

-- 
Billy Biggs
[EMAIL PROTECTED]



[Xpert]SiS XVideo performance

2002-11-10 Thread Billy Biggs
  Hi all,

  I recently released a video deinterlacer application called tvtime
(http://tvtime.sourceforge.net/), which is a bit of a stress test for
XVideo performance: we upload 720x480 YUY2 at 59.94fps for NTSC, and
720x576 YUY2 at 50fps for PAL.

  We just found a problem with SiS users.  For one user, even though AGP
4x is active (or so it seems), frame uploads are taking a horribly long
time: between 15ms and 30ms for PAL-sized frames.

  I thought we had this problem last December, and it was because the
driver spins waiting for the retrace:
  http://www.xfree86.org/pipermail/xpert/2001-December/014146.html

  Looking at the code in cvsweb, it looks like this is still the case.
I guess no patch came out of that work?  Was it rejected for some
reason?

  One solution that was suggested to me was to not XFlush after doing
XvShmPutImage, and just make sure tvtime runs at a higher priority.
This works nicely for me, since when run as root tvtime uses SCHED_FIFO
among other things.  Or I could just grab a high priority and hope that
works.  Any comments on this solution?  I haven't tried it out with the
SiS user, but it seems to work OK on my G400 (but as a user app without
high priority I drop frames).
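
  For reference, the priority half of that workaround looks something
like this (a sketch, assuming Linux and root; tvtime's actual code may
differ):

  #include <sched.h>

  /* Ask for realtime FIFO scheduling; returns 0 on success, -1 if the
   * kernel refuses (e.g. not root). */
  int grab_sched_fifo( void )
  {
      struct sched_param sp;

      sp.sched_priority = sched_get_priority_max( SCHED_FIFO );
      return sched_setscheduler( 0, SCHED_FIFO, &sp );
  }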

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]SiS XVideo performance

2002-11-10 Thread Billy Biggs
Thomas Winischhofer ([EMAIL PROTECTED]):

 The sis driver will be heavily updated within the next few weeks, with
 the code from www.winischhofer.net/linuxsis630.shtml.

  Good to hear!

  My "just don't XFlush" fix didn't work anyway; it causes tearing with
the mga driver. :)  I updated the tvtime bug report to point to your
page.

  Thanks,
  -Billy

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Re: But *why* no vblank?

2002-11-06 Thread Billy Biggs
Michel Dänzer ([EMAIL PROTECTED]):

  It would be preferable in general for video apps, though, to provide
  a DRM-based api to use the overlay buffer, too.  Like, a DRM-Xv.
  For desktop use, the X11 context switch may be fairly acceptable
  with something like XSYNC, but to achieve really excellent quality
  (eg, suitable for output to a TV/broadcast/etc.) in, say, a video
  player, a direct API would be nicer.
 
 If I'm not mistaken that's what XvMC is for.

  No, XvMC is an API for hardware motion compensation, basically for
hardware MPEG decoding.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]But *why* no vblank?

2002-11-01 Thread Billy Biggs
Scott Long ([EMAIL PROTECTED]):

 I'm not suggesting that XFree86 take on the responsibility for these
 drivers! What I'm saying is it doesn't seem like such a big deal to
 simply have *support* for a vblank mechanism. Leave the details of how
 it occurs up to a particular set of drivers. If people need vblank
 badly enough, the driver support will get written. You're right that
 other platforms might not support this so easily -- but so what? If
 people really need it, they'll write it.
 
 I guess I'm proposing some sort of X extension that allows a
 particular X request to be tagged as synced to vblank. For example a
 XCopyArea request could be preceded by another request (call it
 XRequestVerticalSync) that indicates to the server that the operation
 shouldn't be performed until a vblank period.

  We already have that in XSYNC, I thought.  You can definitely define a
counter for vsyncs and have requests scheduled for them, at least that
was my impression.
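
  A minimal sketch of checking that (not from the original mail; whether
any server actually exports a retrace counter is exactly the question):

  #include <stdio.h>
  #include <X11/Xlib.h>
  #include <X11/extensions/sync.h>

  /* List the XSYNC system counters the server exports; a vertical
   * retrace counter, if the server had one, would show up here. */
  int main( void )
  {
      Display *dpy = XOpenDisplay( NULL );
      int ev, err, major, minor, i, ncounters;
      XSyncSystemCounter *counters;

      if( !dpy || !XSyncQueryExtension( dpy, &ev, &err ) ||
          !XSyncInitialize( dpy, &major, &minor ) )
          return 1;
      counters = XSyncListSystemCounters( dpy, &ncounters );
      if( counters ) {
          for( i = 0; i < ncounters; i++ )
              printf( "counter: %s\n", counters[ i ].name );
          XSyncFreeSystemCounterList( counters );
      }
      return 0;
  }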

  Some folks in the DRI world are doing a standard device API.  There
are also a million different kernel module hacks that export vsync
interrupts existing now, look at any of the video applications right
now: we have like 4 different mga kernel modules for hardware overlay
surfaces with vsync interrupts, now svgalib has their own kernel
callback driver, DRI has theirs presumably for page flipping at some
point, etc.

  I think the main reason for the lack of a standard effort is the
varying requirements of the projects proposing these solutions.  The
mplayer folks (and some xine/etc authors) want some kernel solution
that's independent of X and DRI and fbcon, since they want to support
different levels of exclusive or direct access.  Some other projects,
such as DirectFB, only care about this in the context of fbcon itself.
And then the DRI folks only care about X, and so solutions from that
arena may not work outside of a running X server.

  Resolving this is going to be difficult:  X developers don't ever seem
to want to consider compatibility with driver layers that aren't X, for
one.

 This is a very simple change. If no driver support exists for vertical 
 sync, the server can just politely ignore the request, and do the 
 operation whenever it feels like it. This would at the very least 
 provide a framework for individual driver writers to start working on 
 a kernel interface.

  Getting back to this, I think we're getting there especially with the
recent DRI work.  It's not like _nobody_ is thinking about this issue,
it is being dealt with.  Join #dri-devel on freenode if you want to
discuss it, that's where at least some conversation has occurred (but
yeah, in the context of DRI/GL, not in the context of syncing generic X
requests in the X server itself, afaik).

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]But *why* no vblank?

2002-11-01 Thread Billy Biggs
Owen Taylor ([EMAIL PROTECTED]):

 Michel Dänzer [EMAIL PROTECTED] writes:
 
  The interface we've implemented in the DRM is an ioctl which
  basically blocks for a requested number of vertical blanks (it's
  more flexible in fact). Maybe a daemon or something could provide a
  file descriptor to select against?
 
 Both select and a blocking ioctl are really the wrong interface here.
 
 select() or poll() are problematical because select() waits for a
 condition, not an event. That is, these function calls are designed
 for things like "tell me when there is more data to read".
 
 The blocking ioctl is a pain because it means you have to devote a
 thread to waiting for the VBI; but it at least is well defined.
 
 Unix signals are another possibility - a real pain to program, but at
 least they were designed for things like this. "Tell me when the next
 VBI occurs" has very similar semantics to alarm(2).

  I like the idea of a file descriptor that, when you read() from it,
gives the timestamp of the last VBI.  This has a natural mapping to
hardware interrupts (unlike giving milliseconds until the next VBI as
was suggested in another response, since we don't really know that), and
it matches the semantics of select().  It also allows nicely for async
access.
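
  In code, the interface I'm imagining would be used something like this
(a sketch only: the device name and record layout are hypothetical, no
such driver exists):

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/select.h>
  #include <sys/time.h>

  /* Block in select() until a VBI has occurred, then read() the
   * timestamp of that interrupt from the (imaginary) device. */
  int main( void )
  {
      int fd = open( "/dev/vsync0", O_RDONLY );  /* hypothetical */
      struct timeval stamp;
      fd_set rfds;

      if( fd < 0 ) return 1;
      for( ;; ) {
          FD_ZERO( &rfds );
          FD_SET( fd, &rfds );
          if( select( fd + 1, &rfds, NULL, NULL, NULL ) < 0 ) break;
          if( read( fd, &stamp, sizeof( stamp ) ) == sizeof( stamp ) )
              printf( "VBI at %ld.%06ld\n", (long) stamp.tv_sec,
                      (long) stamp.tv_usec );
      }
      close( fd );
      return 0;
  }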

  Good timestamps (preferably ones matching those returned from APIs
like ALSA or V4L2) are essential for most of the applications I can
think of, and if anyone disagrees with me on this point, let me know.  :)

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]fastest refresh rate card

2002-10-12 Thread Billy Biggs
Mark Vojkovich ([EMAIL PROTECTED]):

 I don't think you'll find flat panels or projectors that can go that
 high though.  Those have inherently slow refresh rates.

  For these displays, having a good vsync API is going to be so
important.  I think I'm getting spoiled with my 120hz CRT for video
watching...  I guess I need an LCD next.

  Mark: there's been some work for a vsync ioctl in DRI.  Would you be
willing to do a similar ioctl for the nVidia binary drivers?

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Autoconfiguring

2002-10-01 Thread Billy Biggs

Keith Packard ([EMAIL PROTECTED]):

 Around 9 o'clock on Oct 1, Dr Andrew C Aitchison wrote:
 
  use XFree86-VidModeExtension to change the video timings on the fly. From
  this week you could even use the RandR extension to change the virtual
  desktop size if appropriate.
 
 Yes, RandR will pick up screen sizes from any video modes added through 
 the VidMode extension.

  Can you successfully add modes yet?  In December I was unable to (but
I patched the vidmode extension code to make it work; sorry I haven't
sent that patch in yet).  Did someone else fix it?

  -Billy

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert] improved xv refresh bug patch

2002-08-15 Thread Billy Biggs

Peter Surda ([EMAIL PROTECTED]):

 Hmm, could it be that this is causing the idiotic CPU load while using
 Xv*PutImage on my r128 that started about last November and I've been
 complaining all the time and noone knew how to solve?
 
 However, in my local copy of XF86cvs from about a month ago, I can't
 find any file named xv.c and grepping the patterns from your patch
 didn't provide any results either, so I can't test it.

  I think the patch was for the image viewing program 'xv'.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]XVideo extension docs

2002-07-23 Thread Billy Biggs

Michel Lanners ([EMAIL PROTECTED]):

 On  23 Jul, this message from Guido Fiala echoed through cyberspace:
  moving YUV across the bus, but it should use no CPU.
  
  Yes, so I thought. But that would mean that the XServer loads its
  data directly from the v4l device itself (mmap-io or direct write
  into AGP-memory?) The latter should work if the ximage is a
  contiguous array in AGP memory only.
 
 No, it's the video input hardware that pushes the data with DMA to an
 off-screen VRAM memory area, and the graphic chip then copies and
 color-converts on the fly to the visible window.

  Wouldn't it be great if they were actually in sync so they wouldn't
tear, or so that we could handle interlaced streams better?

  The v4l-module architecture needs to be reworked.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]refresh rates

2002-07-07 Thread Billy Biggs

Nick Name ([EMAIL PROTECTED]):

  (as an ex-Windows user, modelines don't ring a bell)
 
 I agree perfectly... there should be a line like "verticalrefreshrate
 85hz", dunno why it's not there

  You already set the allowable refresh rate range in the config file.
Given the modeline system, it doesn't make sense to also set the refresh
rate specifically like that.  However:

  I was working on fixing up the VidMode extension to allow proper
modeline creation and switching on a running X server (the code is all
there, just needed to be finished).  This will allow for writing a
client app to change the refresh rate from, say, the GNOME panel or
equivalent.  Seems like a much better goal.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]XShmPutImage and Interpolation

2002-06-11 Thread Billy Biggs

Detlef Grittner ([EMAIL PROTECTED]):

 I've recently found out that NVidia's GeForce2 does bilinear
 interpolation when scaling up and some linear interpolation when
 scaling down images with DirectX on Windows.
 I want to know whether that is a hardware feature?
 Can I use XFree86 and have the same interpolation when using
 XShmPutImage?  Which drivers for Nvidia cards will provide the
 interpolation?
 
 And do cards of other vendors have the interpolation and can I use it
 with XFree86?

  With the texture engine in most cards these days you can get happy 2d
R'G'B' scaling, and apparently some new cards can do gamma-correct
scaling (woohoo!).  Hardware scaling is useful for emulators especially
so you can get nice fullscreen modes without having to convince your
monitor to get back to 320x240, and without hurting your CPU.

  I hacked the mga driver here to provide RGB surface scaling (565)
using the XVideo API.  I'd like to see more drivers export RGB surface
scaling using XVideo- this seems to be the right interface for it.  I
haven't yet got a nice patch for my work done though, and I'm waiting
for the 4.2 debian packages to help with doing a patch for 4.2.

  I hear that some other drivers (ati?) are exporting RGB surfaces
already using XVideo.  I hope this continues, and that we can
standardize on what fourcc's we should use.

  In Windows this functionality is exported transparently using
DirectDraw/StretchBlt.  I think XVideo is probably the best API to
extend on the Linux side.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]monitor redetection

2002-05-22 Thread Billy Biggs

Brian Wellington ([EMAIL PROTECTED]):

 Is there a way to kick the server and get it to notice the monitor
 change, perhaps doing DDC, and change the mode to one with a better
 refresh rate?  I didn't see a way for xvidtune to do this.  I also
 tried writing a simple program to fetch the current modelines (with
 XF86VidModeGetAllModeLines), update the modeline with appropriate
 values for an 85Hz refresh rate, and switch modes (with
 XF86VidModeSwitchToMode), but nothing happened (even though all of the
 calls succeeded).

  Has this code been updated in X 4.2?  I know that in X 4.1, the
XF86VidMode calls to do this didn't work at all (making a modeline on
the fly, adding it, and switching to it):  the code was just unfinished.
I wrote a patch to fix this but it was buggy and incomplete, so I
haven't sent it yet.  If nobody has fixed it, it may be why it's not
working for you.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Using Xv to display odd/even fields from a TV camera

2002-05-10 Thread Billy Biggs

Paul Robertson ([EMAIL PROTECTED]):

 Now we need to start working with both odd and even fields. If we do
 that with our current software, the picture wobbles up and down.  If I
 write some code to adjust the position of the odd field, the picture
 still looks wrong, particularly if nothing is moving in the picture.
 If I write code to reconstitute a full frame by interlacing the odd
 and even fields, then I see nasty artefacts when there is horizontal
 motion in front of the camera.

  If you want the hardware to do all the scaling, it was sort of decided
that we should update the Xv API to tell it if you're giving an odd or
even field, and let it handle the scanline offset.

  Otherwise, you can do the interpolation in software.  I wrote an app
that does this, but it's not 'done' yet:

  http://www.dumbterm.net/graphics/tvtime/

  Linear interpolation, as I do there, emulates reasonably well what
cheap TVs look like.

  We can discuss options further if you like, but Xpert probably isn't
the right place.  v4l-list might be better. :)

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Re: Re: Re: Voodoo 3 and 4.2.0

2002-05-10 Thread Billy Biggs

Frank Van Damme ([EMAIL PROTECTED]):

 On Friday 10 May 2002 03:01 pm, Jules Bean wrote:
  If I run my desktop@1600x1200 on a voodoo 3 (which I am at the
  moment), but load a 3D game which switches the resolution down to
  something more sane like 800x600, will that game be able to use DRI
  then, at that reduced resolution?  Or would I have to restart the X
  server at a lower resolution?
 
  That would be a nice thing to support, if it is possible to support
  it.
 
 Since that game changes the X server's resolution, there is no problem
 at all. There would be one if that game crashed ;-)

  Unfortunately, in X you can't yet change the desktop resolution, only
the visible resolution:  the 2D desktop will still eat up all the video
RAM.  So while it might work in Windows, I don't think it will work in X
until xrandr is done.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Using Xv to display odd/even fields from a TV camera

2002-05-10 Thread Billy Biggs

  The delay stuff is less important than the field flag.

Mark Vojkovich ([EMAIL PROTECTED]):

 I'd like an XV_FIELD (or better named) attribute that indicates the
 next PutImage request should upload and display the field rather than
 the frame which is the default.  0 is top, 1 is bottom, per mpeg
 conventions.  It's a one-shot state that gets reset to frame after the
 next PutImage happens.

  How does this work for full 50 or 59.94fps video?  Do I upload the
same frame twice, or does this attribute just mean: only copy the
even/odd field from the source frame, so that the whole frame isn't
copied twice, only half of it?

  I'm also worried about switching from frame-mode to field-mode.  In my
DVD player for example, I'll switch between 3:2 pulldown correction in
software, or, if I lose the phase, go back to 60fps output.  Do you
think video cards are going to have a problem switching the scaling
width/height without artifacts?

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]XFree86 not tied to a VT?

2002-04-19 Thread Billy Biggs

Dr Andrew C Aitchison ([EMAIL PROTECTED]):

 On Mon, 15 Apr 2002 [EMAIL PROTECTED] wrote:
 
  I'd like to run two XFree86 servers on one machine under Linux, with 
  separate heads (on a dual-head Matrox G400TV card), and separate 
  keyboards/mice (using USB for at least one of the two keyboards/mice) 
  (two completely independent X servers on the same machine).
 
 http://cambuca.ldhs.cetuc.puc-rio.br/multiuser/
 describes a successful attempt at this, at least for two separate graphics 
 cards. A dual head G400 might not work, since the two X servers have to 
 be more careful if they are talking to the same chip.

  Actually, Miguel Freitas also got it working with a G450 dual head card:

  http://cambuca.ldhs.cetuc.puc-rio.br/multiuser/g450.html

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Direct 2D Access for Streaming Video

2002-03-25 Thread Billy Biggs

Joe Krahn ([EMAIL PROTECTED]):

 Shane Walton wrote:
  Could someone point me in the right direction to the ability to DMA
  a video frame from shared memory to the graphics device?  Is there a
  way to use OpenGL along with DRI?  There is just too much overhead
  involved to allow the Xserver to handle a CPU based memory transfer.
  Any thoughts/comments are appreciated.  Thank you much.

  Shane,

  You say 'video'.  I guess your frames are Y'CbCr maps?  Regardless of
DMA, your two API options for hardware conversion are XVideo and OpenGL.
If you're using R'G'B', then you can also blit with DGA or the XShm
extension.
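
  The XShm path, for what it's worth, looks roughly like this (a sketch;
error handling and segment cleanup omitted):

  #include <sys/ipc.h>
  #include <sys/shm.h>
  #include <X11/Xlib.h>
  #include <X11/extensions/XShm.h>

  /* Create a shared-memory XImage; the caller fills img->data with
   * R'G'B' pixels and blits with XShmPutImage(). */
  XImage *create_shm_image( Display *dpy, int width, int height,
                            XShmSegmentInfo *shminfo )
  {
      int scr = DefaultScreen( dpy );
      XImage *img = XShmCreateImage( dpy, DefaultVisual( dpy, scr ),
                                     DefaultDepth( dpy, scr ), ZPixmap,
                                     NULL, shminfo, width, height );

      shminfo->shmid = shmget( IPC_PRIVATE,
                               img->bytes_per_line * img->height,
                               IPC_CREAT | 0600 );
      shminfo->shmaddr = img->data = shmat( shminfo->shmid, NULL, 0 );
      shminfo->readOnly = False;
      XShmAttach( dpy, shminfo );
      return img;
  }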

  For DMA transfers:

 I suspect the best approach would be the XVideo extension.  Look, for
 example, at XvPutVideo.

  In general, the XVideo drivers use the cpu to transfer the frames to
video memory.  I've heard that some of the ATI drivers use DMA now, and
I also heard that the nVidia binary drivers use DMA, but that's it.  I
know it's on my todo list to get this working for some other cards, and
I know of some others who are anxious to do this work also.  But work
isn't happening too quickly here. :)

 If you want to use OpenGL, glDrawPixels will copy a source image to
 the screen, hopefully optimized in hardware.

  It's also not necessarily true that all drivers use DMA for texture
uploads.  Again, the ATI drivers likely do, and same for the nVidia
binary drivers.  I know for my video apps, I'm avoiding using OpenGL
until I can see about getting the DRI stuff to do vsync'ed page
flipping.  I find the tearing much more awkward than the CPU time lost.

  I don't think there's a call in DGA2 to let you request a
hardware-assisted image upload, so I would suspect that no drivers would
be able to use DMA to accelerate it.

  For the X shared memory extension, it would be possible to have the
driver DMA the image to video memory, but I don't think any drivers do
this.  Personally though, I find this API less interesting since you
cannot use it to export hardware scaling or page flipping.  It seems to
me that the XVideo API should soon deprecate the use of XShm once all
drivers support R'G'B' surfaces.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Setting refresh rate

2002-03-06 Thread Billy Biggs

Gerald Pichler ([EMAIL PROTECTED]):

 Well, I am running this Monitor at 1024x768, and with 121 kHz max
 horizontal frequency and 160 Hz max vertical, it is definitely able to
 run at 100 Hz, so that is not the problem.

  Hi,

  I wrote a program for watching TV in X which dynamically sets the
refresh rate to an appropriate one for the incoming framerate.  I
calculate a new dotclock as follows:

  double rate = 100.0;
   ...
  newclock = ((double) (info.vtotal * info.htotal) * rate) / 1000.0;
  info.dotclock = (int) newclock;

  You should probably also be adjusting the size of the sync signals,
but the difference between 85hz and 100hz isn't so big a deal.  You're
probably fine.
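
  Put together, the whole thing looks roughly like this (a sketch,
assuming libXxf86vm and the vidmode extension fixes mentioned below
actually being in place):

  #include <string.h>
  #include <X11/Xlib.h>
  #include <X11/extensions/xf86vmode.h>

  /* Clone the current mode, rescale its dotclock (in kHz) for the
   * target refresh rate, then add the new mode and switch to it. */
  int set_refresh( Display *dpy, int screen, double rate )
  {
      XF86VidModeModeInfo **modes, mode, after;
      int nmodes;

      if( !XF86VidModeGetAllModeLines( dpy, screen, &nmodes, &modes ) )
          return 0;
      mode = *modes[ 0 ];   /* the current mode comes first */
      mode.privsize = 0;    /* drop driver-private data */
      mode.dotclock = (int) ( ( (double) ( mode.htotal * mode.vtotal )
                                * rate ) / 1000.0 );
      memset( &after, 0, sizeof( after ) );  /* must be all 0s */
      if( !XF86VidModeAddModeLine( dpy, screen, &mode, &after ) )
          return 0;
      return XF86VidModeSwitchToMode( dpy, screen, &mode );
  }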

  I'd like to release the code from my app so that I could do a quick
little app to set the refresh dynamically from the gnome taskbar or
something, but I had to patch a lot of the vidmode extension code to get
this to work and I haven't cleaned up my patch yet.  Mail me if you want
it anyway...

  Ideally there would be preset modes for high refresh rates, since so
many video cards and monitors support them, and it can greatly improve
the quality of high framerate video.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]X server dies when I call Xv[Shm]PutImage()

2002-02-05 Thread Billy Biggs

Mark Vojkovich ([EMAIL PROTECTED]):

 On Mon, 4 Feb 2002, Billy Biggs wrote:
 
  Arguably we shouldn't be using the 601 transfer functions anyway, we
  should use the ITU-R BT.709 ones instead since CRTs are likely
  closer to the HDTV standard.  I'd also add that MPEG2 video actually
  specifies the 709 transfer function (those mmx'ified yuv2rgb
  routines everywhere are usually doing the outdated transfer).
 
 MPEG2 supports many functions, DVD is 601.

  I did some research and I now agree with you.  MPEG2 specifies that if
you don't indicate what colourspace you're using, you default to Rec.709.
In my DVD collection, many discs don't indicate, and I was (wrongly)
assuming they were encoded for 709.  The ones that do indicate all use
SMPTE 170M or Rec.470, which use the 601 transfers, and I bet that's
specified somewhere in the DVD standard I don't have access to.

  That said, the colour primaries for 709 match sRGB, while SMPTE 170M's
do not.  So if I have an sRGB-calibrated monitor, couldn't I maybe fudge
things by doing the Y'CbCr -> R'G'B' conversion using the 709 function
instead of the 601 function and get (subjectively?) better colour?  The
more I think about it, the more I think I'm on crack.  I should instead
just use the Rec.601 transfer and then correct afterwards if I feel so
inclined.  Two wrongs don't make a right.

 Everybody's hardware does 601 except for 3dfx, as far as I can tell.
 And 3dfx looks like crap because of it - all washed out.  The poor
 software conversion routines look bad because they're not clamping to
 the 16-240 (or whatever it was) range.

  Yeah, if they're not using the right excursions, it will look like
crap no matter what transform they use.

  Where "best" is defined as "stretchy".  720x480 at 4:3 or 16:9 isn't
square pixel so unless you're lucky you always need to scale. :)
 
 Pixels on the monitor don't have to be square unless you're on a flat
 panel.

  Fine, but unless you're 'really lucky' you'll need to scale.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]X server dies when I call Xv[Shm]PutImage()

2002-02-04 Thread Billy Biggs

Mark Vojkovich ([EMAIL PROTECTED]):

 On Mon, 4 Feb 2002, Steve Kirkendall wrote:
 
 I'd be surprised if a voodoo card could do 2048x2048.  It has some of
 the worst overlay hardware.  Doesn't seem to do the correct YUV->RGB
 colorspace conversion either (ie. not CCIR601).

  Arguably we shouldn't be using the 601 transfer functions anyway; we
should use the ITU-R BT.709 ones instead, since CRTs are likely closer to
the HDTV standard.  I'd also add that MPEG2 video actually specifies the
709 transfer function (those mmx'ified yuv2rgb routines everywhere are
usually doing the outdated transfer).

  Do you know what conversion they are using?

  the smoothing seems to kick in at 1024x768.  This is true for DVD
  player programs too, not just my XMMS plugin.  But is there some way
  I can detect this?  DVDs look better when played with filtering in a
  1024x768 screen than they do unfiltered in a 1600x1200 screen.
 
 There's no way to detect that.  Ideally, DVD's would look best at
 720x480 without scaling.

  Where "best" is defined as "stretchy".  720x480 at 4:3 or 16:9 isn't
square pixel so unless you're lucky you always need to scale. :)

  For "best" as in "least artifacts due to bilinear filtering of
gamma-corrected images in nonlinear space", I'd have to agree with you
also. :)

-- 
Billy Biggs
[EMAIL PROTECTED]



[Xpert]mga TexturedVideo and RGB surfaces

2002-02-03 Thread Billy Biggs

  Over the weekend I played with hardware scaling of RGB surfaces using
the mga's TexturedVideo code.  Works pretty nicely (except I'm not sure
yet how to fix the tearing).

  This method is advantageous over the OpenGL interface because we can
scale 565 surfaces and we don't need to copy the full power-of-two
texture size (afaict).

  However, enabling TexturedVideo disables both DRI and BES overlays.  I
bet I could get both the BES and TexturedVideo working at once, but I'm
not sure where to begin with the DRI stuff.  Do I just need to add locks
in the appropriate places, or do I also need to allocate texture memory
through the DRM?  I'm stuck on this one.

-- 
Billy Biggs
[EMAIL PROTECTED]
http://www.billybiggs.com/



Re: [Xpert]DGA requires SUID/root access???

2002-01-10 Thread Billy Biggs

Derrik Pates ([EMAIL PROTECTED]):

  That's true for the install but not when the user is running the
  app, which is what I'm worried about.  Surely, we don't need SUID
  just to display graphics?
 
 For the program that's using DGA to open and mmap() /dev/mem, it has
 to be root. That's part of doing DGA. That's why it's called _direct_.

  I never really understood the whole "you need root to write to video
memory" issue.  I always assumed there were two reasons, and please
correct me if I'm wrong:

  1. We don't have kernel-level video drivers, so we can't safely mmap
sections of video memory to a user app.
  2. Giving a user full access to the framebuffer is dangerous, so only
let root do it.

  That is, there's no reason you need full access to /dev/mem just to
write to the framebuffer; it's just some silliness because of the way
everything is architected.  I'd love to find out the real reason
though.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread Billy Biggs

Ewald Snel ([EMAIL PROTECTED]):

 Of course, I'm using #ifdef USE_MMX_ASM and the original C code as
 an alternative for other CPU architectures. Runtime detection of MMX
 support is not included yet, but will be added if MMX is allowed.

  I've also been playing with some mmx-ification of the XVideo routines,
for example I also did an SSE-4:2:0-to-4:2:2 function.

  There was some discussion on #xfree86 about efforts to have a nice
runtime detection mechanism somewhere.  Has anyone got any code for this
already done?  If not I might also have a go at it.
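
  For the record, the check itself is tiny (a sketch, x86 and GCC inline
asm assumed; the open question is where to hang it in the X tree):

  /* Test the MMX bit (EDX bit 23) of CPUID leaf 1.  Assumes the CPU
   * supports the cpuid instruction at all (the 386/486 check is
   * omitted). */
  static int have_mmx( void )
  {
      unsigned int eax, ebx, ecx, edx;

      __asm__ __volatile__( "cpuid"
                            : "=a" (eax), "=b" (ebx),
                              "=c" (ecx), "=d" (edx)
                            : "a" (1) );
      return ( edx >> 23 ) & 1;
  }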

-- 
Billy Biggs
[EMAIL PROTECTED]



[Xpert]Fixing XF86VidMode extension

2002-01-04 Thread Billy Biggs

  Over the holidays I fixed XF86VidModeAddModeLine and
XF86VidModeSwitchToMode in my local copy of 4_1-branch.  My patch though
is very strange and not entirely correct.

  I don't believe the current CVS code ever worked for anyone: I felt I
was completing the code rather than bugfixing it.  That said, I'm
worried that my patch may be missing the intention of the code and how
it is supposed to work.  For example, when should the new modelines be
validated, and how should errors be returned to the user?

  Who is currently 'in charge' of this code?

  My application is my deinterlacer.  I was attempting to build
modelines on the fly to soft-genlock the monitor refresh to the incoming
video signal.  Since this is proving difficult, I now generate modelines
on the fly with sufficiently high refresh rates to hide judder effects.

  It's going ok so far. Much easier with a working VidMode extension. :)

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread Billy Biggs

  I've also been playing with some mmx-ification of the XVideo
  routines, for example I also did an SSE-4:2:0-to-4:2:2 function.
 
 I just did this too, MMX only though. How many cycles/pixel did you
 end up with? What percentage of pairing did you achieve?

  I'll get some numbers in a sec.

  There was some discussion on #xfree86 about efforts to have a nice
  runtime detection mechanism somewhere.  Has anyone got any code for
  this already done?  If not I might also have a go at it.
 
 there are plenty of samples of this on Intel's site.

 And in many nice abstracted open source modules.  :)  Specifically I
meant code to put this somewhere appropriate in the X tree.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread Billy Biggs

  To reply to my own mail  :)

Billy Biggs ([EMAIL PROTECTED]):

  It's actually 0.5 pixel (my mistake :)) using the following filter :
  
  o   o   (c=c1)
   c1
  o   o   (c=.5*c1 + .5*c2)
  
  o   o   (c=c2)
   c2
  o   o   (c=.5*c2 + .5*c3)
 
   I don't think this is right for MPEG2.

  I sent this and realized I might look like an asshole.  :)  This
should read:

  Thanks, I see what you mean now, and yeah, I think this filter is
wrong for filtering chroma from MPEG2.  :)

  Apologies.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Tearing, with regard to viewing DVDs (Trident Ai1)

2001-12-20 Thread Billy Biggs

  This is really stretching the bounds of 'on topic', but just this
once...  :)

Kenneth Wayne Culver ([EMAIL PROTECTED]):

 Is the code in this de-interlacing player portable??? meaning will it
 work in FreeBSD?

  The algorithms are portable.  The capture API I use is 'video4linux',
which afaict has nothing Linux-specific about it, and I would hope it
would be supported under FreeBSD, but I have no clue.

  -Billy

 On Wed, 19 Dec 2001, Billy Biggs wrote:
 
  Mark Vojkovich ([EMAIL PROTECTED]):
  
   The Wedding Singer and Dark City are good ones.  There is also
   a good vob trailer for Lost in Space on the web someplace.  Of
   course, good is relative.  They're not DVD quality because
   they're all interlaced.
  
  That said, many DVDs are interlaced, for example on a large
  percentage of NTSC DVDs you will see sections of the disc with the
  3:2 pulldown sequence expanded, and as such require deinterlacing.
  
My technical discussion on DVD deinterlacing:
http://www.dumbterm.net/graphics/dvd/
  
  My DVD player source with realtime deinterlacing and 3:2 pulldown
  removal:
http://www.sf.net/projects/movietime/
  
  Not to mention all the DVDs of TV shows which are natively
  interlaced, unlike the film-source examples mentioned above.
  
  ObOnTopic: One way to test for tearing if you have a v4l card would
  be to try out my TV deinterlacer, which will output at 59.94fps (or
  50fps for PAL).  You'll see tearing pretty quick if your driver
  doesn't double buffer:
http://www.dumbterm.net/graphics/tvtime/

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]SiS630 Xv performance

2001-12-18 Thread Billy Biggs

Mark Vojkovich ([EMAIL PROTECTED]):

 I don't see any Image acceleration at all in the sis driver in CVS.  I
 do see while loops polling registers in the Xv code paths, however.
 The code would imply that you can only program the overlay during the
 retrace, hence polling until the retrace for every XvShmPutImage
 request.  

  Seems like they'd do a lot better just:

  while (!pOverlay->VBlankActiveFunc(pSIS));

  and hope they don't enter the blank while they're writing those
registers.  If they really need to synchronize something on the retrace,
is it easy to put functions in DRI so that you can queue a request in
the kernel?

  Regardless this driver is busted. :(

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]SiS630 Xv performance (a bit better now)

2001-12-18 Thread Billy Biggs

Andre Werthmann ([EMAIL PROTECTED]):

 I'm just testing it at this moment, but so far everything works good
 and finally playing a dvd in fullscreen with Xv works smoothly on my
 laptop. :)

  Out of curiosity, do you see any tearing?

-- 
Billy Biggs
[EMAIL PROTECTED]



[Xpert]Problems adding modes with XF86VidMode

2001-12-17 Thread Billy Biggs

  I'm having a little trouble adding and switching to modes in my app.
First off, the XF86VidMode manpage alludes to a description of
AddModeLine being present, but there is none.

  XF86VidModeAddModeLine() takes two modelines: one is the 'new
modeline', the other is the 'after modeline'.  If you pass in anything
for 'after modeline' other than all 0s, the call returns BadValue.
Looking at the code, it doesn't seem to be used for anything at all.
Maybe it should be removed or fixed.

  No matter what mode I give to AddModeLine(), I always get
XF86VidModeBadHTimings.  From the code, this is because my HSYNC is out
of range; however, when I calculate it myself and compare with the
numbers from GetMonitor, it's definitely in range.  In fact, taking an
existing modeline and changing the hdisplay to be one pixel larger (to
not conflict with the existing mode) causes this error.  Any ideas as to
what is going on?  The mode validates successfully using ValidateMode().

  I'm using XFree86 version: 4.1.0.1 from debian.

  I'm trying to allow my application to switch the refresh rate for
playing video.  (http://www.dumbterm.net/graphics/tvtime/)

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Fourcc Codes

2001-12-15 Thread Billy Biggs

Mark Vojkovich ([EMAIL PROTECTED]):

  The fourcc.h file does not contain any RGB formats. Is there any
  particular reason for this??http://www.webartz.com/fourcc has
  standard RGB formats defined as well. Why not include them for the
  next release.
 
 I don't think any drivers are using RGB formats.  Also, there aren't
 unambiguous fourcc codes for RGB as far as I can tell.

  We need to start somewhere to encourage driver writers to support RGB
overlay surfaces.  Why don't we start by standardizing what X uses for
RGB fourccs?

  Unfortunately in the Windows world, you construct a 565 overlay by
using the RGB bitfield fourcc and setting some other parameter to ask
for 565, from what I remember.
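
  As a starting point, a client can already see what a port exports (a
sketch, assuming libXv; error handling trimmed):

  #include <stdio.h>
  #include <X11/Xlib.h>
  #include <X11/extensions/Xvlib.h>

  /* Print the fourccs of the first Xv adaptor, tagging RGB vs YUV. */
  int main( void )
  {
      Display *dpy = XOpenDisplay( NULL );
      unsigned int ver, rev, req, ev, err, nadaptors;
      XvAdaptorInfo *adaptors;
      XvImageFormatValues *fmts;
      int nformats, i;

      if( !dpy || XvQueryExtension( dpy, &ver, &rev, &req,
                                    &ev, &err ) != Success )
          return 1;
      if( XvQueryAdaptors( dpy, DefaultRootWindow( dpy ), &nadaptors,
                           &adaptors ) != Success || !nadaptors )
          return 1;
      fmts = XvListImageFormats( dpy, adaptors[ 0 ].base_id, &nformats );
      for( i = 0; i < nformats; i++ )
          printf( "%.4s  %s\n", (char *) &fmts[ i ].id,
                  fmts[ i ].type == XvRGB ? "RGB" : "YUV" );
      XFree( fmts );
      XvFreeAdaptorInfo( adaptors );
      return 0;
  }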

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Tearing on overlay surfaces

2001-12-15 Thread Billy Biggs

Mark Vojkovich ([EMAIL PROTECTED]):

  Do other drivers spin on a call to XvPut if the refresh is occurring?
 
 No.  If you send faster than the retrace on NVIDIA hardware I just let
 it shear because I don't want the X-server eating CPU.  Below the
 refresh rate, it won't shear.

  After a discussion on IRC, I'm not sure if this is such a good idea.
It was noted that we'll get shearing on fast-forward and also when I'm
scanning through a video in my NLE.

  More reasons the X server should export more information about what's
going on in the hardware...

-- 
Billy Biggs
[EMAIL PROTECTED]
http://www.billybiggs.com/



Re: [Xpert]Xinerama api

2001-12-15 Thread Billy Biggs

 Is the size of the screen actually useful?  DPI seems more useful, but
 even that doesn't make sense with XFree86's virtual desktops. 

  We just need the pixel aspect ratio to calculate correct scaling for
video or any digital image.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Tearing on overlay surfaces

2001-12-03 Thread Billy Biggs

  Well for planar formats, 720 luma samples is 720 bytes, and that
  just doesn't align nicely.  But yes, if you're using 4:2:2, 720*2 is
  nice and round.
 
I'm not following you.  720 luma is 128 bit aligned.

  720/128 = 5.625
  720/32  = 22.5

  So in a planar format where you have the luma plane, your 8bpp luma
scanlines don't align nicely.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Tearing on overlay surfaces

2001-12-03 Thread Billy Biggs

Mark Vojkovich ([EMAIL PROTECTED]):

Well for planar formats, 720 luma samples is 720 bytes, and that
just doesn't align nicely.  But yes, if you're using 4:2:2,
720*2 is nice and round.
   
  I'm not following you.  720 luma is 128 bit aligned.
  
720/128 = 5.625
720/32  = 22.5
  
So in a planar format where you have the luma plane, your 8bpp
  luma scanlines don't align nicely.
 
It's 720 BYTES not bits. 720*8/128 = 45.

  Aha, yes, I see you.  Yeah I'm a moron.

-- 
Billy Biggs
[EMAIL PROTECTED]



[Xpert]Tearing on overlay surfaces

2001-12-02 Thread Billy Biggs

  I recently purchased a Matrox G400 (to play with TV-out).

  I noticed nasty tearing when using my DVD player + XVideo.  This is in
contrast to the i810 driver which will wait to blit if called during the
retrace.  My code now spins on the VGA port, which fixed the tearing.

  Since I really badly want to see a well-supported API to query the
vertical refresh, I'd hope that maybe this is a good chance to again
request help and support.  (please!)

  That said, since I'm sure most simple video apps don't care about
amortizing frame display time over refreshes and just want to avoid
tearing, maybe doing the i810 trick in the mga driver would be useful.

  Do other drivers spin on a call to XvPut if the refresh is occurring?


  Also, I wait for the refresh using something like this code:

  void spin_until_refresh_complete( void ) {
    for(;;) if( inb( 0x3da ) & 8 ) break;
  }

  Does anyone know what VGA cards this could fail on?  So far it's
worked great for my i815, G400, and TNT2.

-- 
Billy Biggs
[EMAIL PROTECTED]
http://www.billybiggs.com/





Re: [Xpert]Tearing on overlay surfaces

2001-12-02 Thread Billy Biggs

Mark Vojkovich ([EMAIL PROTECTED]):

  I noticed nasty tearing when using my DVD player + XVideo.  This is
  in contrast to the i810 driver which will wait to blit if called
  during the retrace.  My code now spins on the VGA port, which fixed
  the tearing.
 
 No, that's not what happens.  The MGA driver doesn't double buffer
 video so you get tearing.  The i810 driver does so you don't.  I doubt
 spinning on the client side can do anything to prevent tearing.

  But if it double buffered it would still need to flip-on-retrace.

  So you're saying that my spinning on the client is just making the
video not suck by coincidence?

 I think you misunderstand what the i810 driver is doing.  [...]

  Thanks for the clarification.

    Do other drivers spin on a call to XvPut if the refresh is occurring?
 
 No.  If you send faster than the retrace on NVIDIA hardware I just let
 it shear because I don't want the X-server eating CPU.  Below the
 refresh rate, it won't shear.

  So, if I'm running my display at 59.94hz and sending it NTSC video,
there's no way for me to get in sync without having some way of querying
the refresh, yeah?

Also, I wait for the refresh using something like this code:
  
void spin_until_refresh_complete( void ) {
   for(;;) if( inb( 0x3da ) & 8 ) break;
}
  
  Does anyone know what VGA cards this could fail on?  So far it's
  worked great for my i815, G400, and TNT2.
 
 It won't necessarily work on those.  It depends whether or not I/O
 access is enabled.  In the case of secondary cards, port access to
 legacy VGA registers is typically disabled. 

  Ok, what do you mean by this?  Is this a software or hardware setting
or what?

  On startup I run: ioperm( 0x3da, 1, 1 ), and I also require root
access.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Tearing on overlay surfaces

2001-12-02 Thread Billy Biggs

  Also, I wait for the refresh using something like this code:

  void spin_until_refresh_complete( void ) {
    for(;;) if( inb( 0x3da ) & 8 ) break;
  }

  Yeah, this loop was wrong.  Sorry.

for(;i;--i) {
/* wait for the refresh to occur. */
        if( (inb( BASEPORT ) & 8) ) break;
}
/* now we're inside the refresh */
for(;i;--i) {
/* wait for the refresh to stop. */
        if( !(inb( BASEPORT ) & 8) ) break;
}

  That's the actual cut-and-paste from my crap first-attempt code.
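
  As a self-contained test it would look like this (same assumptions as
above: x86, and ioperm access to the VGA status port, which as noted
elsewhere in this thread may not even be wired up on secondary cards):

  #include <stdio.h>
  #include <sys/io.h>

  #define BASEPORT 0x3da  /* VGA input status register 1 */

  static void wait_for_retrace( void )
  {
      int i = 1000000;  /* give up eventually if the bit never toggles */

      /* wait for the refresh to occur... */
      for( ; i; --i )
          if( inb( BASEPORT ) & 8 ) break;
      /* ...then wait for it to stop, so a full frame time lies ahead */
      for( ; i; --i )
          if( !( inb( BASEPORT ) & 8 ) ) break;
  }

  int main( void )
  {
      if( ioperm( BASEPORT, 1, 1 ) ) {  /* needs root */
          perror( "ioperm" );
          return 1;
      }
      wait_for_retrace();
      return 0;
  }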

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Tearing on overlay surfaces

2001-12-02 Thread Billy Biggs

  Ah Mark Mark Mark Mark,

  I just got school'ed bigtime.  I totally understand what you're saying
now, and what's going on.  Thanks for bringing me back to reality, and
sorry it took so long.

  If the mga is so smart, I'm surprised that the XVideo driver buggers
  it up. :(Is this a bug do you think?
 
 It's a feature.  Double buffering takes twice as much video memory and
 there's not much of it due to the DRI stealing it all for 3D.

  But if we can't double-buffer in the hardware then I get tearing, and
this makes it useless to a DVD player.  And you're right, my little
retrace hack doesn't help much.

  What do you recommend?

Um...  According to my tests, the screen redraw only takes 0.064ms.
 
 What is a screen redraw?  I thought you were using XvShmPutImage?
 That takes probably 4 or 5 ms to copy the data to framebuffer with
 that driver.

  Sorry, I needed schooling.  I understand the problem now.

  Why does it take so long to copy the data to the framebuffer?  Can't
we use DMA here?  Does it really take that long to just copy 512k?

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Tearing on overlay surfaces

2001-12-02 Thread Billy Biggs

Mark Vojkovich ([EMAIL PROTECTED]):

  But if we can't double-buffer in the hardware then I get tearing,
  and this makes it useless to a DVD player.  And you're right, my
  little retrace hack doesn't help much.
  
What do you recommend?
 
 You should use different hardware or rewrite the mga code to
 doublebuffer.

  Ok, I'll take a look at the code tomorrow.

  Why does it take so long to copy the data to the framebuffer?  Can't
  we use DMA here?  Does it really take that long to just copy 512k?
 
 It's a little more than that because the driver is using 4:2:2
 internally.  Copying the way it is doing you can't get much more than
 160 MB/sec and uses the CPU the whole time.

  Hrm, I hope it doesn't just double each chroma scanline.  I'm in fear
now.  Wish that was documented somewhere, I would have written a better
chroma upsampler sooner.

 DMA won't make the transfer go any faster (it will probably be slower
 unless you're using 2x+ AGP), but it won't eat the CPU.  The only
 drivers that do this are NVIDIA's binary drivers and supposedly some
 experimental ATI drivers that some people are working on.  Maybe the
 i810 drivers do too, not that it would help, since the bandwidth is
 probably the same writing to video ram or the framebuffer.

  I still think that using DMA is essential for all drivers when the
image doesn't require some conversion.  I guess I'd be worried about
drivers which require scanlines to be 32-bit aligned.  Too bad the Xv API
doesn't allow for that.  (or am I on crack again)

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Tearing on overlay surfaces

2001-12-02 Thread Billy Biggs

Mark Vojkovich ([EMAIL PROTECTED]):

  Hrm, I hope it doesn't just double each chroma scanline.  I'm in
  fear now.  Wish that was documented somewhere, I would have written
  a better chroma upsampler sooner.
 
 It just doubles each chroma line.  That's all the nv driver does.  I'm
 not sure there's any preprocessing that would make the result more
 accurate.

  I was thinking you could interpolate the samples, but then I realized
that you can't be gamma correct without doing an expensive conversion:

  Y'CbCr -> R'G'B' -> RGB -> R'G'B' -> Y'CbCr

  And then I realized the obvious: MPEG's 4:2:0 sampling (sited
vertically in between the lines, co-sited on the left) makes doubling
each chroma line the correct thing to do anyway.
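
  In plain C the doubling is just this (a reference sketch; the driver
versions discussed here do it during the copy to video memory):

  #include <string.h>

  /* 4:2:0 -> 4:2:2 by repeating each chroma scanline.  width is the
   * chroma pitch (luma_width / 2) and height is the number of output
   * chroma lines, i.e. the luma height. */
  void chroma_420_to_422( unsigned char *dst, const unsigned char *src,
                          int width, int height )
  {
      int y;

      for( y = 0; y < height; y++ )
          memcpy( dst + y * width, src + ( y / 2 ) * width, width );
  }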

  I still think that using DMA is essential for all drivers when the
  image doesn't require some conversion.  I guess I'd be worried
  about drivers which require scanlines to be 32-bit aligned.  Too bad
  the Xv API doesn't allow for that.  (or am I on crack again)
 
 I'm not sure what you mean.  Pretty much everything always needs to be
 32 bit aligned.  mpeg scanlines are always multiples of 16 pixels
 anyhow.

  Well for planar formats, 720 luma samples is 720 bytes, and that just
doesn't align nicely.  But yes, if you're using 4:2:2, 720*2 is nice and
round.

  I find it interesting that all these overlays only use 4:2:2.  Do you
think I should move the conversion into my app and save them the trouble
(and also improve my OSD compositing)?   Sounds like a win to me.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert] Xv hit list

2001-10-24 Thread Billy Biggs

  Duuudes... :)

  I think Matt's idea is AWESOME..  Absolutely gorgeous.  Makes me
almost cry.

  I don't see how that is different than just doing two
  XvShmPutImage's now and doubling the pitch and vertical scaling.
 
 Slight difference in that the client can't bias the top down a quarter
 pixel and bottom upward by a quarter pixel so they line up.  And
 XvImages don't have a pitch to double.  It's implied from the width
 just as XImages are.

  Yes that's the problem (although the pitch thing is avoided by setting
up the image as a (width*2 x height/2) map and using the src offsets).

  I have a different idea that might work out better. Define these
  four things:
  
  XV_TOP_FIELD 0x1
  XV_BOTTOM_FIELD  0x2
  XV_FRAME (XV_TOP_FIELD | XV_BOTTOM_FIELD)
  XV_HOLD  0x4
 
 Hmmm.  That would probably work OK.  I kind of like HOLD as 0x0 though
 (ie, no fields displaying, and it makes the min-max 0-3 instead of
 1-4).

  Yeah I think this idea is awesome.
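
  To make it concrete, a client would drive it something like this (a
sketch only: the XV_FRAME_TYPE attribute and these defines are just the
proposal in this thread, not a shipping API):

  #include <X11/Xlib.h>
  #include <X11/extensions/Xvlib.h>

  #define XV_TOP_FIELD    0x1
  #define XV_BOTTOM_FIELD 0x2
  #define XV_FRAME        (XV_TOP_FIELD | XV_BOTTOM_FIELD)

  /* Ask for a single field on the next put; the attribute is one-shot
   * and resets itself to XV_FRAME after the XvShmPutImage. */
  void put_field( Display *dpy, XvPortID port, int top )
  {
      Atom type = XInternAtom( dpy, "XV_FRAME_TYPE", False );

      XvSetPortAttribute( dpy, port, type,
                          top ? XV_TOP_FIELD : XV_BOTTOM_FIELD );
      /* ...followed by the XvShmPutImage for this field. */
  }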

 This probably also helps with 3:2 pulldown and the field repeat.

  No, I don't think it helps here.  You never, ever want to perform 3:2
pulldown on a progressive display.  If your monitor refresh is 59.94hz
and you're providing 24fps content, you're still better off showing the
first full frame for 3 refreshes and then the next full frame for 2.

  If you're using Xv as an API for TV output, you've got worse problems,
since you need accurate timing information (and TV norm and stuff) from
the output card to know what's going on anyways.  Something like the
v4l2 TV output API may be better suited for this.

  If you're doubling output to both a TV and an Xv window (ugh!) then
you should build progressive frames and display them on the monitor and
let the TV output grab fields as necessary.  We can't do better than
that. :(

  What XV_HOLD would really help with though is still images, such as
when a video player is paused or when it's sitting in a menu or FBI
warning (which is encoded on the DVD as a single I-frame).

  However:

 Note that this introduces some slight uncertainty in the display since
 the overlay might not correspond to the window location when it comes
 time to actually display.  But that's not going to be a big deal since
 the times are short and you get a delay now anyhow due to the overlay
 not going into effect until the retrace on most hardware.  It
 introduces a larger lag between overlay and image when moving windows
 which isn't going to be a big deal.  You can't do this for blitted
 video though.  The cliplist is stale by the time you display.  Not a
 problem though since blitted video are merely fallbacks.

  So, this breaks when we're paused or showing a still and you move the
image around.  Ugh.  Are you sure this can't be handled in the hardware
without the client having to do another put?  Or do you expect the
client to re-put whenever it moves (which it has to now anyways)?

  This has the benefit of getting rid of the unknown delay between the
  XvShmPutImage and the overlay flip since they are now separated (as
  they are in XvMC) plus you can display just top or just bottom.

  Matt you're awesome.

  The XV_FRAME_TYPE will default to 0x3 (XV_FRAME) which means that
  apps using Xv without knowledge of this will just get a flip when
  they do an XvShmPutImage.

  Matt you rock my world.  Just reset this back to 0x3 when any client
releases the port. :)

-- 
Billy Biggs
[EMAIL PROTECTED]



[Xpert]i810 Xv bugs

2001-10-22 Thread Billy Biggs

  Hi,

  I watched a movie yesterday and forgot to shut off the screensaver.
When it kicked in, X hung.  An strace of the hung process showed a lot
of SIGALRMs and that's it.  No interesting messages in the log.  I can
investigate further if you like and see what's up; I hope to get the CVS
X stuff up this week so I can start looking into those Xv issues I
posted earlier.

  Also, an annoyance.  I see a lot of thin little green horizontal lines
that flicker all over the overlay surface.  Any thoughts as to what
those might be, or any reason they might be happening?  I suspect it's
some hardware defect and I should go buy another video card.  If you
like I can try to describe them in more detail, but it's basically
maybe 30 single-pixel-high, 20-pixel-wide green lines that appear
randomly about the scaled image.

  Thanks,
  Billy

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]RE: i810 Xv bugs

2001-10-22 Thread Billy Biggs

Sottek, Matthew J ([EMAIL PROTECTED]):

  Also, an annoyance.  I see a lot of thin little green horizontal
  lines that flicker all over the overlay surface.  Any thoughts as to
  what those might be, or any reason they might be happening?  I
  suspect it's some hardware defect and I should go buy another video
  card.   If you like I can try to describe them in more detail, but
  it's basically about maybe 30 little single pixel high 20 pixel wide
  green lines that appear randomly about the scaled image.
 
 I have never heard of such a problem. What resolution/depth/refresh
 are you running at? Perhaps this happens when there isn't sufficient
 memory bandwidth? Like maybe 1280x1024@85hz?

  I was running at 1024x768, 24bpp, 85hz.  I get bad results with
everything if I do 1280x1024, 24bpp (weird wavy vertical lines); the
display looks like garbage.  I was getting fewer of the little green
lines when running at 1280x1024, 16bpp, 85hz.

  I wouldn't want to go below 1024x768, since I'd be scaling video down.
Can you speak to the quality of video at 24bpp vs 16bpp?  Will it make a
difference if I'm using an overlay?

 It could also be a bad watermark chosen for the mode. In Linux we have
 to use a pixel clock based lookup to get a watermark which isn't
 always the best choice for the mode. Other operating systems have a
 specific watermark for a specific mode, but of course that only works
 when the driver generates the list of available timings.
 
 In the back of the PRM (Programmers reference manual) there is a list
 of modes and the watermarks for those modes. You might compare X's
 calculated watermark (It's in the log) against the recommended value
 in the PRM. If they are different you could replace the value in the
 code with the recommended value to see if your problems go away.

  I'll look into this tonight, thanks.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert] i810 Xv bugs

2001-10-22 Thread Billy Biggs

Sottek, Matthew J ([EMAIL PROTECTED]):

  The highest framerate I will ever be pumping video at the board is
  59.94fps from 720x480, don't know if that matters.  In this case
  though I'm only outputting at 24fps.
 
 Try 10x7@16bpp at 85hz. That should work fine.

  Well, I would like to go to a better refresh rate (see
http://www.dumbterm.net/graphics/refresh/ for some discussion).

  What about quality at 16bpp?  Won't I still see banding?  I'd assume
the overlay would still output as 16bpp.

  I checked the watermark values, X calculates 0x22218000 but the docs
say 0x22212000.  Not sure if this will matter but I can try if you think
it will help. (8 qwords stored in the FIFO vs 2 qwords stored in the
FIFO).

  You really think it's the memory bandwidth?  I am using PC133 RAM.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert] i810 Xv bugs

2001-10-22 Thread Billy Biggs

Sottek, Matthew J ([EMAIL PROTECTED]):

  Also, an annoyance.  I see a lot of thin little green horizontal
  lines that flicker all over the overlay surface.  Any thoughts as to
  what those might be, or any reason they might be happening?  I
  suspect it's some hardware defect and I should go buy another video
  card.   If you like I can try to describe them in more detail, but
  it's basically about maybe 30 little single pixel high 20 pixel wide
  green lines that appear randomly about the scaled image.
 
 [...] It could also be a bad watermark chosen for the mode. In Linux
 we have to use a pixel clock based lookup to get a watermark which
 isn't always the best choice for the mode. Other operating systems
 have a specific watermark for a specific mode, but of course that only
 works when the driver generates the list of available timings.

  I built an X server from CVS and changed the watermark values to match
what was listed in the PRM.  No change.  I see the green crap at 24bpp
and at 16bpp, and even when I use Ctrl-Alt-+/- to go down to 640x480
it's there (and it gets bigger) :)  I don't think it's related to the
watermark value.

  It really doesn't look like it's anything my software could do.  If I
use SDL's software Y'CbCr conversion I don't see any crap, for example.
Messed up.

  BTW, I've found that for overlay surfaces I must have the DRI module
loaded; for reference, I'm currently running kernel 2.4.5.

-- 
Billy Biggs
[EMAIL PROTECTED]
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



Re: [Xpert]Re: XVideo and refresh sync

2001-10-21 Thread Billy Biggs

Vladimir Dergachev ([EMAIL PROTECTED]):

  I'm talking about planning how many refreshes to show each frame.
  I'd be really surprised if you could say to the ATI card 'show this
  2 refreshes from now'.
 
 The card won't do that, but the driver can. In this case what you do
 is submit a frame and a number - show this frame n-vsync interrupts
 from the previous one - and the driver can do the rest.

  Yes, for all cards which generate an interrupt every vsync, we can
make an awesome API for high quality video, so long as the driver code
which copies the frame is also in the kernel, since we don't want the
scheduling delay of waiting for the X driver to get scheduled in.
Having a /dev/vsync device would keep the amount of code in the kernel
as small as possible, but then you need SCHED_FIFO access to protect
against scheduling delays. (read: hack)
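
  For reference, here is what grabbing SCHED_FIFO looks like in
practice (a minimal sketch; this fails with EPERM unless you are root):

    #include <sched.h>
    #include <stdio.h>

    int grab_sched_fifo(void)
    {
        struct sched_param sp;

        sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
        if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
            perror("sched_setscheduler");  /* EPERM without root */
            return -1;
        }
        return 0;
    }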

  You still need calls to tell you where the timebase for the refresh
is, though, to make sure we plan ourselves out correctly.  Regardless,
that kind of information is needed for TV output if you want to get
field-correctness.  How do the ATI cards currently export TV output
capabilities when you're writing interlaced frames?   The TV output API
of v4l2 looks like it might be a good start.  Since the requirements for
TV output sync and progressive video playback sync are so similar, I'm
wondering if the frame queueing API shouldn't be the same.

-- 
Billy Biggs
[EMAIL PROTECTED]
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



Re: [Xpert]Re: XVideo and refresh sync

2001-10-19 Thread Billy Biggs

Peter Surda ([EMAIL PROTECTED]):

 On Fri, Oct 19, 2001 at 12:21:12AM -0400, Billy Biggs wrote:
  The +/-5ms error here is visible, especially on big cinematic pans.
 I REALLY doubt what you perceive as an error is a 5ms difference.

  No?  I'll post up a good example later today.  Consider a pan where
the time between frames goes like this: 40ms, 45ms, 35ms, 45ms, 42ms...
So, the second frame is shown for much longer than the first or third.
On a smooth pan you'll see the background shudder as it goes by.  Think
of a skateboard video.  It's really horrid.

 Though I agree that current vsync adjusting in XF86 drivers (those that
 support it, e.g. ATI) is insufficient. I see noticeable disturbances on
 TV out (though not on the monitor); I believe it is because from time to
 time the thread that calls XvShmPutImage has missed one frame and
 suddenly the difference between 2 frames becomes double.

  Speaking of TV output, if you're outputting interlaced material to a
TV then you MUST have accurate vsync, field dominance, and some way of
ensuring you never miss a field.  Otherwise you get crazy effects of
people jumping back in time when you miss a field blit.

  Currently it seems like all TV output APIs assume you're sending
progressive video at some very low framerate.  Argh. :(

  One reason you may be seeing disturbances is that your TV encoder is
only blitting frames and will wait until the next frame refresh before
showing the next.  This gives your output display a refresh of 25hz, and
if you're not synced to it, you'll see much worse effects.  Consider
trying to play a 24fps video:  you'll cycle around the real output sync
pretty badly.

  Ideally, for playing 24fps on a 25fps output, apps should speed it up
by 4% (25/24, the same speedup as film-PAL video).  To do that, though,
we need to know about the output's capabilities.

-- 
Billy Biggs
[EMAIL PROTECTED]
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



Re: [Xpert]Re: XVideo and refresh sync

2001-10-19 Thread Billy Biggs

[EMAIL PROTECTED] ([EMAIL PROTECTED]):

If we had the vsync as an interrupt (/dev/vsync or something),
  then quality would increase with less system load!  Very nice!
  But SCHED_FIFO would still be needed to guarentee accurate blits.
 
 The other approach is to have a device which you mmap and write your
 frames into. Then you issue an ioctl and the driver DMAs the buffers
 at the next available opportunity. 

  Yes, you could make a cool API to solve the issue.  It would be really
neat.  My only concern would be the difficulty of implementing it with
consumer video cards.  Ideally I'd want 'here's a frame, blit at time X'
and hope that someone in the kernel has an accurate clock.  It would
also need to tell me the (exact) refresh rate + sync point.
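
  To make that concrete, here's a sketch of what such a device API
might look like.  Note that everything below is invented for
illustration -- the device, the structs, and the ioctl numbers are all
hypothetical:

    /* Hypothetical /dev/vsync-style frame queue API.  Nothing here
       exists today; all names and ioctl numbers are made up. */
    #include <sys/ioctl.h>
    #include <sys/time.h>

    struct vsync_timebase {
        struct timeval last_vsync;   /* when the last retrace happened */
        unsigned int   refresh_mhz;  /* exact refresh rate, millihertz */
    };

    struct vsync_blit {
        unsigned int   buffer;       /* index into mmap()ed frame area */
        struct timeval when;         /* desired presentation time */
    };

    #define VSYNC_GET_TIMEBASE  _IOR('V', 0, struct vsync_timebase)
    #define VSYNC_QUEUE_BLIT    _IOW('V', 1, struct vsync_blit)

The client would mmap() some frame buffers from the device, fill one,
and queue it with VSYNC_QUEUE_BLIT; the driver would DMA it at the
first retrace at or after 'when'.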

Check out deinterlace.sf.net (aka dscaler.sf.net).  [...]
 
 How big are the cpu requirements ? This is something that is quite
 useful, because, at the moment, v4l does not allow one to convey such
 information as field parity...

  Not sure yet.  For v4l, I assume all frames are top-field-first.  v4l2
provides field parity information, and I'm getting flamed on the
video4linux list to switch to it.

 One thing - did you try your dvd player with recent ATI drivers ? ATI
 can doublebuffer frames switching them automatically on vsync. If you
 still see the issue you are worried about something is wrong with our
 drivers - I'd appreciate a lot if you could write a test up to debug
 the issue. (or a test mpeg stream)

  I'm not worried about tearing; I'm worried about amortizing frame
time over the refresh rate in a pleasing way (especially for special
refresh rates like 72hz: think projector).

-- 
Billy Biggs
[EMAIL PROTECTED]
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



Re: [Xpert]Re: XVideo and interlaced input

2001-10-19 Thread Billy Biggs

Mark Vojkovich ([EMAIL PROTECTED]):

  It's just something silly, I guess, but important for preserving the
  spatial position of the fields.  Say you're scaling from 720x480 to
  1024x768.  768/480 = 1.6, so, you want to scale up each field from
  240 lines up to 766.4 lines.  This gives a scaling of:
  
[top field] y=0   to y=766.4
[bot field] y=1.6 to y=768
 
 The *sources* are subpixel.  All hardware is that way.  You couldn't
 bob video otherwise.   You double the pitch and halve the surface size
 and you have a single field as a source (and bottom starts a scanline
 later).  You just adjust the offset into the texture appropriately to
 get spatial alignment correct.  It's much easier for the driver to do
 than for you to do it.

  I've read this paragraph like 3 times and I still don't understand
what you mean, sorry.  I understand how doubling the pitch and halving
the surface size gives a single field as a source, no problem, but when
you scale it up you need to subpixel resize it.

  To clarify, say I'm blitting a top field from 720x480 to 1024x768:

                   Top Field                  Bot Field
              source     scale to        source     scale to
   x      =   0          0               0          0
   y      =   0          0               1          1.6
   width  =   720        720             720        720
   height =   240        766.4           240        766.4
   pitch  =   1440                       1440

  So, the source offsets for these fields are integers, but what I need
them scaled to is not.  Can hardware do this easily?  I'd assume you'd
have to round down from 1024x768 to something where all of these numbers
are integers, or run at 1280x960.
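
  For concreteness, the arithmetic above in a few lines of C (my own
illustration; the pitch is in the same units as the table, i.e. doubled
relative to the 720-pixel frame):

    #include <stdio.h>

    int main(void)
    {
        int src_h = 480, dst_h = 768;           /* 720x480 -> 1024x768 */
        double scale = (double) dst_h / src_h;  /* 1.6 */
        double field_h = dst_h - scale;         /* 766.4 scaled lines */
        int field_pitch = 2 * 720;              /* as in the table */

        /* Each field is the frame with the pitch doubled and the
           height halved; the bottom field starts one frame line
           later, which lands 'scale' lines down on the screen. */
        printf("top:    y = 0.0 to %.1f\n", field_h);
        printf("bottom: y = %.1f to %.1f\n", scale, (double) dst_h);
        printf("field pitch = %d\n", field_pitch);
        return 0;
    }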

   With write-only port attributes you can essentially extend the
 Xv(Shm)PutImage function by specifying additional attributes before
 the request.  An attribute such as XV_DISPLAY_FIELD could be set to
 indicate that the next Put request is to be a field.  The value would
 be 0=top, 1=bottom in MPEG fashion.  It's a one-shot thing and only
 applies to the next put.  All drivers could add this without API or
 protocol changes. 

  Yes, this would be ideal.

-- 
Billy Biggs
[EMAIL PROTECTED]
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



Re: [Xpert]Re: XVideo and interlaced input

2001-10-19 Thread Billy Biggs

Mark Vojkovich ([EMAIL PROTECTED]):

It doesn't work that way.  There's no such thing as subpixel
 destinations so you have to stop thinking about it that way.  You have
 integer rectangles on the screen but you can align how the source
 scales to that.  You can essentially assign the corners of that screen
 aligned rectangle to subpixel points within the source.  I can have
 the top line of the destination display not the top line of the source
 but adjust the filter kernel so that it starts displaying, for
 example, 5/16 of the way between the first and second lines of the
 source, effectively shifting the whole image up by 5/16 of a pixel.  I
 have subpixel alignment of the source within the destination
 rectangle.
 
I have not seen any hardware which cannot do this.

  Ok, awesome!  Makes sense.  Should I start by adding this to the
i810 or something?  I bet I could figure that out.  And should we maybe
standardize on XV_DISPLAY_FIELD and implement it everywhere?  I'd like
to see this implemented ASAP, as I know both my code and at least the
mjpegtools code (if not all video players) could benefit from it.

  Seems slightly inconsistent with XvMC's signalling though.  Or we can
use the same type:

#define XVMC_TOP_FIELD  0x0001
#define XVMC_BOTTOM_FIELD   0x0002
#define XVMC_FRAME_PICTURE  (XVMC_TOP_FIELD | XVMC_BOTTOM_FIELD)

 With write-only port attributes you can essentially extend the
   Xv(Shm)PutImage function by specifying additional attributes
   before the request.  An attribute such as XV_DISPLAY_FIELD could
   be set to indicate that the next Put request is to be a field.
   The value would be 0=top, 1=bottom in MPEG fashion.  It's a
   one-shot thing and only applies to the next put.  All drivers
   could add this without API or protocol changes. 
  
Yes, this would be ideal.
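
  For the record, here is roughly what the client side would look like
with such an attribute.  XV_DISPLAY_FIELD is still hypothetical at this
point; the rest is standard Xv:

    #include <X11/Xlib.h>
    #include <X11/extensions/Xvlib.h>

    /* Blit one field of 'image'.  XV_DISPLAY_FIELD is the proposed
       one-shot attribute: 0 = top, 1 = bottom, in MPEG fashion. */
    void put_field(Display *dpy, XvPortID port, Drawable d, GC gc,
                   XvImage *image, int bottom)
    {
        Atom field = XInternAtom(dpy, "XV_DISPLAY_FIELD", False);

        XvSetPortAttribute(dpy, port, field, bottom ? 1 : 0);
        XvShmPutImage(dpy, port, d, gc, image,
                      0, 0, image->width, image->height,  /* source */
                      0, 0, 1024, 768,               /* destination */
                      False);
        XFlush(dpy);
    }

Since the attribute is one-shot, an ordinary frame put right after
would behave exactly as it does today.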

-- 
Billy Biggs
[EMAIL PROTECTED]
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



Re: [Xpert]very strange XVideo behaviour (i810)

2001-10-18 Thread Billy Biggs

Sottek, Matthew J ([EMAIL PROTECTED]):

 It would be nice to have a software fallback so that you could do as
 many Xv's as you wanted (slowly) but that isn't the way Xv was
 designed. You'll have to convert the YUV data into RGB and do a
 regular XShmPutImage in the second window.

  Such a capability needs to be on the client side, since different
applications have different needs regarding the accuracy of the
Y'CbCr-RGB conversion for optimization.  So really all we need here is a
method where Xv can tell a client 'I only support one overlay at a
time'.  I'm curious why the i810 driver would let his code get as far as
it did.

  Sounds like a bug to me.
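
  In the meantime, a client can at least detect the contention by
grabbing the port up front and treating a failed grab as 'overlay
busy'.  A sketch (assumes 'port' was found via XvQueryAdaptors):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xvlib.h>

    /* Returns 1 if we own the port, 0 if someone else already has
       the overlay and we should fall back to a software path. */
    int try_hardware_path(Display *dpy, XvPortID port)
    {
        if (XvGrabPort(dpy, port, CurrentTime) != Success)
            return 0;  /* do Y'CbCr->RGB ourselves + XShmPutImage */
        return 1;
    }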

-- 
Billy Biggs
[EMAIL PROTECTED]
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



Re: [Xpert]very strange XVideo behaviour (i810)

2001-10-18 Thread 'Billy Biggs'

Sottek, Matthew J ([EMAIL PROTECTED]):

  Such a capability needs to be in the client side, since different
  applications have different needs regarding the accuracy of the
  Y'CbCr-RGB conversion for optimization.  So, really here we just
  need a method where Xv can tell a client 'I only support one overlay
  at a time'.
 
 Then make the interface smarter. Pass a flag to force a fail when the
 software would be needed. 90% of the apps will just want consistent
 behavior and won't want to fool with fallback code.  Forcing the
 client to implement alternate rendering methods is just going to make
 for a lot of mostly broken clients that work for some people some of
 the time (i.e. what we have now), if the client is smart enough to
 have another better software renderer then they should be smart
 enough to pass another flag.

  Abstractions like SDL are reasonable and support Xv with software
fallbacks, and SDL gets released more often than X (useful for
bugfixes).  Applications which are more specific can make intelligent
decisions about how to handle the loss of hardware scaling.  In xine,
for example, they have coded specific routines for 4:3-to-16:9 scaling
(assuming square pixels, of course).

  Personally, I'd like to see as little intelligence as possible in X,
but I do admit that it is unfortunate so many apps which currently use
Xv just do it directly.

  Still, I really wouldn't want to have my X server wasting its time
doing scaling.  Ugh.  It's way too expensive to have a software
fallback.

-- 
Billy Biggs
[EMAIL PROTECTED]
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



Re: [Xpert]Re: XVideo and refresh sync

2001-10-18 Thread Billy Biggs

[EMAIL PROTECTED] ([EMAIL PROTECTED]):

 What I don't understand is why there is such an issue about root
 permissions. v4l devices, for example, have been user-accessible for a
 while. [...]

  Please consider the root issue a separate matter:  I don't want it to
hinder my suggestions for improving video output quality.

  That said, let me explain my reasoning:

  Consider a video player attempting to play a TV feed at 59.94fps.  A
frame must be blit every 16ms.  With current scheduling under linux,
nanosleep has an accuracy of 1/HZ, which for most systems is 10ms.
Film-source material at 24fps (41.6ms/blit) suffers similarly.  The
+/-5ms error here is visible, especially on big cinematic pans.
Clearly, smooth video output is only possible with better accuracy.

  My current plan of attack is simple:

  * Use /dev/rtc to cause an interrupt at 1024hz (rates above 64hz need
root).
  * Set your render thread SCHED_FIFO so that when the timer interrupt
occurs, you will preempt everything (requires root).

  I get +/-1ms accuracy for my frame blits.  This brute force method
works fine, but I still suffer slightly from not being synced to the
refresh rate.  If HZ were set to something reasonable (like, say, 1024),
then SCHED_FIFO would only be required when you want to guarantee that
you're never very far off, which you probably only need when you're
going to sit down with your family for a Sunday night movie.

  If we had the vsync as an interrupt (/dev/vsync or something), then
quality would increase with less system load!  Very nice!  But
SCHED_FIFO would still be needed to guarantee accurate blits.
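
  For anyone who wants to try the brute force method, the core of it is
just this (a minimal sketch, error handling mostly omitted; run it in a
SCHED_FIFO thread to get the +/-1ms figure):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/rtc.h>

    int main(void)
    {
        unsigned long data;
        int fd = open("/dev/rtc", O_RDONLY);

        if (fd < 0) { perror("/dev/rtc"); return 1; }
        ioctl(fd, RTC_IRQP_SET, 1024);  /* >64hz needs root */
        ioctl(fd, RTC_PIE_ON, 0);       /* periodic interrupts on */

        for (;;) {
            read(fd, &data, sizeof(data)); /* blocks until next tick */
            /* check the clock here; if we're within a tick of the
               frame's presentation time, do the blit */
        }
    }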

  Not all video and multimedia applications require root access.
Single-frame video grabbers, teletext decoders, and some specific
hardware-assisted players are examples.  But you do need something
special for realtime audio effects processors, MIDI synthesizers, and
high quality (even TV quality!) video players.

  Check out deinterlace.sf.net (aka dscaler.sf.net).  It's an example of
a (Windows-based) video deinterlacer, something I've kinda got working
under linux.  For realtime deinterlacing, you have about 16ms _total_ to
grab a field from the capture card, process it, and send it to the
display.  This demands a lot from the CPU, and will definitely require
root access to ensure accurate timing.  We can't buffer more than maybe
2 fields, since unlike using a software DVD player we have no control of
the audio (which goes direct from your DVD player into your AC3 decoder
etc).  Ouch!!


  Branden Robinson mentioned the following:

 FWIW, I've heard that the pre-emptible kernel patches to Linux improve
 latency a great deal.

  If you're talking about the low-latency kernel patches, these only
guarantee performance benefits to SCHED_FIFO processes running as root,
AFAIK.

 What I'm telling you is that DVD playback software that has to be
 setuid root, or run only by the root user, is a very unpalatable
 prospect to most general-purpose distributions.  I think that is the
 audience that most interests the current developers of XvMC, but I
 urge them to correct me if I'm wrong.

  Hey, I'm just saying that for 'high quality' output, that is, at
television framerates or theatre-smooth video, you need root.  You can
play things just fine as a user but your quality will be nowhere near
what you can achieve.


Back to [EMAIL PROTECTED]:

 Bill - if you are willing to settle for your application to
 work with one video card (i.e. ATI) you could get such functionality
 much sooner. Just e-mail me what you want (in terms of sync), and I'll
 try to include this in our extended v4l module (http://gatos.sf.net)

  Vladimir, the idea of a /dev/vsync device is looking very promising.
Maybe we should look at standardizing something along these lines, but
ideally I want my software to work with all video cards eventually (so
long as they don't suck and can provide me with the vsync status or an
interrupt).

  Thoughts?

-- 
Billy Biggs [EMAIL PROTECTED]
http://www.billybiggs.com/  [EMAIL PROTECTED]
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



Re: [Xpert]Re: XVideo and refresh sync

2001-10-17 Thread Billy Biggs

Sottek, Matthew J ([EMAIL PROTECTED]):

   Look at the ioctls for XvMC in i810's drm. There is one that
 returns the overlay flip status. (fstatus). That returns a 32

  Aha!  Ok, see, I'm just not up to date with this DRI stuff.

  If we can do ioctl()s to /dev/dri and have them handled by the driver,
the latency becomes much more reasonable!  Sounds like a good candidate
for a standard way to query the refresh status.

 I can add a XvMC function to get you an XImage from the XvMC surface
 so that you can use XvMC instead of Xv. Then you can get rid of the
 delay between the XvShmPut() and the actual flip.

  But could this be a solution we could implement in general?

-- 
Billy Biggs
[EMAIL PROTECTED]
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert