Bugzilla #306 (Building with #define BuildRender NO)

2003-06-01 Thread Matthieu Herrb
Hi,

I've attached a proposed patch to Bugzilla #306. Please review and
comment. I may have missed something important...



Matthieu
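
(For readers without the report at hand: bug #306 is about building the tree with the Render extension disabled, i.e. an imake host.def along these lines. A minimal illustration; the path shown is just the conventional location of the site configuration file:)

    /* xc/config/cf/host.def -- site build configuration.
     * The configuration bug #306 is about: a tree built with Render off. */
    #define BuildRender NO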


Hi, sorry, one question

2003-06-01 Thread anonymous
Hi, is it possible to use this kind of font by working with the X font
structures, etc., directly through Xlib, without using libfreetype or
anything similar? Thanks

http://www10.brinkster.com/repos/see.jpg
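
(For reference, the path the question asks about - server-side fonts through plain Xlib, with no libfreetype involved - looks roughly like the sketch below. The "fixed" font name and the window setup are only examples:)

    /* Minimal sketch: core-protocol text drawing through plain Xlib.
     * The font lives in the X server, so no client-side font library
     * (FreeType or otherwise) enters the picture. */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                         200, 100, 0, BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask);
        XMapWindow(dpy, win);

        /* XLoadQueryFont fills in an XFontStruct describing a server-side font. */
        XFontStruct *font = XLoadQueryFont(dpy, "fixed");
        if (!font) { fprintf(stderr, "font not found\n"); return 1; }

        GC gc = XCreateGC(dpy, win, 0, NULL);
        XSetFont(dpy, gc, font->fid);
        XSetForeground(dpy, gc, BlackPixel(dpy, scr));

        for (;;) {                       /* redraw on every Expose event */
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose)
                XDrawString(dpy, win, gc, 10, 50, "hello", 5);
        }
    }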





Problems with radeon driver (ATI Radeon 9000 If (RV250))

2003-06-01 Thread Simon Urbanek
Hi all,

I have several problems with the ATI Radeon drivers in XFree86 on my 
Mac G4 Windtunnel (dual 1.4GHz).

Summary:
1) CRT + TMDS dual head configuration doesn't work
2) In all configurations colors are completely wrong
3) Closing X blanks all monitors

I have tested the following versions of XFree86:
Debian sid "official" 4.2.1
Michel Daenzer's 4.2.1 DRI build
Debian "unofficial" 4.3.0
latest CVS build (by myself) as of yesterday (4.3.99...)

The first two worked even worse (no image at all), so in the following
I'll refer to the latter two, which produce exactly the same results.

Problem 1)
I have a TMDS flat panel (DVI-D) on the DVI port of the card and an 
analog flat panel on the ADC port (via ADC2VGA cable). Both panels get 
correctly
detected (see attached log file), but the analog one gets no signal 
after X is started.

Option "MonitorLayout" "CRT, TMDS" doesn't help (nothing really 
changes, since both monitors get correctly detected even wihtout this). 
I tried all tricks I could think of, but the analog one (on the ADC 
port) gets no signal (even after X closes).

Funnily enough, using a DVI2VGA adapter and the analog input of the
*digital* panel causes both panels to work - i.e. changing the mode of
the *panel that works* causes the other one to start working as well.
This means that the "CRT, CRT" combination works. It is really annoying,
since I have a digital panel and I don't want to run it in analog mode,
which sucks.

Problem 2)
No matter what combination (dual or single head), the colors are always
wrong. This is independent of the depth used (with differently "wrong"
colors each time, of course).

I analyzed it for the 24-bit mode using the digital panel. I wrote a
small proggy that writes directly to the frame buffer. I found out that
the sequence to set RGB colors (using the most significant 4 bits only)
is 0x00G0RB00 (beware, Macs are big-endian), that is, 0x00f00000 is
fully saturated green, 0x0000f000 fully saturated red, etc. Since the
least significant 4 bits are hard to distinguish visually I can't tell
for sure, but it looks like they are located at 0xNN0N. At any rate, the
lower 4 bits and the higher 4 bits are split - and hence the colors are
broken.
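
(For reference, the probe described above might have looked roughly like the sketch below. This is a reconstruction of mine, not Simon's actual program; it assumes the Linux fbdev interface and a 32-bits-per-pixel mode:)

    /* Reconstruction (hypothetical) of the framebuffer probe: mmap /dev/fb0
     * and write raw pixels so the on-screen colors reveal where each
     * channel really lands. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;

        if (fd < 0 || ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0 ||
            ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0) {
            perror("/dev/fb0");
            return 1;
        }
        if (vinfo.bits_per_pixel != 32) {
            fprintf(stderr, "expected a 32bpp (x8r8g8b8) mode\n");
            return 1;
        }

        uint32_t *fb = mmap(NULL, finfo.smem_len, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        /* Three horizontal bands that *should* be pure red, green and blue
         * in x8r8g8b8.  If the channels are scrambled (e.g. 0x00f00000
         * showing up green), the observed colors expose the real layout. */
        const uint32_t bands[3] = { 0x00ff0000, 0x0000ff00, 0x000000ff };
        uint32_t stride = finfo.line_length / 4;
        for (uint32_t y = 0; y < vinfo.yres; y++)
            for (uint32_t x = 0; x < vinfo.xres; x++)
                fb[y * stride + x] = bands[3 * y / vinfo.yres];

        munmap(fb, finfo.smem_len);
        close(fd);
        return 0;
    }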

Problem 3)
Shutting down X blanks both screens - i.e. the frame buffer is not
correctly restored. This is somewhat painful, since after closing X you
can only access the box via ssh.

Other relevant info:
The kernel is 2.4.20-ben10; the frame buffer works only with
"video=ofonly". The system is Debian "sid" (up-to-date). The CVS version
of XFree86 was compiled with gcc 3.2 on the same machine.

I don't know who's working on the radeon driver, but any help would be
appreciated. I'd be delighted to help track all this down further if
possible...

Cheers,
Simon


[Attachment: XF86Config-4 (binary data)]

[Attachment: x-log (binary data)]


Re: (no subject)

2003-06-01 Thread Jouni Tulkki
On Sun, 01 Jun 2003 12:48:32 +0300 (EET DST) Jouni Tulkki <[EMAIL PROTECTED]>
wrote:

> There appears to be a memory leak on the client side when using the X
> shared memory extension. The leak only appears when the size of the shared
> image is less than (about) 128kB. The whole allocated memory leaks, meaning
> it's not freed properly. The version of X I'm using is 4.2.1.

False alarm... I found the leak in my own code.


(no subject)

2003-06-01 Thread Jouni Tulkki
There appears to be a memory leak on the client side when using the X
shared memory extension. The leak only appears when the size of the shared
image is less than (about) 128kB. The whole allocated memory leaks, meaning
it's not freed properly. The version of X I'm using is 4.2.1.
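
(For comparison, the full client-side lifecycle of a shared-memory image is sketched below - minimal code against the MIT-SHM API, with the teardown order taken from the extension's documentation. Skipping any of the teardown calls leaves the whole segment pinned, which is exactly the symptom described:)

    /* Minimal sketch of MIT-SHM client-side usage and cleanup. */
    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;
        int scr = DefaultScreen(dpy);

        XShmSegmentInfo shminfo;
        XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                      DefaultDepth(dpy, scr), ZPixmap,
                                      NULL, &shminfo, 64, 64); /* well under 128kB */

        shminfo.shmid = shmget(IPC_PRIVATE,
                               img->bytes_per_line * img->height,
                               IPC_CREAT | 0600);
        shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
        shminfo.readOnly = False;
        XShmAttach(dpy, &shminfo);

        /* ... XShmGetImage / XShmPutImage work would go here ... */

        /* Teardown, in the order the MIT-SHM documentation gives: */
        XShmDetach(dpy, &shminfo);              /* tell the server to let go    */
        XDestroyImage(img);                     /* free the XImage              */
        shmdt(shminfo.shmaddr);                 /* unmap the segment            */
        shmctl(shminfo.shmid, IPC_RMID, NULL);  /* mark the segment for removal */

        XCloseDisplay(dpy);
        return 0;
    }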


Re: S3 968 with IBM 526 DAC

2003-06-01 Thread Andrew C Aitchison
On Sat, 31 May 2003, Tothwolf wrote:

> I haven't been able to follow development lately, but is anyone actively
> working on adding IBM 526 support to the new S3 server? I have a number of
> Diamond Stealth 64 VRAM boards that use the S3 968 with an IBM 526, and
> I'd really like to put them to use.

I'm not, as I don't have any such cards, but I consider it one of the
more interesting S3 cards, as the IBM 526 is one of the few chips which 
supports 8+24 overlays.

I see that the i128 driver supports the IBM 526.

I have detailed docs for the IBM 526 (and its differences from the 524
and 624), but my i128 has a SilverHammer DAC, which i128IDMDAC.c says is
essentially the same as the IBMRGBxxx DACs, but with fewer options and a
different reference frequency. Do you know anything more about this chip?

-- 
Andrew C. Aitchison Cambridge, England
[EMAIL PROTECTED]



S3 968 with IBM 526 DAC

2003-06-01 Thread Tothwolf
I haven't been able to follow development lately, but is anyone actively
working on adding IBM 526 support to the new S3 server? I have a number of
Diamond Stealth 64 VRAM boards that use the S3 968 with an IBM 526, and
I'd really like to put them to use.

I also have an add-on board for those boards that adds video capture
support. The heart of the board appears to be an IBM VIP905, and it also
uses a C-Cube CL450-P160 MPEG Decoder, a TI TMS320AV110PBM DSP, and a
Philips saa7110 analog front-end/digital video decoder. I haven't been
able to find any info regarding support for the VIP905, but it appears
some of the early Matrox add-on capture devices used the same chip.

-Toth


[PATCH] radeon mergedfb support for cvs

2003-06-01 Thread Alex Deucher
The attached patch adds mergedfb support (a single framebuffer with two
viewports looking into it) to the radeon driver.  The options are
consistent with the sis and mga drivers.  I've also replaced the old
clone mode with the clone mode supplied by the mergedfb code; its
behavior follows that of the previous clone code.  I've tested it on a
Radeon M6.  HW-accelerated 3D works on both heads.  Please consider it
for inclusion in XFree86 CVS.

Thanks,

Alex


[Attachment: mergedfb-cvs.diff]


Re: RFC: OpenGL + XvMC

2003-06-01 Thread Mark Vojkovich
On Sat, 31 May 2003, Ian Romanick wrote:

> Mark Vojkovich wrote:
> > On Fri, 30 May 2003, Ian Romanick wrote:
> > 
> >>Mark Vojkovich wrote:
> >>
> >>>   I'd like to propose adding a XvMCCopySurfaceToGLXPbuffer function
> >>>to XvMC.  I have implemented this in NVIDIA's binary drivers and
> >>>am able to do full framerate HDTV video textures on the higher end
> >>>GeForce4 MX cards by using glCopyTexSubImage2D to copy the Pbuffer
> >>>contents into a texture.
> >>
> >>This sounds like a good candidate for a GLX extension.  I've been 
> >>wondering when someone would suggest something like this. :)  Although, I 
> >>did expect it to come from someone doing video capture work first.
> > 
> >I wanted to avoid something from the GLX side.  Introducing the
> > concept of an XFree86 video extension buffer to GLX seemed like a hard
> > sell.  Introducing a well-established GLX drawable type to XvMC 
> > seemed more reasonable.
> 
> Right.  I thought about this a bit more last night.  A better approach 
> might be to expose this functionality as an XFree86 extension, then 
> create a GLX extension on top of it.  I was thinking of an extension 
> where you would bind a "magic" buffer to a pbuffer, then take a snapshot 
> from the input buffer to the pbuffer.  Doing that we could create 
> layered extensions for binding v4l streams to pbuffers.  This would be 
> like creating a subclass in C++ and just overriding the virtual 
> CaptureImage method.  I think that would be much nicer for application code.

   This isn't capture.  It's decode.  XvMC is a video acceleration
architecture, not a capture architecture.  Even with capture, HW capture
buffer formats don't always line up nicely with pbuffer or texture formats.

> 
> At the same time, all of the real work would still be done in the X 
> extension (or v4l).  Only some light-weight bookkeeping would live in GLX.
> 
> >>Over the years there have been a couple extensions for doing things 
> >>like this, both from SGI.  They both work by streaming video data into a new 
> >>type of GLX drawable and use that to source pixel / texel data.
> >>
> >>   http://oss.sgi.com/projects/ogl-sample/registry/SGIX/video_source.txt
> >>   http://oss.sgi.com/projects/ogl-sample/registry/SGIX/dmbuffer.txt
> >>
> >>The function that you're suggesting here is a clear break from that.  I 
> >>don't think that's a bad thing.  I suspect that you designed it this way 
> >>so that the implementation would not have to live in the GLX subsystem 
> >>or in the 3D driver, correct?
> > 
> >That was one of the goals.   I generally view trying to bind 
> > a video-specific buffer to an OpenGL buffer as a bad idea since they
> > always end up as second class.  While there have been implementations
> > that could use video buffers as textures, etc... they've always had
> > serious limitations like the inability to have mipmaps, or repeat, or
> > limited filtering ability or other disappointing things that people
> > are sad to learn about.  I opted instead for an explicit copy from
> > a video-specific surface to a first-class OpenGL drawable.  Being
> > able to do HDTV video textures on a P4 1.2 Gig PC with a $100 video
> > card has shown this to be a reasonable tradeoff.
> 
> The reason you would lose mipmaps and most of the texture wrap modes is 
> because video streams rarely have power-of-two dimensions.  In the past, 

   That's one of the reasons.

> hardware couldn't do mipmapping or GL_WRAP on non-power-of-two textures. 
>   For the most part, without NV_texture_rectangle, you can't even use 
> npot textures. :(

   And NV_texture_rectangle textures are still second class compared to
normal ones.  No video formats are powers of two, unfortunately.

> 
> >>With slightly closer integration between XvMC and the 3D driver, we 
> >>ought to be able to do something along those lines.  Basically, bind a 
> >>XvMCSurface to a pbuffer.  Then, each time a new frame of video is 
> >>rendered the pbuffer would be automatically updated.  Given the way the 
> >>XvMC works, I'm not sure how well that would work, though.  I'll have to 
> >>think on it some more.
> > 
> > 
> >Mpeg frames are displayed in a different order than they are
> > rendered.  It's best if the decoder has full control over what goes
> > where and when.
> 
> Oh.  That does change things a bit.

   Yes.  It's not capture.  I'm decoding HDTV mpeg2 program streams
from my hard disk.  Normally XvMC would display these in the overlay.
I've made a mechanism to copy to a pbuffer rather than (or in addition
to) displaying in the video overlay.


> 
> >>>Status
> >>>XvMCCopySurfaceToGLXPbuffer (
> >>>  Display *display,
> >>>  XvMCSurface *surface,
> >>>  XID pbuffer_id,
> >>>  short src_x,
> >>>  short src_y,
> >>>  unsigned short width,
> >>>  unsigned short height,
> >>>  short dst_x,
> >>>  short dst_y,
> >>>  int flags
> >>>);
> >>
> >>One quick comment.  Don't use 'short', use 'int'.  On every existing and 
> >>future platform that we're likely to care about the shorts will take up
> >>as much space as an int on the stack anyway, and slower / larger / more
> >>instructions will need to be used to access them.
Re: RFC: OpenGL + XvMC

2003-06-01 Thread Ian Romanick
Mark Vojkovich wrote:
> On Fri, 30 May 2003, Ian Romanick wrote:
>
>> Mark Vojkovich wrote:
>>
>>>   I'd like to propose adding a XvMCCopySurfaceToGLXPbuffer function
>>> to XvMC.  I have implemented this in NVIDIA's binary drivers and
>>> am able to do full framerate HDTV video textures on the higher end
>>> GeForce4 MX cards by using glCopyTexSubImage2D to copy the Pbuffer
>>> contents into a texture.
>>
>> This sounds like a good candidate for a GLX extension.  I've been
>> wondering when someone would suggest something like this. :)  Although, I
>> did expect it to come from someone doing video capture work first.
>
>    I wanted to avoid something from the GLX side.  Introducing the
> concept of an XFree86 video extension buffer to GLX seemed like a hard
> sell.  Introducing a well-established GLX drawable type to XvMC
> seemed more reasonable.

Right.  I thought about this a bit more last night.  A better approach 
might be to expose this functionality as an XFree86 extension, then 
create a GLX extension on top of it.  I was thinking of an extension 
where you would bind a "magic" buffer to a pbuffer, then take a snapshot 
from the input buffer to the pbuffer.  Doing that we could create 
layered extensions for binding v4l streams to pbuffers.  This would be 
like creating a subclass in C++ and just overriding the virtual 
CaptureImage method.  I think that would be much nicer for application code.

At the same time, all of the real work would still be done in the X 
extension (or v4l).  Only some light-weight bookkeeping would live in GLX.

>> Over the years there have been a couple extensions for doing things
>> like this, both from SGI.  They both work by streaming video data into
>> a new type of GLX drawable and use that to source pixel / texel data.
>>
>>   http://oss.sgi.com/projects/ogl-sample/registry/SGIX/video_source.txt
>>   http://oss.sgi.com/projects/ogl-sample/registry/SGIX/dmbuffer.txt
>>
>> The function that you're suggesting here is a clear break from that.  I
>> don't think that's a bad thing.  I suspect that you designed it this way
>> so that the implementation would not have to live in the GLX subsystem
>> or in the 3D driver, correct?
>
>    That was one of the goals.  I generally view trying to bind
> a video-specific buffer to an OpenGL buffer as a bad idea since they
> always end up as second class.  While there have been implementations
> that could use video buffers as textures, etc... they've always had
> serious limitations like the inability to have mipmaps, or repeat, or
> limited filtering ability or other disappointing things that people
> are sad to learn about.  I opted instead for an explicit copy from
> a video-specific surface to a first-class OpenGL drawable.  Being
> able to do HDTV video textures on a P4 1.2 Gig PC with a $100 video
> card has shown this to be a reasonable tradeoff.

The reason you would lose mipmaps and most of the texture wrap modes is 
because video streams rarely have power-of-two dimensions.  In the past, 
hardware couldn't do mipmapping or GL_WRAP on non-power-of-two textures. 
For the most part, without NV_texture_rectangle, you can't even use 
npot textures. :(

>> With slightly closer integration between XvMC and the 3D driver, we
>> ought to be able to do something along those lines.  Basically, bind a
>> XvMCSurface to a pbuffer.  Then, each time a new frame of video is
>> rendered the pbuffer would be automatically updated.  Given the way the
>> XvMC works, I'm not sure how well that would work, though.  I'll have to
>> think on it some more.
>
>    Mpeg frames are displayed in a different order than they are
> rendered.  It's best if the decoder has full control over what goes
> where and when.

Oh.  That does change things a bit.

>>> Status
>>> XvMCCopySurfaceToGLXPbuffer (
>>>  Display *display,
>>>  XvMCSurface *surface,
>>>  XID pbuffer_id,
>>>  short src_x,
>>>  short src_y,
>>>  unsigned short width,
>>>  unsigned short height,
>>>  short dst_x,
>>>  short dst_y,
>>>  int flags
>>> );
>>
>> One quick comment.  Don't use 'short', use 'int'.  On every existing and
>> future platform that we're likely to care about the shorts will take up
>> as much space as an int on the stack anyway, and slower / larger / more
>> instructions will need to be used to access them.
>
>    This is an X-window extension.  It's limited to the signed 16 bit
> coordinate system like the X-window system itself, all of Xlib and
> the rest of XvMC.

So?  Just because the values are limited to 16-bit doesn't necessitate 
that they be stored in a memory location that's only 16-bits.  If X were 
being developed from scratch today, instead of calling everything short, 
it would likely be int_fast16_t.  On IA-32, PowerPC, Alpha, SPARC, and 
x86-64, this is int.  Maybe using the C99 types is the right answer anyway.
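
(To make the type suggestion concrete - a short illustration; the prototype name below is hypothetical:)

    #include <stdint.h>

    /* C99's int_fast16_t holds at least 16 bits but uses whatever width
     * the platform accesses fastest; per the discussion above that is a
     * plain int on IA-32, PowerPC, Alpha, SPARC and x86-64. */
    void copy_rect(int_fast16_t src_x, int_fast16_t src_y,
                   int_fast16_t dst_x, int_fast16_t dst_y);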

>>>   This function copies the rectangle specified by src_x, src_y, width,
>>>  and height from the XvMCSurface denoted by "surface" to offset dst_x, dst_y
>>>  within the pbuffer identified by its GLXPbuffer XID "pbuffer_id".
>>>  Note that while the src_x, src_y are in XvMC's standard left-handed
>>>  coordinate system and specify the upper left hand
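
(The archive entry breaks off above. To make the proposed flow concrete, here is a hedged usage sketch based on Mark's description earlier in the thread: decode with XvMC, copy into a pbuffer, then pull the pbuffer into a texture with glCopyTexSubImage2D. XvMCCopySurfaceToGLXPbuffer is only the proposal from this thread, not a released API; the helper name, the flags value of 0, and the assumption that the pbuffer is already the current read drawable are mine:)

    /* Sketch of the proposed decode-to-texture path.  The prototype is
     * copied from the proposal above; it exists in no released XvMC. */
    #include <X11/extensions/XvMClib.h>
    #include <GL/glx.h>

    Status XvMCCopySurfaceToGLXPbuffer(Display *display, XvMCSurface *surface,
                                       XID pbuffer_id, short src_x, short src_y,
                                       unsigned short width, unsigned short height,
                                       short dst_x, short dst_y, int flags);

    /* Assumes: surf holds a fully decoded frame, pbuf was created with
     * glXCreatePbuffer and is the current read drawable, and a suitably
     * sized GL_TEXTURE_2D is bound. */
    int frame_to_texture(Display *dpy, XvMCSurface *surf, GLXPbuffer pbuf,
                         unsigned short width, unsigned short height)
    {
        /* Blit the decoded frame into the pbuffer.  flags is passed as 0
         * because the excerpt does not define the flag values. */
        if (XvMCCopySurfaceToGLXPbuffer(dpy, surf, pbuf, 0, 0, width, height,
                                        0, 0, 0) != Success)
            return -1;

        /* Copy from the pbuffer into the bound texture; from here the frame
         * is a first-class texture (mipmaps, wrapping, filtering and all). */
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
        return 0;
    }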