[RFC][PATCH 2/2] Add option to preserve aspect in SDL display scaling.

2010-09-27 Thread Michal Suchanek
The output is not centered.

I am not sure how to achieve that because SDL just sets a video mode
which starts at upper left corner of the window.

It would be possible to make the mode larger but then the zoom routines
would have to take an offset.
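
For reference, a minimal sketch of the computation involved (illustrative
only, not part of the patch; the helper name and parameters are made up):
the fit-to-window scale that do_sdl_resize() applies below, plus the x/y
offsets that a centered variant would have to pass down to the zoom
routines.

/* sketch only: scale guest_w x guest_h into win_w x win_h keeping aspect,
 * and compute where a centered copy would start */
#include <SDL/SDL.h>

static SDL_Rect fit_and_center(int guest_w, int guest_h, int win_w, int win_h)
{
    float wr = (float)win_w / (float)guest_w;
    float hr = (float)win_h / (float)guest_h;
    float r  = (wr > hr) ? hr : wr;          /* smaller ratio fits inside */
    SDL_Rect dst;

    dst.w = (Uint16)(guest_w * r + 0.5);
    dst.h = (Uint16)(guest_h * r + 0.5);
    dst.x = (Sint16)((win_w - dst.w) / 2);   /* the offsets the zoom      */
    dst.y = (Sint16)((win_h - dst.h) / 2);   /* routines would need       */
    return dst;
}

For example, an 800x600 guest in a 1024x1024 window comes out as 1024x768
with a 128 pixel vertical offset.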

Signed-off-by: Michal Suchanek <hramr...@centrum.cz>
---
 qemu-options.hx |   13 +
 ui/sdl.c        |   13 +++--
 vl.c            |    4
 3 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index fd96b6a..bd25396 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -629,6 +629,19 @@ workspace more convenient.
 ETEXI
 
 #ifdef CONFIG_SDL
+DEF("keep-aspect", 0, QEMU_OPTION_preserve_aspect,
+    "-keep-aspect    preserve aspect ratio of display output\n",
+    QEMU_ARCH_ALL)
+#endif
+STEXI
+@item -keep-aspect
+@findex -keep-aspect
+Normally the SDL output scales the guest display to fill the SDL window. With
+this option the guest display is scaled as large as possible while preserving
+the aspect ratio, possibly leaving some empty space in the window.
+ETEXI
+
+#ifdef CONFIG_SDL
 DEF("alt-grab", 0, QEMU_OPTION_alt_grab,
     "-alt-grab   use Ctrl-Alt-Shift to grab mouse (instead of Ctrl-Alt)\n",
     QEMU_ARCH_ALL)
diff --git a/ui/sdl.c b/ui/sdl.c
index f599d42..5ac9a77 100644
--- a/ui/sdl.c
+++ b/ui/sdl.c
@@ -38,6 +38,7 @@
 #include "x_keymap.h"
 #include "sdl_zoom.h"
 
+int sdl_preserve_aspect = 0;
 static DisplayChangeListener *dcl;
 static SDL_Surface *real_screen;
 static SDL_Surface *guest_screen = NULL;
@@ -65,12 +66,12 @@ static Notifier mouse_mode_notifier;
 
 static void sdl_update(DisplayState *ds, int x, int y, int w, int h)
 {
-//printf("updating x=%d y=%d w=%d h=%d\n", x, y, w, h);
 SDL_Rect rec;
 rec.x = x;
 rec.y = y;
 rec.w = w;
 rec.h = h;
+//fprintf(stderr, "SDL updating x=%d y=%d w=%d h=%d\n", x, y, w, h);
 
 if (guest_screen) {
 if (!scaling_active) {
@@ -105,7 +106,7 @@ static void do_sdl_resize(int new_width, int new_height, int bpp)
 {
 int flags;
 
-//printf("resizing to %d %d\n", w, h);
+fprintf(stderr, "SDL resizing to %d %d\n", new_width, new_height);
 
 flags = SDL_HWSURFACE|SDL_ASYNCBLIT|SDL_HWACCEL|SDL_RESIZABLE;
 if (gui_fullscreen)
@@ -113,6 +114,14 @@ static void do_sdl_resize(int new_width, int new_height, int bpp)
 if (gui_noframe)
 flags |= SDL_NOFRAME;
 
+if (guest_screen) {
+    float wr = (float)new_width / (float)guest_screen->w,
+          hr = (float)new_height / (float)guest_screen->h,
+          r = (wr > hr) ? hr : wr;
+    new_width = (float)guest_screen->w * r + 0.5;
+    new_height = (float)guest_screen->h * r + 0.5;
+    fprintf(stderr, "SDL adjusting resize to %d %d\n", new_width, new_height);
+}
 width = new_width;
 height = new_height;
 real_screen = SDL_SetVideoMode(width, height, bpp, flags);
diff --git a/vl.c b/vl.c
index ee641bc..67d35bb 100644
--- a/vl.c
+++ b/vl.c
@@ -193,6 +193,7 @@ QEMUClock *rtc_clock;
 int vga_interface_type = VGA_NONE;
 static int full_screen = 0;
 #ifdef CONFIG_SDL
+extern int sdl_preserve_aspect;
 static int no_frame = 0;
 #endif
 int no_quit = 0;
@@ -2421,6 +2422,9 @@ int main(int argc, char **argv, char **envp)
 case QEMU_OPTION_no_frame:
 no_frame = 1;
 break;
+case QEMU_OPTION_preserve_aspect:
+sdl_preserve_aspect = 1;
+break;
 case QEMU_OPTION_alt_grab:
 alt_grab = 1;
 break;
-- 
1.7.1



[RFC][PATCH 1/2] Add option to disable PS/2 mouse.

2010-09-27 Thread Michal Suchanek
Hello

I tried to patch qemu to allow disabling the PS/2 mouse.

This patch works for me: when I disable the mouse, Windows no longer
detects it.

I am not sure this is entirely correct. Specifically, there is a
KBD_MODE_DISABLE_MOUSE bit which can probably still be disabled, and
KBD_MODE_MOUSE_INT is enabled.

Thanks

Michal

Signed-off-by: Michal Suchanek <hramr...@centrum.cz>
---
 hw/isa.h        |    1 +
 hw/pckbd.c      |   36 +---
 qemu-options.hx |    9 +
 vl.c            |    4
 4 files changed, 35 insertions(+), 15 deletions(-)

diff --git a/hw/isa.h b/hw/isa.h
index aaf0272..80ab6bb 100644
--- a/hw/isa.h
+++ b/hw/isa.h
@@ -31,6 +31,7 @@ ISADevice *isa_create(const char *name);
 ISADevice *isa_create_simple(const char *name);
 
 extern target_phys_addr_t isa_mem_base;
+extern int isa_psaux;
 
 void isa_mmio_init(target_phys_addr_t base, target_phys_addr_t size, int be);
 
diff --git a/hw/pckbd.c b/hw/pckbd.c
index 6e4e406..4557e14 100644
--- a/hw/pckbd.c
+++ b/hw/pckbd.c
@@ -169,7 +169,7 @@ static void kbd_update_irq(KBDState *s)
 }
 }
 qemu_set_irq(s->irq_kbd, irq_kbd_level);
-qemu_set_irq(s->irq_mouse, irq_mouse_level);
+if (s->mouse) qemu_set_irq(s->irq_mouse, irq_mouse_level);
 }
 
 static void kbd_update_kbd_irq(void *opaque, int level)
@@ -205,10 +205,11 @@ static uint32_t kbd_read_status(void *opaque, uint32_t addr)
 
 static void kbd_queue(KBDState *s, int b, int aux)
 {
-if (aux)
-ps2_queue(s->mouse, b);
-else
+if (aux) {
+if (s->mouse) ps2_queue(s->mouse, b);
+} else {
 ps2_queue(s->kbd, b);
+}
 }
 
 static void ioport92_write(void *opaque, uint32_t addr, uint32_t val)
@@ -323,12 +324,13 @@ static void kbd_write_command(void *opaque, uint32_t addr, uint32_t val)
 static uint32_t kbd_read_data(void *opaque, uint32_t addr)
 {
 KBDState *s = opaque;
-uint32_t val;
+uint32_t val = 0;
 
-if (s->pending == KBD_PENDING_AUX)
-val = ps2_read_data(s->mouse);
-else
+if (s->pending == KBD_PENDING_AUX) {
+if (s->mouse) val = ps2_read_data(s->mouse);
+} else {
 val = ps2_read_data(s->kbd);
+}
 
 DPRINTF("kbd: read data=0x%02x\n", val);
 return val;
@@ -354,13 +356,13 @@ static void kbd_write_data(void *opaque, uint32_t addr, uint32_t val)
 kbd_queue(s, val, 0);
 break;
 case KBD_CCMD_WRITE_AUX_OBUF:
-kbd_queue(s, val, 1);
+if (s->mouse) kbd_queue(s, val, 1);
 break;
 case KBD_CCMD_WRITE_OUTPORT:
 ioport92_write(s, 0, val);
 break;
 case KBD_CCMD_WRITE_MOUSE:
-ps2_write_mouse(s->mouse, val);
+if (s->mouse) ps2_write_mouse(s->mouse, val);
 break;
 default:
 break;
@@ -430,9 +432,10 @@ void i8042_mm_init(qemu_irq kbd_irq, qemu_irq mouse_irq,
 {
 KBDState *s = qemu_mallocz(sizeof(KBDState));
 int s_io_memory;
+int mouse_enabled = isa_psaux;
 
 s->irq_kbd = kbd_irq;
-s->irq_mouse = mouse_irq;
+if (mouse_enabled) s->irq_mouse = mouse_irq;
 s->mask = mask;
 
 vmstate_register(NULL, 0, vmstate_kbd, s);
@@ -440,7 +443,8 @@ void i8042_mm_init(qemu_irq kbd_irq, qemu_irq mouse_irq,
 cpu_register_physical_memory(base, size, s_io_memory);
 
 s->kbd = ps2_kbd_init(kbd_update_kbd_irq, s);
-s->mouse = ps2_mouse_init(kbd_update_aux_irq, s);
+s->mouse = NULL;
+if (mouse_enabled) s->mouse = ps2_mouse_init(kbd_update_aux_irq, s);
 qemu_register_reset(kbd_reset, s);
 }
 
@@ -454,7 +458,7 @@ void i8042_isa_mouse_fake_event(void *opaque)
 ISADevice *dev = opaque;
 KBDState *s = &(DO_UPCAST(ISAKBDState, dev, dev)->kbd);
 
-ps2_mouse_fake_event(s->mouse);
+if (s->mouse) ps2_mouse_fake_event(s->mouse);
 }
 
 void i8042_setup_a20_line(ISADevice *dev, qemu_irq *a20_out)
@@ -478,9 +482,10 @@ static const VMStateDescription vmstate_kbd_isa = {
 static int i8042_initfn(ISADevice *dev)
 {
 KBDState *s = &(DO_UPCAST(ISAKBDState, dev, dev)->kbd);
+int mouse_enabled = isa_psaux;
 
 isa_init_irq(dev, &s->irq_kbd, 1);
-isa_init_irq(dev, &s->irq_mouse, 12);
+if (mouse_enabled) isa_init_irq(dev, &s->irq_mouse, 12);
 
 register_ioport_read(0x60, 1, 1, kbd_read_data, s);
 register_ioport_write(0x60, 1, 1, kbd_write_data, s);
@@ -490,7 +495,8 @@ static int i8042_initfn(ISADevice *dev)
 register_ioport_write(0x92, 1, 1, ioport92_write, s);
 
 s->kbd = ps2_kbd_init(kbd_update_kbd_irq, s);
-s->mouse = ps2_mouse_init(kbd_update_aux_irq, s);
+s->mouse = NULL;
+if (mouse_enabled) s->mouse = ps2_mouse_init(kbd_update_aux_irq, s);
 qemu_register_reset(kbd_reset, s);
 return 0;
 }
diff --git a/qemu-options.hx b/qemu-options.hx
index a0b5ae9..fd96b6a 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -1746,6 +1746,15 @@ Three button serial mouse. Configure the guest to use Microsoft protocol.
 @end table
 ETEXI
 
+DEF("no-psaux", 0, QEMU_OPTION_nopsaux, \
+    "-no-psaux   disable PS/2 mouse\n"

Re: [Linux-fbdev-devel] drm_fb_helper: Impossible to change video mode

2010-03-11 Thread Michal Suchanek
On 10 March 2010 22:04, Ville Syrjälä syrj...@sci.fi wrote:
 On Wed, Mar 10, 2010 at 06:11:29PM +, James Simmons wrote:

  I don't think so. There is another driver which does this -
  vesa/uvesa. For these it is not possible to change the resolution from
  fbdev, it just provides some framebuffer on top of which fb
  applications or fbcons run.

 Only because that is the only way to do it. The other options was to have
 x86emul in the kernel. That was not going to happen.

  I guess equivalent of xrandr would be what people would want but the
  current fbdev capabilities are far from that.
  Since KMS provides these capabilities already I would think adding a
  tool that manipulates KMS directly (kmset?) is the simplest way.

 Still would have to deal with the issue of keeping the graphical console
 in sync with the changes.

  There are other drivers that support multihead already (matroxfb, any
  other?) and have their own driver-specific interface.

 Each crtc is treated as a separate fbdev device. I don't recall any
 special ioctls. Maybe for mirroring which was never standardized.

 matroxfb does have a bunch of custom ioctls to change the crtc-output
 mapping. omapfb is another multihead fb driver and it's more complex
 than matroxfb. Trying to make it perform various tricks through the
 fbdev API (and a bunch of custom ioctls, and a bunch of sysfs knobs)
 is something I've been doing but I would not recommend it for anyone
 who has the option of using a better API.

 I don't think the CRTC=fb_info makes much sense if the main use
 case is fbcon. fbcon will use a single fb device and so you can't see
 the console on multiple heads anyway which makes the whole thing
 somewhat pointless. And if you're trying to do something more complex
 you will be a lot better off bypassing fbdev altogether.


I guess it's also possible that somebody would want the fbdev/fbcons to
cover multiple screens. This is not particularly useful with fbcons
(although curses WMs exist) but might be somewhat useful for graphical
fbdev applications.

Multiple views of the kernel virtual consoles on different heads might
be a nice toy but it's probably too hard to be worth trying. And there
are always applications like jfbterm which could perhaps be slightly
adapted to use one of the other devices instead of a vc.

Thanks

Michal



Re: [Linux-fbdev-devel] drm_fb_helper: Impossible to change video mode

2010-03-11 Thread Michal Suchanek
On 10 March 2010 19:47, James Simmons jsimm...@infradead.org wrote:

  Yuck. See my other post about what fbdev really means in its historical
  context. The struct fb_info really maps better to drm_crtc than to
  drm_framebuffer. In fact take the case of the matrox fbdev driver. It
  creates two framebuffer devices even though it uses one static framebuffer.
  What the driver does is split the framebuffer in two and assign each
  part to a CRTC.
 
  The only problem with that is that it eats a lot of memory for the
  console which limits X when it starts. On cards with limited vram ,
  you might not have enough memory left for any meaningful acceleration
  when X starts.

 It would be nice to find a way to reclaim the console memory for X,
 but I'm not sure that can be done and still provide a good way to
 provide oops support.

        Ah, the power of flags. We had the same issue with user requesting a 
 mode
 change or fbcon asking for a different mode. We handled it with the flag
 FBINFO_MISC_USEREVENT. Since you are using KMS as the backend for fbcon you 
 will
 have to deal also with the ability to change the resolution with tools like 
 stty.
 I can easily see how to do this plus give you more memory like you want :-)
        For the oops, are you talking about printing the oops to the screen
 while X is running? Otherwise, if you experience an oops and go back to
 console mode you should be able to view it. The console text buffer is
 independent of the graphics card memory system.


The ability to print the oops over X does not seem to be that bad an idea.
Since with KMS the kernel finally knows what X is doing with the
graphics it should be able to print it. Note that it may be the only
way to see it in situations when the console dies in one way or
another.

Thanks

Michal



Re: [Linux-fbdev-devel] drm_fb_helper: Impossible to change video mode

2010-03-11 Thread Michal Suchanek
On 10 March 2010 18:42, James Simmons jsimm...@infradead.org wrote:

  At the moment the problem with fbset is what to do with it in the
  dual head case. Currently we create an fb console that is lowest
  common size of the two heads and set native modes on both,
 
  Does that mean that fbset is supposed to work (set resolution) on drmfb?

 No we've never hooked it up but it could be made to work.

 I had it to the point of almost working. I plan on working on getting it
 working again.

  Schemes which would make a multihead setup look like a single screen
  get complicated quite easily. Perhaps an option to turn off some
  outputs so that the native resolution of one output is used (instead
  of clone) would work.
 

 I've only really got two answers for this:

 (a) hook up another /dev/dri/card_fb device and use the current KMS
 ioctls to control the framebuffer, have the drm callback into fbdev/fbcon
 to mention resizes etc. Or add one or two info gathering ioctls and
 allow use of the /dev/dri/control device to control stuff.

 (b) add a lot of ioctls to KMS fbdev device, which implement some sort
 of sane multi-output settings.

 Now the second sounds like a lot of work if not the correct solution,
 you basically need a way to pretty much expose what the KMS ioctls
 expose on the fb device, and then upgrade fbset to make sense of it all.

 Yuck. See my other post about what fbdev really means in its historical
 context. The struct fb_info really maps better to drm_crtc than to
 drm_framebuffer. In fact take the case of the matrox fbdev driver. It
 creates two framebuffer devices even though it uses one static framebuffer.
 What the driver does is split the framebuffer in two and assign each
 part to a CRTC.


So you get the layering naturally. On the fbset -> fbdev layer you can
choose from the resolutions available in the current output setup; in
the kmset-or-whatever -> drm layer you can set up the outputs, merge
multiple outputs into a single cloned fbdev or separate them, ..

It's obviously nice if you can set the resolution on all of the fbcons,
fbdev and drm layers, but getting it to work on at least one layer with
proper propagation up and down also works. BTW I don't know of any
application which sets the linux console (or xterm for that matter)
resolution through the terminal API.

Thanks

Michal



Re: [Linux-fbdev-devel] drm_fb_helper: Impossible to change video mode

2010-03-11 Thread Michal Suchanek
On 11 March 2010 16:17, James Simmons jsimm...@infradead.org wrote:

  It would be nice to find a way to reclaim the console memory for X,
  but I'm not sure that can be done and still provide a good way to
  provide oops support.
 
  What do you think the average user will care about more?
 
       * Seeing kernel oops/panic output about once in a lifetime.
       * Being able to start/use X in the first place and enabling it to
         use all of VRAM.
 
  Personally, I've never even seen any kernel oops/panic output despite
  numerous opportunities for that in the couple of months I've been using
  KMS. But I have spent considerable time and effort trying to get rid of
  the pinned fbcon BO. If the oops/panic output is the only thing
  preventing that, maybe that should only be enabled via some module
  option for developers.

 I'm all for it!

 I'm looking into the details for this. It will require some changes to
 internal apis to make it work.


Can't it print the oops on whatever is currently displayed?

It need not be a dedicated buffer as long as there is always some buffer.

But perhaps this is more complex than that.

Thanks

Michal



Re: [Linux-fbdev-devel] drm_fb_helper: Impossible to change video mode

2010-03-03 Thread Michal Suchanek
On 3 March 2010 06:02, Dave Airlie airl...@gmail.com wrote:
 On Mon, Mar 1, 2010 at 7:18 PM, Michal Suchanek hramr...@centrum.cz wrote:
 On 21 November 2009 05:27, Dave Airlie airl...@gmail.com wrote:

 At the moment the problem with fbset is what to do with it in the
 dual head case. Currently we create an fb console that is lowest
 common size of the two heads and set native modes on both,

 Does that mean that fbset is supposed to work (set resolution) on drmfb?

 No we've never hooked it up but it could be made to work.



 Now if a user runs fbset, I'm not sure what the right answer is,
 a) pick a head in advance via sysfs maybe and set it on that.
 b) try and set the mode on both heads cloned (what to do if
 there is no common mode is another issue).


 I would say it's time to support multihead with fbset properly.

 That is people would need new fbset which sees both (all) heads, and
 fbset can then choose the head itself (and people can make it do
 something different when they don't like the default). It should also
 support setting up rotation on each head.

 For old fbset setting something visible is probably good enough.

 Schemes which would make a multihead setup look like a single screen
 get complicated quite easily. Perhaps an option to turn off some
 outputs so that the native resolution of one output is used (instead
 of clone) would work.


 I've only really got two answers for this:

 (a) hook up another /dev/dri/card_fb device and use the current KMS
 ioctls to control the framebuffer, have the drm callback into fbdev/fbcon
 to mention resizes etc. Or add one or two info gathering ioctls and
 allow use of the /dev/dri/control device to control stuff.


What about writing a drmfbset or something and having fbset call it when
it detects a drm framebuffer, warning that it does not support drm
framebuffers fully?

That way people using fbset still get something and people who want
exact control over the setup can use the new tool which uses whatever
KMS interface is available already.

Thanks

Michal



Re: [Linux-fbdev-devel] drm_fb_helper: Impossible to change video mode

2010-03-03 Thread Michal Suchanek
On 3 March 2010 10:23, Dave Airlie airl...@gmail.com wrote:


  I've only really got two answers for this:

 (a) hook up another /dev/dri/card_fb device and use the current KMS
 ioctls to control the framebuffer, have the drm callback into fbdev/fbcon
 to mention resizes etc. Or add one or two info gathering ioctls and
 allow use of the /dev/dri/control device to control stuff.


 What about writing a drmfbset or something and have fbset call it when
 it detects a drm framebuffer and warn that it does not support drm
 framebuffers fully?


 My main problem with calling the drm underneath the fbdev is it
 seems like a layering violation. Then again some of the code in the kernel
 is also contributing to this violation. I'd really like to make fbdev more
 like an in-kernel version of what X drivers have to do, and leave all the
 initial modepicking etc to the fbdev interface layer.

 If we take the layering as
 fbcon -> fbdev -> kms -> hw

 I feel calling ioctls on the KMS layer from userspace to do stuff for
 fbcon or fbdev
 is wrong, and we should rather expose a more intelligent set of ioctls via the
 fbdev device node. This points at quite a bit of typing.

I don't think so. There is another driver which does this -
vesa/uvesa. For these it is not possible to change the resolution from
fbdev, it just provides some framebuffer on top of which fb
applications or fbcons run.

You set the proper options on the proper layer - fonts in fbcons,
resolution in fbdev or the driver (which sucks but so far nobody came
up with a modesetting solution universal enough to work with all
drivers), and some hardware-specific options in the driver as well.

Still if most framebuffer drivers are converted to KMS there would not
be interface discrepancies. KMS would be used to set resolution and
fbdev to draw on the screen.


 So we'd need to add a bunch of KMS fb specific ioctls like some of the other fbdev
 drivers do, and then a new fbset could take advantage of these. I'm not sure
 how much different to the current kms interface or how powerful we really need
 to make this interface though, and I feel kinda bad implementing it without
 some idea what users would want from it.


I guess an equivalent of xrandr would be what people would want but the
current fbdev capabilities are far from that.
Since KMS provides these capabilities already I would think adding a
tool that manipulates KMS directly (kmset?) is the simplest way.

There are other drivers that support multihead already (matroxfb, any
other?) and have their own driver-specific interface.

Designing a unified multihead fbdev extension interface would
probably take quite a bit of typing and time. It would also help to
have something working to compare to.
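
As a rough illustration of the kind of tool meant by kmset above (a sketch
only, assuming libdrm's drmMode* API; the file name and build line are made
up, and error handling is omitted), listing the modes KMS exposes on the
connected outputs is only a few lines:

/* kmset-sketch.c - build with: gcc kmset-sketch.c $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    drmModeRes *res = drmModeGetResources(fd);

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (conn && conn->connection == DRM_MODE_CONNECTED) {
            /* every connected output advertises the modes KMS can set on it */
            for (int m = 0; m < conn->count_modes; m++)
                printf("connector %u: %s %uHz\n", conn->connector_id,
                       conn->modes[m].name, conn->modes[m].vrefresh);
        }
        if (conn)
            drmModeFreeConnector(conn);
    }
    drmModeFreeResources(res);
    return 0;
}

Actually setting one of those modes would then be a drmModeSetCrtc() call on
a framebuffer, which is roughly the knob such a tool would expose alongside
turning outputs on/off and cloning.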

Thanks

Michal



Re: [Linux-fbdev-devel] drm_fb_helper: Impossible to change video mode

2010-03-01 Thread Michal Suchanek
On 21 November 2009 05:27, Dave Airlie airl...@gmail.com wrote:

 At the moment the problem with fbset is what to do with it in the
 dual head case. Currently we create an fb console that is lowest
 common size of the two heads and set native modes on both,

Does that mean that fbset is supposed to work (set resolution) on drmfb?


 Now if a user runs fbset, I'm not sure what the right answer is,
 a) pick a head in advance via sysfs maybe and set it on that.
 b) try and set the mode on both heads cloned (what to do if
 there is no common mode is another issue).


I would say it's time to support multihead with fbset properly.

That is, people would need a new fbset which sees both (all) heads, and
fbset can then choose the head itself (and people can make it do
something different when they don't like the default). It should also
support setting up rotation on each head.

For the old fbset, setting something visible is probably good enough.

Schemes which would make a multihead setup look like a single screen
get complicated quite easily. Perhaps an option to turn off some
outputs so that the native resolution of one output is used (instead
of clone) would work.

Thanks

Michal



Re: [Mesa3d-dev] [mesa] svga: Fix error: cannot take address of bit-field 'texture_target' in svga_tgsi.h

2010-01-08 Thread michal
Sedat Dilek wrote on 2010-01-06 18:54:
 Compile-tested OK.

   
Thanks, committed.



Re: [Mesa3d-dev] [mesa] svga: Fix error: cannot take address of bit-field 'texture_target' in svga_tgsi.h

2010-01-06 Thread michal

Brian Paul wrote on 2010-01-06 18:07:

Sedat Dilek wrote:
  

Hi,

this patch fixes a build-error in mesa GIT master after...

commit  251363e8f1287b54dc7734e690daf2ae96728faf (patch)
configs: set INTEL_LIBS, INTEL_CFLAGS, etcmaster

From my build-log:
...
In file included from svga_pipe_fs.c:37:
svga_tgsi.h: In function 'svga_fs_key_size':
svga_tgsi.h:122: error: cannot take address of bit-field 'texture_target'
make[4]: *** [svga_pipe_fs.o] Error 1

Might be introduced in...

commit  955f51270bb60ad77dba049799587dc7c0fb4dda
Make sure we use only signed/unsigned ints with bitfields.

Kind Regards,
- Sedat -




I just fixed that.

  

Actually, we could go back to bitfields and fix broken svga_fs_key_size().

Attached a patch.

Can somebody review, test-build and commit?

From 7321aef0dfc5bb160ec8a33d1d4e686419f2ed3d Mon Sep 17 00:00:00 2001
From: Michal Krol mic...@vmware.com
Date: Wed, 6 Jan 2010 18:36:45 +0100
Subject: [PATCH] svga: Fix fs key size computation and key comparison.

This also allows us to have texture_target
back as a bitfield and save us a few bytes.
---
 src/gallium/drivers/svga/svga_state_fs.c |9 +++--
 src/gallium/drivers/svga/svga_tgsi.h |5 ++---
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/src/gallium/drivers/svga/svga_state_fs.c b/src/gallium/drivers/svga/svga_state_fs.c
index 272d1dd..bba80a9 100644
--- a/src/gallium/drivers/svga/svga_state_fs.c
+++ b/src/gallium/drivers/svga/svga_state_fs.c
@@ -40,8 +40,13 @@
 static INLINE int compare_fs_keys( const struct svga_fs_compile_key *a,
const struct svga_fs_compile_key *b )
 {
-   unsigned keysize = svga_fs_key_size( a );
-   return memcmp( a, b, keysize );
+   unsigned keysize_a = svga_fs_key_size( a );
+   unsigned keysize_b = svga_fs_key_size( b );
+
+   if (keysize_a != keysize_b) {
+  return (int)(keysize_a - keysize_b);
+   }
+   return memcmp( a, b, keysize_a );
 }
 
 
diff --git a/src/gallium/drivers/svga/svga_tgsi.h b/src/gallium/drivers/svga/svga_tgsi.h
index 043b991..737a221 100644
--- a/src/gallium/drivers/svga/svga_tgsi.h
+++ b/src/gallium/drivers/svga/svga_tgsi.h
@@ -56,7 +56,7 @@ struct svga_fs_compile_key
   unsigned compare_func:3;
   unsigned unnormalized:1;
   unsigned width_height_idx:7;
-  ubyte texture_target;
+  unsigned texture_target:8;
} tex[PIPE_MAX_SAMPLERS];
 };
 
@@ -119,8 +119,7 @@ static INLINE unsigned svga_vs_key_size( const struct svga_vs_compile_key *key )
 
 static INLINE unsigned svga_fs_key_size( const struct svga_fs_compile_key *key )
 {
-   return (const char *)&key->tex[key->num_textures].texture_target -
-      (const char *)key;
+   return (const char *)&key->tex[key->num_textures] - (const char *)key;
 }
 
 struct svga_shader_result *
-- 
1.6.4.msysgit.0
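
To spell out the idea behind the patch (a standalone sketch with simplified,
made-up field names, not the actual svga structures): the compile key is a
variable-length object whose size depends on how many per-sampler entries
are used, so two keys may only be compared with memcmp() when their sizes
match, and the size has to be measured up to the end of the last array
element rather than up to a bit-field member, whose address cannot be taken.

#include <string.h>

#define MAX_SAMPLERS 16

struct compile_key {
   unsigned num_textures;
   struct {
      unsigned compare_func:3;
      unsigned texture_target:8;   /* bit-field: cannot take its address */
   } tex[MAX_SAMPLERS];
};

/* size of the used prefix: up to the end of the last tex[] entry */
static unsigned key_size(const struct compile_key *key)
{
   return (const char *)&key->tex[key->num_textures] - (const char *)key;
}

/* only memcmp() when both keys cover the same number of bytes */
static int compare_keys(const struct compile_key *a,
                        const struct compile_key *b)
{
   unsigned sa = key_size(a), sb = key_size(b);

   if (sa != sb)
      return (int)(sa - sb);
   return memcmp(a, b, sa);
}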



Re: [Linux-fbdev-devel] [Bugme-new] [Bug 13285] New: INTELFB: Colors display incorrectly

2009-06-06 Thread Michal Suchanek
2009/6/4 Krzysztof Helt krzysztof...@poczta.fm:
 On Wed, 3 Jun 2009 11:27:17 +0200
 Michal Suchanek hramr...@centrum.cz wrote:

 Unfortunately I did not get to testing the patch yet.

 According to the description it is supposed to resolve some confusion
 over what pipe is enabled or not.

 X server reports the pipes connected as follows:
 (II) intel(0):   Pipe A is on
 (II) intel(0):   Display plane A is now enabled and connected to pipe A.
 (II) intel(0):   Pipe B is off
 (II) intel(0):   Display plane B is now disabled and connected to pipe A.
 (II) intel(0):   Output VGA is connected to pipe none
 (II) intel(0):   Output TMDS-1 is connected to pipe A
 (II) intel(0):   Output TV is connected to pipe none

 However, I also get this warning before the outputs are listed:
 (WW) intel(0): Couldn't detect panel mode.  Disabling panel

 Is this a configuration that would likely be affected by the issue
 fixed here or do I have a different problem?


 Frankly speaking, I don't know. Please describe your problem.

 The panel mode, I suppose, is LVDS (an LCD panel directly connected
 to the chip), which is possible only in laptops and tablets or other
 computers where the LCD panel is integrated with the main unit
 (e.g. desk-lamp-like Apple computers). All other computers, to which you
 connect the display by an external cable (DVI/HDMI or VGA), do not
 work in panel mode.

This is a Mac Mini with a single TMDS display only.

Thanks

Michal



Re: [Linux-fbdev-devel] [Bugme-new] [Bug 13285] New: INTELFB: Colors display incorrectly

2009-06-03 Thread Michal Suchanek
2009/6/2 Andrew Morton a...@linux-foundation.org:
 On Sat, 30 May 2009 13:58:33 +0200
 Krzysztof Helt krzysztof...@poczta.fm wrote:

 The intelfb driver sets the color map depending on the currently active
 pipe. However, if an LVDS display is attached (like in a laptop) the
 active pipe variable is never set. The default value is PIPE_A and can
 be wrong. Set up the pipe variable during driver initialization after
 the hardware state has been read.

 Also, the detection of the active display (and hence the pipe) is wrong.
 The pipes are assigned to so-called planes. Both pipes are always enabled
 on my laptop but only one plane is enabled (plane A for the CRT or plane B
 for the LVDS). Change the active pipe detection code to take into account
 the status of the plane assigned to each pipe.

 The problem is visible in the 8 bpp mode if colors above 15 are used. The
 first 16 color entries are displayed correctly.

 The graphics chip description is here (G45 vol. 3):
 http://intellinuxgraphics.org/documentation.html

 Signed-off-by: Krzysztof Helt krzysztof...@wp.pl

 ---
 This is the second version of the fix for this problem. It is now much more
 sophisticated, based on knowledge gained from the documentation available at
 http://intellinuxgraphics.org/.

 It does not change the default behaviour (assumed pipe A) except in the case
 that only the plane assigned to pipe B is active. That is enough to fix the
 issue for me.

 I queued this.

 Please test it.

 But it would be great if Dean and/or Michal were able to test it, please.


Thanks for the patch.

Unfortunately I did not get to testing the patch yet.

According to the description it is supposed to resolve some confusion
over what pipe is enabled or not.

X server reports the pipes connected as follows:
(II) intel(0):   Pipe A is on
(II) intel(0):   Display plane A is now enabled and connected to pipe A.
(II) intel(0):   Pipe B is off
(II) intel(0):   Display plane B is now disabled and connected to pipe A.
(II) intel(0):   Output VGA is connected to pipe none
(II) intel(0):   Output TMDS-1 is connected to pipe A
(II) intel(0):   Output TV is connected to pipe none

However, I also get this warning before the outputs are listed:
(WW) intel(0): Couldn't detect panel mode.  Disabling panel

Is this a configuration that would likely be affected by the issue
fixed here or do I have a different problem?

I am currently not using intelfb because last time I tried it produced
even worse results than efifb (which does suffer from the wrong colors
as well).

Thanks

Michal



Re: [Linux-fbdev-devel] [Bugme-new] [Bug 13285] New: INTELFB: Colors display incorrectly

2009-05-17 Thread Michal Suchanek
2009/5/17 Krzysztof Helt krzysztof...@poczta.fm:
 On Sat, 16 May 2009 23:19:32 -0700
 Andrew Morton a...@linux-foundation.org wrote:

 On Sun, 17 May 2009 08:17:43 +0200 Krzysztof Helt krzysztof...@poczta.fm 
 wrote:

  This is not a regression. I have reproduced it in 2.6.28 easily.

 hm, Dean's original report had

     This does not occur in kernel 2.6.29 -- I can see the Tasmanian
     devil in a penguin mask (Tuz) just fine and can view images, etc on
     the framebuffer.


 I can confirm that Tuz is also broken on my laptop (kernel v2.6.29).
 Maybe Dean had set a different color depth (vga= parameter) for the older
 kernel?

 The dmesg output for the 2.6.29 would clear any doubts.


For me this has been broken ever since 2.6.26 on a Mac Mini with all of
efifb/intelfb/vesafb, but perhaps this is a different issue.

I will try to rebuild 2.6.29 with intelfb and the patch to see if that
makes a difference.

Currently efifb does give correct geometry but wrong colours for me;
the other framebuffers also produced a picture with wrong geometry
with 2.6.26.

Thanks

Michal



[1/2] 2.6.22-git: known regressions

2007-07-20 Thread Michal Piotrowski
Hi all,

Here is a list of some known regressions in 2.6.22-git.

Feel free to add new regressions/remove fixed etc.
http://kernelnewbies.org/known_regressions

List of Aces

Name               Regressions fixed since 21-Jun-2007
Adrian Bunk        3
Andi Kleen         2
Andrew Morton      2
David Woodhouse    2
Hugh Dickins       2
Jens Axboe         2



Unclassified

Subject : a52b1752c07 broke !SMP: error: implicit declaration of 
function `WARN_ON'
References  : http://lkml.org/lkml/2007/7/17/600
Last known good : ?
Submitter   : Uwe Kleine-König [EMAIL PROTECTED]
Caused-By   : Avi Kivity [EMAIL PROTECTED]
  commit a52b1752c077cb919b71167c54968a0b91673281
Handled-By  : ?
Status  : unknown

Subject : Section mismatch: reference to .init.data:cpu_llc_id (between 
'set_cpu_sibling_map' and 'initialize_secondary')
References  : http://lkml.org/lkml/2007/7/19/202
Last known good : ?
Submitter   : Gabriel C [EMAIL PROTECTED]
Caused-By   : Jeremy Fitzhardinge [EMAIL PROTECTED]
  commit c70df74376c1e29a04e07e23dd3f4c384d6166dd
Handled-By  : ?
Status  : unknown


Subject : Oops while modprobing phy fixed module
References  : http://lkml.org/lkml/2007/7/14/63
Last known good : ?
Submitter   : Gabriel C [EMAIL PROTECTED]
Caused-By   : Tejun Heo [EMAIL PROTECTED]
  commit 3007e997de91ec59af39a3f9c91595b31ae6e08b
Handled-By  : Satyam Sharma [EMAIL PROTECTED]
  Tejun Heo [EMAIL PROTECTED]
Status  : unknown



Block layer

Subject : broken {sd,hd}parm
References  : http://lkml.org/lkml/2007/7/16/389
Last known good : ?
Submitter   : Gabriel C [EMAIL PROTECTED]
Caused-By   : ?
Handled-By  : FUJITA Tomonori [EMAIL PROTECTED]
Status  : problem is being debugged



DRM

Subject : wine locks up system
References  : http://lkml.org/lkml/2007/7/17/128
Last known good : ?
Submitter   : Charles Gagalac [EMAIL PROTECTED]
Caused-By   : commit d4e2cbe9cb9219fc924191a6baa2369140cb5ea8
  Dave Airlie [EMAIL PROTECTED]
  Michel Dänzer [EMAIL PROTECTED]
  Kristian Høgsberg [EMAIL PROTECTED]
Handled-By  : 
Status  : unknown



Regards,
Michal

--
LOG
http://www.stardust.webpages.pl/log/



[1/3] 2.6.22-rc3: known regressions with patches

2007-05-29 Thread Michal Piotrowski
Hi all,

Here is a list of some known regressions in 2.6.22-rc3
with patches available.

Feel free to add new regressions/remove fixed etc.
http://kernelnewbies.org/known_regressions



ARM

Subject: arch/arm/plat-s3c24xx/devs.c build errors
References : http://lkml.org/lkml/2007/5/28/18
Submitter  : Qi Yong [EMAIL PROTECTED]
Status : patch available in -arm tree



Block devices

Subject: loop devices limited to one single device
References : http://lkml.org/lkml/2007/5/16/229
Submitter  : Uwe Bugla [EMAIL PROTECTED]
Handled-By : Ken Chen [EMAIL PROTECTED]
Patch  : http://lkml.org/lkml/2007/5/21/483
Status : patch available


DRM

Subject: kernel BUG at include/linux/slub_def.h:88 kmalloc_index()
References : http://bugzilla.kernel.org/show_bug.cgi?id=8476
Submitter  : Cherwin R. Nooitmeer [EMAIL PROTECTED]
Status : Fix should be in my drm tree for the next mm.. Dave Airlie



File systems

Subject: 2.6.21-git10/11: files getting truncated on xfs
References : http://lkml.org/lkml/2007/5/9/410
Submitter  : Jeremy Fitzhardinge [EMAIL PROTECTED]
Handled-By : David Chinner [EMAIL PROTECTED]
Patch  : http://lkml.org/lkml/2007/5/12/93
Status : patch available


Memory management

Subject: bug in i386 MTRR initialization
References : http://lkml.org/lkml/2007/5/19/93
Submitter  : Andrea Righi [EMAIL PROTECTED]
Status : patch available



Regards,
Michal

--
What I missed most was your silence.
-- Andrzej Sapkowski, "Coś więcej"





Savage/MX: all OpenGL apps receive SIGKILL upon start

2005-12-27 Thread Michal Kepien
Hello there all DRI hackers,

I own a Savage/MX in my laptop and for some time, I've been using it
successfully with the savage DRI driver. However, some time ago things got
messed up. I've been always using the latest CVS versions of X.org, Mesa and
DRM. I usually update them when I update my kernel. Ok, so here's the story. A
couple of months ago (that's an estimation actually, I don't remember the
precise date) I've updated the DRM kernel module and, as always, tested its
functionality by starting glxgears. A black window blinked once and I got the
message Terminated. I thought it may be the question of incompatibility
between current DRM and old X.org or Mesa, so I recompiled them all from CVS.
Same result. So I started Quake III - Terminated. Because I use OpenGL apps 
only from time to time, I thought it's probably some minor bug that would soon
be fixed, I forgot about the matter and hoped for the best. Yet, it's been some
time and still all apps become terminated by SIGKILL upon OpenGL initialization.
Today, I recompiled the current CVS version of X.org, Mesa and DRM and it still
did no good. The only message I get is VM: killing process glxgears in
/var/log/syslog. I've been trying to produce a backtrace, yet I'm unable to do
it (but I'm a newbie and that may explain it ;-)) X.org says DRI is enabled
and produces no errors at all. That's about everything I can tell about the
matter, if you need any more info, just ask and I'll answer ASAP.

My box: Toshiba Satellite 4270 - Celeron 500, 192 MB RAM, Savage/MX 8 MB VRAM,
Slackware 10.0 + current, custom 2.6.14.5 kernel, current CVS versions of X.org,
Mesa and DRM.

One more thing, previously DRI worked *perfectly* (performance way better than
on Windows) for quite a time (about a year I believe).

Thanks in advance for any help,
Michal Kepien




Re: [Mesa3d-dev] Artifacts with very large texture coordinates

2004-12-16 Thread Michal Kepien
 Sorry folks, I attached the wrong file. This is the second time in a
 week. I have to be more careful. Now the correct program.

You're simply working too hard :-) Here you go:

http://kempniu.no-ip.com/files/teximage.jpg (Savage/IX 8 MB)

Best regards,
Michal Kepien




Re: Artifacts with very large texture coordinates

2004-12-15 Thread Michal Kepien
 I was also wondering if other hardware has similar problems. I'm
 attaching a small test program that demonstrates the effect and a
 screenshot of what I get on my ProSavageDDR. With software rendering the
 output is almost correct. Compile with 
 
   cc -lGL -lGLU -lglut  teximage.c -o teximage

Well, I don't really know what I _should_ get, but here you go :-)

http://kempniu.no-ip.com/files/texturetest.jpg (Savage/IX 8 MB)

Oh and BTW: the compile command you've shown uses the filename teximage.c, but
you attached texturetest.c ;-)

Best regards,
Michal Kepien




Re: Symbol printf from module /usr/X11R6/lib/modules/linux/libdrm.a is unresolved! [was: Unresolved symbols in X.org modules with Savage IX]

2004-11-30 Thread Michal Kepien
 Any news?
 
 After a short experience with kernels 2.6.9 and 10rc2
 I went back to kernel 2.4 and this problem of unresolved symbols
 appears again; now if I run glxinfo it kills my X, also with one savage
 (twister K in my case).

Well, yes, actually I did manage to get it working (see my latest post :-))
However, the method I used definitely does not deserve to be called a
"solution" - the term "workaround" fits best here, I guess. Try running this
Perl script while you are in your X.org CVS root directory (e.g. /usr/src/xc):

$ perl -e 'for(@ARGV){$f=$_;@_=`cat $_`;$_=join("",@_);s|\s+printf\((.*?)\);|/*printf($1);*/|gs;open(F,">$f");print F;close(F);}' \
./lib/font/Type1/objects.c \
./lib/font/Type1/t1malloc.c \
./lib/font/Type1/t1stub.c \
./programs/Xserver/GL/dri/dri.c \
./programs/Xserver/hw/xfree86/os-support/linux/drm/xf86drmSL.c

That just comments out all printf's that cause the problem :^))) After
applying this... ekhem, patch ^_^, recompile X.org and it should get you under
way. If you don't want to run `make distclean' and then build the whole source
tree, you can just do the following:

$ cd ./lib/font/Type1
$ rm -f *.o *.a
$ make && make install

Repeat these steps for the other 2 directories involved (see above).

Worked OK for me, guess those printf's are just debugging messages (are they?).

Good luck,
Michal Kepien




DGA problem with Savage/IX

2004-11-29 Thread Michal Kepien
Hi there all Savage users!

I've got a problem with my Savage/IX card inside a Toshiba Satellite Pro 4270
laptop. I've built all the software required to launch DRI for a Savage card
(that is, X.org, Mesa and the kernel DRM module) from CVS. My glxgears result
peaks at 394 fps, which I consider not that bad. I've also managed to run
Quake II @ 1024x768 with a stable framerate of 17 frames per second! So at the
beginning, thanks to all Savage developers for bringing us to this point.

However, I've got a problem with DGA and I'm quite unsure whether this should
already work or not. I haven't found anything on this topic on the net. Well,
the problem is I can't use DGA correctly. It does launch, but the display I
get is pretty far from what you can call a usable environment :-) This
includes e.g. MPlayer and Starcraft with wine-20041019. When I start X, an
entry "Loading extension XFREE86-DGA" appears in the logs and gives no errors.

When I start MPlayer with the -vo dga option, the screen gets divided into
several stripes of pixelised video. Pixelised means I get squares about
10 pixels wide (can't take any screenshots, I'm afraid). However, the picture
isn't totally random, it resembles the correct output. But you cannot call it
even a satisfactory one :-) Within MPlayer, keyboard shortcuts work (video
navigation, exiting and all this stuff).

Starcraft with wine-20041019 is a little bit more of a confusion. When I run
wine as root, my display turns into an almost-random combination of pixels.
Again, it _isn't_ totally random. I can e.g. see that rotating satellite dish
in the main menu :-) (Well, actually it's more like a coordinated motion of
pixels that I sense is a satellite dish ;-)) However, Starcraft gets my
keyboard and mouse locked up - holding the POWER button is the only solution
(heard of this _many_ times before).

So, my question is: do any of you guys happen to have such problems, or is it
something with my specific configuration? Or maybe DGA support is still
experimental? If so, take your time - I just wanna know whether I have anything
to work with.

Thanks in advance for any help and good luck to all of you out there!

Michal Kepien




Unresolved symbols in X.org modules with Savage IX

2004-11-25 Thread Michal Kepien
Hi there,

I'm trying to get my Savage IX running with DRI. I'm precisely following the
instructions supplied at http://dri.sourceforge.net/cgi-bin/moin.cgi/Building.
After everything is compiled, X.org starts fine, but when I review Xorg.0.log,
it turns out that something's screwed up. Here's a snippet:

Symbol printf from module /usr/X11R6/lib/modules/linux/libdrm.a is unresolved!
Symbol puts from module /usr/X11R6/lib/modules/fonts/libtype1.a is unresolved!
Symbol printf from module /usr/X11R6/lib/modules/extensions/libdri.a is 
unresolved!

Each of these errors is printed out a few times. Before it happens, X.org
tells me that DRI was successfully initialized (DRI is enabled). When I run
glxinfo (or any other OpenGL application) afterwards, it crashes my Xserver
telling me that An undefined function has been called. What's even more
frustrating, the same thing happens for me both when I update the X.org version
installed from a Slackware package (6.7.0) and when I compile the whole X.org
source tree from scratch! I'm more than confused by the contents of those
messages - unresolved symbol *printf*? Or puts? Even such a newbie as me knows
these are standard C I/O functions so I sense something weird in here...

Got any clues? Could this be a CVS error? Or maybe something's wrong with my
configuration? I'm using a self-compiled 2.4.27 kernel and Slackware 10.0 as I
mentioned before. gcc version 3.3.4, glibc version 2.3.2. Is anything else
worth mentioning? Dunno. If you'd like to review my whole Xorg.0.log, here it
is:

http://kempniu.no-ip.com/files/Xorg.0.log

I'd be grateful for any clues as I already saw posts about successful DRI
installations using Savage IX and I can't wait to get my very own working :)

Best regards,
Michal Kepien




[Dri-devel] Problems with RTCW in 16bit resolution

2003-02-02 Thread Michal Bukovjan
Hi,

when playing RTCW in 16bit screen resolution, the colors of fog effects
(panzerfaust smoke, clouds in the sky, fog on the ground) are funny.

This effect is visible even in the main screen.

To be more specific:

- panzerfaust smoke from the panzerfaust tube dropped after shooting is 
sometimes dark magenta, sometimes dark green. Should be white

- clouds in the sky on the first level are dark with funny colors; they should
be white and sky blue.

- fog on the ground in some dungeons incl. the first level is dark, 
should be white

These problems occur with both TCL enabled and disabled.
Using XFree86 4.2.99.4.
When I use 24bit resolution for the X server, all seems to be fine and the
above problems do not occur.
I am using Radeon AIW (QD) with 32MB SDRAM on Linux 2.4.21pre4.


There is also a reproducible lockup in the Rocket base mission - when 
taking the lift to the missile launching room and when you look up, my 
machine locks hard (not even kernel magic keys work). This is easily 
reproducible. The workaround is not to look up on the lift :-) This 
lockup occurs in 16bit; I am not sure about other color depths, and it does
not occur anywhere else in the game.

Thanks for otherwise great drivers. I am especially looking forward to the
texmem branch; it should help those having only 32MB of on-card memory,
right? :-)

Michal Bukovjan





Re: [Dri-devel] radeon mmio area size

2002-07-17 Thread Michal Bukovjan

Adam Duck wrote:

Michal == Michal Bukovjan [EMAIL PROTECTED] writes:



Michal> Keith Whitwell wrote:
 Can someone out there check this quickly?
 
 In radeon.h in the 2d driver, the size of the radeon mmio area is 
 defined as 0x8 - however, on my r200 at least, /proc/pci shows the 
 area as being 1/8th that size:
 
 Bus  1, device   0, function  0:
 ... (omitted) ...
 Non-prefetchable 32 bit memory at 0xff5f [0xff5f].
 
 Can someone with a radeon installed make the same check?
 
Michal> Here is mine (Radeon AIW - QD):

Michal>   Bus  1, device   0, function  0:
Michal>     VGA compatible controller: ATI Technologies Inc Radeon QD (rev 0).
Michal>   Non-prefetchable 32 bit memory at 0xdd00 [0xdd07].

Hey, why do you have that much???
Mine also says - like the others here:

  Bus  1, device   5, function  0:
VGA compatible controller: ATI Technologies Inc Radeon QL (rev 0).
  Non-prefetchable 32 bit memory at 0xe900 [0xe900].

Bye, *disappointed* Adam.

Don't know ...
Is it good or bad? Is there anything I can do about this?
Is there any setting (in BIOS, or config file) that can adjust this?

Anyway, I am using the latest GATOS drivers + drm on this machine and both
XVideo and 3d work flawlessly (RTCW, Rune, chromium B.S.D.) - although
no TCL, of course.
I wish DRI and GATOS would merge :-( Maybe someday...

Michal






Re: [Dri-devel] radeon mmio area size

2002-07-16 Thread Michal Bukovjan

Keith Whitwell wrote:

 Can someone out there check this quickly?

 In radeon.h in the 2d driver, the size of the radeon mmio area is 
 defined as 0x8 - however, on my r200 at least, /proc/pci shows the 
 area as being 1/8th that size:

   Bus  1, device   0, function  0:
 ... (omitted) ...
   Non-prefetchable 32 bit memory at 0xff5f [0xff5f].

 Can someone with a radeon installed make the same check?

 Keith


Here is mine (Radeon AIW - QD):

  Bus  1, device   0, function  0:
VGA compatible controller: ATI Technologies Inc Radeon QD (rev 0).
  IRQ 10.
  Master Capable.  Latency=32.  Min Gnt=8.
  Prefetchable 32 bit memory at 0xd000 [0xd7ff].
  I/O at 0xc000 [0xc0ff].
  Non-prefetchable 32 bit memory at 0xdd00 [0xdd07].

Michal






Re: [Dri-devel] radeon mmio area size

2002-07-16 Thread Michal Kozlowski

On Tue, 16 Jul 2002, Keith Whitwell wrote:

 Can someone out there check this quickly?
 
 In radeon.h in the 2d driver, the size of the radeon mmio area is defined as 
 0x8 - however, on my r200 at least, /proc/pci shows the area as being 
 1/8th that size:
 
Bus  1, device   0, function  0:
 ... (omitted) ...
Non-prefetchable 32 bit memory at 0xff5f [0xff5f].
 
 Can someone with a radeon installed make the same check?
 
 Keith
 
Here's mine

[...]

  Bus  1, device   0, function  0:
VGA compatible controller: PCI device 1002:514c (ATI Technologies Inc) (rev 
0).
  IRQ 9.
  Master Capable.  Latency=32.  Min Gnt=8.
  Prefetchable 32 bit memory at 0xd800 [0xdfff].
  I/O at 0x9000 [0x90ff].
  Non-prefetchable 32 bit memory at 0xe100 [0xe100].

Original ATI 8500, presume it's LE

Cheers
Mike






Re: [Dri-devel] R200 video signal lost

2002-07-11 Thread Michal Kozlowski

Hi everyone,

Thanks to the help of Stefan and others I got my r200 to work. From
Stefan's lspci logs I was able to figure out why I lost my signal; it was
hardware-configuration related. I had to disable Fast Writes for AGP, a
simple solution that took a bit of time to find. Anyone else having
problems, make sure to check that :o)

Great job everyone, keep up the great work.
Cheers
Mike






Re: [Dri-devel] good buy?

2002-07-11 Thread Michal Kozlowski

Hey Ian,
Well, I just got my DRI working and it works reasonably well; I play UT at 
1280x960 and it's fairly smooth for alpha-level drivers.  The 2d has 
always worked - it works with the radeon driver.  I have an original ATI 
8500 board and I really like it, including the 2d.  Now that 3d is 
starting to work I'm playing some games again.  Sorry, I don't own Q3 so I 
can't say how good that is.  And yes, 2d will work with the stock 4.2 X.

Cheers
Mike

 
 Is the Hercules 3D Prophet 8500 a good card? (is the 2D stable at
 1600x1200, unlike my 7500 VIVO which has a noticeable 'noise' so that
 vertical lines aren't smooth, but somewhat rough looking)
 
 will it work 2D on a stock X 4.2 ?
 
 what framerate does the beta DRI accel for it give in Quake3 ?
 
 All questions I must know answers to ;)
 
 Might have a chance to upgrade soon! (like, tomorrow, so answer me quick
 please :-)






[Dri-devel] R200 video signal lost

2002-07-10 Thread Michal Kozlowski

Hi,

I have a problem: when I start X with dri and the radeon_drv module loaded
I lose my video signal.  I can still log onto the system remotely and get
the log from the startup; here is where it stops.
snip
(II) RADEON(0): Using 8 MB AGP aperture
(II) RADEON(0): Using 1 MB for the ring buffer
(II) RADEON(0): Using 2 MB for vertex/indirect buffers
(II) RADEON(0): Using 5 MB for AGP textures
(II) RADEON(0): Memory manager initialized to (0,0) (1600,2514)
(II) RADEON(0): Reserved area from (0,1200) to (1600,1202)
(II) RADEON(0): Largest offscreen area available: 1600 x 1312
(II) RADEON(0): Reserved back buffer at offset 0xf5a000
(II) RADEON(0): Reserved depth buffer at offset 0x16ad000
(II) RADEON(0): Reserved 34816 kb for textures at offset 0x1e0
(==) RADEON(0): Backing store disabled
(==) RADEON(0): Silken mouse enabled
\snip

As you can see it doesn't continue with loading the XAA driver and
input devices; the X server sits there using 99.9% of my cpu.  I must say I'm
using 4.2.99.1, the latest cvs from XFree86 (testing the new xft library).
If I disable dri, or do not load the agpgart and radeon modules, then it goes
into X fine - without dri support, of course.

Oh, I also tried it on a clean dri trunk from cvs and it did
the same thing.  I'm wondering if I'm doing something wrong; I copied the
binary files from dripkg into the appropriate directories.

Here is a snip from my config file

snip
Section "Device"
Identifier  "ATI Radeon 8500"
Driver  "radeon"
EndSection
/snip

Great beginning, keep up the good work everyone.  If you need any
information I would be more than willing to help; I'm looking forward to
getting this board working under Linux.

Cheers
Mike







Re: [Dri-devel] R200 video signal lost

2002-07-10 Thread Michal Kozlowski

I guess I forgot to mention that I'm trying out the r200 dri drivers that
Keith released a couple of days ago.  Now, if I reboot and do not change my
XF86Config to stop loading DRI, then I get a lost signal.  The only reason I'm
trying DRI now is because of the new r200_dri driver.  Do I have something
configured wrong to get the r200 driver running?  It seems to me like the
radeon_drv should find the r200_dri automatically (radeon_dri.c line#
1310).  My kernel mod is the one provided by
r200-20020709-linux.i386.tar.bz2, no modifications.
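
To make it clearer what I mean by the radeon_drv finding r200_dri
automatically, here is a purely hypothetical sketch of the kind of check I
would expect around that spot - the enum, the helper and all its names are
mine, not copied from radeon_dri.c:

#include <stdio.h>

/* Hypothetical chip families, just for illustration. */
enum chip_family { FAMILY_RADEON, FAMILY_RV200, FAMILY_R200 };

/* The 2D driver tells the DRI loader which 3D client driver to dlopen;
 * something along these lines would pick r200_dri.so for R200-class
 * chips and radeon_dri.so for the older ones. */
static const char *client_driver_name(enum chip_family fam)
{
    return (fam == FAMILY_R200) ? "r200" : "radeon";
}

int main(void)
{
    printf("8500 (R200)  -> %s_dri.so\n", client_driver_name(FAMILY_R200));
    printf("7500 (RV200) -> %s_dri.so\n", client_driver_name(FAMILY_RV200));
    return 0;
}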

Cheers
Mike

On Wed, 10 Jul 2002, Mike Mestnik wrote:

 Yes, I had a similar problem; I rebooted and everything was fine.
 I got my kernel mod from cvs (and patched it heavily to support devfs), then
 I got the rest of DRM from
 http://dri.sourceforge.net/snapshots/radeon-20020710-linux.i386.tar.bz2 -
 I have a 7500 QW though.

 I'll get my 8500 out of that windows box and run some tests, OK.






Re: [Dri-devel] R200 video signal lost

2002-07-10 Thread Michal Kozlowski

Hi Stefan, good to hear someone got it working.  I'll give the
r200-0-1-branch a shot and see if I can compile it and get it running.  Would
there be a problem if I had the radeon_dri modules still in the module/dri
directory?  (I'll try removing it.)

snip

 Did you take a look at /var/log/XFree86.0.log, maybe it gives you any
 interesting information why the server won't start (or why it blanks
 your screen)
/snip

I posted it in my first post - didn't want to clutter up the email in the
reply.  Here is the last snippet of my log:

(II) RADEON(0): [agp] AGP Texture map mapped at 0x4452c000
(II) RADEON(0): [drm] register handle = 0xe100
(II) RADEON(0): [dri] Visual configs initialized
(II) RADEON(0): CP in BM mode
(II) RADEON(0): Using 8 MB AGP aperture
(II) RADEON(0): Using 1 MB for the ring buffer
(II) RADEON(0): Using 2 MB for vertex/indirect buffers
(II) RADEON(0): Using 5 MB for AGP textures
(II) RADEON(0): Memory manager initialized to (0,0) (1600,2514)
(II) RADEON(0): Reserved area from (0,1200) to (1600,1202)
(II) RADEON(0): Largest offscreen area available: 1600 x 1312
(II) RADEON(0): Reserved back buffer at offset 0xf5a000
(II) RADEON(0): Reserved depth buffer at offset 0x16ad000
(II) RADEON(0): Reserved 34816 kb for textures at offset 0x1e0
(==) RADEON(0): Backing store disabled
(==) RADEON(0): Silken mouse enabled

This is right before where XAA is supposed to load.
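
As an aside, the reserved offsets in that log do add up for 1600x1200 at
32 bpp on a 64 MB card.  This is just my own back-of-the-envelope
arithmetic, not driver code:

#include <stdio.h>

int main(void)
{
    unsigned long buffer = 1600UL * 1200 * 4;   /* one full screen, 32 bpp   */
    unsigned long back   = 0xf5a000;            /* back buffer, from the log */
    unsigned long depth  = back + buffer;       /* -> 0x16ad000, as logged   */
    unsigned long tex    = depth + buffer;      /* -> 0x1e00000 (the log's
                                                 * "0x1e0" looks truncated)  */
    unsigned long vram   = 64UL * 1024 * 1024;  /* 64 MB card                */

    printf("depth buffer offset: 0x%lx\n", depth);
    printf("texture offset:      0x%lx\n", tex);
    printf("texture area:        %lu kb\n", (vram - tex) / 1024);  /* 34816  */
    return 0;
}

So the memory setup itself looks sane - the hang really does seem to come
right at the XAA stage.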

snip
 Also, what hardware are you using (mainboard chipset, model of the card,
 cpu ...)? Maybe the current driver works better on some chipsets than on
 others?

/snip

oops, okay - I'm running an AMD Thunderbird 1 GHz,
ABIT KT7A-RAID (chipset is VIA KT133A / VIA 686B),
hard drive in software RAID using XFS on top,
Live! Value,
ATI 8500 64MB (X recognizes it as 8500 QL rev 0).

Thanks for trying to help me out,
Cheers
Mike







Re: [Dri-devel] R200 video signal lost

2002-07-10 Thread Michal Kozlowski


On Thu, 11 Jul 2002, Stefan Lange wrote:

 compiling from cvs shouldn't be a problem, I just followed the DRI
 Compilation Guide from the website and everything went fine (you might
 have to compile the kernel drm-module manually, but that's no trouble)


Completing the compile didn't help me any - it still does the same thing.  Did
you find that you had to remove some misplaced  and  from some
files?

  This is right before where XAA is suppose to load.
 
  snip
 
 yes, XAA is the next section in my log files, do yours just stop there?

Yes, it just dies - well, the server keeps running at 99.9% CPU.

Stefan, thanks for all your help.  I'm going to give up on the compile for
now; as you said, 2d works great, and I haven't tried xv but I don't expect
it to work.  I really hope I'm not overlooking something simple.  Some time
this week I may try my hand at debugging the X server - I've never done it,
so we'll see how it goes.

There are a couple of people out there that have it working, so it's something
with my system, I think.

Cheers
Mike


