3D support for Displaylink devices

2011-06-02 Thread Alan Cox
> The window system needs support for splitting rendering and display.
> In X these are currently tied together.  The only real obstacle is
> fixing this in X.  However, this is a lot of work.  Dave Airlie has
> started working on this, but it's not really usable yet.   See:
> http://airlied.livejournal.com/71734.html
> http://cgit.freedesktop.org/~airlied/xserver/log/?h=drvlayer

In the Windows world it basically works by 'borrowing' the real graphics
card for rendering and then doing the damage list/compression/uplink.

So it's basically a *very* strange CRTC/scanout. For most intents and
purposes, plugging one into, say, an i915 is an extra i915 head, and indeed
you can sensibly display the same scanout buffer on both the real and link
heads.
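
A rough sketch of that update path (every name below is invented for
illustration; neither udlfb nor the Windows driver is structured exactly
like this):

#include <stdint.h>
#include <string.h>

struct rect { int x, y, w, h; };

/* stand-in for a USB bulk transfer of an encoded tile */
static void usb_bulk_send(const uint8_t *buf, size_t len)
{
	(void)buf; (void)len;
}

/* stand-in for the RLE-style encoding udlfb performs; an identity copy
 * keeps the sketch self-contained */
static size_t encode(const uint8_t *src, size_t len, uint8_t *dst)
{
	memcpy(dst, src, len);
	return len;
}

/* per-frame flush: the real GPU has already rendered into 'fb'; only
 * the damaged rectangles get encoded and pushed over the link */
static void flush_damage(const uint8_t *fb, size_t pitch, int cpp,
			 const struct rect *damage, int n)
{
	static uint8_t raw[16384], packed[16384];

	for (int i = 0; i < n; i++)
		for (int y = damage[i].y; y < damage[i].y + damage[i].h; y++) {
			size_t len = (size_t)damage[i].w * cpp;
			memcpy(raw, fb + y * pitch + damage[i].x * cpp, len);
			usb_bulk_send(packed, encode(raw, len, packed));
		}
}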

Alan


Questions about libdrm_intel and way to share physical memory between CPU and GPU

2011-06-02 Thread Alan Cox
On Sat, 28 May 2011 09:54:01 +0100
Chris Wilson wrote:

> On Fri, 27 May 2011 14:37:45 -0700, "Segovia, Benjamin" wrote:
> > Hello gurus,
> > 
> > I have two questions, mostly regarding libdrm_intel:
> > 
> > 1/ What is the difference between drm_intel_bo_map and
> > drm_intel_gem_bo_map_gtt?
> bo_map uses the CPU domain, and so is CPU linear (needs sw detiling).
> bo_gtt_map uses the uncached [WC] GTT domain, and so is GPU linear
> (detiling is performed by the hardware using a fence).
> 
> > 2/ Will it be possible (or is it already possible) to directly share a 
> > regularly allocated piece of physical memory? Typical use case is the 
> > following one using OpenCL API:
> 
> Yes. I've proposed a vmap interface to bind user-pages into the GTT,
> similar to a completely unused bit of TTM functionality.
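
For concreteness, the two mapping paths described above look roughly like
this from userspace (a minimal sketch using the stock libdrm_intel entry
points; error handling omitted and the device node path assumed):

#include <fcntl.h>
#include <intel_bufmgr.h>	/* libdrm_intel */

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR);
	drm_intel_bufmgr *bufmgr = drm_intel_bufmgr_gem_init(fd, 4096);
	drm_intel_bo *bo = drm_intel_bo_alloc(bufmgr, "scratch", 4096, 4096);

	/* CPU path: cached mapping, CPU-linear view; any tiling has to be
	 * untangled in software */
	drm_intel_bo_map(bo, 1 /* write_enable */);
	((char *)bo->virtual)[0] = 0xff;
	drm_intel_bo_unmap(bo);

	/* GTT path: WC mapping through the aperture; a fence detiles in
	 * hardware, so the view is GPU-linear */
	drm_intel_gem_bo_map_gtt(bo);
	((char *)bo->virtual)[0] = 0xff;
	drm_intel_gem_bo_unmap_gtt(bo);

	drm_intel_bo_unreference(bo);
	drm_intel_bufmgr_destroy(bufmgr);
	return 0;
}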

It seems to me that stolen memory and other things could all be sorted
out somewhat if the GEM layer and GEM's shmemfs backing were split apart
a bit. A 'privately backed' GEM object wouldn't be able to support
flink(), but I can't find much else that would break?

Wondering about this for things like the GMA500, and also to get back all
that memory the i9xx driver burns on a PC.


Semantics of the 'dumb' interface

2011-06-02 Thread Alan Cox
I have GEM allocation working on the GMA 500 and I can scribble in a
framebuffer (minus an odd 'last page' bug which is an off-by-one
somewhere, I imagine) and display it with nice modetest stripes.

If I kill the program, however, it all goes kerblam.

drm_release calls into fb_release, which duly destroys the user frame
buffer. This drops the GEM object reference and we start to release all
the resources, but it blows up because the GEM object is still pinned in
the GTT, as it's still being used as scanout.

Where I'm a bit lost right now is understanding what is supposed to
happen at this point, and how the scanout is supposed to have been
cleaned up and reset to the system frame buffer beforehand, thus unpinning
the scanout from the GTT.
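
For reference, the pattern other KMS drivers use is a lastclose hook that
forces the display back onto the fbdev framebuffer before the rest of the
release path runs, which unpins the user framebuffer. A sketch (the psb_*
names and the fb_helper field are assumptions, not actual gma500 code):

#include <drm/drmP.h>
#include <drm/drm_fb_helper.h>

struct psb_private {			/* hypothetical driver private */
	struct drm_fb_helper fb_helper;
};

static void psb_lastclose(struct drm_device *dev)
{
	struct psb_private *priv = dev->dev_private;

	/* restore the kernel console's mode/scanout so nothing still
	 * references the user framebuffer when it is destroyed */
	mutex_lock(&dev->mode_config.mutex);
	drm_fb_helper_restore_fbdev_mode(&priv->fb_helper);
	mutex_unlock(&dev->mode_config.mutex);
}

static struct drm_driver psb_driver = {
	/* ... */
	.lastclose = psb_lastclose,
};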

Alan


[Bug 31412] radeon memory leak

2011-06-02 Thread bugzilla-dae...@bugzilla.kernel.org
https://bugzilla.kernel.org/show_bug.cgi?id=31412

--- Comment #15 from Kevin  2011-06-02 17:24:35 ---
I'm sorry. I wasn't clear. There is NOT a memory leak.

Below is some of my 'free -m' output from above. My question is, why does the
'used -/+ buffers/cache' decrease when I 'echo 3 > /proc/sys/vm/drop_caches'?


# free -m
             total       used       free     shared    buffers     cached
Mem:          5920       4318       1602          0        113       2465
-/+ buffers/cache:       1740       4180
Swap:         7998          0       7998
# echo 3 > /proc/sys/vm/drop_caches
# free -m
             total       used       free     shared    buffers     cached
Mem:          5920        225       5695          0          3         12
-/+ buffers/cache:        209       5711
Swap:         7998          0       7998

-- 
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.


Fw: [PATCH] drm: i915: correct return status in intel_hdmi_mode_valid()

2011-06-02 Thread Keith Packard
On Mon, 30 May 2011 12:48:26 +0200, Nicolas Kaiser wrote:

>   if (mode->clock < 20000)
> - return MODE_CLOCK_HIGH;
> + return MODE_CLOCK_LOW;

Seems obvious to me.

Reviewed-by: Keith Packard 

-- 
keith.packard at intel.com


3D support for Displaylink devices

2011-06-02 Thread Prasanna Kumar T S M
Garry,

My first name is "PrasannaKumar". I will use my full name to prevent 
confusion :).

I want 3D acceleration for running Compiz or GNOME 3 or KWin with
compositing. Currently the Windows DisplayLink driver compresses and
transfers pixel data only where there is a change (only the damaged area
is transferred) to reduce the amount of data transfer. It is able to play
HD video without dropping frames. So I think that 3D acceleration and
video playback acceleration are possible. High-end games cannot be played,
but normal 3D and video operations should work without any issues. When
DisplayLink introduces USB 3.0 devices the bandwidth issue will go away
(I remember reading on Wikipedia that DisplayLink is working on a USB 3.0
product).

The DisplayLink framebuffer driver that comes with Linux (udlfb) also
compresses and transfers only the damaged region to conserve USB
bandwidth. The CPU usage for doing the compression is also very low,
making it ideal for mobile devices (maybe an Android phone).
http://www.youtube.com/watch?v=3-bLOc1qnMM&feature=player_embedded shows
an Android phone driving a DisplayLink device. When a mobile phone is
able to power a high-resolution display, normal desktops and notebooks
can certainly provide good quality output.

PrasannaKumar Muralidharan

On 31-05-2011 15:36, Garry Hurley Jr. wrote:
> Kumar
>
> I am going to make the assumption that your culture puts the family 
> name first, so please excuse me for calling you Kumar if that is not 
> your given name.
>
> As to your question, I think I understand what you are asking for and 
> I was thinking similar things about displaying over ethernet about 
> five years ago. The problem is complex due to video refresh rates and 
> the latency of the connection. You would not get the same performance 
> on a video game, for example, unless you dropped a few dozen frames 
> per second, since the USB bus is slower than the PCI bus or even the 
> ISA bus. If you are talking about 3D acceleration, I presume you want 
> to game with it. The solution may lie in buffering, but again, your 
> performance would suffer unless you took the quality down a notch. 
> From the gamers I know, dropping quality for performance is a very 
> tricky balance. Each one is different about the quality he or she will 
> allow to be dropped in a game, but when that balance is tipped, they 
> will complain or switch to a different technology.
>
> I am not saying it is not possible, but I am asking if, knowing this, 
> you truly feel it is worth the effort to try to implement it.
>
> Sent from my iPhone
>
> On May 30, 2011, at 1:30 PM, PRASANNA KUMAR
> <prasanna_tsm_kumar at yahoo.co.in> wrote:
>
>> USB graphics devices from DisplayLink do not have 3D hardware. To
>> get 3D effects (Compiz, GNOME 3, KWin, OpenGL apps, etc.) with these
>> devices on Linux the native (primary) GPU can be used to provide
>> hardware acceleration. All the graphics operations are done using the
>> native (primary) GPU and the end result is taken and sent to the
>> DisplayLink device. Can this be achieved? If so, is it possible to
>> implement a generic framework so that any device (USB, Thunderbolt or
>> any new technology) can use this just by implementing device-specific
>> (compression and) data transport? I am not sure this is the correct
>> mailing list.
>>
>> Thanks,
>> Prasanna Kumar
>> ___
>> dri-devel mailing list
>> dri-devel at lists.freedesktop.org
>> http://lists.freedesktop.org/mailman/listinfo/dri-devel



[Bug 30922] [radeon] Noisy fan with Radeon R600 and Toshiba Satellite L500-164

2011-06-02 Thread bugzilla-dae...@bugzilla.kernel.org
https://bugzilla.kernel.org/show_bug.cgi?id=30922

--- Comment #6 from Denis Washington  2011-06-02 10:46:39 ---
As I have already stated, changing the power management options didn't change
anything at all. Even running the "low" profile doesn't stop the fan from
cranking up to full speed under normal desktop load and never spinning down
again, even if the system is idling.

I am also almost certain that my laptop has a separate GPU fan; I can, in
fact, control the CPU fan via sysfs, but whatever I do, the GPU fan keeps
spinning at the same speed, so I guess there must be separate fans for the
CPU and GPU.

-- 
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.


[Bug 36812] GPU lockup in Team Fortress 2

2011-06-02 Thread bugzilla-dae...@freedesktop.org
https://bugs.freedesktop.org/show_bug.cgi?id=36812

--- Comment #8 from Sven Arvidsson  2011-06-02 10:15:55 PDT ---
(In reply to comment #7)
> With MESA_GLSL=nopt the code that was changed by the bisected commit is not
> being executed, so I think the real problem might be somewhere else. I guess
> you could try bisecting again with MESA_GLSL=nopt and maybe you'll come up
> with a different bad commit.

I might try this. Enrico, do you remember what the good revision you used for
the bisect was, 7.10?

-- 
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug.


[Bug 31412] radeon memory leak

2011-06-02 Thread bugzilla-dae...@bugzilla.kernel.org
https://bugzilla.kernel.org/show_bug.cgi?id=31412


Jesse Zhang changed:

           What    |Removed |Added
------------------------------------------------------------
             CC    |        |zh.jesse at gmail.com

--- Comment #14 from Jesse Zhang  2011-06-02 05:56:22 ---
There is no leak here. A quick Google search turns up
http://sourcefrog.net/weblog/software/linux-kernel/free-mem.html.

-- 
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.


[Bug 27517] KMS breaks 3D on R200

2011-06-02 Thread bugzilla-dae...@freedesktop.org
https://bugs.freedesktop.org/show_bug.cgi?id=27517

--- Comment #10 from Keith  2011-06-02 03:01:22 PDT ---
(In reply to comment #9)
> (In reply to comment #8)
> > Yes, some time around the introduction of KMS, acceleration stopped
> > working. I had it fine on Debian before squeeze and Ubuntu Maverick,
> > but recently upgraded to Natty and it stopped working. Starting compiz
> > gives the old GLX_EXT_texture_pixmap not available error. It also
> > complains about GL version 1.4+ whereas glxinfo reports version 1.4
> 
> Latest updates on Debian squeeze have 3D acceleration bursting into life.
> Kernel: 2.6.32-5-686
> X server 1.7.7

That should have been 2D. Also, it was OpenGL version 1.4 whereas this card
only supports 1.3, so everything is as it should be. glxgears still gives low
FPS values. Compiz now works fine.

I wish
/usr/lib/nux/unity_support_test -p
was not such a well-kept secret; it would have helped clear up a lot of
misconceptions.

-- 
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug.


[Bug 31412] radeon memory leak

2011-06-02 Thread bugzilla-dae...@bugzilla.kernel.org
https://bugzilla.kernel.org/show_bug.cgi?id=31412

--- Comment #13 from Kevin  2011-06-02 02:57:29 ---
So is this something that should be fixed, or is everything working as intended?
To me, it doesn't seem correct. Unused caches shouldn't be counted in my "used"
memory.

Is there anything else that I should try to help?
Can anyone else reproduce this? I just open large PDFs and scroll through them.

-- 
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.


[PATCH] drm/radeon/kms/atom: initialize dig phy a bit later

2011-06-02 Thread Ari Savolainen
Commit ac89af1e1010640db072416c786f97391b85790f caused one of the monitors
attached to a dual-head radeon GPU to have inverted colors (until the first
suspend/resume). Initializing the dig phy a bit later fixes the problem.

---
 drivers/gpu/drm/radeon/radeon_display.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
index ae247ee..ddff2cf 100644
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -1346,10 +1346,6 @@ int radeon_modeset_init(struct radeon_device *rdev)
 		return ret;
 	}
 
-	/* init dig PHYs */
-	if (rdev->is_atom_bios)
-		radeon_atom_encoder_init(rdev);
-
 	/* initialize hpd */
 	radeon_hpd_init(rdev);
 
@@ -1359,6 +1355,10 @@ int radeon_modeset_init(struct radeon_device *rdev)
 	radeon_fbdev_init(rdev);
 	drm_kms_helper_poll_init(rdev->ddev);
 
+	/* init dig PHYs */
+	if (rdev->is_atom_bios)
+		radeon_atom_encoder_init(rdev);
+
 	return 0;
 }

-- 
1.7.4.1


Re: 3D support for Displaylink devices

2011-06-02 Thread Rob Clark
On Mon, May 30, 2011 at 12:30 PM, PRASANNA KUMAR
<prasanna_tsm_ku...@yahoo.co.in> wrote:
> USB graphics devices from DisplayLink do not have 3D hardware. To get 3D
> effects (Compiz, GNOME 3, KWin, OpenGL apps, etc.) with these devices on
> Linux the native (primary) GPU can be used to provide hardware
> acceleration. All the graphics operations are done using the native
> (primary) GPU and the end result is taken and sent to the DisplayLink
> device. Can this be achieved? If so, is it possible to implement a generic
> framework so that any device (USB, Thunderbolt or any new technology) can
> use this just by implementing device-specific (compression and) data
> transport? I am not sure this is the correct mailing list.

fwiw, this situation is not too far different from the SoC world. For
example, there are multiple ARM SoCs that share the same IMG/PowerVR
core or ARM Mali 3D core, but each has its own unique display
controller.

I don't know quite the best way to deal with this (either at the
DRM/kernel layer or the xorg driver layer), but there would certainly be
some benefit in making the DRM driver a bit more modular, to combine a
SoC-specific display driver (mostly the KMS part) with a different 2D
and/or 3D accelerator IP. Of course, (some of) the challenge here is
that different display controllers might have different memory
management requirements (for example, depending on whether the display
controller has an IOMMU or not) and different formats, and that the flip
command should somehow come via the 2D/3D command stream.

I have an (experimental) DRM/KMS driver for OMAP which tries to solve
the issue by way of a simple plugin API, the idea being to separate
the PVR part from the OMAP display controller part more cleanly. I
don't think it is perfect, but it is an attempt. (I'll send patches
as an RFC, but wanted to do some cleanup first.. just haven't had time
yet.) But I'm definitely open to suggestions here.
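
To make the shape of it concrete, a plugin could be as small as an ops
table the display driver walks at load time (a hypothetical sketch,
invented here; not the actual omapdrm API):

struct drm_device;

struct gpu_plugin {
	const char *name;
	/* called once the SoC display (KMS) driver has loaded */
	int (*load)(struct drm_device *dev);
	void (*unload)(struct drm_device *dev);
	/* 2d/3d-specific ioctls get routed through the plugin */
	int (*ioctl)(struct drm_device *dev, unsigned int nr, void *data);
};

int gpu_plugin_register(const struct gpu_plugin *plugin);	/* hypothetical */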

BR,
-R


> Thanks,
> Prasanna Kumar
> ___
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel

