Re: Standalone DRM application

2013-04-19 Thread Byron Stanoszek
On Thu, 18 Apr 2013, David Herrmann wrote:

> You can acquire/drop DRM-Master via drmSetMaster/drmDropMaster.
>
> If your DRM card is a PCI device, you can use the sysfs "boot_vga"
> attribute of the parent PCI device.
> (/sys/class/drm/card0/device/boot_vga)

David,

Thanks! That was exactly what I was looking for. Both ideas work wonderfully.

Regards,
  -Byron


Re: Standalone DRM application

2013-04-18 Thread David Herrmann
Hi

On Wed, Apr 17, 2013 at 11:05 PM, Byron Stanoszek wrote:
> David,
>
> I'm developing a small application that uses libdrm (DRM ioctls) to change
> the resolution of a single graphics display and show a framebuffer. I've
> run into two problems with this implementation that I'm hoping you can
> address.
>
>
> 1. Each application is its own process, which is designed to control 1
> graphics display. This is unlike X, for instance, which could be
> configured to grab all of the displays in the system at once.
>
> Depending on our stackup, there can be as many as 4 displays connected to
> a single graphics card. One process could open /dev/dri/card0 and call
> drmModeSetCrtc() to initialize one of its displays to the requested
> resolution. However, whenever a second process calls drmModeSetCrtc() to
> control a second display on the same card, it gets -EPERM back from the
> ioctl.
>
> I've traced this down to the following line in
> linux/drivers/gpu/drm/drm_drv.c:
>
> DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc,
> DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
>
> If I remove the DRM_MASTER flag, then my application behaves correctly,
> and 4 separate processes can then control each individual display on the
> card without issue.
>
> My question is, is there any real benefit to restricting
> drm_mode_setcrtc() with DRM_MASTER, or can we lose this flag in order to
> support one-process-per-display programs like the above?

Only one open-file can be DRM-Master, and only DRM-Master is allowed
to perform mode-setting. This is to prevent render clients (such as
OpenGL clients) from performing mode-setting, which should be
restricted to the compositor or its equivalent.

In your scenario, you should share a single open-file between the
processes by passing the FDs to each. Or do all of that in a single
process. There is no way to split CRTCs/connectors between different
nodes or have multiple DRM-Masters on a single node at once. (There is
work going on to allow this, but it will take a while...)
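
For reference, a minimal sketch of handing one open DRM FD to peer
processes over a Unix-domain socket with SCM_RIGHTS; the helper name and
the abbreviated error handling are illustrative, not part of libdrm:

    /* Sketch: pass an already-open DRM fd to a peer via SCM_RIGHTS.
     * 'sock' could be one end of socketpair(AF_UNIX, SOCK_STREAM, 0, sv). */
    #include <string.h>
    #include <sys/socket.h>

    static int send_drm_fd(int sock, int drm_fd)
    {
        char dummy = 'F';            /* at least one byte of real payload */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        char ctrl[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = { 0 };

        memset(ctrl, 0, sizeof(ctrl));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;          /* kernel dups the fd for the receiver */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &drm_fd, sizeof(int));

        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }

The receiver mirrors this with recvmsg() and reads the new fd out of
CMSG_DATA(); all processes then share one open-file and therefore one
DRM-Master state.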

You can acquire/drop DRM-Master via drmSetMaster/drmDropMaster.
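
A sketch of the hand-off itself; drmSetMaster()/drmDropMaster() are real
libdrm calls, while the surrounding flow is illustrative:

    /* Sketch: hold DRM-Master only for the duration of a modeset. */
    #include <xf86drm.h>

    static int modeset_with_master(int fd)
    {
        if (drmSetMaster(fd) < 0)   /* fails while another open-file is master */
            return -1;
        /* ... drmModeSetCrtc() and friends go here ... */
        drmDropMaster(fd);          /* release so another process can take over */
        return 0;
    }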

>
> 2. My application has the design requirement that "screen 1" always refers
> to the card that was initialized by the PC BIOS for bootup. This is the
> same card that the Linux Console framebuffer will come up on by default,
> and therefore extra processing is required to handle VT switches (e.g.
> pause the display, restore original CRTC mode, etc.)
>
> Depending on the "Boot Display First [Onboard] or [PCI Slot]" option in
> the BIOS, this might mean either /dev/dri/card0 or /dev/dri/card1 becomes
> the default VGA card, as set by the vga_set_default_device() call in
> arch/x86/pci/fixup.c.
>
> Is there a way in userspace to identify which card# is the default card?
> Or alternatively, is there some way to get the underlying PCI bus/slot ID
> from a /dev/dri/card# device?

If your DRM card is a PCI device, you can use the sysfs "boot_vga"
attribute of the parent PCI device.
(/sys/class/drm/card0/device/boot_vga)

Regards
David
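
A minimal sketch of scanning for that attribute; the 16-card upper bound
is an arbitrary assumption:

    /* Sketch: find which /dev/dri/cardN sits on the boot VGA device by
     * reading the parent PCI device's boot_vga attribute ("1" = boot card). */
    #include <stdio.h>

    static int find_boot_card(void)
    {
        for (int i = 0; i < 16; i++) {      /* arbitrary upper bound */
            char path[64];
            FILE *f;
            int c;

            snprintf(path, sizeof(path),
                     "/sys/class/drm/card%d/device/boot_vga", i);
            f = fopen(path, "r");
            if (!f)
                continue;                   /* card absent, or not a PCI device */
            c = fgetc(f);
            fclose(f);
            if (c == '1')
                return i;                   /* this card# is the default VGA card */
        }
        return -1;
    }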


Re: Standalone DRM application

2013-04-18 Thread Ilija Hadzic


On Thu, 18 Apr 2013, David Herrmann wrote:

> Hi
>
> On Wed, Apr 17, 2013 at 11:05 PM, Byron Stanoszek wrote:
>> David,
>>
>> I'm developing a small application that uses libdrm (DRM ioctls) to
>> change the resolution of a single graphics display and show a
>> framebuffer. I've run into two problems with this implementation that
>> I'm hoping you can address.
>>
>>
>> 1. Each application is its own process, which is designed to control 1
>> graphics display. This is unlike X, for instance, which could be
>> configured to grab all of the displays in the system at once.
>>
>> Depending on our stackup, there can be as many as 4 displays connected
>> to a single graphics card. One process could open /dev/dri/card0 and
>> call drmModeSetCrtc() to initialize one of its displays to the requested
>> resolution. However, whenever a second process calls drmModeSetCrtc() to
>> control a second display on the same card, it gets -EPERM back from the
>> ioctl.
>>
>> I've traced this down to the following line in
>> linux/drivers/gpu/drm/drm_drv.c:
>>
>> DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc,
>> DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
>>
>> If I remove the DRM_MASTER flag, then my application behaves correctly,
>> and 4 separate processes can then control each individual display on the
>> card without issue.
>>
>> My question is, is there any real benefit to restricting
>> drm_mode_setcrtc() with DRM_MASTER, or can we lose this flag in order to
>> support one-process-per-display programs like the above?
>
> Only one open-file can be DRM-Master, and only DRM-Master is allowed
> to perform mode-setting. This is to prevent render clients (such as
> OpenGL clients) from performing mode-setting, which should be
> restricted to the compositor or its equivalent.
>
> In your scenario, you should share a single open-file between the
> processes by passing the FDs to each. Or do all of that in a single
> process. There is no way to split CRTCs/connectors between different
> nodes or have multiple DRM-Masters on a single node at once. (There is
> work going on to allow this, but it will take a while...)
>

If running a custom-patched kernel is acceptable (e.g. a custom-built
embedded system or the like), then a set of patches that I sent about a
year ago [1] will probably do the job. The problem is that these patches
are apparently not going upstream: there was little interest, and there
were a couple of arguments against them [2]. Originally this was work
that Dave Airlie started but abandoned; I finished it off and tried to
have it included upstream, but it didn't happen.

I have an application that is similar to what is described here and I am
using these patches to make it work. Essentially, you call a small
userspace utility (also included with the patches, for libdrm) and
specify which CRTCs/encoders/connectors/planes you want included in the
node, and you get a new /dev/dri/renderN node that your application can
use and be the master of for those resources only. Then for the next
node you call the utility again with a new set of display resources, and
you run the other application on top of that node.

The patches in the mailing list archive [1] are now a year old, but I
have a rebased version for newer kernels, which I can send to whoever is
interested in having them (I am just hesitant to pollute the mailing
list with patches to which the maintainers have already said "no").

[1] http://lists.freedesktop.org/archives/dri-devel/2012-April/021326.html
[2] http://lists.freedesktop.org/archives/dri-devel/2012-September/028348.html

-- Ilija


Standalone DRM application

2013-04-17 Thread Byron Stanoszek
David,

I'm developing a small application that uses libdrm (DRM ioctls) to change the
resolution of a single graphics display and show a framebuffer. I've run into
two problems with this implementation that I'm hoping you can address.


1. Each application is its own process, which is designed to control 1 graphics
display. This is unlike X, for instance, which could be configured to grab all
of the displays in the system at once.

Depending on our stackup, there can be as many as 4 displays connected to a
single graphics card. One process could open /dev/dri/card0 and call
drmModeSetCrtc() to initialize one of its displays to the requested resolution.
However, whenever a second process calls drmModeSetCrtc() to control a second
display on the same card, it gets -EPERM back from the ioctl.
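
For concreteness, a sketch of the per-display flow being described; fb_id
is assumed to have been created earlier (e.g. with drmModeAddFB() on a
dumb buffer), and the connector is assumed to still have its boot-time
encoder bound:

    /* Sketch: set a mode on the first connected connector of an open card fd. */
    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static int set_first_mode(int fd, uint32_t fb_id)
    {
        drmModeRes *res = drmModeGetResources(fd);
        int ret = -1;

        for (int i = 0; res && i < res->count_connectors && ret != 0; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);

            if (conn && conn->connection == DRM_MODE_CONNECTED && conn->count_modes) {
                drmModeEncoder *enc = drmModeGetEncoder(fd, conn->encoder_id);
                if (enc) {
                    /* needs DRM-Master; without it this is the -EPERM above */
                    ret = drmModeSetCrtc(fd, enc->crtc_id, fb_id, 0, 0,
                                         &conn->connector_id, 1, &conn->modes[0]);
                    drmModeFreeEncoder(enc);
                }
            }
            if (conn)
                drmModeFreeConnector(conn);
        }
        if (res)
            drmModeFreeResources(res);
        return ret;
    }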

I've traced this down to the following line in linux/drivers/gpu/drm/drm_drv.c:

DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, 
DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),

If I remove the DRM_MASTER flag, then my application behaves correctly, and 4
separate processes can then control each individual display on the card without
issue.

My question is, is there any real benefit to restricting drm_mode_setcrtc()
with DRM_MASTER, or can we lose this flag in order to support one-process-per-
display programs like the above?


2. My application has the design requirement that "screen 1" always refers to
the card that was initialized by the PC BIOS for bootup. This is the same card
that the Linux Console framebuffer will come up on by default, and therefore
extra processing is required to handle VT switches (e.g. pause the display,
restore original CRTC mode, etc.)

Depending on the "Boot Display First [Onboard] or [PCI Slot]" option in the
BIOS, this might mean either /dev/dri/card0 or /dev/dri/card1 becomes the
default VGA card, as set by the vga_set_default_device() call in
arch/x86/pci/fixup.c.

Is there a way in userspace to identify which card# is the default card? Or
alternatively, is there some way to get the underlying PCI bus/slot ID from a
/dev/dri/card# device?

Thanks,
  -Byron
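
As a footnote on that last question: the sysfs "device" link itself
encodes the answer. A sketch, assuming the card's parent is a PCI device
so the link target ends in its domain:bus:slot.function address:

    /* Sketch: recover the PCI address behind /dev/dri/cardN by resolving the
     * /sys/class/drm/cardN/device symlink (ends in e.g. "0000:01:00.0"). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int pci_addr_of_card(int card, char *buf, size_t len)
    {
        char link[64], target[256];
        ssize_t n;

        snprintf(link, sizeof(link), "/sys/class/drm/card%d/device", card);
        n = readlink(link, target, sizeof(target) - 1);
        if (n < 0)
            return -1;                    /* no such card */
        target[n] = '\0';                 /* readlink does not NUL-terminate */

        const char *base = strrchr(target, '/');
        snprintf(buf, len, "%s", base ? base + 1 : target);
        return 0;
    }

This pairs with the boot_vga check mentioned in the replies above:
boot_vga identifies which card# is the boot display, and the device link
tells you where that card sits on the PCI bus.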