Hi Daniel,

On 8/17/22 22:18, Daniel Dadap wrote:
> On 8/17/22 10:05 AM, Hans de Goede wrote:

<snip>

>>> One further potential difficulty that I anticipate is that not all dynamic 
>>> mux systems use the EC backlight driver (or a similar, GPU-agnostic 
>>> driver), and rather have whichever GPU happens to be connected at the time 
>>> be responsible for the backlight. I had initially thought that supporting 
>>> the EC backlight interface was a requirement for OEMs to implement dynamic 
>>> mux support, but I recently learned this is not true in all cases. On 
>>> Windows, this requires coordinating the backlight controls of the two GPU 
>>> drivers across a mux switch, to load the state of the switched-away-from 
>>> GPU and set it on the switched-to GPU. I imagine for these systems we may 
>>> need to do some similar save/restore, probably managed by vga-switcheroo, 
>>> but it would require having both GPU drivers register their own native 
>>> backlight handlers (and possibly while one of them is not connected to the 
>>> panel).
>> Right, systems where the backlight control basically gets muxed from one GPU 
>> to the other GPU together with the panel's video-data lines exist. Currently 
>> Linux already registers both native GPU backlight handlers in this case, e.g. 
>> /sys/class/backlight/intel_backlight and /sys/class/backlight/nouveau_bl.
>>
>> Userspace (at least GNOME) has code which checks which GPU is actually 
>> connected to the panel using the panel's drm connector's status on each GPU 
>> (only one of which should report connected) and then uses the backlight 
>> interface associated with the connected connector.
>>
>>> Dynamic mux switching isn't actually supported on Linux, yet, so we should 
>>> be able to kick this particular can a little further down the road, but in 
>>> the meantime we should probably start planning for how best to handle this 
>>> sort of system under the "only one backlight handler per panel" model. 
>>> Maybe the vga-switcheroo handler can register its own backlight handler, 
>>> that then negotiates the actual backlight settings between the relevant GPU 
>>> drivers, possibly through a new vga-switcheroo client callback. I'll have 
>>> to think about this a bit more.
>> The "only one backlight handler per panel" model is actualy "only one 
>> backlight handler per panel"-connector since the new API uses drm properties 
>> on the drm connector object. With 2 GPUs both using their native backlight 
>> control there will be 2 connectors and userspace will/must use the one which 
>> is actually reporting that it is connected to the panel so this will work 
>> fine.
> 
> 
> That is a useful distinction. Would it fall under userspace's responsibility, 
> then, to keep the brightness level synchronized across drm_connectors that 
> share a panel via a mux when performing a switch?

Yes. Typically the 2 different GPU drivers know nothing about each other, and I 
think it would be good to keep things that way. When switching, userspace will 
see a disconnect on the panel connector of one GPU and then a connect on the 
connector of the other GPU, at which point it knows that it should set the 
brightness on the now-connected connector instead of on the old one.
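
To make that concrete, the check userspace has to do is just "which GPU's 
panel connector reports connected". A minimal libdrm sketch of that logic 
(the card paths are hardcoded for illustration and error handling is 
trimmed, so treat it as a sketch rather than a reference implementation):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Returns 1 if this card has a connected internal panel connector. */
static int card_drives_panel(const char *path)
{
    int fd = open(path, O_RDWR | O_CLOEXEC);
    drmModeRes *res;
    int connected = 0;

    if (fd < 0)
        return 0;

    res = drmModeGetResources(fd);
    for (int i = 0; res && i < res->count_connectors; i++) {
        drmModeConnector *c = drmModeGetConnector(fd, res->connectors[i]);

        if (c && (c->connector_type == DRM_MODE_CONNECTOR_eDP ||
                  c->connector_type == DRM_MODE_CONNECTOR_LVDS) &&
            c->connection == DRM_MODE_CONNECTED)
            connected = 1;
        drmModeFreeConnector(c);
    }
    drmModeFreeResources(res);
    close(fd);
    return connected;
}

int main(void)
{
    const char *cards[] = { "/dev/dri/card0", "/dev/dri/card1" };

    for (unsigned int i = 0; i < 2; i++)
        if (card_drives_panel(cards[i]))
            printf("panel is driven by %s\n", cards[i]);
    return 0;
}

On a mux switch the "connected" result flips from one card to the other 
(after the hotplug uevent), which is the point where userspace moves its 
brightness writes over to the other connector.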

> I suppose it's a cleaner design to leave it up to userspace to select which 
> backlight interface to manipulate.

Ack.

> That is a harder decision for userspace to make with the existing design, 
> which doesn't cleanly map sysfs backlight interfaces to individual 
> connectors, but if the UAPI is going to change to a drm_connector property, 
> backlight interfaces are obviously strongly correlated with the connectors.

Right.

>  I'm not sure how easy it is for userspace to solve the problem of 
> maintaining consistent brightness levels across a mux switch for an OLED 
> panel, where there really isn't a concept of a "backlight" to speak of, but I 
> suppose it wouldn't be any easier to do that in the kernel (e.g. with 
> vga-switcheroo).

The OLED brightness is sent over DP-aux to the panel AFAIK, so as long as both 
drivers export the raw range of the panel's setting and don't try to get smart 
and rescale it, there should be a 1:1 mapping. Ideally for something like this 
both drivers would be using some shared drm-helper code / library to talk to 
the backlight, thus guaranteeing that both drivers interpret the scale the 
same.
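
FWIW the shared helper code largely already exists for the eDP DPCD case: the 
drm_edp_backlight_*() helpers in <drm/display/drm_dp_helper.h> read the 
panel's capabilities and write the raw level over DP-aux. A rough sketch of a 
driver using them (the wrapper struct and function names here are just 
illustrative, not taken from any real driver):

#include <drm/display/drm_dp_helper.h>

struct panel_bl {
    struct drm_dp_aux *aux;
    struct drm_edp_backlight_info info;
};

static int panel_bl_init(struct panel_bl *bl,
                         const u8 edp_dpcd[EDP_DISPLAY_CTL_CAP_SIZE])
{
    u16 current_level;
    u8 current_mode;

    /*
     * Reads the panel's backlight capabilities over DP-aux;
     * bl->info.max ends up holding the panel's raw maximum level.
     */
    return drm_edp_backlight_init(bl->aux, &bl->info, 0, edp_dpcd,
                                  &current_level, &current_mode);
}

static int panel_bl_set(struct panel_bl *bl, u16 raw_level)
{
    /* Writes the raw value to the panel, no driver-side rescaling. */
    return drm_edp_backlight_set_level(bl->aux, &bl->info, raw_level);
}

If both drivers funnel their brightness property through helpers like these 
and expose the raw 0..max range, the 1:1 mapping across a mux switch falls 
out for free.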


>> If anything the nvidia-wmi-ec-backlight case is a bit more tricky: the "only 
>> one backlight handler per panel" thing is actually more of a "only one 
>> backlight handler per laptop" rule. This is intended to let the (to be 
>> written) drm helpers for the new properties get the handler from the 
>> backlight class in the non-native case by just taking the first registered 
>> backlight handler.
>>
>> This means that in a dual GPU setup with nvidia-wmi-ec-backlight both GPUs' 
>> panel connector objects will have their brightness properties pointing to / 
>> proxying the same backlight class device. Userspace should really only be 
>> writing to the one which is actually connected though. I guess we could even 
>> enforce this in the kernel and reject brightness property writes on 
>> unconnected connectors.
>>
>>>> Please let me know if the proposed solution works for you and
>>>> if you want me to make ACPI_VIDEO_REGISTER_BACKLIGHT_DELAY a
>>>> module-option for the next version.
>>>
>>> I do think it should be workable, apart from the concern I mentioned above 
>>> about knowing when to set the module option to disable the ACPI video 
>>> backlight driver.
>> Note the option does not disable the ACPI video backlight driver. It 
>> disables the acpi_video code timing out and deciding to go ahead and 
>> register its backlight itself (provided that at the moment of timeout 
>> acpi_video_get_backlight_type() returns acpi_backlight_video). Any code 
>> (e.g. the nvidia binary driver) can still call 
>> acpi_video_register_backlight() itself to try and register even if the 
>> timeout handling has been disabled.
>>
>> The idea is that without the timeout the probe order looks like this:
>>
>> 1. acpi_video initializes, does not register backlight
>> 2. GPU driver initializes; it either registers a native backlight handler,
>>     or it calls acpi_video_register_backlight()
>> 3. acpi_video_register_backlight() runs (if called) and calls:
>>     acpi_video_get_backlight_type()
>> 4.1 if acpi_video_get_backlight_type() returns acpi_backlight_video
>>     /sys/class/backlight/acpi_video# is/are registered
>> 4.2 if acpi_video_get_backlight_type() returns something else, e.g.
>>     acpi_backlight_nvidia_wmi_ec, acpi_video_register_backlight()
>>     does nothing
>>
>>
>> The timeout is to ensure that 3. still happens, even if
>> there is no native GPU driver, because of e.g.
>> nomodeset on the kernel cmdline.
>>
>> With the nvidia binary driver, that driver can call
>> acpi_video_register_backlight() itself when needed, so the
>> timeout is not necessary.
>>
>> I'm currently preparing v3 of this patchset. I'll modify the
>> patch introducing the timeout to make it configurable
>> (with 0 disabling it completely).
> 
> 
> Okay. That clarification makes sense. Would it be reasonable to disable the 
> timeout by default on systems with Win8 or later _OSI?

Hmm, there are Win8 or later systems which actually need acpi_video because the 
other method(s) to control the backlight don't work (we have a list of quirks 
for those) and, more importantly, there are Win8 or later systems where the 
video BIOS tables say: "Don't use the GPU for backlight control". We do need 
the acpi_video_register_backlight() call to happen in both those cases 
eventually. Also, this would lead to 2 different code-paths, making things more 
complicated, so no, I don't think that is a good idea.
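
(To be clear, with v3 the timeout will still be tweakable per system: someone 
who knows their machine does not need it can boot with something like 
acpi_video.register_backlight_delay=0 on the kernel commandline; that exact 
spelling assumes the module-option keeps the 
ACPI_VIDEO_REGISTER_BACKLIGHT_DELAY naming from the patch.)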

Regards,

Hans
