Re: Multi DPI user interface

2016-07-19 Thread Christian Hergert
On 07/19/2016 07:40 PM, Jonas Ådahl wrote:
> I think as far as APIs go, we should use a D-Bus API that provides a
> screen recording session where the actual video frames are passed using
> pinos. That API should definitely be per monitor to minimize any
> processing done in the compositor process. Encoding etc. would be done
> in a separate process.

How do we deal with colorspace conversion in this case? Would we expect
the consuming application to apply it? Would it be pre-applied? (I'd
expect it to get extra difficult when a subset of applications start
performing colorspace conversion natively and others do not.)
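
For concreteness, a minimal sketch of the kind of conversion being
discussed, assuming the compositor hands over plain RGB frames and
somebody (compositor or consumer, which is exactly the open question)
has to produce BT.709 limited-range YCbCr for a video encoder. The
helper name and the numpy dependency are illustrative only:

import numpy as np

def rgb_to_bt709_ycbcr(rgb):
    """rgb: float array in [0, 1], shape (h, w, 3) -> 8-bit YCbCr."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # BT.709 luma coefficients.
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    # Scale to limited-range 8-bit values as most codecs expect.
    out = np.empty(rgb.shape, dtype=np.uint8)
    out[..., 0] = np.round(16 + 219 * y)
    out[..., 1] = np.round(128 + 224 * cb)
    out[..., 2] = np.round(128 + 224 * cr)
    return out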

-- Christian


Re: Multi DPI user interface

2016-07-19 Thread Jonas Ådahl
On Tue, Jul 19, 2016 at 01:09:15PM -0700, Christian Hergert wrote:
> On 07/19/2016 12:21 PM, Ray Strode wrote:
> > On Tue, Jul 19, 2016 at 11:04 AM Jonas Ådahl  wrote:
> >> > 2) Represent each monitor separately, generating one file for each
> > This makes the most sense to me.  Or even only do the active monitor.

This is what I've been leaning towards as well. It does require some
design work (gnome-screenshot needs to be redesigned) and probably some
D-Bus API changes.
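
As a stop-gap illustration (not the redesigned API): in today's
single-framebuffer world, "one file per monitor" can already be
approximated from the outside by cropping to each monitor's geometry via
gnome-shell's existing screenshot D-Bus interface. The method name and
argument layout below are quoted from memory and may not match exactly,
and the monitor geometries are hard-coded for illustration; a redesigned
API would presumably take a monitor identifier rather than a pixel
rectangle:

from gi.repository import Gio, GLib

# Hypothetical monitor geometries in today's logical coordinates; in
# practice these would come from GDK or the monitor configuration.
monitors = [(0, 0, 800, 600), (800, 0, 1600, 1200)]

proxy = Gio.DBusProxy.new_for_bus_sync(
    Gio.BusType.SESSION, Gio.DBusProxyFlags.NONE, None,
    "org.gnome.Shell.Screenshot", "/org/gnome/Shell/Screenshot",
    "org.gnome.Shell.Screenshot", None)

for i, (x, y, w, h) in enumerate(monitors):
    # One ScreenshotArea call per monitor -> one file per monitor.
    proxy.call_sync(
        "ScreenshotArea",
        GLib.Variant("(iiiibs)", (x, y, w, h, False,
                                  "/tmp/monitor-%d.png" % i)),
        Gio.DBusCallFlags.NONE, -1, None)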

> > 
> > Of course for screen recording (versus screenshotting), only doing the
> > active monitor could be weird, if the user moves their mouse from one
> > monitor to the other mid recording.

Eventually, I think we should use a screen cast tool (like this[0] one),
where you select the monitor(s), and where mutter/gnome-shell would only
hand over the framebuffer/pixels/... to an external process for
processing. In the meantime, if we want to avoid the
large-framebuffer-with-void-areas issue, we'd either need to somehow
select an output, or have one video encoder per monitor, which I suspect
might be a bit heavy.
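
To make the "one video encoder per monitor" cost concrete, here is a
rough sketch with GStreamer driven from Python: one independent
capture-and-encode pipeline per output. ximagesrc merely stands in for
whatever buffer hand-off mutter/pinos would actually provide, and the
monitor names and geometries are made up for illustration:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

monitors = [  # (name, x, y, width, height) -- hypothetical layout
    ("DP-1", 0, 0, 800, 600),
    ("HDMI-1", 800, 0, 1600, 1200),
]

pipelines = []
for name, x, y, w, h in monitors:
    desc = (
        "ximagesrc use-damage=0 startx=%d starty=%d endx=%d endy=%d ! "
        "videoconvert ! queue ! vp8enc deadline=1000000 ! webmmux ! "
        "filesink location=%s.webm" % (x, y, x + w - 1, y + h - 1, name)
    )
    pipeline = Gst.parse_launch(desc)
    pipeline.set_state(Gst.State.PLAYING)  # one encoder per monitor
    pipelines.append(pipeline)

# A real recorder would now run a main loop and send EOS on each
# pipeline to finalize the files.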

> > 
> > I don't like how, today, a small monitor next to a big monitor leads
> > to screenshots with large void areas.

+1

> 
> People I trust have said very good things about CGDisplayStream[1].
> 
> If we were to focus on an API like this, then we could defer the policy
> of how to handle the situation appropriately to the target application.
> 
> Determining which applications have authorization to access this API is
> a separate issue.

I think as far as APIs go, we should use a D-Bus API that provides a
screen recording session where the actual video frames are passed using
pinos. That API should definitely be per monitor to minimize any
processing done in the compositor process. Encoding etc. would be done
in a separate process.
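
Purely as a hypothetical sketch of the shape such an API could take
(none of these interface or method names exist yet): one session per
monitor, where the returned id names the pinos stream that carries the
actual frames, so encoding can live in whatever process consumes that
stream.

from gi.repository import Gio

# Hypothetical interface description, for illustration only.
SCREENCAST_XML = """
<node>
  <interface name='org.gnome.Mutter.ScreenCast'>
    <method name='RecordMonitor'>
      <arg type='s' direction='in' name='connector'/>
      <arg type='u' direction='out' name='pinos_stream_id'/>
    </method>
    <method name='StopSession'/>
  </interface>
</node>
"""

# Parse the sketch just to check that the shape is well formed.
info = Gio.DBusNodeInfo.new_for_xml(SCREENCAST_XML)
print([m.name for m in info.interfaces[0].methods])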


Jonas

[0] https://cgit.freedesktop.org/~wtay/gnome-screen-recorder/

> 
> [1] https://developer.apple.com/reference/coregraphics/cgdisplaystream
> 
> -- Christian
> 


Re: Multi DPI user interface

2016-07-19 Thread Christian Hergert
On 07/19/2016 12:21 PM, Ray Strode wrote:
> On Tue, Jul 19, 2016 at 11:04 AM Jonas Ådahl  wrote:
>> > 2) Represent each monitor separately, generating one file for each
> This makes the most sense to me.  Or even only do the active monitor.
> 
> Of course for screen recording (versus screenshotting), only doing the
> active monitor could be weird, if the user moves their mouse from one
> monitor to the other mid recording.
> 
> I don't like how, today, a small monitor next to a big monitor leads
> to screenshots with large void areas.

People I trust have said very good things about CGDisplayStream[1].

If we were to focus on an API like this, then we could defer the policy
of how to handle the situation appropriately to the target application.

Determining which applications have authorization to access this API is
a separate issue.

[1] https://developer.apple.com/reference/coregraphics/cgdisplaystream

-- Christian


Re: Multi DPI user interface

2016-07-19 Thread Ray Strode
hi,

On Tue, Jul 19, 2016 at 11:04 AM Jonas Ådahl  wrote:
> 2) Represent each monitor separately, generating one file for each
This makes the most sense to me.  Or even only do the active monitor.

Of course for screen recording (versus screenshotting), only doing the
active monitor could be weird, if the user moves their mouse from one
monitor to the other mid recording.

I don't like how, today, a small monitor next to a big monitor leads
to screenshots with large void areas.

--Ray

Re: Multi DPI user interface

2016-07-19 Thread Mattias Bengtsson
On Tue, 2016-07-19 at 23:03 +0800, Jonas Ådahl wrote:
> [...]
> Any opinions on in what way we should deal with this? What user
> interface do we want?
> 

I always found it weird that taking a screenshot or making a screencast
took the data from both screens and put them together.

I'd suggest that the simple PrtSc button just takes a screenshot of the
screen where your mouse cursor is (and for extra visibility just shows
the flash animation on that particular screen).

For Ctrl+Shift+Alt+P screencasts I'd do the same, but come up with some
way to show which of the screens is being cast.

Regards,
Mattias

Re: Multi DPI user interface

2016-07-19 Thread Bastien Nocera
On Tue, 2016-07-19 at 23:03 +0800, Jonas Ådahl wrote:
> Hi,
> 
> Over at mutter we've been working towards supporting proper multi DPI
> setups when running GNOME using Wayland. Proper multi DPI means
> supporting multiple monitors where two or more of them have
> significantly different DPI, while applications show correctly on all
> monitors at all times.
> 
> Apart from making mutter, gnome-shell and gtk+ draw things correctly,
> supporting proper multi DPI has implications for things that touch more
> than just the Wayland backends; more specifically, screenshotting and
> screencasting.
> 
> Until now, gnome-shell has always drawn the content of all the monitors
> into one large framebuffer. This framebuffer was then used as the
> source for screencasts and screenshots; monitors with different scales
> were simply regions of this framebuffer where windows were drawn
> enlarged. Soon mutter/gnome-shell will be able to draw each monitor
> onto a separate framebuffer; in the future, these framebuffers may have
> different scales, i.e. there will be no way to create an exact
> representation of what is currently displayed, making it less obvious
> how to create a screenshot or screencast frame.
> 
> To illustrate, if we have two monitors, one (A) with the resolution
> 800x600 and the other (B) with 1600x1200, where the second one is
> physically small enough to make it HiDPI with output scale 2, today we
> get the following configuration:
> 
> +--------+------------+
> |        |            |
> |   A    |            |
> |        |            |
> +--------+     B      |
> .        |            |
> . (void) |            |
> .        |            |
> . . . . .+------------+
> 
> A large framebuffer with two regions representing the two monitors.
> Windows would be rendered twice as large when positioned on B as on A.
> When dragging a window from A to B, it'd "flip" in size and suddenly
> become large once mostly within B.
> 
> A proper representation of this setup should be:
> 
> +--------+--------+
> |        |        |
> |   A    |   B    |
> |        |        |
> +--------+--------+
> 
> Two regions of one coordinate space, but with B having much higher
> pixel density. A window would be drawn larger on B's framebuffer, but
> at the same time at the correct size on A's framebuffer, were it
> displayed there.
> 
> With this comes the question: how do we provide a user interface for
> screenshotting and screencasting? As I see it there are two options:
> 
> 1) Scale up every monitor to the largest scale and draw onto a large
> framebuffer.
> 
> 2) Represent each monitor separately, generating one file for each
> 
> Both have good and bad sides. For example, 1) doesn't need any change
> to the user interface, while 2) more correctly represents what is
> displayed. For screencasts, 2) would mean we'd need two video encoding
> streams, but it would also make reasonable post-production easier.
> 
> Any opinions on how we should deal with this? What user interface do
> we want?

In terms of user interface, I'm fairly certain we want screen "B" to
behave as if it were an 800x600 screen, as that's what it's acting as.

For screenshots and screencasts, you have two options: either you double
up everything so that 2x is the normal scale, and you get a slightly
fuzzy screen A, or you "lose data" by shrinking everything instead. I
would go for "more data". In any case, as in the original screen case,
you'd need the screens to line up.

Cheers

Multi DPI user interface

2016-07-19 Thread Jonas Ådahl
Hi,

Over at mutter we've been working towards supporting proper multi DPI
setups when running GNOME using Wayland. Proper multi DPI means
supporting multiple monitors where two or more of them have
significantly different DPI, while applications show correctly on all
monitors at all times.

Apart from making mutter, gnome-shell and gtk+ draw things correctly,
supporting proper multi DPI has implications for things that touch more
than just the Wayland backends; more specifically, screenshotting and
screencasting.

Until now, gnome-shell has always drawn the content of all the monitors
into one large framebuffer. This framebuffer was then used as the source
for screencasts and screenshots; monitors with different scales were
simply regions of this framebuffer where windows were drawn enlarged.
Soon mutter/gnome-shell will be able to draw each monitor onto a
separate framebuffer; in the future, these framebuffers may have
different scales, i.e. there will be no way to create an exact
representation of what is currently displayed, making it less obvious
how to create a screenshot or screencast frame.

To illustrate, if we have two monitors, one (A) with the resolution
800x600 and the other (B) with 1600x1200, where the second one is
physically small enough to make it HiDPI with output scale 2, today we
get the following configuration:

+--------+------------+
|        |            |
|   A    |            |
|        |            |
+--------+     B      |
.        |            |
. (void) |            |
.        |            |
. . . . .+------------+

A large framebuffer with two regions representing the two monitors.
Windows would be rendered twice as large when positioned on B as on A.
When dragging a window from A to B, it'd "flip" in size and suddenly
become large once mostly within B.

A proper representation of this setup should be:

+--------+--------+
|        |        |
|   A    |   B    |
|        |        |
+--------+--------+

Two regions of one coordinate space, but with B having much higher pixel
density. A window would be drawn larger on B's framebuffer, but at the
same time at the correct size on A's framebuffer, were it displayed
there.

With this comes the question: how do we provide a user interface for
screenshotting and screencasting? As I see it there are two options:

1) Scale up every monitor to the largest scale and draw onto a large
framebuffer.

2) Represent each monitor separately, generating one file for each

Both have good and bad sides. For example, 1) doesn't need any change to
the user interface, while 2) more correctly represents what is
displayed. For screencasts, 2) would mean we'd need two video encoding
streams, but it would also make reasonable post-production easier.
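
To put numbers on the A/B example above (assuming the monitors sit side
by side), here is a small back-of-the-envelope sketch of what each
option implies:

monitors = [
    {"name": "A", "pixels": (800, 600), "scale": 1},
    {"name": "B", "pixels": (1600, 1200), "scale": 2},
]

max_scale = max(m["scale"] for m in monitors)

# Option 1: one large framebuffer at the largest scale.
widths, heights = [], []
for m in monitors:
    w, h = m["pixels"]
    factor = max_scale // m["scale"]  # 2x for A, 1x for B
    widths.append(w * factor)
    heights.append(h * factor)
print("option 1: one %dx%d framebuffer (A upscaled)"
      % (sum(widths), max(heights)))
# -> option 1: one 3200x1200 framebuffer (A upscaled)

# Option 2: one file per monitor at its native pixel size.
for m in monitors:
    print("option 2: %s -> %dx%d file" % ((m["name"],) + m["pixels"]))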

Any opinions on how we should deal with this? What user interface do we
want?


Jonas