On Wed, Aug 07, 2019 at 12:14:30AM -0400, Scott Talbert wrote:
> On Thu, 1 Aug 2019, Tobias Frost wrote:
> 
> > I could get Gnome to offer me the scaling issue on my Desktop PC with a
> > full HD monitor, so maybe that is a possibility.
> > 
> > Some observations on the bug:
> > Only the drawing seems to be botched. Mouse coordinates seem to
> > match the window area, so e.g. to select an object in slic3r you
> > would have to put the mouse in the middle of the window area, not in
> > the middle of where it is rendered. IOW, it seems that the rendering
> > coordinates somewhere fail to apply the scaling factor…
> > 
> > PS: https://github.com/prusa3d/PrusaSlicer/issues/864 seems related,
> > and it seems to be fixed^Wbetter in wxwidgets 3.1 …
> 
> Okay, so I think I better understand this problem now.  It seems that all of
> these applications are using wxGLCanvas.  When using OpenGL, apparently it
> operates on physical pixels, so when setting the dimensions of the GL
> viewport, the scale factor of the display has to be factored in.  Thus,
> darkradiant (and these other applications) will have to use the scale factor
> when calling glViewport().

Does this affect everything using wxGLCanvas?  Or does it depend on how
the viewport is set or something?

I don't have a hidpi display and don't know if it's possible to simulate
one.  I don't run Gnome either...

> Now, the harder part is getting the current scale factor.  In wx 3.1, you
> can call GetContentScaleFactor().  However, in wx 3.0, this always returns
> 1.  In order to fix that, we would need at least part of this commit:
> https://github.com/wxWidgets/wxWidgets/commit/f95fd11e08482697c3b0c0a9d2ccd661134480ee#diff-40dd4b5e2cdfa858afee852fae756e01
> However, I don't know if that would cause an ABI change - does adding a new
> override of a C++ function change ABI?  If so, then we would probably need
> some other way of solving this.

This is discussed here:

https://community.kde.org/Policies/Binary_Compatibility_Issues_With_C%2B%2B

| You can...
| [...]
| reimplement virtual functions defined in the primary base class
| hierarchy (that is, virtuals defined in the first non-virtual base
| class, or in that class's first non-virtual base class, and so forth) if
| it is safe that programs linked with the prior version of the library
| call the implementation in the base class rather than the derived one.
| This is tricky and might be dangerous. Think twice before doing it.
| Alternatively see below for a workaround.
| 
|     Exception: if the overriding function has a covariant return type,
|     it's only a binary-compatible change if the more-derived type has
|     always the same pointer address as the less-derived one. If in
|     doubt, do not override with a covariant return type.

So if the compiler has optimised the call for an already built
application, it would still get a scale factor of 1.0.  Not terrible
in isolation, but if there are multiple calls and only some are
optimised that would probably be problematic.

We could plan to binNMU affected programs, but that's not ideal (and
doesn't help for non-packaged applications built by users).

To address it just within Debian, we could potentially add a new
non-virtual method to return this for 3.0.x, and patch affected
applications to call that method instead until 3.2.x.

Or maybe see if upstream think it's worth finding a way to address?

Cheers,
    Olly
