Re: scroll lock disabled by default

2024-07-26 Thread Vladimir Dergachev



On Fri, 26 Jul 2024, Marco Moock wrote:


Am 25.07.2024 um 16:37:58 Uhr schrieb Steven J Abner:


  If I had to guess, Windows doesn't have Scroll Lock on by default.


I think my question was phrased badly.

In Windows, Scroll Lock is disabled by default, but can be enabled by
pressing the Scroll Lock key.

On my Linux systems, this key is disabled until I activate it with
xmodmap.
After that, Scroll Lock can be used like in Windows.

My question is: why is the key itself disabled by default on most X11
systems?


Curious !

I suspect what happened is that the Scroll Lock function used to be defined in 
XF86Config, and when we switched to not requiring XF86Config the settings 
were lost to a minimal default.


See:

http://www.cs.unc.edu/Research/stc/FAQs/Video/XF86Config_5x.htm

A common use for Scroll Lock was for international keyboard layouts:

https://tldp.org/HOWTO/Intkeyb/x89.html

I think now one can simply remap the key in the configuration utility.

best

Vladimir Dergachev



--
Regards
Marco



Re: Anyone do anything to the keyboard driver recently? (last 6 mos)

2024-07-09 Thread Vladimir Dergachev


On Tue, 9 Jul 2024, Alan Grimes wrote:

I'm using Gentoo linux and I'm getting the strangest symptom. The edit keys 
on my keyboard (most of the arrow keys and the home/delete/end etc block) 
just don't respond in xwindows. They work fine in linux console mode. I am 
absolutely baffled by this symptom. It should not be possible...  It's a 
standard USB keyboard, and I am not aware of anything that would differentiate 
it from any other model on the market. A partial workaround is to use the 
keypad in edit mode, which is how I made this e-mail. =\


Hi Alan,

   My go-to tool in such situations is to fire up "xev" and check which 
keysyms and modifiers show up when you press those buttons. Make sure your 
mouse is inside the xev window.


   The obvious list of possibilities: a stuck modifier (like Alt), a wrong 
keymap, a funky keyboard, or some other program stealing the keysyms.
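
   If you prefer a tiny program over xev, here is a minimal sketch of the 
same idea (assuming the libX11 development headers; compile with something 
like "cc keys.c -lX11", where keys.c is a hypothetical file name):

#include <X11/Xlib.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (dpy == NULL) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    /* Small window that listens for key presses, much like xev does */
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 200, 100, 0, 0, 0);
    XSelectInput(dpy, win, KeyPressMask | KeyReleaseMask);
    XMapWindow(dpy, win);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == KeyPress || ev.type == KeyRelease) {
            /* Print keycode, keysym name and modifier state */
            KeySym ks = XLookupKeysym(&ev.xkey, 0);
            const char *name = XKeysymToString(ks);
            printf("%s keycode=%u keysym=%s state=0x%x\n",
                   ev.type == KeyPress ? "press" : "release",
                   ev.xkey.keycode, name ? name : "(none)",
                   ev.xkey.state);
        }
    }
}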


best

Vladimir Dergachev





--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


Re: Does xorg/X11 support Rembrandt [Radeon 680M] from AMD/ATI?

2024-07-05 Thread Vladimir Dergachev




On Thu, 4 Jul 2024, William Bulley wrote:


According to Vladimir Dergachev  on Thu, 07/04/24 at 
15:26:


This depends on the card and the manufacturer. For NVidia, there are usually
closed-source NVidia drivers and an open source nouveau driver that might
take some time to catch up for a newer card.

In contrast, AMD provides full open source drivers, and there is a release
page with a June 2024 release:

https://www.amd.com/en/resources/support-articles/release-notes/RN-AMDGPU-UNIFIED-LINUX-24-10-3.html

Since your card is around 2 years old, chances are the support is already in
a newer Linux release, thus I suggest you first try the latest Ubuntu
version on a USB stick. If this works, this is good news.

Alternatively, try installing the AMD drivers from the link above.


Thanks for the reply.  After I reached out to this list, and after doing
some (inconclusive) research on this topic -- hampered by my ignorance of
AMD -- I found my problem: the FreeBSD handbook says to add kld_list="amdgpu"
to the /etc/rc.conf file, but I missed two additional steps (maybe
in a different section of the handbook), which were to add two ports:

  graphics/gpu-firmware-amd-kmod
  x11-drivers/xf86-video-amdgpu

Once I built and installed those two ports, Xorg/x11 graphics worked
flawlessly on the Thinkpad T16.  Thanks again for your information.


Great that it works :)

Missed that you are running FreeBSD - nice !

Vladimir Dergachev



--
William Bulley
E-MAIL: w...@umich.edu




Re: Does xorg/X11 support Rembrandt [Radeon 680M] from AMD/ATI?

2024-07-04 Thread Vladimir Dergachev



This depends on the card and the manufacturer. For NVidia, there are 
usually closed-source NVidia drivers and an open source nouveau driver that 
might take some time to catch up for a newer card.


In contrast, AMD provides full open source drivers, and there is a release 
page with a June 2024 release:


https://www.amd.com/en/resources/support-articles/release-notes/RN-AMDGPU-UNIFIED-LINUX-24-10-3.html

Since your card is around 2 years old, chances are the support is already 
in a newer Linux release, thus I suggest you first try the latest Ubuntu 
version on a USB stick. If this works, this is good news.


Alternatively, try installing the AMD drivers from the link above.

best

Vladimir Dergachev

On Tue, 2 Jul 2024, William Bulley wrote:


I don't understand how xorg deals with newer hardware, but I
suspect this device (Rembrandt [Radeon 680M]), being rather
new, may not yet be supported.  Can someone please explain?

Thank you in advance.  Have a great day!

--
William Bulley
E-MAIL: w...@umich.edu




Re: [synaptics] Require minimal finger move for moving cursor

2024-04-12 Thread Vladimir Dergachev




On Fri, 12 Apr 2024, kaycee gb wrote:


Hi,

I have a "crappy" touchpad on my dell laptop that drives me crazy. I am
unable to fix the sensitivity/jump problems when clicking. I have tried
different things. Sometimes one seems to help and makes the jumps less
frequent, but not for long.


Dell touchpads are usually pretty good.

I suggest checking which touchpad driver is being used; there might be 
some useful messages in dmesg or /var/log/Xorg.0.log.


Another possibility is that the battery went bad, is bulging, and is pushing 
on the touchpad.


best

Vladimir Dergachev



Thinking about that, I wonder how it would be possible to require a minimal
finger motion before the cursor can move ? Is it doable ? Coding needed ?

Thanks in advance,
K.



Re: Debugging multiple X11 servers spawning

2023-09-18 Thread Vladimir Dergachev




On Mon, 18 Sep 2023, Dave Howorth wrote:


On Fri, 15 Sep 2023 15:02:08 -0700, Michael Sheely wrote:


Hello,

I'm trying to debug an issue with a notification daemon

On the debug thread
https://github.com/dunst-project/dunst/issues/1186#issuecomment-1677737252,
a maintainer suggests that my logs indicate multiple X11 servers are
being spawned.

As far as I know I'm not purposefully starting multiple X11 servers.
I frequently use the machine locally and also ssh in, but I don't
enable x forwarding during ssh.

I'm trying to understand if there is any particular system logs I
should look at in order to get a sense of when an X11 server was
started in order to better understand whether or not multiple X11
servers are being spawned (and if so, I'll aim to understand what is
causing it).

Would anyone happen to have pointers that might be helpful here?


I am not sure which desktop environment you are using, but on some (like 
KDE) you have an option to log in as another user at the screen locker, and 
this creates extra X servers.


I have also seen situations where the same user logged in twice, and this 
creates problems with configuration.


ssh cannot create X servers, so it has to be local - maybe you are 
mistaking screen unlock for logging in with a new session.


best

Vladimir Dergachev




You'll probably find logs in /var/log. Specifically /var/log/Xorg*

You might also find the output of

$ systemctl status display-manager

helpful.


I'm on Debian, using i3 window manager.

Thank you!
- Michael




Re: Xlib: DisplayWidth / DisplayHeight

2023-09-12 Thread Vladimir Dergachev



On Fri, 8 Sep 2023, Zbigniew wrote:


If you are doing this for an open source project, you should change your code
to:

 [..]


The code you've pasted doesn't work properly; it returns the size of the
virtual screen — so you added 3x as many lines to get, in effect, the
same (incorrect) result as the few lines I pasted.


Oh, and no I don't actually know what you mean, because it depends on what
application you are writing.


Whatever it could be — a useful tool, or a silly game, open-source
or commercial — it needs to find out the exact physical screen
dimensions (in pixels), to learn how big the working area is.


Oh, I see. I suggest you try writing some concrete code - learning this in 
the abstract can be tricky. Don't be afraid to rewrite it from scratch a few 
times. There is existing source code from X examples and tools, as well as 
other libraries, such as gtk and Qt.


best

Vladimir Dergachev




If you do teleconferencing, you might want to capture either the entire
screen, or some window.

If you want to record a movie of a game playing fullscreen, then you
probably need the position and dimensions of the game window, because
games often change video mode while keeping the original virtual screen
intact.

If you want to make a better Xvnc, you probably need the code above and
you might not need xrandr.

If you are doing something else - who knows what you mean ?


So now you see (I hope that you see): „if… if… if… if… else…” etc. A
whole lot of checks and decisions that could be avoided, _IF_ the two
macros were working properly
--
best,
Teenager


Re: Xlib: DisplayWidth / DisplayHeight

2023-09-08 Thread Vladimir Dergachev



On Tue, 5 Sep 2023, Zbigniew wrote:


To help you see our point of view, imagine someone complaining to you that
even though the computer science course talked about trees and leaves
there were no actual trees anywhere in sight, and how exactly does that
help with CO2 emissions ?

Display and Screen structures are abstractions that in some cases
correspond to physical devices and in some cases (many more cases
nowadays) do not. They are named "Display" and "Screen" but what they
actually are is determined by how the code actually works.


The code I pasted had 14 lines (including two empty ones). So how does it
„actually work”, according to you?


I meant Xlib and Xserver code, not the code you wrote. Read the source of 
the software you are using.


Considering your code:

#include <X11/Xlib.h>
#include <stdio.h>

int main(int argc, char **argv) {
    Display *display = XOpenDisplay(NULL);
    int screen_number = DefaultScreen(display);
    int height = DisplayHeight(display, screen_number);
    int width = DisplayWidth(display, screen_number);
    XCloseDisplay(display);

    printf("Height: %d\n", height);
    printf("Width: %d\n",  width);

    return 0;
}

The code you wrote will return the width and height of the default screen.
This might or might not correspond to a physical display.

If you are doing this in a commercial setting you can use this as is, and 
in the documentation for your program specify that it only supports 
computers with one physical screen and no panning. And then start thinking 
of a "Pro" version that makes use of xrandr.


If you are doing this for an open source project, you should change your code 
to:


#include <X11/Xlib.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int x;
    int y;
    int width;
    int height;
} ACTIVE_REGION;

void get_active_region(ACTIVE_REGION *ar, Display *display, int screen_number)
{
#ifdef HAVE_XRANDR
    fprintf(stderr, "Xrandr support not implemented yet\n");
    exit(-1);
#else
    ar->x = 0;
    ar->y = 0;
    ar->width = DisplayWidth(display, screen_number);
    ar->height = DisplayHeight(display, screen_number);
#endif
}

int main(int argc, char **argv) {
    Display *display = XOpenDisplay(NULL);
    /* TODO: add support for other screens and xrandr later */
    int screen_number = DefaultScreen(display);

    ACTIVE_REGION ar;
    get_active_region(&ar, display, screen_number);
    /* URGENT TODO: use "gengetopt" to provide command line parameters
       to override values in get_active_region */

    XCloseDisplay(display);

    printf("Screen area: %dx%d+%d+%d\n", ar.width, ar.height, ar.x, ar.y);

    return 0;
}

This is assuming you want just one monitor, such as for screen sharing in 
a Zoom or Skype-like application. There could be situations where you want 
to know the location of all physical screens.


It is important to let users override your choice, to enable less common 
use cases such as purely virtual screens. Plain Xvnc is ok, but for 
some of my headless machines I configured X to have a hardcoded display 
with a resolution slightly smaller than 4K. This way acceleration works for 
compositing and one can use VNC to access a window without going full-screen.





To help you see my point of view, here's a follow-up question: and how
do you think, reading my posts — and a code snippet, you've cut out so
conveniently — what kind of „Display and Screen structure” I had in
mind? A physical device, or the one from „some case”?


To be honest, I got the impression of an enthusiastic (and not overly 
polite) teenager, who, unfortunately, grew up in a system that did not 
teach geometric proofs or any other courses with proper mathematical 
rigor. It's not really your fault - other people screwed things up, 
spontaneously and on purpose, but it is a giant gap you need to fill.


As a result you struggle to define what you mean. I was trying to 
lead you to an "Aha" moment when you properly formulate what you are 
trying to achieve.


Oh, and no I don't actually know what you mean, because it depends on what 
application you are writing.


If you do teleconferencing, you might want to capture either the entire 
screen, or some window.


If you want to record a movie of a game playing fullscreen, then you 
probably need the position and dimensions of the game window, because 
games often change video mode while keeping the original virtual screen 
intact.


If you want to make a better Xvnc, you probably need the code above and 
you might not need xrandr.


If you are doing something else - who knows what you mean ?

best

Vladimir Dergachev


--
regards,
Zbigniew


Re: Xlib: DisplayWidth / DisplayHeight

2023-09-05 Thread Vladimir Dergachev



On Tue, 5 Sep 2023, Zbigniew wrote:


You keep avoiding the question. WHICH SCREEN?
[..]

so will you answer the question - what screen do you mean? as i mentioned
before. i think you have a far too basic view of screen and i'm trying to
paint just a few of the possible scenarios to have you think about this.
this discussion is going nowhere otherwise.


I've got a feeling you are trying to dilute this exchange by
artificially introducing conceptual confusion. Just as Pilate asked:
„but what exactly is truth?” — you keep asking: „which screen,
actually?”.


To help you see our point of view, imagine someone complaining to you that 
even though the computer science course talked about trees and leaves 
there were no actual trees anywhere in sight, and how exactly does that 
help with CO2 emissions ?


Display and Screen structures are abstractions that in some cases 
correspond to physical devices and in some cases (many more cases 
nowadays) do not. They are named "Display" and "Screen" but what they 
actually are is determined by how the code actually works.


To properly use them you need to read the documentation and
source code.

If you need to find out parameters of physical devices attached to your 
computer you need to use the tools that can handle the complexity of the 
setup.


The functions you are looking at clearly cannot do it, so the first thing 
you should have done is look around for alternatives and open source 
code that does something similar to what you want to do.


best

Vladimir Dergachev

Re: TWM & Odd Menu Issue

2023-09-02 Thread Vladimir Dergachev




On Fri, 1 Sep 2023, Graham Bentley wrote:


Hi,

How to go about diagnosing this?

---

Lifelong fan of TWM; still using today and came across your pages/site.


Yay TWM !!



The main reason for this enquiry is to ask if you have recently been suffering 
the problem of TWM not working well with the latest Firefox and Chromium 
browsers, particularly with mouse interaction?


I use TWM 'raw' (no compositor and so on) and both browsers have worked 
without issue until the latest esr release for Firefox and a few releases ago 
for Chromium (not entirely sure when for Chromium). Other programs do not 
have any similar issues that I have noticed.


The problem is that some popup/transient windows that appear when mouse 
clicking on menu options in those browsers (and even some 'select' inputs on 
webpages) are now closing instantly, preventing interaction - they flash on 
and off, so to speak.  I'm not technically advanced enough to sort out why, 
but I am curious about whether other folk might be able to have a stab.


I have seen similar behaviour for Firefox with KDE, and also for the KDE 
panel. This happens after the session has been up for a while (months), and 
was seen on Kubuntu 20.04. Once the issue occurs, turning the compositor on 
and off does not help.


In my case this is fixed with a restart, and also, sometimes, by making 
sure that all windows are within the screen - somehow a window that has 
been moved aside so that part of it is beyond the screen causes this, and 
moving it back fixes the problem.


Another thing to try is to make sure you don't have some phantom key 
pressed - "xev" is helpful for this.


best

Vladimir Dergachev



The problem is more captured here: 
https://www.linuxquestions.org/questions/linux-software-2/latest-chromium-and-firefox-popup-transient-windows-show-and-close-immediately-using-twm-and-when-using-mouse-4175728539/


Thanks



Re: Xlib: DisplayWidth / DisplayHeight

2023-08-31 Thread Vladimir Dergachev

have a look, please, at the man page:

  int DisplayHeight(Display *display, int screen_number);
  int DisplayWidth(Display *display, int screen_number);

„What screen is that”? The one described by the parameters.
('display' is a pointer to a 'Display' structure returned by a
previous call to XOpenDisplay() )


So the man page for XOpenDisplay explains that you get the Display 
structure by passing a description of which Xserver you are connecting to.


So using modern (2023) terminology:

Display *display - a structure describing an Xserver instance (there
 could be more than one running on the same computer)

int screen_number - an index of one of the "root drawables"

I don't know exactly why they named things as they did. One possibility is 
that the idea was that a display could consist of several physical 
devices, like an airport display for arrivals and departures.





5. In fact the whole use of these macros is pretty much broken.


At least one person here notices and understands this. Yes, that's why
I suggested a fix.


One could discuss whether you like the name, but the macros give a 
bounding box of what could be drawn (0, 0, screen_width, screen_height), 
so this is actually useful.





I can go on... I understand on the surface what you say - but you can't
always
get what you want and to me it seems your understanding of X is very
superficial and thus you don't know the right way to do things



Then could you, please, suggest „replacement functions” for these
two, that I could use to get the dimensions of the physical screen —
whether panning is used or not, whether Xrandr is used for that panning or
anything else, whether it is Linux or any of the xBSDs, etc.? In all these
cases it'll still be the Xorg server, anyway.


Take a look at libxrandr, there are more details in an earlier e-mail.

best

Vladimir Dergachev

Re: Xlib: DisplayWidth / DisplayHeight

2023-08-30 Thread Vladimir Dergachev



Just to make it easier for anyone who is reading this thread in the 
archives:


At the moment the library you need is libxrandr (on Ubuntu, install it with
apt install libxrandr-dev), and read the Xrandr man page.

It is also useful to read the paper describing the Xrandr protocol:

https://cgit.freedesktop.org/xorg/proto/randrproto/tree/randrproto.txt
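
Also for the archives, here is a hedged sketch of the kind of code this leads 
to (one way among several; it assumes the libX11 and libXrandr development 
headers, link with -lX11 -lXrandr). It enumerates connected outputs and prints 
the pixel geometry of each physical monitor:

#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (dpy == NULL) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    XRRScreenResources *res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
    for (int i = 0; i < res->noutput; i++) {
        XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
        /* Only connected outputs driven by a CRTC have geometry */
        if (out->connection == RR_Connected && out->crtc != None) {
            XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, out->crtc);
            printf("%s: %ux%u+%d+%d\n", out->name,
                   crtc->width, crtc->height, crtc->x, crtc->y);
            XRRFreeCrtcInfo(crtc);
        }
        XRRFreeOutputInfo(out);
    }
    XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}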

best

Vladimir Dergachev



Re: Xlib: DisplayWidth / DisplayHeight

2023-08-30 Thread Vladimir Dergachev



On Wed, 30 Aug 2023, Zbigniew wrote:


What you want is to find out the width and height of physical screen you
have.


Indeed. That's what DisplayWidth and DisplayHeight functions have been
created for.


To do that you need to use the subsystem that manages them - which
is xrandr. And don't forget to specify which of 5 screen you have running
you actually mean.


No, dear Volodya,

  „The  DisplayHeight  macro returns the height of the specified screen in
  pixels.

  The DisplayWidth macro returns the width of the screen in pixels.”

This is what I want, and this is what — as „man” page states — I


And this is what you get. Now, when I say "Screen" I mean a large rectangular 
matrix of pixels I can paint on. Which is pretty much what any application 
cares about - you don't need to paint outside of the screen.


What do you mean by "Screen" and why ?

Vladimir Dergachev


should get, regardless of the presence of any „subsystems”. I want nothing
more than is described there.

Are you serious when stating that, during creation of a program, I
should play a guessing game of „what kind of 'subsystem' the user may
employ”?
--
best :)
Z.


Re: Xlib: DisplayWidth / DisplayHeight

2023-08-30 Thread Vladimir Dergachev



On Wed, 30 Aug 2023, Zbigniew wrote:



So I would expect (in my particular case) to get 1920 and 1200 values,
and NOT dimensions of virtual screen, I mean 2520 and 1575


The behavior prescribed for these macros is to return the width and
height of the screen, and doesn't provide for the existence of concepts
such as panning or Xinerama.


You may want to read again my initial post: yes, the functions
described should „return the width and height of the screen”, as you
wrote — regardless if panning was, or it wasn't used.

You don't seem to understand this, I'm afraid?


The notion of "Screen" for Xserver is just a rectangular matrix of pixels 
where drawing is occurring.


It may or may not be displayed on a physical display, and the displayed 
portion might not be the full screen and may even be duplicated, in part 
or in full.


In particular, it is common to use Xvnc to create completely virtual 
screens that are only accessible through VNC viewer. Most people expect 
all their applications to work there too.


What you want is to find out the width and height of the physical screen you 
have. To do that you need to use the subsystem that manages them - which 
is xrandr. And don't forget to specify which of the 5 screens you have running 
you actually mean.


best

Vladimir Dergachev

Re: Keeping the Screen Turned off While Getting Inputs

2023-08-28 Thread Vladimir Dergachev




On Mon, 28 Aug 2023, Ahmad Nouralizadeh wrote:


The laptop model is `Asus N501JW` running `Ubuntu 18.04`. The discrete GPU is 
`GeForce GTX 960M`.
The link below says that by default the system uses `Intel HD graphics` (the 
iGPU) and using the discrete GPU requires a proprietary driver! Does this mean 
that having `nouveau` is not enough?


I don't have that particular card, but I think nouveau should work with it.
And if you really need to, you can install the NVidia drivers.

However, you might want to upgrade Ubuntu - the latest LTS release is 
22.04.


Many tools are improved, including perf and Xorg.

best

Vladimir Dergachev



https://askubuntu.com/a/766282/926952

It seems that I was wrong and only one GPU is being used?!




Re: Keeping the Screen Turned off While Getting Inputs

2023-08-28 Thread Vladimir Dergachev




On Mon, 28 Aug 2023, Ahmad Nouralizadeh wrote:


Is it possible to prevent the Xserver from using the iGPU and only use the 
discrete GPU? I found no BIOS options for this. Why should the two GPUs work 
simultaneously?




Urmm.. BIOS is the wrong place to look for this - if you are trying to 
alter how the Xserver (Xorg) works you should first read the documentation 
for Xorg (man xorg.conf and so on).


Google searches help too; a useful keyword is "optimus", which was the name 
of the dual-GPU configuration when it was first introduced.


I am not giving explicit instructions because I don't know which hardware 
and OS you are using (you did not describe), and I might not know off the 
top of my head if your configuration is sufficiently different from mine.


I practically never use the discrete card of the Optimus setup - it produces 
too much heat and makes the laptop fans spin. I did try the discrete card a 
couple of times playing with CUDA, but the laptop GPU was too underpowered to 
be of use.


There might be a GUI app; a useful place to look is nvidia-settings, if 
you have an Nvidia card and the closed-source Nvidia drivers. Alternatively, 
search for information on the "nouveau" driver (Nvidia card) or amdgpu (AMD 
card).


best

Vladimir Dergachev


Re: Keeping the Screen Turned off While Getting Inputs

2023-08-27 Thread Vladimir Dergachev




On Sun, 27 Aug 2023, Ahmad Nouralizadeh wrote:


Thanks Alan and Vladimir! These are very effective clues to help me understand 
the whole architecture, but I will need some experiments! :D

I may continue with this thread later to ask questions about the main problem 
discussed here (i.e., turning off the screen), if I find my approach feasible! 
I think that the kernel structure storing
the display state is `struct drm_crtc_state`, particularly, its `enabled` and 
`active` fields.

There exists a `linux_pm` mailing list which seems to be related to my 
question. But it seems to be for development purposes, and not for learning! 
Where do you think I can ask the
gpu/power-management related questions at the kernel level?


There is no harm in asking - most developers had to learn at some stage.

But I would expect people on linux_pm would be focused on issues other 
than GPU power management.


best

Vladimir Dergachev



Regards.




Re: Keeping the Screen Turned off While Getting Inputs

2023-08-27 Thread Vladimir Dergachev




On Sun, 27 Aug 2023, Ahmad Nouralizadeh wrote:


Perhaps I didn't express my question precisely. I understand that you are 
talking about the mmap function in the kernel which is usually a function 
pointer in vm_operations...

My question is about the userspace structure of X11. IIUC, we have X11 clients, 
which are GUI apps.
They have a portion of the X11 related libraries (those needed for clients) 
mapped into their address space. As the app and the X11 libraries (client code 
in X11) are in the same address space the
graphical data are accessible by both. Xserver is a separate process (i.e., 
Xorg). How are the graphical data sent to the server? Does it use shared 
memory? Multiple shared memory regions to service
each client?


First of all, plain X11 does not use shared memory - the graphics requests 
are sent over a socket. As root, do "ls -l /proc/XXX/fd" where XXX is the 
pid of Xorg. That socket is very fast !
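
As a small aside, Xlib will even tell you the file descriptor of that socket - 
a minimal sketch (assuming the usual libX11 setup, link with -lX11):

#include <X11/Xlib.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (dpy == NULL)
        return 1;
    /* ConnectionNumber() returns the fd of the socket to the Xserver -
       it should match one of the "socket:" entries in /proc/PID/fd */
    printf("socket fd: %d\n", ConnectionNumber(dpy));
    XCloseDisplay(dpy);
    return 0;
}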


Remember that originally X11 did only 2d graphics. POSIX shared memory 
support was added later via the XShm extension and was meant for transferring 
images.


OpenGL is also an extension, and how exactly it communicates is up to 
the driver. The driver is split into two parts - the part that sits in 
the kernel and the part that is in the Xserver.


The general trend is that to make things faster you want to bypass as many 
layers as you can.


So you would setup your OpenGL window by talking to Xserver via a socket, 
and the Xserver will inform the kernel driver. Then you would send your 
data and rendering commands to the card via a kernel driver - preferably 
with as few kernel calls as you can get away with.


For example, running glxgears on my computer, with Xorg running on 
internal i915 Intel card, I see in /proc/XXX/fd:


lrwx-- 1 volodya volodya 64 Aug 27 13:03 0 -> /dev/pts/133
lrwx-- 1 volodya volodya 64 Aug 27 13:03 1 -> /dev/pts/133
lrwx-- 1 volodya volodya 64 Aug 27 13:03 2 -> /dev/pts/133
lrwx-- 1 volodya volodya 64 Aug 27 13:03 3 -> 'socket:[353776996]'
lrwx-- 1 volodya volodya 64 Aug 27 13:03 4 -> /dev/dri/card0
lrwx-- 1 volodya volodya 64 Aug 27 13:03 5 -> /dev/dri/card0
lrwx-- 1 volodya volodya 64 Aug 27 13:03 6 -> /dev/dri/card0
lrwx-- 1 volodya volodya 64 Aug 27 13:03 7 -> /dev/dri/card0
[...]

The file descriptors 0, 1, 2 are standard input, output and error. File 
descriptor 3 is the socket used to talk to the Xserver, and the rest are the 
device created by the kernel driver. I don't know why the intel driver needs 
four of them.


Looking in /proc/XXX/maps there are many entries, with lots of them looking 
like:


7fe9ac736000-7fe9ac836000 rw-s 203853000 00:0e 12497 
anon_inode:i915.gem
7fe9ac836000-7fe9ac83a000 rw-s 1109bc000 00:0e 12497 
anon_inode:i915.gem
7fe9ac83a000-7fe9ac84a000 rw-s 3267cc000 00:0e 12497 
anon_inode:i915.gem
7fe9ac90a000-7fe9ac91a000 rw-s 260574000 00:0e 12497 
anon_inode:i915.gem
7fe9ac91a000-7fe9ac92a000 rw-s 60d483000 00:0e 12497 
anon_inode:i915.gem

This has something to do with communicating with the kernel driver. Looks 
like it needs a lot of buffers to do that. A few would make sense, but I got 
21 in total, which is too much.


On the other hand, on a different computer with an NVidia card, I see the 
following in /proc/XXX/fd for a plasmashell (KDE desktop):


lrwx-- 1 volodya volodya 64 Aug 27 13:13 11 -> /dev/nvidiactl
lrwx-- 1 volodya volodya 64 Aug 27 13:13 12 -> /dev/nvidia-modeset
lrwx-- 1 volodya volodya 64 Aug 27 13:13 13 -> /dev/nvidia0
lrwx-- 1 volodya volodya 64 Aug 27 13:13 14 -> /dev/nvidia0
lrwx-- 1 volodya volodya 64 Aug 27 13:13 15 -> /dev/nvidia-modeset
lrwx-- 1 volodya volodya 64 Aug 27 13:13 17 -> /dev/nvidia0
lrwx-- 1 volodya volodya 64 Aug 27 13:13 18 -> /dev/nvidia0
[...]

nvidiactl is unique - this is how things are triggered, but there are 
many, many open file descriptors to nvidia-modeset and, especially, 
nvidia0.


The contents of /proc/XXX/maps match in complexity:

7f0e6c00c000-7f0e6c00d000 rw-s  00:05 476
/dev/nvidia0
7f0e6c00d000-7f0e6c00e000 rw-s  00:05 476
/dev/nvidia0
7f0e6c00e000-7f0e6c00f000 rw-s  00:05 475
/dev/nvidiactl
7f0e6c00f000-7f0e6c01 rw-s  00:05 475
/dev/nvidiactl
7f0e6c01-7f0e6c011000 rw-s  00:05 475
/dev/nvidiactl
7f0e6c011000-7f0e6c012000 rw-s 00044000 00:01 4096   
/memfd:/.nvidia_drv.XX (deleted)
7f0e6c021000-7f0e6c024000 rw-s  00:05 475
/dev/nvidiactl
7f0e6c0d-7f0e6c0e3000 rw-s  00:05 475
/dev/nvidiactl

and many more similar entries.

However, in both cases the focus is on communication with the kernel 
driver and th

Re: Keeping the Screen Turned off While Getting Inputs

2023-08-27 Thread Vladimir Dergachev
d region does, maybe a different view of the 
registers - prefetching makes access much faster, so you would read and 
write non-critical data there, issue a barrier of some sort and then 
trigger by writing to a register in non-prefetchable space. This is pure 
speculation, read the nouveau driver to find out.


best

Vladimir Dergachev


Re: Keeping the Screen Turned off While Getting Inputs

2023-08-27 Thread Vladimir Dergachev




On Sun, 27 Aug 2023, Ahmad Nouralizadeh wrote:


> The framebuffer that is displayed on the monitor is always in video card
> memory. There is a piece of hardware (CRTC) that continuously pulls data
> from the framebuffer and transmits it to the monitor.

So the framebuffer memory should normally be in the kernel (Perhaps in special 
cases could be mapped in the userspace?!). IIUC, XServer works on the app GUI 
data in the userspace and sends it to the
kernel to finally arrive at the framebuffer. Correct? Does it use some kind of 
ioctl()?




Not necessarily - for very old cards you would issue a special command to 
transfer data or paint a line.


Modern video cards (and video drivers) usually work like this - the 
graphics card exposes several regions that work like memory over PCIe bus 
- i.e. the CPU can access them by issuing a "mov" command to an address 
outside main CPU memory (assuming the graphics card is a physical PCIe 
card).


One of the regions is the entire video card memory that includes the 
framebuffer. This way you can transfer data by simply copying it to the 
memory mapped region.


This however is slow, even with modern CPUs, because of limitations of 
PCIe bandwidth and because the CPUs are not well suited to the task.


Instead a second memory region contains "registers" - special memory 
locations that, when written, make magic happen. Magic is an appropriate 
word here because the function of those registers is entirely arbitrary - 
their function is picked by hardware designers and there aren't any strict 
constraints to force a particular structure.

For example, one register could contain the starting x coordinate, another 
the starting y, another the ending x, another the ending y, one more could 
contain a color, and finally a special register that, when written, will draw 
a line in the framebuffer from start to end using that color.
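
As a purely hypothetical sketch (the register names and offsets below are 
made up for illustration and do not correspond to any real card), driving 
such a line-drawing engine from the CPU would look roughly like this:

#include <stdint.h>

/* Hypothetical register offsets, in bytes - for illustration only */
#define REG_START_X   0x00
#define REG_START_Y   0x04
#define REG_END_X     0x08
#define REG_END_Y     0x0c
#define REG_COLOR     0x10
#define REG_TRIGGER   0x14   /* writing here makes the card draw */

/* "regs" would come from mmap()ing the card's register region;
   volatile keeps the compiler from reordering or dropping the writes */
static void draw_line(volatile uint32_t *regs,
                      uint32_t x0, uint32_t y0,
                      uint32_t x1, uint32_t y1, uint32_t color)
{
    regs[REG_START_X / 4] = x0;
    regs[REG_START_Y / 4] = y0;
    regs[REG_END_X   / 4] = x1;
    regs[REG_END_Y   / 4] = y1;
    regs[REG_COLOR   / 4] = color;
    regs[REG_TRIGGER / 4] = 1;   /* the "magic" write that starts the engine */
}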


This is much faster than using a CPU because only a few values are 
transferred - the rest of the work is done by the video card.


And this is how video cards used to work a few decades back, and partially 
still do. However, for modern needs this is still too slow.


So one more feature of video cards is that they have "PCIe bus master" - 
the ability to access main CPU memory directly and retrieve (or write) 
data there.


So instead of transferring data to the framebuffer (for example) by having 
the CPU write there, the CPU will write to video card registers the 
addresses (plural) of memory regions to transfer and then trigger the 
transfer by writing a special register. The video card will do the work.


The transfer to the framebuffer is not very interesting, but what you can 
do is PCI bus master to registers instead. This is usually done by a 
dedicated unit, so it is not exactly like writing to the registers, but 
this makes for a good simplified explanation.


So now you have a memory region in main memory where CPU has assembled 
data like "address of register of starting X", "value of starting X", 
"register address for color of starting point", "value of color" and so 
on, finishing "address of trigger register", "Trigger !".


And this now looks like instructions for a very, very weird VLIW (very 
long instruction word) processor.


The OpenGL driver now works by taking OpenGL commands and compiling them 
to sequences of these weird GPU instructions that are placed into a memory 
buffer. When enough of these accumulate, the video card is given the 
trigger to go and execute them, and something gets painted.


If you need to paint a picture, another buffer is allocated, picture data 
is written into it, and then a special command is created instructing the 
card to pull data from that buffer.


Now, over the past few decades the video cards evolved to be slightly less 
weird VLIW processors - they are getting rid of dedicated commands like 
draw a line from X to Y, in favor of commands like "compute  dot product 
between arrays of 4-dimensional vectors".


They still have the weird multi-tier PCIe bus master, and multiple caches 
used to access multiple types of memory: framebuffer memory, texture 
memory, main memory and a few others. And weird quirks that make doing 
interesting programming with GPUs tricky.


So now, if you start some OpenGL app on Linux and look into /proc/XXX/maps 
you should be able to find several memory regions that have been mapped by 
the graphics driver. Some of those are real memory, some are registers and 
are entirely virtual - there isn't any physical DRAM backing them.


These aren't all the regions exposed by the video card, because if multiple 
apps wrote to video card registers directly it would lock up hard, freezing 
the PCIe bus. Instead, this is arbitrated by the kernel driver.


best

Vladimir Dergachev



Re: Keeping the Screen Turned off While Getting Inputs

2023-08-26 Thread Vladimir Dergachev




On Sun, 27 Aug 2023, Ahmad Nouralizadeh wrote:


> In order to display anything on the screen the video card needs an array
> of data giving the color of each pixel. This is usually called "framebuffer"
> because it buffers data for one frame of video.

Thank you for the enlightening explanation! An unrelated question: IIUC the 
framebuffer is shared memory in userspace. I see a huge amount of memory 
(around 1GB) in the kernel space related to something 
called the GEM layer. Why is this large allocation needed?


The framebuffer that is displayed on the monitor is always in video card 
memory. There is a piece of hardware (CRTC) that continuously pulls data 
from the framebuffer and transmits it to the monitor.


A notable special case is when the "video card" is part of the CPU; in 
this case the main memory can serve a dual purpose: most of it is used by the 
main CPU, while a portion of main memory is allocated to the video card 
(there are BIOS options to change the amount).


This has an impact on performance - the CRTC needs to send a frame to the 
monitor at the refresh rate and it needs to pull data from memory - 
everything else has to wait.


If you are using a 4K (3840x2160) monitor that refreshes at 60 Hz, with 
each pixel a customary 32 bits, the CRTC needs 2 GB/s of bandwidth.
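
(The arithmetic, for the curious: 3840 x 2160 pixels x 4 bytes x 60 frames/s = 
1,990,656,000 bytes/s, i.e. just under 2 GB/s.)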




> When you request "dpms off" all this does is tell the monitor to turn off the
> light and save power. Everything that would normally be drawn will still be
> drawn, as you can verify using x11vnc and vncviewer.

How does the interactive input cause screen reactivation? Is it signaled in 
software? If yes, perhaps the signal could be hidden by some small changes in 
the software to prevent the reactivation.


There is likely a piece of software that sends "dpms on" the moment the 
cursor moves. Probably in the Xserver itself.


best

Vladimir Dergachev



> From the point of view of a benchmark you need to be very careful not to
> alter the task, as modern systems love to optimize.

I will have to do some approximations using a combination of the processor and 
IMC counters.






Re: Keeping the Screen Turned off While Getting Inputs

2023-08-26 Thread Vladimir Dergachev



On Sat, 26 Aug 2023, Ahmad Nouralizadeh wrote:


> > However, I would have expected that VLC would produce a lot of
> > GPU/iGPU accesses even without drawing anything, because it would
> > try to use the GPU decoder.

For the discrete GPU, the turned off screen requires much smaller bandwidth in 
any benchmark (reduces from 2GB/s to several KB/s). The same seems to be true 
with iGPU. Of course, there might exist
some DRAM accesses originating from the GPU/iGPU. But the main traffic seems to 
fade. These assumptions are based on my experiments and I could be wrong.
(P.S.: VLC seems to be aware of the screen state. The rendering thread will 
stop when the screen is off (mentioned 
here: https://stackoverflow.com/q/76891645/6661026).)

> > Displaying video is also often done using GL or Xvideo - plain X is
> > too slow for this.
I'm looking for a simpler solution. I'm not familiar with these Xorg-related 
concepts! It seems a bit strange that turning off the screen requires so much 
effort! If `xset dpms force off` would not 
cause screen activation with user input, or `xrandr --output...` wouldn't 
cause a segfault, everything would be fine.


Here is a simplified explanation:

In order to display anything on the screen the video card needs an array 
of data giving the color of each pixel. This is usually called a "framebuffer" 
because it buffers data for one frame of video.


For every monitor you plugged in there is a separate framebuffer, unless 
they display the same thing (mirror).


To draw, the CPU either sends data directly to the framebuffer, requests the 
video card to pull data from RAM, or does some more complicated combination 
of the two (this includes GL and Xvideo acceleration).


So you have a system CPU -> Video Card -> Monitor

When you request "dpms off" all this does is tell monitor to turn off the 
light and save power. Everything that normally be drawn will still be 
drawn, as you can verify using x11vnc and vncviewer.


When you request "xrandr ... --off" you are requesting the equivalent of 
physically unplugging monitor cable. The framebuffer associated with that 
monitor will get destroyed. That's likely why you saw that gnome-panel 
error - some library it relies on could not deal with the fact that the 
framebuffer it was supposed to draw into suddenly disappeared.


From the point of view of a benchmark you need to be very careful not to 
alter the task, as modern systems love to optimize.

For example, many applications will stop drawing when their window is 
fully obscured (don't know about vlc, but likely).


However, this behaviour will change depending on whether the compositor is 
enabled, and even depending on how many windows are open, as the compositor 
has limits.


best

Vladimir Dergachev




>edit to add: google suggests another candidate might be something
>called pin-instat

Pin works at the source code level. It counts source-level 
accesses which might not reach DRAM (e.g., serviced by caches).

> > best
> >
> > Vladimir Dergachev 



Re: Keeping the Screen Turned off While Getting Inputs

2023-08-26 Thread Vladimir Dergachev




On Sat, 26 Aug 2023, Ahmad Nouralizadeh wrote:


>> Those accesses might not stop with just the display off - some
>> applications may keep redrawing.
Will these accesses cause iGPU or dedicated GPU accesses to the DRAM? I think 
that those redrawings originate from the processor.

>I'm not sure a graphical benchmark will run without a graphical system
>running?
Yes, VLC is one of the benchmarks and will not run without GUI.


You can start the system with plain X and twm as the window manager - this 
would produce minimal load on the GPU.


However, I would have expected that VLC would produce a lot of GPU/iGPU 
accesses even without drawing anything, because it would try to use the GPU 
decoder.


Displaying video is also often done using GL or Xvideo - plain X is too 
slow for this.




>Maybe do the reverse of what I suggested. Run the benchmark but send
>the output to a remote display.
Will it avoid screen activation in the local machine?


There should be a rather drastic difference in speed between VLC 
displaying locally and on a remote X over the network.


best

Vladimir Dergachev



>Since IMC counters appear to be a feature of the powerpc architecture,
>you might get a better response from some list/forum specific to that
>architecture.

IMC stands for the Integrated Memory Controller. The DRAM controller has some 
internal counters for counting different types of memory accesses. For example, 
for my laptop it is documented here:
https://software.intel.com/content/www/us/en/develop/articles/monitoring-integrated-memory-controller-requests-in-the-2nd-3rd-and-4th-generation-intel.html

Do you have any suggestions about the cause of the xrandr error? It works 
perfectly in the virtual machine!





Re: Keeping the Screen Turned off While Getting Inputs

2023-08-26 Thread Vladimir Dergachev




On Sat, 26 Aug 2023, Ahmad Nouralizadeh wrote:


I want to count the processor-initiated memory accesses. On my 4K display, a 
huge number of accesses originate from the iGPU and dedicated GPU. I want to 
exclude these accesses. The IMC counter can
only track the dedicated GPU accesses. Therefore, I have to turn the screen off 
to exclude those originating from the iGPU.


Those accesses might not stop with just the display off - some 
applications may keep redrawing.


The simplest solution would be to boot to console mode with X off. The 
display will still work, but GPU usage would be minimal.


There is more than one console (usually), you can switch between them with 
Alt-F1, Alt-F2, etc.


There are also ways to restrict profiling to a single process,
like "perf top -p 12345".

best

Vladimir Dergachev



On Saturday, August 26, 2023, 08:10:15 PM GMT+4:30, Dave Howorth 
 wrote:


On Sat, 26 Aug 2023 15:28:52 + (UTC)
Ahmad Nouralizadeh  wrote:

> I need to run a set of (graphical) benchmarks with the screen
> disabled.


Can I ask why? What is it you're trying to accomplish? Somehow affect the
benchmarks? Stop people seeing the benchmarks being performed?

And what is the benchmark measuring? Elapsed time or CPU time or what?

Turn the display off and run the benchmarks by ssh-ing in from another
machine?




Re: Something is keeping my X awake

2022-07-25 Thread Vladimir Dergachev




On Sat, 23 Jul 2022, martin f krafft wrote:



Regarding the following, written by "Vladimir Dergachev" on 2022-07-21 at 17:28 
Uhr -0400:
As Carsten suggested, it seems that it's Firefox. I've quit the browser, and now xset q 
reports "Monitor is Off" (logged in over SSH), which it hasn't done in a long 
time.

This of course now begs the question: what is the browser doing to keep X awake 
by jiggling XScreensaver regularly, and worse yet: preventing DPMS shutoff. I 
am not watching videos, but I do have 
plenty of open tabs. Is any one of them able to keep my screen busy like this?


A youtube tab will do it. Make sure no videos are playing or paused.



And how can I disable that? I generally keep the browser running, and don't 
want to have to shut it down every time I want to save energy and screen 
lifetime during idle periods.


Same here. I have a few hundred browser windows with who knows how many 
tabs and it works. So it is not Firefox per se.


Maybe you have some weird site loaded that does this to get ads displayed ?
Try closing tabs until the behaviour disappears.

best

Vladimir Dergachev



Re: Something is keeping my X awake

2022-07-21 Thread Vladimir Dergachev


This normally happens when a movie player or VNC turns it off (you don't 
want the screen locking while the movie is playing).


If any such program was terminated before it could restore the 
regular behaviour, the screen won't lock.


Try using "xset q" to see what the current state is. You can use "xset 
+dpms" to enable power saving again.


best

Vladimir Dergachev

On Thu, 21 Jul 2022, martin f krafft wrote:



Hey there,

On my Thinkpad T490, something is keeping the display awake such that 
XScreensaver will not lock the machine, and DPMS will never let the screen turn 
off.

There are lots of suspects, with the trackpad, the nipple, and a Lenovo wireless 
keyboard attached. However, even after I unplug the wireless receiver, 
and disable the trackpad and nipple with
xinput disable …, the machine will not rest.

I've ruled out that this is a problem with XScreensaver. I mean, maybe it is, 
but after I lock it manually, it just keeps interrupting with the password 
prompt, as if e.g. the mouse was moved or a
key pressed.

At this stage I am wondering what tools are available to me that could shed 
some light on this. I've tried xev, but there are no events whatsoever.

I'd appreciate any hints!!

Best,

--

@martinkrafft | https://matrix.to/#/#madduck:madduck.net

a friend is someone with whom 
you can dare to be yourself


spamtraps: madduck.bo...@madduck.net 



Re: nouveau going off the deep end...

2022-06-20 Thread Vladimir Dergachev




On Mon, 20 Jun 2022, Robert Heller wrote:


How do I turn off the compositor?  Do I need it?


Look in settings - this depends on whether you use KDE or unity or
something else.


I use "something else": Mate, but with FVWM as the window manager and without
the menu crap and without the file manager.


Ahh, I used FVWM a while back. Nice !

Try 20.04 or later - an install image on a USB stick should be enough to 
figure out whether the crashes still happen.


Maybe the newer driver will fix it for you.

best

Vladimir Dergachev



Re: nouveau going off the deep end...

2022-06-19 Thread Vladimir Dergachev




On Sun, 19 Jun 2022, Robert Heller wrote:


I don't use any other 3D programs (maybe FreeCAD).  KiCaD is not a 3D program,
it is 2D.  This is an integrated video chipset on the motherboard -- I don't
have a separate video card.


Nowadays a lot of rendering goes through the 3d engine - as long as one has 
the capability, why not ? And it makes it easier to work with images, alpha, 
etc.


Thus just because a program is 2d does not mean it does not use 3d 
graphics.


For example, one of the simplest ways to make a movie player that is 
capable of displaying multiple streams is to use opengl to paint frames on 
2d faces. It could make a rotating cube, of course, but that is hard to 
watch :)




How do I turn off the compositor?  Do I need it?


Look in settings - this depends on whether you use KDE or unity or 
something else.


The compositor is used for desktop effects like window zoom or making your 
windows partially transparent.


Another suggestion is to try upgrading to 20.04.

best

Vladimir Dergachev



At Sun, 19 Jun 2022 17:03:28 -0400 (EDT) Vladimir Dergachev 
 wrote:





On Sun, 19 Jun 2022, Robert Heller wrote:


I am running Ubuntu 18.04 on an AMD Phenom(tm) II X4 945 Processor, 8Gig of
RAM, with a NVIDIA Corporation C77 [GeForce 8200] (rev a2) video chipset.
There is some sort of bug in the version of KiCaD I have
(4.0.7+dfsg1-1ubuntu2) with its pcbnew program that puts my machine in a state
where I have to use the "magic" SysRq key to forcibly reboot it (I can ssh in
from another computer, but /sbin/reboot does not work).


Judging by the messages it looks like a lockup in a video card.

Do other 3d programs run fine ? Try a 3d-game like quake or similar.

Try turning off the compositor. Also keep an eye on the fan and GPU
temperature.

It could be a bug in the driver, but nouveau worked quite well for me on
both 18.04 and 20.04 for many years.

best

Vladimir Dergachev



I've included the last of the kernel log.  It looks like something is broken
in nouveau, which I am guessing has something to do with the video somehow.
(And no, I am not going to download and install NVIDIA's video driver.)

I don't know if this is a kernel problem (I currently have kernel
4.15.0-187-generic), or something in X Server.

Jun 19 16:08:38 sauron kernel: [860959.174609] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:38 sauron kernel: [860959.175311] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:38 sauron kernel: [860959.175982] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:38 sauron kernel: [860959.176651] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:38 sauron kernel: [860959.177303] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS2:  []
Jun 19 16:08:40 sauron kernel: [860961.177974] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:40 sauron kernel: [860961.178678] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:40 sauron kernel: [860961.179406] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:40 sauron kernel: [860961.180074] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:40 sauron kernel: [860961.180727] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS2:  []
Jun 19 16:08:42 sauron kernel: [860963.181410] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:42 sauron kernel: [860963.182059] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:42 sauron kernel: [860963.182730] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:42 sauron kernel: [860963.183398] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:42 sauron kernel: [860963.184051] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS2:  []
Jun 19 16:08:44 sauron kernel: [860965.184723] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:44 sauron kernel: [860965.185425] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:44 sauron kernel: [860965.186153] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:44 sauron kernel: [860965.186879] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:44 sauron kernel: [860965.187587] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS2:  []
Jun 19 16:08:46 sauron kernel: [860967.188320] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:46 sauron kernel: [860967.189022] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:46 sauron kernel: [860967.189760] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:46 sauron kernel: [860967.190429] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:46 sauron kernel: [860967.191082] nouveau :0

Re: nouveau going off the deep end...

2022-06-19 Thread Vladimir Dergachev




On Sun, 19 Jun 2022, Robert Heller wrote:


I am running Ubuntu 18.04 on an AMD Phenom(tm) II X4 945 Processor, 8Gig of
RAM, with a NVIDIA Corporation C77 [GeForce 8200] (rev a2) video chipset.
There is some sort of bug in the version of KiCaD I have
(4.0.7+dfsg1-1ubuntu2) with its pcbnew program that puts my machine in a state
where I have to use the "magic" SysRq key to forcibly reboot it (I can ssh in
from another computer, but /sbin/reboot does not work).


Judging by the messages it looks like a lockup in a video card.

Do other 3d programs run fine ? Try a 3d-game like quake or similar.

Try turning off the compositor. Also keep an eye on the fan and GPU 
temperature.


It could be a bug in the driver, but nouveau worked quite well for me on 
both 18.04 and 20.04 for many years.


best

Vladimir Dergachev



I've included the last of the kernel log.  It looks like something is broken
in nouveau, which I am guessing has something to do with the video somehow.
(And no, I am not going to download and install NVIDIA's video driver.)

I don't know if this is a kernel problem (I currently have kernel
4.15.0-187-generic), or something in X Server.

Jun 19 16:08:38 sauron kernel: [860959.174609] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:38 sauron kernel: [860959.175311] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:38 sauron kernel: [860959.175982] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:38 sauron kernel: [860959.176651] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:38 sauron kernel: [860959.177303] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS2:  []
Jun 19 16:08:40 sauron kernel: [860961.177974] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:40 sauron kernel: [860961.178678] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:40 sauron kernel: [860961.179406] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:40 sauron kernel: [860961.180074] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:40 sauron kernel: [860961.180727] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS2:  []
Jun 19 16:08:42 sauron kernel: [860963.181410] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:42 sauron kernel: [860963.182059] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:42 sauron kernel: [860963.182730] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:42 sauron kernel: [860963.183398] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:42 sauron kernel: [860963.184051] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS2:  []
Jun 19 16:08:44 sauron kernel: [860965.184723] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:44 sauron kernel: [860965.185425] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:44 sauron kernel: [860965.186153] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:44 sauron kernel: [860965.186879] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:44 sauron kernel: [860965.187587] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS2:  []
Jun 19 16:08:46 sauron kernel: [860967.188320] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:46 sauron kernel: [860967.189022] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:46 sauron kernel: [860967.189760] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:46 sauron kernel: [860967.190429] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:46 sauron kernel: [860967.191082] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS2:  []
Jun 19 16:08:48 sauron kernel: [860969.191762] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:48 sauron kernel: [860969.192465] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:48 sauron kernel: [860969.193136] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:48 sauron kernel: [860969.193804] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:48 sauron kernel: [860969.194455] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS2:  []
Jun 19 16:08:50 sauron kernel: [860971.195124] nouveau :02:00.0: gr: PGRAPH 
TLB flush idle timeout fail
Jun 19 16:08:50 sauron kernel: [860971.195771] nouveau :02:00.0: gr: 
PGRAPH_STATUS 0501 [BUSY CTXPROG CCACHE_PREGEOM]
Jun 19 16:08:50 sauron kernel: [860971.196441] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS0: 0008 [CCACHE]
Jun 19 16:08:50 sauron kernel: [860971.197108] nouveau :02:00.0: gr: 
PGRAPH_VSTATUS1:  []
Jun 19 16:08:50 sauron kernel: [860971.197760] nouveau :0

Re: Feature request, but must be universallly accepted by ALL blanker authors

2020-10-03 Thread Vladimir Dergachev




On Sat, 3 Oct 2020, Gene Heskett wrote:


which suggests there are several ways to disable it, including one
where you can simulate user activity.

This was just a quick look, it would probably be best to talk to
light-locker developers.

best

Vladimir Dergachev


And where might I find those folks?


Try contacting developers listed in

https://github.com/the-cavalry/light-locker/blob/master/MAINTAINERS

Vladimir Dergachev



Thank you Vladimir.

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
- Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: Feature request, but must be universally accepted by ALL blanker authors

2020-10-03 Thread Vladimir Dergachev




On Fri, 2 Oct 2020, Gene Heskett wrote:


Greetings x-people;

The LinuxCNC people have just brought it up from Debian wheezy to buster
for a base install.

But the security paranoia is going to get someone maimed or killed.

Someone has decreed that the screen blanker must be subject to a new
login before anything can be done about a runaway machine with enough
horsepower at its disposal to kill.


Just to chime in, this is not an X problem per se, but rather one with the 
desktop environment.


X does not have a utility to ask for your password - just a way to blank 
the screen and turn off the monitor.
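
(For reference, X's own blanking and the DPMS power-down can usually be 
switched off with xset - a minimal sketch; whether it sticks depends on 
whether the desktop environment re-applies its own settings:)

    xset s off        # disable the X screensaver timeout
    xset s noblank    # if a timeout fires anyway, do not blank the video
    xset -dpms        # disable DPMS monitor power management
    xset q            # verify the resulting settings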


If you were to prevent X from blanking the screen, then what would likely 
happen is that it would not go blank; instead you would see either a 
screensaver or a password prompt from whatever screen locker your desktop 
environment is using.


So the issue is really with the desktop environment you are using - there 
should be a control to disable the screen locker.


I am using KDE on my laptop and it does have it.

My CNC machine is controlled from a Jetson Nano, and I was able to disable 
the password prompt, but the screensaver does kick in and I have to muck
around with the monitor control key to turn it back on, as the monitor's 
touchscreen turns off with the monitor.


best

Vladimir Dergachev



I have now been 3 days looking for a way to disable this blanker, trying
several methods by way of xset, only to find 15 minutes later that it's
been undone and the blanker kicks in regardless.

So I am proposing that an env variable with an agreed-upon name be defined,
and that its presence totally disables any and ALL screen blankers,
regardless of whose desktop of the day is installed.  We can incorporate
setting it when launching LinuxCNC, and unsetting it when LinuxCNC is
shut down, as sketched below.
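
(Something along these lines is what is being proposed - the variable name 
is purely hypothetical, as no such standard exists today:)

    # hypothetical launcher wrapper - blankers would honor the variable while set
    export INHIBIT_SCREEN_BLANKER=1
    linuxcnc
    unset INHIBIT_SCREEN_BLANKER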

If you agree that safety overrides paranoia, please consider this as part
of the supplied X11 implementations.

In the meantime, since xset seems powerless to disable it, can someone
tell me how, in xfce4, to disable it? Having it kick in after 10 minutes
while the machine is carving a part, when a mis-command has done something
wrong that needs to be stopped as quickly as possible, a locked screen
requiring a login via a swarf-covered keyboard is simply dangerous to both
the operator and the machine.  So I'm asking how do I get rid of it,
totally.  We can operate a monitor's power switch if we are done for the
day, but we can't tolerate anything getting in the way of controlling that
runaway machine with one keystroke during the day.

Please advise.  And thank you.

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
- Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: Feature request, but must be universally accepted by ALL blanker authors

2020-10-03 Thread Vladimir Dergachev




On Sat, 3 Oct 2020, Gene Heskett wrote:


On Saturday 03 October 2020 00:39:27 Vladimir Dergachev wrote:


On Fri, 2 Oct 2020, Gene Heskett wrote:

Greetings x-people;

The LinuxCNC people have just brought it up from Debian wheezy to
buster for a base install.

But the security paranoia is going to get someone maimed or killed.

Someone has decreed that the screen blanker must be subject to a new
login before anything can be done about a runaway machine with
enough horsepower at its disposal to kill.


Just to chime in, this is not an X problem per se, but rather the
desktop environment.


Agreed.


X does not have a utility to ask for your password - just a way to
blank the screen and turn off the monitor.

If you were to prevent X from blanking the screen, then what would
likely happen is that it will not go blank, but instead you will see
either a screensaver or a password prompt from whatever screen locker
your desktop environment is using.


Which for xfce4 is light-locker. But removing it with apt destroys the
system by removing 70+ other packages, including ours.  That's not an
acceptable solution.


This is surprising. It is hard for me to see why your package - or any 
package - would depend on a screensaver. I would normally expect only the 
meta package that pulls in the full environment to depend on it.
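
(One way to check what is really going on is to ask apt before removing 
anything - a quick sketch:)

    apt-cache rdepends light-locker   # list packages declaring a dependency on it
    apt-get -s remove light-locker    # simulate the removal without changing anything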





FWIW, I have tried mightily to lengthen the intervals from 10 minutes on
this stretch install running TDE. But something resets it to 10 minutes
before the 10 minutes is up, so xset is neutered and worthless. Frankly,
linux is as bad as winderz in determining what you can and cannot do. If
there was an alternative that put the machine's control back in the user's
hands, I'd jump on it like stink on a skunk.


Looking on github, I found the following file

https://github.com/the-cavalry/light-locker/blob/master/src/light-locker-command.c

which suggests there are several ways to disable it, including one where 
you can simulate user activity.


This was just a quick look; it would probably be best to talk to the 
light-locker developers.


best

Vladimir Dergachev

___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s


Re: X is consuming ~100 GiB of RAM(!)

2017-12-06 Thread Vladimir Dergachev


Also, given that the high usage does not happen outside of a GNOME session, 
perhaps this is connected to compositing.


best

Vladimir Dergachev

On Wed, 6 Dec 2017, Hi-Angel wrote:


The troubleshooting link you provided states that the high memory
usage typically belongs to some other application. Sorry, I am just an
occasional bystander here, and can't tell much of the technical details,
but I imagine it works like this (I hope someone will correct me on
details): an app requests, for example, a glx object, and the XServer
allocates one. When the app is done with the object, it requests the
XServer to deallocate it. The point is: although this memory is accounted
to the XServer process, it is actually owned by the app. The link
also states that you can use the `xrestop` application to see the owners
and amounts of the memory.

On 5 December 2017 at 21:14, Ewen Chan <chan.e...@gmail.com> wrote:

To Whom It May Concern:

Hello everybody. My name is Ewen and I am new to this distribution list.

So let me start with a little bit of background and the problem statement of
what I am seeing/encountering.

I am running a SuperMicro Server 6027TR-HTRF
(https://www.supermicro.com/products/system/2u/6027/sys-6027tr-htrf.cfm),
which uses a Matrox G200eW graphics chip and has four half-width nodes;
each node has two processors, each an Intel Xeon E5-2690 (v1)
(8-core, 2.9 GHz stock, HTT disabled), running SuSE Linux Enterprise Server
12 SP1 (SLES 12 SP1).

Here are some of the outputs from the system:

ewen@aes4:~> X -version

X.Org X Server 1.15.2
Release Date: 2014-06-27
X Protocol Version 11, Revision 0
Build Operating System: openSUSE SUSE LINUX
Current Operating System: Linux aes4 3.12.49-11-default #1 SMP Wed Nov 11
20:52:43 UTC 2015 (8d714a0) x86_64
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.12.49-11-default
root=UUID=fc4dcdb9-2468-422c-b29f-8da42fd7dec0
resume=/dev/disk/by-uuid/1d5d8a9c-218e-4b66-b094-f5154ab08434 splash=silent
quit showopts crashkernel=123M,high crashkernel=72M,low
Build Date: 12 November 2015  01:23:55AM

Current version of pixman: 0.32.6
 Before reporting problems, check http://wiki.x.org
 to make sure that you have the latest version.
ewen@aes4:~> uname -a
Linux aes4 3.12.49-11-default #1 SMP Wed Nov 11 20:52:43 UTC 2015 (8d714a0)
x86_64 x86_64 x86_64 GNU/Linux

The problem that I am having is that I am running a CAE analysis application
and during the course of the run, X will eventually consume close to 100 GiB
of RAM (out of 125 GiB installed).

ewen@aes4:~> date
Tue Dec 5 05:08:28 EST 2017
ewen@aes4:~> ps aux | grep Xorg
root 2245 7.7 79.0 271100160 104332316 tty7 Ssl+ Nov25 1078:19 /usr/bin/Xorg :0 -background none -verbose -auth /run/gdm/auth-for-gdm-9L7Ckz/database -seat seat0 -nolisten tcp vt7
ewen 11769 0.0 0.0 10500 944 pts/1 R+ 05:08 0:00 grep --color=auto Xorg

This does not occur when I perform the same analysis in runlevel 3. When I
switch back to runlevel 5, with GNOME as the desktop environment, the host
server's X memory usage continually increases as the analysis progresses,
regardless of whether I initiate the analysis via a Terminal inside GNOME
or ssh into the system (via cygwin from a Windows box).

In trying to research this issue, I have found that I can either restrict
the amount of caching that X does via ulimit -m (Source:
https://wiki.ubuntu.com/X/Troubleshooting/HighMemory) or edit
xorg.conf by adding this option:

Option "XaaNoPixmapCache"

(Source: https://www.x.org/releases/current/doc/man/man5/xorg.conf.5.xhtml)
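
(For context, that option would go in the Device section of xorg.conf - a 
sketch; the Identifier and Driver values here are illustrative:)

    Section "Device"
        Identifier "Card0"
        Driver     "mga"               # Matrox G200eW
        Option     "XaaNoPixmapCache"
    EndSection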

Would that be the recommended solution to the problem that I am experiencing
with X?

A couple of other notes:

ewen@aes4:~> free -g
             total       used       free     shared    buffers     cached
Mem:           125        125          0          0          0          3
-/+ buffers/cache:                     122          3
Swap:          256        170         85
ewen@aes4:~> cat /proc/sys/vm/vfs_cache_pressure
200

Your help and commentary would be greatly appreciated. Thank you.

Sincerely,

Ewen Chan


Re: X is consuming ~100 GiB of RAM(!)

2017-12-06 Thread Vladimir Dergachev


Keep in mind that Xorg will show memory usage from mapping graphics 
memory, which could be large on your card.
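
(To see how much of the reported figure is ordinary RAM rather than mapped 
device memory, one option is to sum the proportional set size from smaps - 
a sketch, assuming a single Xorg process:)

    pid=$(pgrep -x Xorg)
    # Pss counts only pages actually resident in RAM,
    # so large device mappings are mostly excluded
    awk '/^Pss:/ {sum += $2} END {print sum " kB"}' "/proc/$pid/smaps"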


Also, are you using CUDA?

best

Vladimir Dergachev

On Wed, 6 Dec 2017, Hi-Angel wrote:


Oh, wow, this looks like a Xorg bug then. I'd recommend trying the latest 
Xorg; yours is 3 years old, so hopefully it's something already fixed. If 
that doesn't help, I'd recommend reporting a bug.
Although the ulimit workaround is worth a try, if this is really a memory 
leak, I doubt it'd help much. What I think would happen is that Xorg won't 
be able to allocate resources on apps' behalf, making the apps crash.

On 6 December 2017 at 00:49, Ewen Chan <chan.e...@gmail.com> wrote:
  Thank you, Hi-Angel.
I thought so too originally, but even when I launch the analysis via a 
terminal on the console (or via ssh from cygwin into the system), it still 
exhibits the same behaviour: despite there being no graphical component 
running beyond runlevel 5 itself (with GNOME running on X), issuing the 
ps aux command shows Xorg as the culprit for the high memory consumption.

In trying to perform the forensic analysis, I would expect that explanation 
to hold if there were a graphical component actually running, but there 
isn't one (beyond runlevel 5/GNOME-on-X itself).

X is supposed to release the memory back into the available pool, but it 
doesn't -- it just keeps increasing.

So even after the application has terminated, if X doesn't release the memory 
back, then ps aux will show X as being the process that's holding the memory.

Again, the idea behind the first link was to limit how much RAM X can use 
for caching/retention (using ulimit -m somehow and editing 
/etc/security/limits.conf), and I raised the question (on the SLES forum) 
of how I would know what to set the limit to. Too low, and it will crash 
often. Too high, and I am back to the problem I am experiencing now.


[screenshot: xrestop output]


(Sorry that the xrestop output above is a screenshot - I am twice remotely 
logged in, first to the home system and then again via the IPMI to the 
console.)

xrestop only shows about 22 MiB.

ps aux | grep Xorg is still showing about 100 GiB tied to the Xorg process.

Thanks.

Sincerely,
Ewen


On Tue, Dec 5, 2017 at 4:28 PM, Hi-Angel <hiangel...@gmail.com> wrote:
  The troubleshooting link you provided states that the high memory
  usage typically belongs to some other application. Sorry, I am just an
  occasional bystander here, and can't tell much of technical details,
  but I imagine it works like this(I hope someone will correct me on
  details): an app requests, for example, a glx object, and XServer
  allocates one. When the app is done with the object, it requests
  XServer to deallocate it. The point is: although this memory accounted
  on part of XServer process — it is actually owned by the app. The link
  also states that you can use `xrestop` application to see the owners
  and amounts of the memory.

  On 5 December 2017 at 21:14, Ewen Chan <chan.e...@gmail.com> wrote:
  > To Whom It May Concern:
  >
  > Hello everybody. My name is Ewen and I am new to this distribution list.
  >
  > So let me start with a little bit of background and the problem 
statement of
  > what I am seeing/encountering.
  >
  > I am running a SuperMicro Server 6027TR-HTRF
  > (https://www.supermicro.com/products/system/2u/6027/sys-6027tr-htrf.cfm)
  > (which uses a Matrox G200eW graphics chip and it has four half-width 
nodes,
  > each node has two processor, each processor is an Intel Xeon E5-2690 
(v1)
  > (8-core, 2.9 GHz stock, HTT disabled) running SuSE Linux Enterprise 
Server
  > 12 SP1 (SLES 12 SP1).
  >
  > Here are some of the outputs from the system:
  >
  > ewen@aes4:~> X -version
  >
  > X.Org X Server 1.15.2
  > Release Date: 2014-06-27
  > X Protocol Version 11, Revision 0
  > Build Operating System: openSUSE SUSE LINUX
  > Current Operating System: Linux aes4 3.12.49-11-default #1 SMP Wed Nov 
11
  > 20:52:43 UTC 2015 (8d714a0) x86_64
  > Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.12.49-11-default
  > root=UUID=fc4dcdb9-2468-422c-b29f-8da42fd7dec0
  > resume=/dev/disk/by-uuid/1d5d8a9c-218e-4b66-b094-f5154ab08434 
splash=silent
  > quit showopts crashkernel=123M,high crashkernel=72M,low
  > Build Date: 12 November 2015  01:23:55AM
  >
  > Current version of pixman: 0.32.6
  >          Before reporting problems, check http://wiki.x.org
  >          to make sure that you have the latest version.
  > ewen@aes4:~> uname -a
  > Linux aes4 3.12.49-11-default #1 SMP Wed Nov 11 20:52:43 UTC 2015 
(8d714a0)
  > x86_64 x86_64 x86_64 GNU/Linux
 

Re: X is consuming ~100 GiB of RAM(!)

2017-12-05 Thread Vladimir Dergachev



On Tue, 5 Dec 2017, Ewen Chan wrote:


Not really sure.
Someone suggested that I try Xvfb, but I didn't really know how I could use 
that without already running an X server. In trying to conduct my own due 
diligence research into the issue, I stumbled upon using ssh -Y to enable 
X11 forwarding via ssh, so I will have to see how that works next (unless 
there are other suggestions before that which I can also quickly test).


If your app relies on GL you don't want to use ssh -Y.

If it does not, then I recommend running it in Xvnc instead.
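
(A minimal sketch - the display number, geometry, and application name are 
illustrative; depending on which VNC server is installed, a vncserver 
wrapper script may set this up for you:)

    Xvnc :1 -geometry 1280x1024 -depth 24 &    # virtual X server, renders off-screen
    DISPLAY=:1 ./run_analysis &                # point the job at it
    vncviewer localhost:1                      # attach a viewer only when needed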

best

Vladimir Dergachev



Thanks.

On Tue, Dec 5, 2017 at 6:36 PM, Vladimir Dergachev <volo...@mindspring.com> 
wrote:

  Also, given the the high usage does not happen outside of gnome session, 
perhaps this is connected to compositing..

  best

  Vladimir Dergachev

  On Wed, 6 Dec 2017, Hi-Angel wrote:

The troubleshooting link you provided states that the high memory
usage typically belongs to some other application. Sorry, I am just 
an
occasional bystander here, and can't tell much of technical details,
but I imagine it works like this(I hope someone will correct me on
details): an app requests, for example, a glx object, and XServer
allocates one. When the app is done with the object, it requests
XServer to deallocate it. The point is: although this memory 
accounted
on part of XServer process — it is actually owned by the app. The 
link
also states that you can use `xrestop` application to see the owners
and amounts of the memory.

On 5 December 2017 at 21:14, Ewen Chan <chan.e...@gmail.com> wrote:
  To Whom It May Concern:

  Hello everybody. My name is Ewen and I am new to this 
distribution list.

  So let me start with a little bit of background and the 
problem statement of
  what I am seeing/encountering.

  I am running a SuperMicro Server 6027TR-HTRF
  
(https://www.supermicro.com/products/system/2u/6027/sys-6027tr-htrf.cfm)
  (which uses a Matrox G200eW graphics chip and it has four 
half-width nodes,
  each node has two processor, each processor is an Intel Xeon 
E5-2690 (v1)
  (8-core, 2.9 GHz stock, HTT disabled) running SuSE Linux 
Enterprise Server
  12 SP1 (SLES 12 SP1).

  Here are some of the outputs from the system:

  ewen@aes4:~> X -version

  X.Org X Server 1.15.2
  Release Date: 2014-06-27
  X Protocol Version 11, Revision 0
  Build Operating System: openSUSE SUSE LINUX
  Current Operating System: Linux aes4 3.12.49-11-default #1 
SMP Wed Nov 11
  20:52:43 UTC 2015 (8d714a0) x86_64
  Kernel command line: 
BOOT_IMAGE=/boot/vmlinuz-3.12.49-11-default
  root=UUID=fc4dcdb9-2468-422c-b29f-8da42fd7dec0
  resume=/dev/disk/by-uuid/1d5d8a9c-218e-4b66-b094-f5154ab08434 
splash=silent
  quit showopts crashkernel=123M,high crashkernel=72M,low
  Build Date: 12 November 2015  01:23:55AM

  Current version of pixman: 0.32.6
           Before reporting problems, check http://wiki.x.org
           to make sure that you have the latest version.
  ewen@aes4:~> uname -a
  Linux aes4 3.12.49-11-default #1 SMP Wed Nov 11 20:52:43 UTC 
2015 (8d714a0)
  x86_64 x86_64 x86_64 GNU/Linux

  The problem that I am having is that I am running a CAE 
analysis application
  and during the course of the run, X will eventually consume 
close to 100 GiB
  of RAM (out of 125 GiB installed)

  ewen@aes4:~> date
  Tue Dec 5 05:08:28 EST 2017
  ewen@aes4:~> ps aux | grep Xorg
  root 2245 7.7 79.0 271100160 104332316 tty7 Ssl+ Nov25 
1078:19 /usr/bin/Xorg
  :0 -background none -verbose -auth /run/gdm/aut
  h-for-gdm-9L7Ckz/database -seat seat0 -nolisten tcp vt7
  ewen 11769 0.0 0.0 10500 944 pts/1 R+ 05:08 0:00 grep 
--color=auto Xorg

  This does not occur when I perform the same analysis in 
runlevel 3 and when
  I switch back to runlevel 5 and I am using GNOME for the 
desktop
  environment, regardless of whether I initiate the analysis 
via a Terminal
  inside GNOME or I ssh into the system (via cygwin from a 
Windows box), the
  host server's X memory usage will continually increase as the 
analysis
  progresses.

  In trying to research this is