> > Been there, done that. Annoying.
> What exactly is "annoying"?

See below.

> > The hardware has quite some state, and not all of it can be retrieved
> > easily. Thus, basically, IMHO we will have to face the fact that the
> > application will have to cooperate a little when switching away.
> If Amigas can do it, 

Amigas have a very limited range of hardware ... and yes, it is less
braindead. That makes it easier.

> if AROS can do it, then also kgi can do it. Even windows can do it, heck.

Windows cannot. Look at any semi-crashed application there (i.e. one in an
endless loop): if it does not redraw its window on its own anymore, the
window will not revert to its previous state when you move something over it.

> Yes, and you will have to save that stuff somewhere _anyway_, because how
> else are you going to restore it when the application's VT gets switched
> on again? 

You don't. You tell the app to redraw. That is what X does, that is what
almost everything else does. Try it.

LibGGI's X target nicely shows how this works in practice.

I once believed very strongly in the same idea you do, and I can
understand you very well.

I thought it over at length quite some time ago and fought hard to keep
my point, but I just couldn't.


As I said: I have implemented such background saving, and it was quite
an ugly hack. Almost _no_ application really needed it, and it still
did not solve the accel problem.

Let me detail on that:

1. Why is it an ugly hack?

Because switching away from an application doesn't happen nicely scheduled
between the lines of a C program. It happens in an interrupt handler
that originated from the keyboard. The application is very probably
running, and possibly interrupted within some drawing loop. Say it is just
doing a memset() inside the mmaped graphics RAM.

Now to make such a switch transparent to the application, I need to:

a) make sure I have enough RAM ready to put the video RAM into.
   To achieve that, I may have to page out something.
   Thus I have to schedule. I cannot do that from an interrupt handler.
b) Thus I set a flag that a switch is pending, file a reschedule request
   and wait for the scheduler to become active. 
   Now the swapping can start.
c) I wait for the RAM to be actually available, blocking the application
   in the meantime.
d) I store away the current video RAM into the backbuffer RAM.
e) I modify the mapping tables such that the video RAM gets transparently
   replaced by the backbuffer RAM.
f) I unblock the application.


2. Why doesn't it solve the accel problem?

While the method described in 1. solves the problem of the framebuffer
going offline, it does not solve the problem of the accel going offline,
because the accel cannot be replaced transparently. If I just map RAM to
where the accel regs just were, the app will happily write to it, but
nothing will happen, leaving the framebuffer in an inconsistent state:
the app will _think_ it has just written a box to memory, while in truth
it has not.

As long as a switchaway can happen anywhere in the application code, 
there is no proper way to handle that situation. The application may be in
the middle of sending a complex accel request which cannot be restarted
at that point, because the state in the accel engine cannot be saved
completely.

When a switchaway occurs, it must be ensured that the application
is in a consistent state WRT the accel. And IMHO only the application
itself can ensure this, if we do not want to give up fast-pathed accel
access.

It worked in scrdrv, because it did all accel access in kernelspace, 
which was thus by definition atomic.

3. Why does almost no application need it?

Because most applications (movieplayers, GUIs, demos, etc.) can redraw 
their screen within a split second.

Those that cannot (say a mandelbrot renderer) should rather try to 
explicitly backbuffer.

> So, you see, you need an offscreen buffer anyway.

No. You can just redraw.

> > Now what happens on switch away: We will then _NEED_ the memory.
> > Giving it only if available doesn't help. Either I can rely on being
> > backing-stored or not. Thus the mem would have to be paged free
> > if necessary ... can take quite some time when mem is tight and
> > disks are slow.
> It happens all the time when using X, 

It does not. You can turn on backing store in the X server, but that
is pretty optional and rarely really needed. It makes much sense over
slow lines, where redrawing is very expensive, but generally not locally.

> so why shouldn't it happen when using the console? If the user opens 10
> apps all using big screens at big depths, then he should be aware that he
> needs a lot of ram.

And what if we are already swapping? Do you want to wait for a few megs
to be swapped free, just to buffer the output of an application that will
not be relevant on switchback anyway?


Bottom line:

Yes, it should be possible for an application to save its screen contents
on switchaway. However, I do not recommend trying to outsmart the application
and do that behind its back. Tell the app that it should save its stuff
if it wants to keep it, and be done with it. If the app doesn't listen,
it gets put to sleep and may wake up with destroyed graphics output.


If you really, really want that behaviour: Code it. 

You will then quickly see why I am warning about doing it that way.


CU, Andy

-- 
= Andreas Beck                    |  Email :  <[EMAIL PROTECTED]>             =
