I seem to find lots of excuses not to work on the user interface lately. This time it's all about improving performance.
You might say that there's no need any more, with the OpenGL rendering and such. But even then, a few bottlenecks remain.

1. The most obvious is in the GUI, actually. My suspicion falls on the font rendering, which always renders every glyph anew (pixel by pixel), applies the Gaussian blur effect (pixel by pixel) and only then caches the whole text. I would suggest a solution like in v0.3, where we cache individual glyphs as they are requested. The second time a glyph is needed, drawing it becomes a very cheap blitting operation. Of course we do need to keep separate glyphs for different colors and font sizes, but since individual glyphs are small, that shouldn't be a big issue. And we can save on memory consumption elsewhere, which takes us to ...

2. Images are loaded the first time a sprite needs to be rendered. The problem is that we load all images used by the sprite at that point, instead of just the one image that actually needs to be drawn. So to keep initial loading time down, it might be a good idea to further defer loading images until the individual frames of a sprite are actually shown. We probably need some of the meta-data beforehand, however, like the size an image is going to be and whether it has an alpha channel or is masked, so we may have to read at least the header of each frame anyway. Doing so would conserve memory, but wouldn't necessarily improve speed, as we need to access the image files anyway.

3. Finally, even with the accelerated rendering, we should really cut down on drawing sprites that are hidden by other objects. If you run the latest worldtest with the wastesedge map and press 'd', you'll see how much stuff is drawn that ends up invisible. The idea is that with optimization #2 in place, hidden sprites don't have to be loaded in the first place, since we're not going to draw them. The implementation would be pretty simple too, as the algorithm is the same that's used for cutting up the shadow.
We start with a list that contains the whole area of the tile we check. If we find that another tile overlaps (which we already check for), we subtract the other tile's area and store the remaining parts (if any) back into the list. If the list turns out empty, we know our tile is completely hidden and we can throw it out of the render queue.

That has other benefits too: right now we compare tiles against all objects in the queue, since we need to make sure a tile is below any other tile before we can draw it. But we might figure out it's obscured much earlier and can thus leave the loop earlier. And moreover, if objects drop out of the queue this way, we might have fewer problems with overlapping or intersecting objects, where we cannot decide what to render first.

Unfortunately, there's a tiny problem left. We know the size of an image and where it ends up on screen, but we don't know whether it might be partly translucent or transparent, in which case it would not truly hide other tiles beneath it. We can only rely on the image format for that: only if it has no alpha channel, no per-surface alpha and is not masked can we be sure that it hides anything below. So this will only work out if we properly save all opaque images in 24bit RGB format.

Since I have given the most detailed thought to no. 3, I might work on that first. But again, if there's interest in helping out, just let me know so the same work isn't done twice.

Kai

_______________________________________________
Adonthell-devel mailing list
Adonthell-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/adonthell-devel