On Thursday 11 November 2004 20:41, Sven Neumann wrote:
Hi,
Dov Kruger [EMAIL PROTECTED] writes:
I noticed that gimp is very slow for large images compared with
Photoshop. We were recently processing some 500 MB images, and on
a fast machine with 2 GB of RAM, gimp is crawling along, while on a
slower machine with only 512 MB, Photoshop is considerably
faster. I attributed it to a massive amount of work in
Photoshop, using SSE instructions, etc., but then noticed that the
default viewer in Red Hat lets me load images far faster even
than Adobe, and zoom in and out with the mouse wheel in realtime.
Granted, because you are editing the image, not just displaying
it, there has to be some slowdown, but I wondered if there is any
way I can tweak gimp, or whether I somehow have it massively
de-optimized. When I first set up gimp-2.0, I tried both 128 and
512 MB tile cache sizes. 512 seems to work a lot better, but it's
still pretty bad. Any idea where Adobe's speed advantage
comes from?
If you are processing large images and have 2GB available, why do
you cripple GIMP by limiting it to only 512 MB of tile cache size?
This is no news to us. GIMP is not yet as fast as it could one
day be for large images.
I've put some thought into it these days (just thinking, no code), and
one idea came up. My intent in writing this is that everybody
keep it in mind when making the transition to GEGL, which will be
a favorable time to implement it:
All images in the GIMP could be represented twice internally: one
copy with the real image data, and a second layer stack
representing just what is being seen on the screen. All work that
should feel realtime would be done first on the screen
representation, and then processed in the background on the actual
layer stack.
Overall, this would make the tools faster to use, including the
paint and color-correction ones.
It could also clean up some situations like the JPEG save preview
layer and the darkening seen in the current crop tool, as these
things would live only in the display shadow, not in the real
image data.
In GEGL terms, that means two graphs for every image.
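To make the idea concrete, here is a minimal sketch of the two-representation model in Python. The class name `ProxiedImage` and its methods are hypothetical illustrations, not anything in GIMP or GEGL: every operation is applied immediately to a small screen-sized proxy, while the same operation is queued for background processing on the full-size data.

```python
# Hypothetical sketch (not GIMP/GEGL code): edits hit the small
# screen proxy synchronously and the full image asynchronously.
import queue
import threading

class ProxiedImage:
    def __init__(self, full_pixels, proxy_pixels):
        self.full = full_pixels    # the real layer stack (large)
        self.proxy = proxy_pixels  # what is shown on screen (small)
        self._jobs = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def apply(self, op):
        # 1. realtime: update the screen representation right away
        self.proxy = [op(p) for p in self.proxy]
        # 2. deferred: crunch the full-size data in the background
        self._jobs.put(op)

    def _worker(self):
        while True:
            op = self._jobs.get()
            self.full = [op(p) for p in self.full]
            self._jobs.task_done()

    def flush(self):
        # block until the real image has caught up with the display
        self._jobs.join()
```

The point of the sketch is only the ordering: the user sees the result of `apply()` as soon as the proxy is updated, and consistency with the real data is restored later, which is what "two graphs for every image" would buy in GEGL terms.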
Of course, none of this is immediate; I am thinking of a discussion
that should mature over the next, say, 3 or 4 months, if GEGL is to
go into the next release.
While the first impression may be that this would take up more
memory and resources than a single representation of the
image, I'd like to put the following numbers into consideration:
A typical photo I open for viewing/correcting is 2048x1576 (my
camera's resolution). In raw memory, with no undo tiles
considered, that is more than 9 megabytes for a single layer,
every byte of which must be crunched each time I make a small
adjustment in the Curves tool.
On the other hand, I view this same image in a window that is about
800x600 - roughly 1.5 MB in size.
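These figures can be checked with a quick back-of-the-envelope calculation, assuming 8-bit RGB at 3 bytes per pixel with no alpha channel and no undo tiles (the helper name `layer_bytes` is just for illustration):

```python
# Rough memory footprint of a single flat layer, assuming
# 8-bit RGB (3 bytes/pixel), no alpha, no undo tiles.
def layer_bytes(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel

full = layer_bytes(2048, 1576)  # camera-resolution layer
view = layer_bytes(800, 600)    # screen-sized window

print(full)  # 9682944 bytes, i.e. a bit over 9 MB
print(view)  # 1440000 bytes, i.e. roughly 1.5 MB
```

So the screen representation is about six times smaller than the real one, which is where the interactive speedup would come from.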
Of course, care must be taken that this doesn't slow everything
down.
I know this is no news, it is hard to do, and all that. But it is
nonetheless a model we have to keep in mind, because, at this point,
it seems no less important than implementing tiles once was.
OK, I may also have gotten it all backwards, and there may be a way
of optimizing the current model without two image graphs at all. :-)
But it is still a discussion that should mature in the foreseeable
future.
Regards,
Joao
___
Gimp-developer mailing list
[EMAIL PROTECTED]
http://lists.xcf.berkeley.edu/mailman/listinfo/gimp-developer