Re: Performance of Gimp vs. photoshop for large images
> I was used to adjusting photoshop to prevent it swapping, and tried to do
> that with GIMP. The above figures were based on my idea that X wanted 30
> megs and gimp seemed to want about 10 or so, leaving about 20 left for the
> image. All wrong, terribly wrong. I set the cache to 30 megs last night
> and it sped up tremendously, although it's still damned slow.

Exactly: Gimp's tile cache is the maximum RAM that the app will request for images and all related data (undos) -- not extra RAM, but the total.

> It now takes about 20-30 seconds or so to do a levels change, still about
> 1/4 of the speed of photoshop but bearable, so no need for me to move OS
> yet again ;-)

Uuummm... have you recompiled with optimizations, BTW?

> It is a small amount. Now I've gotten over the levels problem I've tried
> a few more things in gimp, and ended up halving the resolution of my image
> to try gimp out. I won't be able to do any serious work until I get a
> machine with more RAM at least. I've got a Seagate Cheetah on a fast/wide
> SCSI adapter but even that's not fast enough to stop the swapping getting
> on my nerves!

Maybe your disk layout is wrong. Setting up a good partitioning scheme can be a really complex thing. Place your swap partition in the middle of the disk, and the Gimp swap near it. There are docs that describe how to lay out a disk for better performance (I have not tested them, but I know the machines do not go slow either, so the tactics at least do not hurt). And remember: 30 MB is just your image, so after the first operation, Gimp always swaps.

GSR
Re: Performance of Gimp vs. photoshop for large images
On 7 Jun, Sven Neumann wrote:
> As profiling the Gimp shows, there's the need and room for
> optimization. Marc and Daniel did work on this last weekend
> and hopefully we will soon see those changes in CVS.

I'd be very glad if you could apply these patches. I'll send them to you soon.

-- Servus, Daniel
Re: Performance of Gimp vs. photoshop for large images
On Wed, 7 Jun 2000, pixel fairy wrote:
>> It was set to 8, or 10, or 15, I've tried them all, 8 seemed to be
>> faster as that stopped Gimp having to swap to VM or push other apps
>> out to disc.
>
> 8,10,15? where do you get these numbers? your tile cache should be a
> lot more than that.

I was used to adjusting photoshop to prevent it swapping, and tried to do that with GIMP. The above figures were based on my idea that X wanted 30 megs and gimp seemed to want about 10 or so, leaving about 20 left for the image. All wrong, terribly wrong. I set the cache to 30 megs last night and it sped up tremendously, although it's still damned slow. It now takes about 20-30 seconds or so to do a levels change, still about 1/4 of the speed of photoshop but bearable, so no need for me to move OS yet again ;-) I breathed a sigh of relief at that, I can tell you!

> go with the mac.

The problem is that it'll cost me money. The company is about to shell out for a laptop for me: it'll be a PIII500 with 192 megs of RAM and a fast disc. I'll be running linux on it, natch, so I'll be able to take it home and DHCP it into my network to load images and edit them much faster. This way I get a new home machine but don't have to pay for it ;-) Other than that I'd like the mac; they appeal to me in some ways, but it would only be used for photo editing as I don't rate them for anything else (or Windows for that matter). It seems a bit extravagant to spend some 200UKP or so on a machine when I'm surrounded by them..

> 64megs is a small amount of ram for images that heavy. i'm surprised
> you're getting decent performance from photoshop.

It is a small amount. Now I've gotten over the levels problem I've tried a few more things in gimp, and ended up halving the resolution of my image to try gimp out. I won't be able to do any serious work until I get a machine with more RAM at least.
I've got a Seagate Cheetah on a fast/wide SCSI adapter but even that's not fast enough to stop the swapping getting on my nerves!
Re: Performance of Gimp vs. photoshop for large images
Hi,

> at 4k x 3k for most things you do see the line coming down. it's slow
> for hue/sat/lightness adjustments, and fastest for curves. i may try
> this on windows, but i think some of the others on this list have
> already beaten that horse enough.

Someone already mentioned it, but I want to repeat it again for clarification: Gimp-1.1.23 has a bug in the preview code. The function that updates the thumbnails shown in the L&C-dialog is called for every pixel that is changed. Of course this slows things down considerably. This has been fixed in CVS.

As profiling the Gimp shows, there's the need and room for optimization. Marc and Daniel did work on this last weekend and hopefully we will soon see those changes in CVS.

Salut, Sven
Re: Performance of Gimp vs. photoshop for large images
--- [EMAIL PROTECTED] wrote:
> On Mon, 5 Jun 2000, pixel fairy wrote:
>
> Hmm, you can adjust contrast, colour curves and levels in more or less
> realtime? Can you tell me what hardware you have?

celeron with ati rage fury (32MB), only for its support of hardware gamma. my ancient PCI Millennium (not Millennium II) with 8 megs ran faster at 1600x1200 than this thing does at 1280x1024. 640 megs / big cheap IDE drive, which i have not tweaked with hdparm yet, but you're better off putting your gimp swap (and photoshop swap) on a fast scsi drive anyway. at 4k x 3k for most things you do see the line coming down. it's slow for hue/sat/lightness adjustments, and fastest for curves. i may try this on windows, but i think some of the others on this list have already beaten that horse enough.

> Also how do you get 32bpp images into gimp in the first place?

32bpp is not the 10 bits per color channel that you're talking about; it's RGBA (8 bits per channel, A is for alpha, which for gimp is opacity). of course, i now realize that this was usually only RGB.

> It was set to 8, or 10, or 15, I've tried them all, 8 seemed to be
> faster as that stopped Gimp having to swap to VM or push other apps
> out to disc.

8,10,15? where do you get these numbers? your tile cache should be a lot more than that.

> Cheers for that, I'll try it, I'm running XFree4 at the moment. It
> seems to get the DPI-rating right for my monitor which is a good start.

i was half joking (even though i use this method myself); it's not a good idea to screw up your calibration unless you can get it back easily.

> I am seriously considering this, one of the reasons I'm still using a
> PPro 200 with 64 megs is because it's fine for my usual linux needs, I
> don't want to upgrade it for photo work until I know whether I want to
> use linux or whether I jump to using a Mac with all the benefits that
> brings (on the photo editing side).

go with the mac.
from what i've gathered, linux runs fine on most macs and there's always Mac-on-Linux, which probably runs photoshop, meaning you can easily have both. if you try this, please tell me (and/or the group) how it works.

64 megs is a small amount of ram for images that heavy. i'm surprised you're getting decent performance from photoshop. i've found that as images get bigger, there are things that photoshop does faster, but then as they get really big (with respect to available resources) the gimp will be able to deal with images photoshop can't. but this is a moot point with you, since you like having the extra head room (color depth) while tweaking/fixing etc.
Re: Performance of Gimp vs. photoshop for large images (fwd)
Hyperborean wrote:
>
> Could it be that Photoshop does the previews only on the visible
> pixels?

I'm with George on this one! Preview mode should be visible pixels only.

--
Jon Winters http://www.obscurasite.com/jon/
"Everybody Loves The GIMP!" http://www.gimp.org/
Re: Performance of Gimp vs. photoshop for large images (fwd)
Andy Thomas wrote:
> What version of gimp are you using? The recent CVS versions had a real
> bug in the updating of previews when using the levels/curves stuff.

On the advice of Andy and others I disabled the layer preview images and things sped up quite a bit. (They also mentioned this is fixed in the current CVS release.)

On a PIII 450, 256MB PC133 RAM, Matrox G400 32MB machine, a levels adjustment on the image takes 4 seconds. 4 seconds is good but it's still twice as long as photoshop. :-\

--
Jon Winters http://www.obscurasite.com/jon/
"Everybody Loves The GIMP!" http://www.gimp.org/
Re: Performance of Gimp vs. photoshop for large images (fwd)
Scavenging the mail folder uncovered Jon Winters's letter:
>
> I'm forwarding this from gimp-user for anyone who is not on that list.
> There was a question regarding performance and configuration but I
> can't seem to get Gimp to outperform Photoshop.
>
> TIA for any configuration tweaks that may help me. (so far the only
> thing i've done is adjust the tile cache)

pretty strange. here a g3 notebook with gimp and the tile cache set to 128Mb outperforms a g4 with photoshop! but we are using extremely *BIG* psd files (>200mb, 200 layers), and the linux fs layer is much better than the mac's, right?

ciao,
federico

--
Federico Di Gregorio MIXAD LIVE System Programmer [EMAIL PROTECTED]
Debian GNU/Linux Developer & Italian Press Contact [EMAIL PROTECTED]
99.% still isn't 100% but sometimes suffice. -- Me
Re: Performance of Gimp vs. photoshop for large images (fwd)
Hi,

I'm forwarding this from gimp-user for anyone who is not on that list. There was a question regarding performance and configuration but I can't seem to get Gimp to outperform Photoshop.

TIA for any configuration tweaks that may help me. (So far the only thing I've done is adjust the tile cache.)

--
Jon Winters http://www.obscurasite.com/
"Everybody loves the GIMP!" http://www.gimp.org/

-- Forwarded message --
Date: Tue, 6 Jun 2000 09:52:55 -0500 (CDT)
From: Jon Winters <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: Re: Performance of Gimp vs. photoshop for large images

Hello all,

Yesterday I requested that our friend send me a copy of his image so that I could try the test on my computer at work. (PIII 400, 128MB, Matrox G400, WinNT)

I chose to test with levels because I adjust levels or curves on almost every image I edit.

In Photoshop (v5.0) the redraw after letting go of one of the levels sliders was less than two seconds. (default 'out of the box' photoshop configuration)

In the gimp I was surprised that the performance is indeed terrible. With the tile cache set to 72MB it took 40 seconds. With the tile cache set to 96MB it took 16 seconds. Moving the tile cache to 128MB (on this 128MB machine) knocked it down to 11 seconds.

Is there some other configuration that I am missing?

Years ago I worked as a photographer and our standard image size, in our studio using a Leaf Digital Camera Back, was around 100MB. This kind of performance hit would seriously hamper productivity and pretty much force the use of photoshop.

Tonight I'll run the same test on my computer at home. (Dual PIII 450, 256MB ram, 32MB G400, RedHat 6.2/Helix Gnome)

Thanks

--
Jon Winters http://www.obscurasite.com/
"Everybody loves the GIMP!" http://www.gimp.org/
Re: Performance
On 4 Feb, Raphael Quinet wrote:
> I wouldn't be too sure about that. On a system that I was previously
> administering (students' network at the university), I have seen some
> users that were using /var/tmp or /tmp to store their applications
> while they were logged in, and deleted the stuff afterwards.

In our university you only have a chance of compiling anything if you are using /tmp. It is also a very convenient place because everything else goes over NFS and thus is dog slow. You can even leave your programs there if you make sure that you get the same machine back when you need them, or that the /tmp-machine is at least running Linux... :) It also makes sense to cross-compile projects like GIMP on an Alpha machine with plenty of memory and very fast RAID clusters.

> progress? On third thought... If your disk quota is exceeded, you
> will not even get the core dump. On fourth thought... :-) Who in
> their right mind would use the Gimp on a system that has such strict
> constraints?

I do sometimes, but you are right: in general it's better to convince the sysadmin to install a program in a place where everyone can use it than to force everyone to do it for himself. Unfortunately you won't have a big chance of getting your favourite latest snapshot of some software installed :(

-- Servus, Daniel
Re: Performance
On Fri, 04 Feb 2000, Kelly Lynn Martin <[EMAIL PROTECTED]> wrote:
> On Fri, 4 Feb 2000 09:52:30 +0100 (MET), [EMAIL PROTECTED] (Raphael Quinet) said:
> >I disagree. This would only encourage some users to re-compile their
> >own version of the Gimp in a private directory in order to get around
> >the hardcoded limits.
>
> Frankly, I disagree. Systems where admins are likely to impose such
> restrictions are going to be ones where users don't have enough space
> to compile private copies of Gimp.

I wouldn't be too sure about that. On a system that I was previously administering (students' network at the university), I have seen some users that were using /var/tmp or /tmp to store their applications while they were logged in, and deleted the stuff afterwards. The quota was something like 5 Mb on the home directory and much larger in the temporary directories (for good reasons), so they took advantage of that. Some of them were re-compiling every time; some others had stored the compiled binaries on some external ftp servers and were downloading them into /tmp every time they needed them. This had some obvious impact on security... Some other users were hiding their applications in some system directories that had to be writable by all, such as /usr/local/lib/emacs/lock or /var/spool/mail...

Anyway, I would not be surprised if any limit that could be hardcoded in the Gimp were circumvented by some frustrated users who would re-compile their own version of the main executable and put it somewhere when they need it. And as Marc said in another message, it is not our job to enforce local policies (of course we should not make them un-enforceable either), so if the admin wants to restrict disk or memory usage, they should use other means than the Gimp: ulimit and quota are some examples.
> >Being a system administrator myself, I believe that an admin should
> >always suggest some limits (and maybe use some social engineering to
> >encourage users to respect these limits) but should avoid hard
> >limits.
>
> It depends on the kind of users you have and the hardware you're
> running. Imposing hard limits is sometimes the only way to deal with
> certain types of users.

Yes, it is sometimes very hard to convince some users. But here is an example: on one system with limited disk space (old DEC 3100 Ultrix workstations), we had set up some quotas and the disks were constantly full. All users were using the maximum space available under their quota, and they only started cleaning up when they had exceeded their quota. Then we tried an experiment: instead of decreasing the quotas, we decided to increase them significantly for everybody, but every week a "high score" list of disk usage was printed at the entrance of the terminal room, with the names of the top 50 users. This was not a perfect solution, but there was enough social pressure to make sure that nobody stayed at the top of the list for a long time. This solved several problems: most users started to clean up their home directory before entering the top 20, and those who had a valid reason to consume more disk space could easily explain it to the others. Those who could not explain why they consumed so much disk space had to make some room so that others could continue working. Well, that's only an example and it cannot be applied in all cases (e.g. the users have to know and trust each other to some extent, otherwise such a system will just generate suspicion or hatred between them). Ah well, it looks like I got carried away and this is off-topic for this list. Sorry...

> >On the other hand, if ulimits are used to limit the maximum file size
> >or CPU usage, there is not much that we could do about it. Same if
> >disk quotas are activated.
> >The Gimp can have some control over its memory usage, but many parts
> >of the code assume that the disk space is unlimited (or is not the
> >main constraint).
>
> Yup. It might be nice to catch SIGXCPU and try to do an orderly
> shutdown before the SIGKILL does ya' in, though. :)

As long as this is not in glib or libgimp; otherwise I know that some members of this list would complain about plug-ins and signal handlers :-)

On second thought... The default for SIGXCPU and SIGXFSZ is to generate a core dump. Maybe it would be better to get a core dump and be able to get whatever is left inside, instead of desperately trying to save the file and getting a SIGKILL while this is in progress? On third thought... If your disk quota is exceeded, you will not even get the core dump. On fourth thought... :-) Who in their right mind would use the Gimp on a system that has such strict constraints?

-Raphael
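[Editor's note: the catch-SIGXCPU idea discussed above would live in GIMP's C code, but the mechanism can be sketched in a few lines of Python. `emergency_save` is a hypothetical stand-in for whatever "orderly shutdown" work an application would actually do; this is an illustrative sketch, not GIMP code.]

```python
import signal

def emergency_save(signum, frame):
    # Hypothetical cleanup: flush unsaved tiles/undo data to disk
    # before the hard CPU limit turns into a SIGKILL.
    print("soft CPU limit hit; trying an orderly shutdown")

# SIGXCPU is delivered when the soft RLIMIT_CPU is exceeded; its
# default action terminates the process (with a core dump on many
# systems), so an app wanting an orderly shutdown must install its
# own handler. Once the hard limit is reached, SIGKILL cannot be
# caught -- hence the "before the SIGKILL does ya' in" caveat.
signal.signal(signal.SIGXCPU, emergency_save)
```

Note the thread's own caveats still apply: if the disk quota is already exceeded, there may be nowhere left to save anything.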
Re: Performance
On Fri, 4 Feb 2000 09:52:30 +0100 (MET), [EMAIL PROTECTED] (Raphael Quinet) said:
>I disagree. This would only encourage some users to re-compile their
>own version of the Gimp in a private directory in order to get around
>the hardcoded limits.

Frankly, I disagree. Systems where admins are likely to impose such restrictions are going to be ones where users don't have enough space to compile private copies of Gimp.

>Being a system administrator myself, I believe that an admin should
>always suggest some limits (and maybe use some social engineering to
>encourage users to respect these limits) but should avoid hard
>limits.

It depends on the kind of users you have and the hardware you're running. Imposing hard limits is sometimes the only way to deal with certain types of users.

>On the other hand, if ulimits are used to limit the maximum file size
>or CPU usage, there is not much that we could do about it. Same if
>disk quotas are activated. The Gimp can have some control over its
>memory usage, but many parts of the code assume that the disk space
>is unlimited (or is not the main constraint).

Yup. It might be nice to catch SIGXCPU and try to do an orderly shutdown before the SIGKILL does ya' in, though. :)

Kelly
Re: Performance
On Thu, 03 Feb 2000, Kelly Lynn Martin <[EMAIL PROTECTED]> wrote:
> On Thu, 3 Feb 2000 19:33:31 +0100 (CET), [EMAIL PROTECTED] said:
> > If you have a shared machine the best would be to let the
> >administrator choose how much memory each user will get, because
> >users'll ALWAYS try to get what they can even if it makes no
> >sense
>
> It might be a good idea to have a compile-time configuration option
> for maximum cache size,

I disagree. This would only encourage some users to re-compile their own version of the Gimp in a private directory in order to get around the hardcoded limits. Being a system administrator myself, I believe that an admin should always suggest some limits (and maybe use some social engineering to encourage users to respect these limits) but should avoid hard limits. Because most users do not like hard limits, and they start wasting their time and the admins' time trying to work around them.

> and it might also be a good idea for gimp to
> check its ulimits and adjust its cache size so as to avoid running
> over its data segment limit or maximum resident set size. Some
> admins use these as a way to prevent resource hogging.

That would be a good idea, indeed. If the limit on memory is rather low but there is still some room left on the disk, then it would be good to lower the tile-cache-size. This would ensure that the Gimp would not die prematurely because of malloc problems when it could have swapped some tiles to disk. On the other hand, if ulimits are used to limit the maximum file size or CPU usage, there is not much that we could do about it. Same if disk quotas are activated. The Gimp can have some control over its memory usage, but many parts of the code assume that the disk space is unlimited (or is not the main constraint).

-Raphael
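[Editor's note: the "check its ulimits and lower the tile-cache-size" idea above can be sketched with the POSIX `resource` API (shown in Python here; GIMP's real code would use `getrlimit(2)` from C). The helper name and the fifty-percent headroom factor are assumptions for illustration, not GIMP behaviour.]

```python
import resource

def clamp_tile_cache(requested_bytes):
    """Clamp a requested tile cache size to the process's soft
    data-segment limit (hypothetical helper, not GIMP code)."""
    soft, _hard = resource.getrlimit(resource.RLIMIT_DATA)
    if soft == resource.RLIM_INFINITY:
        # No admin-imposed limit: honour the user's setting.
        return requested_bytes
    # Leave half of the data segment for non-tile allocations, so
    # malloc does not fail before tiles can be swapped to disk.
    return min(requested_bytes, soft // 2)
```

For example, `clamp_tile_cache(32 * 1024 * 1024)` simply returns the requested 32 MB on a box with no data-segment limit, but shrinks it where an admin has set one.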
Re: Performance
On Thu, Feb 03, 2000 at 07:33:31PM +0100, [EMAIL PROTECTED] wrote:
> If you have a shared machine the best would be to let the
> administrator choose how much memory each user will get, because
> users'll ALWAYS try to get what they can even if it makes no
> sense

This is none of our business. If that is an issue, the admin has to take care of enforcing systemwide limits anyway.

--
Marc Lehmann [EMAIL PROTECTED] XX11-RIPE
The choice of a GNU generation
Re: Performance
> > > If you have a shared machine the best would be to let the
> >administrator choose how much memory each user will get because
> >users'll ALWAYS try to get what they can even if it makes no
> >sense
>
> It might be a good idea to have a compile-time configuration option
> for maximum cache size, and it might also be a good idea for gimp to
> check its ulimits and adjust its cache size so as to avoid running
> over its data segment limit or maximum resident set size. Some
> admins use these as a way to prevent resource hogging.

I thought about making it dependent on whether the sysadmin puts a default value into gimprc_user (which is the file that gets copied into the user's .gimp directory later). It should be trivial to parse this file during the user_installation step. If the tile-cache-size is set, use the value and skip the extra dialog. If we document this, even sysadmins that use binary packages have a chance to set a default value globally. However, I will certainly not find the time to hack this in the next weeks...

Salut, Sven
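[Editor's note: the "parse gimprc_user during user_installation" step Sven describes could look roughly like this. The one-setting-per-line S-expression form `(tile-cache-size 33554432)` matches 1.x-era gimprc, but treat the exact grammar as an assumption; this is a rough illustrative scan, not GIMP's parser.]

```python
def read_tile_cache_size(gimprc_text):
    """Return the tile-cache-size value (in bytes) from gimprc-style
    text, or None if it is not set. Assumes the Lisp-ish form
    '(tile-cache-size 33554432)' on a line of its own."""
    for line in gimprc_text.splitlines():
        line = line.strip()
        if line.startswith("(tile-cache-size"):
            # '(tile-cache-size 33554432)' -> ['(tile-cache-size', '33554432']
            value = line.rstrip(")").split()[1]
            return int(value)
    return None  # not set: fall through to the extra dialog
```

The installer would then skip the dialog when this returns a value, and show it (with the explanatory sentences Sven proposes) when it returns None.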
Re: Performance
On 03 February, 2000 - Elan Feingold sent me these 0.7K bytes:
> Instead of guessing at fixed amounts, why not:
>
> - Detect how much memory the user has.
> - Pick a reasonable default in terms of percentage (say, 50%).

Gee, that'll work well on our 4G multiuser box.. No answer is the true answer.. unfortunately..

/Tomas
--
Tomas Ögren, [EMAIL PROTECTED], http://www.ing.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,ing,acc}.umu.se
Re: Performance
On Thu, 3 Feb 2000 19:33:31 +0100 (CET), [EMAIL PROTECTED] said:
> If you have a shared machine the best would be to let the
>administrator choose how much memory each user will get, because
>users'll ALWAYS try to get what they can even if it makes no
>sense

It might be a good idea to have a compile-time configuration option for maximum cache size, and it might also be a good idea for gimp to check its ulimits and adjust its cache size so as to avoid running over its data segment limit or maximum resident set size. Some admins use these as a way to prevent resource hogging.

Kelly
Re: Performance
On 3 Feb, Raphael Quinet wrote:
> I think that asking the user is the best solution in any case, because
> you can hope that the user has some vague idea of how much memory is
> or will be available on the system he is using (shared or personal
> computer). This will not work in all cases (e.g. dumb users, or users
> who have a home directory mounted on a network of heterogeneous
> machines) but it will probably be better than any attempt at guessing
> what is best.

If you have a shared machine, the best would be to let the administrator choose how much memory each user will get, because users'll ALWAYS try to get what they can even if it makes no sense.

-- Servus, Daniel
Re: Performance
On 3 Feb, Arcterex wrote:
> I think that this was discussed some time back and the conclusion was
> that if you have 5 users on your system all using gimp and each using
> 50%... well, you see where that could be a problem.

In that case you could adjust the value manually. But bear in mind: if you have 5 users manipulating big pictures via remote X, you will have other performance problems than too little memory...

-- Servus, Daniel
Re: Performance
On Thu, 3 Feb 2000, Arcterex <[EMAIL PROTECTED]> wrote:
> I think that this was discussed some time back and the conclusion was
> that if you have 5 users on your system all using gimp and each using
> 50%... well, you see where that could be a problem.

I agree. Most of the time, I use the Gimp on a multi-user system that has plenty of memory, but it would still swap if each user set their tile-cache-size to 512 Mb or more. However... I think that the problems coming from having that value set too low are worse than the problems you get when you set it too high. And anyway, it only hurts on a multi-user system iff all users are currently trying to consume (more than) their share of memory.

But the "percentage of memory" approach has other problems that were discussed before. First, what do you define as "available memory"? Is that the total amount of RAM, the total amount of RAM minus what is taken by the "default applications" (whatever that means), or the total amount of RAM available when the Gimp is started? And measuring the available memory is not easy: this is trivial under Linux, but keep in mind that it must also work for Solaris, IRIX, HP-UX, AIX, Windows 95, Windows NT, OS/2 and many others...

> Maybe have a note pop up when first run asking the user how much
> memory they want to reserve for the Gimp, and note that this amount
> (percentage?) will be used for all Gimp sessions or something like
> that.

I think that asking the user is the best solution in any case, because you can hope that the user has some vague idea of how much memory is or will be available on the system he is using (shared or personal computer). This will not work in all cases (e.g. dumb users, or users who have a home directory mounted on a network of heterogenous machines) but it will probably be better than any attempt at guessing what is best.
In any case, there is something that we could do right now: add one or two lines of text in the preferences dialog box, explaining what the "tile cache size" means, or even better: a prominent message saying something like "please look at the help page for some tips about this very important option". I think that the majority of Gimp users do not know what this option means and how it can influence the performance of the application. I personally set it to 150 Mb a long time ago and I have always been satisfied with the results (that's on a Solaris box - on my home PC I set it to 50 Mb).

So I would suggest:
1 - Increase the default value to something more useful than 10 Mb. Maybe 32 Mb. I think that too much is better than too little.
2 - Explain what this option means (in the preferences dialog).
3 - Add a nice prompt during the installation or any other smart thing that we can do to draw the users' attention to this option.

The last step can be done after 1.2 if necessary.

-Raphael
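[Editor's note: Raphael's point that "measuring the available memory is not easy" across platforms can be illustrated with a best-effort POSIX sketch (Python here for brevity). The sysconf keys are optional and simply absent on several of the systems he lists, which is exactly why the function has to be able to give up.]

```python
import os

def physical_ram_bytes():
    """Best-effort total physical RAM via POSIX sysconf.
    Returns None where the keys are unavailable -- the portability
    problem described above; callers must fall back to asking the
    user or to a fixed default."""
    try:
        pages = os.sysconf("SC_PHYS_PAGES")
        page_size = os.sysconf("SC_PAGE_SIZE")
    except (ValueError, OSError, AttributeError):
        return None  # key not supported on this platform
    if pages <= 0 or page_size <= 0:
        return None  # key exists but the value is not provided
    return pages * page_size
```

Even where this works, it only answers the first of Raphael's questions (total RAM), not "RAM minus the default applications" or "RAM free at startup".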
Re: Performance
Sven Neumann said...
|
|Shouldn't we increase the default for the tile_cache_size? GIMP was shipped
|with the default of 10MB years ago. Memory is cheap nowadays and I guess we
|can expect the average user to have more RAM available. I'd suggest setting
|it to 32MB.

I'd say go for it. We could always add a warning to users with less memory (or even decrease it automagically in cases where we can detect this).

|... I propose to add a dialog to the
|user_installation step that lets the user specify the tile_cache_size and
|the swap directory. Of course such a dialog would also have a few short
|sentences explaining the importance of these settings together with hints
|for a good choice.

I think this is an *excellent* idea.

-Miles
Re: Performance
> > Instead of guessing at fixed amounts, why not:
> >
> > - Detect how much memory the user has.
> > - Pick a reasonable default in terms of percentage (say, 50%).
> > - Let the user change this default, also in terms of percentage.
> >
> > That way, the default the Gimp ships with will work with all systems,
> > and also on a given system if the user dumps more memory in, the Gimp
> > will automagically have better performance.
>
> I agree that magic numbers are foolish to use, but I do think that the
> method for choosing a default should be carefully planned. Your system
> sounds good, but the entire issue merits some discussion and possible
> alternatives.

I think that this was discussed some time back and the conclusion was that if you have 5 users on your system all using gimp and each using 50%... well, you see where that could be a problem.

Maybe have a note pop up when first run asking the user how much memory they want to reserve for the Gimp, and note that this amount (percentage?) will be used for all Gimp sessions or something like that.

My $0.02

--
Arcterex -=|=- [EMAIL PROTECTED] -=|=- http://arcterex.ufies.org
'... I was worried they were going to say "you don't have enough LSD in your system to do UNIX programming."' -- Paul Tomblin in a.s.r
Re: Performance
On Thu, Feb 03, 2000 at 10:35:33AM -0600, Elan Feingold wrote:
> > Shouldn't we increase the default for the tile_cache_size? GIMP
> > was shipped with the default of 10MB years ago. Memory is cheap
> > nowadays and I guess we can expect the average user to have
> > more RAM available. I'd suggest setting it to 32MB.
>
> Instead of guessing at fixed amounts, why not:
>
> - Detect how much memory the user has.
> - Pick a reasonable default in terms of percentage (say, 50%).
> - Let the user change this default, also in terms of percentage.
>
> That way, the default the Gimp ships with will work with all systems,
> and also on a given system if the user dumps more memory in, the Gimp
> will automagically have better performance.

I agree that magic numbers are foolish to use, but I do think that the method for choosing a default should be carefully planned. Your system sounds good, but the entire issue merits some discussion and possible alternatives.

Zach
--
Zachary Beane [EMAIL PROTECTED]
PGP mail welcome. http://www.xach.com/pgpkey.txt
RE: Performance
> Shouldn't we increase the default for the tile_cache_size? GIMP
> was shipped with the default of 10MB years ago. Memory is cheap
> nowadays and I guess we can expect the average user to have
> more RAM available. I'd suggest setting it to 32MB.

Instead of guessing at fixed amounts, why not:

- Detect how much memory the user has.
- Pick a reasonable default in terms of percentage (say, 50%).
- Let the user change this default, also in terms of percentage.

That way, the default the Gimp ships with will work with all systems, and also on a given system if the user dumps more memory in, the Gimp will automagically have better performance.

Just my $0.02,
-Elan
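[Editor's note: Elan's three bullets reduce to a one-liner. Sketched in Python; the 50% figure and the 10 MB floor come from the thread, while the helper name is hypothetical.]

```python
def default_tile_cache(total_ram_bytes, fraction=0.5):
    """Pick a tile cache default as a fraction of detected RAM
    (Elan's proposal), never dropping below the old 10 MB default."""
    floor = 10 * 1024 * 1024
    return max(int(total_ram_bytes * fraction), floor)
```

On a 128 MB box this yields a 64 MB cache; on a 16 MB box the 10 MB floor wins. As Arcterex and Tomas point out elsewhere in the thread, a percentage default still misbehaves on a shared multi-user machine, so it can only ever be a starting point for the user-visible setting.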
Re: Performance
Sven Neumann wrote:
> > >You should definitely increase your tile cache size from the default
> > >10mb. It should help performance.
>
> Shouldn't we increase the default for the tile_cache_size? GIMP was
> shipped with the default of 10MB years ago. Memory is cheap nowadays
> and I guess we can expect the average user to have more RAM available.
> I'd suggest setting it to 32MB.

Yes!

> I want to present an older idea once again since the discussion about
> the tile_cache_size is back alive. I propose to add a dialog to the
> user_installation step that lets the user specify the tile_cache_size
> and the swap directory. Of course such a dialog would also have a few
> short sentences explaining the importance of these settings together
> with hints for a good choice.

Yes(2)!

> > However I've noticed that GIMP 1.1.15 has a little bug when changing
> > this value with the Preferences dialog. It seems to be fixed at 10Mb;
> > the only way is changing gimprc manually. ;)
> > I discovered this bug this morning. I'm not sure at all it is a true
> > bug, I haven't checked the sources yet.
> > Anyone experienced the same problem?
>
> Here it works like a charm and IIRC it has been fixed a week ago.

Yes(3)! I noticed that some prefs widgets were connected to the wrong variables while browsing preferences_dialog.c. I hope that I've fixed all of them correctly.

bye,
--Mitch
Re: Performance
> > You should definitely increase your tile cache size from the default 10mb.
> > It should help performance.

Shouldn't we increase the default for the tile_cache_size? GIMP was shipped
with the default of 10MB years ago. Memory is cheap nowadays and I guess we
can expect the average user to have more RAM available. I'd suggest setting
it to 32MB.

I want to present an older idea once again since the discussion about the
tile_cache_size is back alive. I propose to add a dialog to the
user_installation step that lets the user specify the tile_cache_size and
the swap directory. Of course, such a dialog would also have a few short
sentences explaining the importance of these settings, together with hints
for a good choice.

> However I've noticed that GIMP 1.1.15 has a little bug when changing this
> value with the Preferences dialog. It seems to be fixed to 10Mb; the only
> way is changing gimprc manually. ;)
> I discovered this bug this morning. I'm not sure it is a true bug,
> I haven't checked the sources yet.
> Anyone experienced the same problem?

Here it works like a charm and IIRC it has been fixed a week ago.

Salut,
Sven
Re: Re: Performance
On Thu, 3 Feb 2000, Andrew Kieschnick wrote:

> > I use SuSE Linux 6.2. I have 128 MB RAM. I use the default values for
> > tile caching. I have an EIDE IBM 6,4 GB and 10 GB. I use a 128 MB
> > partition as swap on both.
>
> You should definitely increase your tile cache size from the default 10mb.
> It should help performance.

Yes, you're right. ;)

However I've noticed that GIMP 1.1.15 has a little bug when changing this
value with the Preferences dialog. It seems to be fixed to 10Mb; the only
way is changing gimprc manually. ;)
I discovered this bug this morning. I'm not sure it is a true bug,
I haven't checked the sources yet.
Anyone experienced the same problem?

Happy GIMPing!

Marco

--
//\/\ Marco (LM) Lamberto
e-mail: [EMAIL PROTECTED] (remove 'nOsPaMz-')
The Sunny Spot - http://www.geocities.com/Tokyo/1474/
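For reference, the manual workaround Marco mentions is a one-line change in the gimprc file. This is a sketch only: the file location and the accepted size syntax may differ between GIMP versions, so check the comments in your own gimprc.

```
# In ~/.gimp-1.1/gimprc (location and syntax may vary by GIMP version):
(tile-cache-size 32m)
```

GIMP reads this at startup, so restart it after editing the file.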
Re: Re: Performance
On Thu, 3 Feb 2000, Martin Weber wrote:

> I use SuSE Linux 6.2. I have 128 MB RAM. I use the default values for
> tile caching. I have an EIDE IBM 6,4 GB and 10 GB. I use a 128 MB
> partition as swap on both.

You should definitely increase your tile cache size from the default 10mb.
It should help performance.

later,
Andrew Kieschnick
Re: Re: Performance
I use SuSE Linux 6.2. I have 128 MB RAM. I use the default values for
tile caching. I have an EIDE IBM 6,4 GB and 10 GB. I use a 128 MB
partition as swap on both.

Martin

On Wed, Feb 02, 2000 at 08:13:56AM -0800, Martin Weber wrote:

> Here some performance tests on an Intel Celeron 333 with 128 MB:
> BMP file, grayscale (8-bit), 1x7500
>
> loading with ImageMagick 5.1.1:  20 min
> loading with GIMP 1.1.15:        10 min
> loading with Photopaint 8:       1 min 39 sec
> loading with Photoshop 5:        14 sec
>
> saving with GIMP 1.1.15:         2 min 25 sec
> saving with Photopaint 8:        23 sec
> saving with Photoshop 5:         11 sec

What do you have your tile cache set to? How much RAM is actually
available to the GIMP? ... more details would probably help.

My Dual PII/500 with 256MB of RAM, a 7200RPM IDE drive, and a 128MB tile
cache loads big TIFFs in seconds.

Tom
--
--Tom Rathborne    [EMAIL PROTECTED] -- http://www.aceldama.com/~tomr/
-- "I seem to be having tremendous difficulty with my life-style."
Re: Performance
On Wed, Feb 02, 2000 at 08:13:56AM -0800, Martin Weber wrote:

> Here some performance tests on an Intel Celeron 333 with 128 MB:
> BMP file, grayscale (8-bit), 1x7500
>
> loading with ImageMagick 5.1.1:  20 min
> loading with GIMP 1.1.15:        10 min
> loading with Photopaint 8:       1 min 39 sec
> loading with Photoshop 5:        14 sec
>
> saving with GIMP 1.1.15:         2 min 25 sec
> saving with Photopaint 8:        23 sec
> saving with Photoshop 5:         11 sec

What do you have your tile cache set to? How much RAM is actually
available to the GIMP? ... more details would probably help.

My Dual PII/500 with 256MB of RAM, a 7200RPM IDE drive, and a 128MB tile
cache loads big TIFFs in seconds.

Tom
--
--Tom Rathborne    [EMAIL PROTECTED] -- http://www.aceldama.com/~tomr/
-- "I seem to be having tremendous difficulty with my life-style."