Re: [Gegl-developer] VIPS and GEGL performance and memory usage comparison
On Fri, Jan 29, 2016 at 5:41 AM, Sven Claussner wrote:
> On 28.1.2016 at 10:29 PM Daniel Rogers wrote:
>> I am confused. What technical reason exists to assume gegl cannot be as
>> fast as vips? Is it memory usage? Extra necessary calculations? Some way
>> in which parallelism is not as possible?
>
> you might have misunderstood me. The performance comparison only shows
> that VIPS outperforms GEGL, at least in this test.
> Technical reasons can be found here:
> http://www.vips.ecs.soton.ac.uk/index.php?title=Speed_and_Memory_Use
>
> In a mail John explained the differences to me:
> "Gegl is really targeting interactive applications, not batch
> processing, and it's doing a lot of work that no one else is doing,
> like conversion to scRGB, transparency, caching, and so on."

GEGL does single-precision 32-bit floating point processing for all operations, so it should not introduce the kind of quantization problems that 8bpc/16bpc pipelines introduce across multiple filters, at the expense of much higher memory bandwidth. The GEGL tile cache size (and swap backend) should be tuned when doing benchmarks.

If this benchmark is similar to one done years ago, VIPS was being tested with a hard-coded 8bpc 3x3 sharpening filter, while GEGL was rigged up to use a composite meta-operation unsharp mask built from gaussian blur and compositing filters in floating point. These factors are probably more of a cause of the slowdown than the startup time spent loading all the plug-in shared objects, which still takes more than a second per started GEGL process on my machine.

/pippin

___
gegl-developer-list mailing list
List address: gegl-developer-list@gnome.org
List membership: https://mail.gnome.org/mailman/listinfo/gegl-developer-list
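The quantization point above can be illustrated with a small stdlib-only Python sketch. This is not GEGL or VIPS code, and the gain factors and pixel value are made up for illustration: a pair of gain operations that should cancel out exactly leaves an 8bpc pipeline (which rounds and clamps after every step) with accumulated error, while a float pipeline recovers the original value.

```python
# Illustration only: why a 32-bit float pipeline avoids the rounding
# errors that an 8bpc pipeline accumulates across multiple filters.

def gain_u8(value, factor):
    """Apply a gain in an 8-bit pipeline: round and clamp after every op."""
    return max(0, min(255, round(value * factor)))

def run_pipeline(value, steps, op):
    """Alternate a 0.1x darken with its 10x inverse; ideally a no-op."""
    for _ in range(steps):
        value = op(value, 0.1)
        value = op(value, 10.0)
    return value

original = 14
u8_result = run_pipeline(original, 20, gain_u8)               # rounds each step
f32_result = run_pipeline(float(original), 20, lambda v, f: v * f)

print(u8_result, round(f32_result))  # → 10 14
```

The 8-bit path collapses 14 to 10 on the first darken/brighten pair and stays there, while the float path only picks up error on the order of 1e-14, invisible after a single final rounding.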
Re: [Gegl-developer] VIPS and GEGL performance and memory usage comparison
On Jan 29, 2016 6:20 AM, "Øyvind Kolås" wrote:
> GEGL is doing single precision 32bit floating point processing for all
> operations, thus should not introduce the type of quantization
> problems 8bpc/16bpc pipelines introduce for multiple filters - at the
> expense of much higher memory bandwidth - the GEGL tile cache size
> (and swap backend) should be tuned if doing benchmarks. If this
> benchmark is similar to one done years ago, VIPS was being tested with
> a hard-coded 8bpc 3x3 sharpening filter while GEGL was rigged up to
> use a composite meta operation pipeline based unsharp mask using
> gaussian blur and compositing filters in floating point. These factors
> are probably more a cause of slow-down than the startup time loading
> all the plug-in shared objects, which still takes more than a second
> on my machine per started GEGL process.

Ah, so this is interesting. I feel that rather than removing gegl from that list of benchmarks, it would be better to build more benchmarks, especially ones that call out all the advantages of gegl, e.g. minimal updates, deep-pipeline accuracy, etc.

It is worth calling out gegl's limitations and being honest about them, for three reasons. First, they are not fundamental to the design of gegl; just having a vips backend proves that. Second, a lot of the tricks vips does, gegl really can learn from, and having benchmarks that do not look so good is a great way to call out opportunities for improvement. And third, benchmarks help users make good decisions about whether gegl is a good fit for their needs. Transparency is one of the deeply valuable benefits of open source.

In terms of technical projects, I feel this benchmark and the discussion about it could inspire:

- Gegl could load plugins in a more demand-driven way, reducing startup costs.
- Gegl could have multiple pipelines optimized for different use cases.
- A fast 8-bit pipeline is great for previews or single-operation stacks, or when accuracy is not as important to the user.
- Better threading, including better I/O pipelining, is a great idea to lift from vips.
- Anyone can do dynamic compilation nowadays with LLVM. Imagine taking the gegl dynamic tree and compiling it into a single dynamically compiled LLVM function.

So if any of the above actually appear in patch sets, then we, at least partially, have this benchmark to thank for motivating them. I can see ways in which any one of the above projects could benefit GIMP as well.

And in terms of transparency and user benefit, the vips developers' benchmark also makes me think that there really should be a set of benchmarks that call out the concrete user benefits of gegl, e.g. higher accuracy, especially for deep pipelines. If these benefits exist, it must be possible to measure them and show how gegl truly beats everyone else in its areas of focus.

In a very real sense, vips is doing exactly what they should be. They are saying "if speed for a single-image, one-and-done operation is what you need, vips is your tool, and gegl really isn't." That sounds like an extremely fair statement to me right now, until some of gegl's limitations in this area are addressed. And long term, why not?

--
Daniel
Re: [Gegl-developer] VIPS and GEGL performance and memory usage comparison
Hello all, vips maintainer here, thank you for this interesting discussion.

On 29 January 2016 at 16:37, Daniel Rogers wrote:
> A fast 8 bit pipeline is great for previews or single operation stacks, or
> when accuracy is not as important for the user.

My feeling is that gegl is probably right to be float-only; the cost is surprisingly low on modern machines. On my laptop, for that benchmark in 8-bit I see:

$ time ./vips8.py tmp/x.tif tmp/x2.tif
real    0m0.504s
user    0m1.548s
sys     0m0.104s

If I add "cast(float)" just after the load and "cast(uchar)" just before the write, the whole thing runs as float and I see:

$ time ./vips8.py tmp/x.tif tmp/x2.tif
real    0m0.578s
user    0m1.768s
sys     0m0.148s

Plus, float-only makes an OpenCL path much simpler.

As you say, this tiny benchmark is very focused on batch performance, so fast startup/shutdown and lots of file IO. It's not what gegl is generally used for.

John
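John's float run has the same shape as the 8-bit one; only a cast at each end changes. Since pyvips may not be installed everywhere, here is a stdlib-only Python sketch of that structure (the sharpening kernel, the tiny gradient image, and the helper names are illustrative, not the actual benchmark code): the whole pipeline convolves in float, and only the final write-out step rounds and clamps back to uchar.

```python
# Stdlib-only sketch of the "cast(float) ... cast(uchar)" structure:
# all intermediate math is float; 8-bit rounding happens exactly once.

SHARPEN = [[-1, -1, -1],
           [-1,  9, -1],
           [-1, -1, -1]]  # a common 3x3 sharpening kernel; weights sum to 1

def conv3x3_float(image, kernel):
    """Convolve a 2D list in float; edge pixels are left unchanged for brevity."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

def cast_uchar(image):
    """The single final cast back to 8 bits: round and clamp to 0..255."""
    return [[max(0, min(255, round(v))) for v in row] for row in image]

# A 5x5 linear gradient as a stand-in for the loaded image.
src = [[float(10 * (x + y)) for x in range(5)] for y in range(5)]
dst = cast_uchar(conv3x3_float(src, SHARPEN))
print(dst[2][2])  # → 40 (sharpening preserves a linear gradient)
```

The design point mirrors John's measurement: keeping the pipeline in float costs only the wider intermediate values, while the quantization decision is deferred to one well-defined place.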
Re: [Gegl-developer] VIPS and GEGL performance and memory usage comparison
As someone new to the gegl development list, seeing the performance numbers in that benchmark I propose that adding an asterisk (*) by each gegl number would help the reader understand that something is different with this library. Then add a corresponding asterisk down by the statement, "GEGL is not really designed for batch-style processing -- it targets interactive applications, like paint programs." Since gegl is the only interactive library in the list, the asterisk works well enough, and separating it out into a different table is not necessary.

Best regards,
-Adam Bavier

On Thu, Jan 28, 2016 at 2:58 PM, Sven Claussner wrote:
> Hi,
>
> the developers of VIPS/libvips, a batch image-processing library,
> have a performance and memory usage comparison on their website,
> including a GEGL test. [1]
> Some days ago I told John Cupitt, the maintainer there, about some issues
> with the reported GEGL tests.
> In his answer John points out that GEGL is a bit odd in this
> comparison, because it is the only interactive image processing library
> there. He therefore suggests removing GEGL from the list.
>
> What do you GEGL developers think: does anybody need these results, so
> GEGL should stay in this comparison, or would it be OK if John
> removed it from the list?
>
> Greetings
>
> Sven
>
> [1] http://www.vips.ecs.soton.ac.uk/index.php?title=Speed_and_Memory_Use