Hendrik,

Thank you for your interest in this year's GSoC.  We are just now
ramping up our efforts to get involved.  The original post by Ian was a
bit of a "Call to Arms" for the devs (and students) to measure the
general GSoC temperature.

You've obviously put a lot of thought into your project proposal.  It
looks very good.  I must say you have quite a head start compared to
many students, I would expect.  While we would love to hear about and
discuss the opportunities for you, the "official" time has not yet
arrived for submission of student proposals.

I think the best thing to do at this time might be to simply join us on
IRC to discuss your idea further and get some developer feedback on it.
Please join us on Freenode in both #e and #edevelop.  #e is a general E
community channel in which you will find many users as well as the
developers.  Topics of discussion vary greatly.  In #edevelop you will
find a more developer-oriented group, and the discussion is (typically)
more development-oriented.  Please join both and jump in.  Do note, if
you are unfamiliar with IRC, that while there may be plenty of people in
the channel, many may be idle or away from their keyboards.  Please be
patient when waiting for a response to a question.

Thanks
-ravenlock

On 02/02/2011 04:37, Hendrik Siedelmann wrote:
> 2011/1/28 Hendrik Siedelmann <hendrik.siedelm...@googlemail.com>:
>> Hello everybody.
>>
>> My name is Hendrik Siedelmann and I'm a student at the University of
>> Stuttgart in Germany, currently in my fourth year. I have worked with
>> the EFL in the past and always wanted to be more active, but lacked
>> time/motivation. I now want to change this and would like to
>> participate in GSoC. And I have an idea that has been going around in
>> my head for quite some time...
>>
>>
>> Image editing with the same efficiency as the rest of the EFL.
>>
>> There is no serious image editing in the EFL, only interactive stuff.
>> However, E17 is all about looks, performance, and efficiency, so I
>> think some scalable image filtering capabilities that have no
>> limitation on the image size and are usable from embedded devices to
>> high-end workstations are really something that's missing.
>>
>> There are several things wrong with how image editing is done in most
>> programs today.
>> As an example, take the following simple processing steps:
>> crop -> denoise -> adjust brightness/contrast -> save
>> Most image processing programs will do the processing this way:
>> the image is decompressed from disk into memory, then filtered in
>> multiple steps at full resolution, and finally saved.
>> Along the way the user will try different values for each filter
>> (probably in some preview dialog) and probably undo and redo, because,
>> for example, after increasing the brightness he sees that there is
>> noise remaining in the shadows, or for whatever other reason.
>> This approach has several problems:
>>
>> Memory usage:
>> Memory usage grows without bound, depending on the image size, the
>> number of undo steps, and the number of layers.
>> I have high-resolution panoramas where memory usage in GIMP exceeds
>> 2GB with only three layers and one layer mask, and the undo system
>> completely disabled.
>>
>> Delays:
>> Once the user decides on a filter and its parameters he will want to
>> continue, but he has to wait for the filter to process the whole
>> image, which might take some time.
>> Also, if he decides that he cropped wrongly at the beginning but the
>> other filters are fine, he has to undo everything, crop again, and
>> then repeat all the steps, with all the time it takes for the filters
>> to process the image again.
>> And for many filters the user will need to view the whole image (for
>> example, to see the effect of a change in brightness or contrast), so
>> there will be huge delays while the whole image is processed.
>>
>> Waste of CPU time:
>> Whenever a filter is applied, all the work spent rendering parts of
>> the image with the final filter parameters in some preview is thrown
>> away. Each undo likewise discards all the processing done earlier,
>> which therefore turns out to have been a waste of time.
>>
>>
>> There are two techniques that can be used to solve all these problems:
>>
>> Graph- and chunk-based on-demand rendering:
>> Every image operation is a node in a graph, connected to its input
>> image(s) and with one output.
>> There will always be one node which is connected to the global
>> (graph) output. Loading an image would mean creating an input node and
>> connecting its output to the global output.
>> If the user wants to do an operation, its input will be connected to
>> the currently selected output and its output will become the new
>> graph output.
>> All operations on the graph will be basically instantaneous, as the
>> nodes only need to be connected; zero operations on actual image
>> data are performed. The output will be created on demand, and only
>> for a specified area. Therefore there will be no preview in the
>> above-mentioned sense, because it's always possible to just move to an
>> area and let it render; the whole image only needs to be rendered when
>> saving.
>> This also gets rid of the overhead of undo, because if the user
>> decides to change a filter that is early in the filter chain he can
>> just do so, and the output can then be re-rendered accordingly.
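>>
>> To make this concrete, here is a minimal sketch in C of what such a
>> pull-based node graph could look like. None of this is existing EFL
>> API; all the names, the chunk size, and the toy "image" are made up
>> just for illustration:
>>
>> /*
>>  * Minimal sketch of the node-graph idea. Build: cc -o graph graph.c
>>  */
>> #include <stdio.h>
>>
>> #define CHUNK 4 /* tiny chunks so the demo stays readable */
>>
>> typedef struct _Node Node;
>>
>> /* A pull request: one CHUNK x CHUNK tile at a chunk origin. */
>> typedef struct { int x, y; } Request;
>>
>> struct _Node
>> {
>>    Node *input; /* upstream node, NULL for source/loader nodes */
>>    int   param; /* filter parameter (brightness delta, gray level) */
>>    void (*render)(Node *self, const Request *req, unsigned char *out);
>> };
>>
>> /* Pulling a chunk evaluates exactly the upstream area needed. */
>> static void
>> node_pull(Node *n, const Request *req, unsigned char *out)
>> {
>>    n->render(n, req, out);
>> }
>>
>> /* Source node: stands in for a loader with random chunk access. */
>> static void
>> source_render(Node *self, const Request *req, unsigned char *out)
>> {
>>    for (int i = 0; i < CHUNK * CHUNK; i++)
>>      out[i] = (unsigned char)(self->param + req->x + req->y);
>> }
>>
>> /* Brightness node: pulls its input chunk, then adjusts it in place.
>>  * Connecting this node costs nothing; pixels are touched on pull. */
>> static void
>> brightness_render(Node *self, const Request *req, unsigned char *out)
>> {
>>    node_pull(self->input, req, out);
>>    for (int i = 0; i < CHUNK * CHUNK; i++)
>>      {
>>         int v = out[i] + self->param;
>>         out[i] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
>>      }
>> }
>>
>> int
>> main(void)
>> {
>>    Node src    = { NULL, 100, source_render };
>>    Node bright = { &src,  20, brightness_render };
>>
>>    /* "bright" is now the graph output; render one chunk on demand. */
>>    unsigned char buf[CHUNK * CHUNK];
>>    Request req = { 0, 0 };
>>    node_pull(&bright, &req, buf);
>>
>>    printf("first pixel of chunk (0,0): %d\n", buf[0]); /* 120 */
>>    return 0;
>> }
>>
>> Connecting or reconnecting nodes here is just a pointer assignment,
>> which is why graph edits cost nothing until a chunk is actually
>> pulled.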
>> While this gets rid of most useless processing, it is only fast
>> if the area the user (or some frontend) requests is small. If one
>> wants to look at a big area (or the whole image) but at a lower scale
>> (because screens do not have such a high resolution), then there is
>> still the overhead of processing the whole image for an output of a
>> much smaller size (the actual output window size).
>> Therefore the second step is:
>>
>> Scaled filtering:
>> Instead of filtering at full scale, all filters need to be able to
>> work on a scaled-down version of the image and produce the same output
>> as if the filter had been applied to the full-scale image and the
>> result then scaled down (or a good approximation of it).
>> This is trivial for some operations (like brightness/contrast, ...),
>> easy for others (like blur), and difficult for some, like tonemapping
>> or denoising.
>> The benefit is that with scaled filtering the processing time for a
>> specific filter graph depends only on the actual pixel count of the
>> requested area. Low-scale requests for a big part of the image, but
>> with the same actual pixel count, will probably even be a bit faster
>> than a small full-resolution request, because area filters like
>> blur are faster at lower scales.
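>>
>> As a toy example of scaled filtering, here is a sketch of a box blur
>> whose radius shrinks with the requested scale, so that blurring a
>> scaled-down image approximates blurring at full size and then scaling
>> down. Again, every name is made up, and the halve-the-radius-per-level
>> rule is only the kind of approximation an area filter might use:
>>
>> #include <stdio.h>
>>
>> /* Effective radius at a scale level (0 = full, 1 = half, ...). */
>> static int
>> scaled_radius(int full_res_radius, int scale)
>> {
>>    int r = full_res_radius >> scale; /* halve per scale level */
>>    return r > 0 ? r : 1;             /* keep at least radius 1 */
>> }
>>
>> /* Blur one row; a real filter would run per chunk, on both axes. */
>> static void
>> box_blur_row(const unsigned char *in, unsigned char *out, int w,
>>              int full_res_radius, int scale)
>> {
>>    int r = scaled_radius(full_res_radius, scale);
>>    for (int x = 0; x < w; x++)
>>      {
>>         int sum = 0, n = 0;
>>         for (int dx = -r; dx <= r; dx++)
>>           {
>>              int xx = x + dx;
>>              if (xx < 0 || xx >= w) continue;
>>              sum += in[xx];
>>              n++;
>>           }
>>         out[x] = (unsigned char)(sum / n);
>>      }
>> }
>>
>> int
>> main(void)
>> {
>>    unsigned char row[8] = { 0, 0, 0, 255, 255, 0, 0, 0 };
>>    unsigned char out[8];
>>
>>    /* Radius 4 was requested at full size; at scale 2 (quarter
>>     * size) the same request only needs radius 1. */
>>    box_blur_row(row, out, 8, 4, 2);
>>    for (int i = 0; i < 8; i++) printf("%d ", out[i]);
>>    printf("\n");
>>    return 0;
>> }
>>
>> The point is that the work done depends on the pixels actually
>> requested, not on the full image size.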
>>
>> Of course this should also include some intelligent caching, depending
>> on the request and graph characteristics, and a bit of multithreading
>> (easily possible by processing different chunks at the same time).
>> Image loaders are also needed that allow random chunk access to all
>> supported image file formats, and, if possible, scaled-down access.
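>>
>> Since chunks are independent, the multithreading part could be as
>> simple as a small worker pool pulling chunk indices from a shared
>> counter; a cache would then key finished tiles on something like
>> (node, x, y, scale). A minimal sketch, again with made-up names
>> (build with cc -o pool pool.c -lpthread):
>>
>> #include <pthread.h>
>> #include <stdio.h>
>>
>> #define NCHUNKS  16
>> #define NTHREADS 4
>>
>> static int             next_chunk = 0;
>> static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
>>
>> /* Stand-in for pulling one chunk through the filter graph. */
>> static void
>> render_chunk(int idx)
>> {
>>    printf("rendered chunk %d\n", idx);
>> }
>>
>> static void *
>> worker(void *data)
>> {
>>    (void)data;
>>    for (;;)
>>      {
>>         pthread_mutex_lock(&queue_lock);
>>         int idx = next_chunk++;
>>         pthread_mutex_unlock(&queue_lock);
>>         if (idx >= NCHUNKS) break; /* queue drained */
>>         render_chunk(idx);
>>      }
>>    return NULL;
>> }
>>
>> int
>> main(void)
>> {
>>    pthread_t t[NTHREADS];
>>    for (int i = 0; i < NTHREADS; i++)
>>      pthread_create(&t[i], NULL, worker, NULL);
>>    for (int i = 0; i < NTHREADS; i++)
>>      pthread_join(t[i], NULL);
>>    return 0;
>> }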
>>
>> Also note that there are currently two libraries which implement parts
>> of what I propose above: GEGL and libvips.
>> However, GEGL focuses on the highest quality at all costs (it normally
>> works with float as the sample type!); last time I checked it was
>> really slow and not even multithreaded.
>> And while libvips is pretty fast, it includes neither scaling nor
>> advanced caching, so it is still slow for interactive use and more
>> suited to batch processing.
>>
>>
>> OK, long mail, and I haven't even gone into any implementation details
>> yet. What do you think?
>>
>> hendrik
>>
> 
> Guys?
> 


-- 
Regards,
Ravenlock

