What will change? The cost of the multiplication won't matter, and you can move it
outside of the loop anyway. If there is nothing else to be done in the main loop, it
will just make a few function calls and get back to work.

If you allow the full frametime to be used, then whenever the user tries to do some
work the frametime will already be gone and frame skips will happen, even under low load!
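To make it concrete, here is a rough sketch of what I mean, based on the patch
quoted below (the variable names and the 0.7 fraction are only illustrative,
not tuned values):

   double start  = ecore_time_get();
   /* compute the budget once, outside the loop, and keep ~30% of the
    * frame for the rest of the application */
   double budget = 0.7 * ecore_animator_frametime_get();

   while (curl_multi_perform(_curlm, &still_running) == CURLM_CALL_MULTI_PERFORM)
     {
        if ((ecore_time_get() - start) > budget)
          break; /* hand the remaining time back to the main loop */
     }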

If someone is concerned about download speed, I strongly recommend using splice()
to avoid the data going from kernel to userspace and then back to the kernel.
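Roughly, the idea looks like this (Linux only; one end of each splice() call has
to be a pipe; the helper name and buffer size are just for illustration):

   #define _GNU_SOURCE
   #include <fcntl.h>
   #include <unistd.h>

   /* Copy everything from a socket to a file without the data ever entering
    * userspace: socket -> pipe -> file, all inside the kernel. */
   static int
   copy_socket_to_file(int sock_fd, int file_fd)
   {
      int p[2];

      if (pipe(p) < 0) return -1;
      for (;;)
        {
           ssize_t n = splice(sock_fd, NULL, p[1], NULL, 65536,
                              SPLICE_F_MOVE | SPLICE_F_MORE);
           if (n == 0) break;            /* EOF */
           if (n < 0) goto error;
           while (n > 0)                 /* drain the pipe into the file */
             {
                ssize_t w = splice(p[0], NULL, file_fd, NULL, n, SPLICE_F_MOVE);
                if (w <= 0) goto error;
                n -= w;
             }
        }
      close(p[0]); close(p[1]);
      return 0;
   error:
      close(p[0]); close(p[1]);
      return -1;
   }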

Things like servers using Ecore don't need a frametime at all, so they can set it
to a few seconds or even minutes :-)
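For example (just a sketch, and the 5 second value is arbitrary), a headless
Ecore process could simply do:

   #include <Ecore.h>

   int
   main(void)
   {
      ecore_init();
      /* no frames to render, so give the URL handling a very generous budget */
      ecore_animator_frametime_set(5.0);
      /* ... set up Ecore_Con_Url downloads here ... */
      ecore_main_loop_begin();
      ecore_shutdown();
      return 0;
   }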

On Thursday, September 22, 2011, Kim Yunhan <spb...@gmail.com> wrote:
> It sounds good. But I wonder whether download performance will be impaired. :'-(
> It should be tested much more.
>
> On Thu, Sep 22, 2011 at 8:13 PM, Gustavo Sverzut Barbieri <
> barbi...@profusion.mobi> wrote:
>
>> Better to use a percentage of the frametime, otherwise the user will try to do
>> some work and there will not be enough time left. Something like 0.7 *
>> ecore_animator_frametime_get()
>>
>> On Thursday, September 22, 2011, Cedric BAIL <cedric.b...@free.fr> wrote:
>> > On Thu, Sep 22, 2011 at 12:51 AM, Kim Yunhan <spb...@gmail.com> wrote:
>> >> Thank you!
>> >> Ecore_Con_Url already has a solution with _ecore_con_url_idler_handler.
>> >> So I just break out of the while loop if it takes too long.
>> >>
>> >> ==================================================================
>> >> --- src/lib/ecore_con/ecore_con_url.c (revision 63520)
>> >> +++ src/lib/ecore_con/ecore_con_url.c (working copy)
>> >> @@ -1357,15 +1357,21 @@
>> >>    int fd_max, fd;
>> >>    int flags, still_running;
>> >>    int completed_immediately = 0;
>> >> +   double start;
>> >>    CURLMcode ret;
>> >>
>> >>    _url_con_list = eina_list_append(_url_con_list, url_con);
>> >>
>> >>    url_con->active = EINA_TRUE;
>> >>    curl_multi_add_handle(_curlm, url_con->curl_easy);
>> >> -   /* This one can't be stopped, or the download never start. */
>> >> -   while (curl_multi_perform(_curlm, &still_running) == CURLM_CALL_MULTI_PERFORM) ;
>> >>
>> >> +   start = ecore_time_get();
>> >> +   while (curl_multi_perform(_curlm, &still_running) == CURLM_CALL_MULTI_PERFORM)
>> >> +     if ((ecore_time_get() - start) > ecore_animator_frametime_get())
>> >> +       {
>> >> +          break;
>> >> +       }
>> >> +
>> >>    completed_immediately = _ecore_con_url_process_completed_jobs(url_con);
>> >>
>> >>    if (!completed_immediately)
>> >>
>> >>
>> >> It works well for me.
>> >> How about this code?
>> >> Please review again.
>> >
>> > Sounds good to me. If nobody applies it, I will in a few hours.
>> >
>> > Thanks,
>> >
>> >> Thank you once again.
>> >>
>> >> On Thu, Sep 22, 2011 at 4:46 AM, Cedric BAIL <cedric.b...@free.fr> wrote:
>> >>
>> >>> On Wed, Sep 21, 2011 at 6:18 PM, Kim Yunhan <spb...@gmail.com> wrote:
>> >>> > Thank you for your advice.
>> >>> >
>> >>> > libcurl already supports asynchronous DNS lookup (including c-ares).
>> >>> > Ecore_Con_Url is integrated with libcurl.
>> >>> > But I think the code below blocks libcurl's asynchronous mechanism:
>> >>> > while (curl_multi_perform(_curlm, &still_running) == CURLM_CALL_MULTI_PERFORM) ;
>> >>> >
>> >>> > I want to keep the fix simple. :)
>> >>>
>> >>> Agreed, I haven't looked at that code for months or years, but why do
>> >>> we have a 'while' here? Shouldn't we just go back to the main loop
>> >>> and be magically called again? Did you try that solution? If that
>> >>> works, it would be a much better fix in my opinion.
>> >>>
>> >>> > On Thu, Sep 22, 2011 at 12:48 AM, Nicolas Aguirre <aguirre.nico...@gmail.com> wrote:
>> >>> >
>> >>> >> 2011/9/21 Kim Yunhan <spb...@gmail.com>:
>> >>> >> > Hello!
>> >>> >> >
>> >>> >> > elm_map uses Ecore Con with CURL.
>> >
>

-- 
Gustavo Sverzut Barbieri
http://profusion.mobi embedded systems
--------------------------------------
MSN: barbi...@gmail.com
Skype: gsbarbieri
Mobile: +55 (19) 9225-2202
