Hi
when building eCos (configtool-081009) with the toolchain (unpacked from ecoscentric-gnutools-arm-eabi-20081213-sw.i386linux.tar.bz2) I get warnings like the one below:

arm-eabi-gcc -c -I/home/rwb/icfg0/lwip_ecos_install/include -I/home/rwb/ecos/packages/services/memalloc/common/current -I/home/rwb/ecos/packages/services/memalloc/common/current/src -I/home/rwb/ecos/packages/services/memalloc/common/current/tests -I. -I/home/rwb/ecos/packages/services/memalloc/common/current/src/ -finline-limit=7000 -mcpu=arm7tdmi -Wall -Wpointer-arith -Winline -Wundef -Woverloaded-virtual -g -O2 -ffunction-sections -fdata-sections -fno-rtti -fno-exceptions -Wp,-MD,src/sepmeta.tmp -o src/services_memalloc_common_sepmeta.o /home/rwb/ecos/packages/services/memalloc/common/current/src/sepmeta.cxx
/home/rwb/icfg0/lwip_ecos_install/include/cyg/kernel/sched.inl: In member function ‘cyg_uint8* Cyg_Mempool_Sepmeta::alloc(cyg_int32, cyg_tick_count)’:
/home/rwb/icfg0/lwip_ecos_install/include/cyg/kernel/sched.inl:85: warning: inlining failed in call to ‘static void Cyg_Scheduler::unlock()’: call is unlikely and code size would grow
/home/rwb/icfg0/lwip_ecos_install/include/cyg/memalloc/mempolt2.inl:204: warning: called from here
/home/rwb/icfg0/lwip_ecos_install/include/cyg/kernel/sched.inl: In member function ‘cyg_uint8* Cyg_Mempool_Sepmeta::alloc(cyg_int32)’:
/home/rwb/icfg0/lwip_ecos_install/include/cyg/kernel/sched.inl:85: warning: inlining failed in call to ‘static void Cyg_Scheduler::unlock()’: call is unlikely and code size would grow
/home/rwb/icfg0/lwip_ecos_install/include/cyg/memalloc/mempolt2.inl:130: warning: called from here

Applications built with this eCos do not, however, show any problems - at least not so far. Can I safely ignore these warnings, or should I increase the -finline-limit=7000 option to a higher value (and if so, to what)? Thanks for any help.
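For what it's worth, if the intent were just to silence these diagnostics rather than tune the limit, one option might be to drop -Winline (or add -Wno-inline) in the global compiler flags of the saved configuration. A sketch of what that could look like in ecos.ecc - this is an assumption on my part, and the flag list shown is only a placeholder for whatever your configuration actually contains:

```
# Hypothetical ecos.ecc fragment: appending -Wno-inline to the global
# CFLAGS suppresses GCC's "inlining failed" diagnostics. The other
# flags here are placeholders, not the real saved configuration.
cdl_option CYGBLD_GLOBAL_CFLAGS {
    user_value "-mcpu=arm7tdmi -Wall -g -O2 -finline-limit=7000 -Wno-inline"
};
```

After editing the saved configuration, the build tree would need to be regenerated (e.g. with ecosconfig tree, or by re-saving from the configtool) before rebuilding.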
   Robert

--
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss
