On Monday 27 March 2006 14:02, David Kuehling wrote:
> >>>>> "Marcos" == Marcos Daniel Marado Torres <[EMAIL PROTECTED]>
> >>>>> writes:
> >>
> >> From http://gnunet.org/libextractor/documentation.php?xlang=English :
> >
> > If the need for 200 MB of memory to compile libextractor seems
> > mysterious, the answer lies in these plugins. In order to be able to
> > perform a fast dictionary search, a bloomfilter is created that allows
> > fast probabilistic matching; gcc finds the resulting datastructure a
> > bit hard to swallow.
>
> But I just wondered why the data had to be passed through GCC? The
> binary data could be read from an external file. Or if they really need
> to be linked into the resulting shared library, why not put them in
> there _directly_, i.e. with objcopy:
>
>   objcopy --input-target=binary --binary-architecture=elf32-i386 \
>     bloomfilter.dat library.o
>
> or something...
Well, that may work for GNU systems (with a working objcopy), but it is
not portable. In my opinion, gcc should just be improved to handle large
in-line arrays. External files are also not a good solution: they would
be slower and would cause additional problems for people building
packages (more files to handle and install).

Christian

_______________________________________________
Help-gnunet mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/help-gnunet
