Hi Harry,

2009/10/7 Harry van der Wolf <hvdw...@gmail.com>:
>
> 2009/10/7 Lukáš Jirkovský <l.jirkov...@gmail.com>
>>
>>
>>
>> I'm not a Mac user (although I find it really cool, but very
>> expensive), but I may have found a solution for the out-of-memory
>> problem. I had a discussion about memory and mentioned these OS
>> fragmentation problems, and got an interesting piece of advice: use
>> TLSF [1]. It is said that it doesn't suffer much from fragmentation.
>>
>> Maybe some of you want to take a look at it.
>>
>>
>> [1] http://rtportal.upv.es/rtmalloc/
>>
>> Lukas
>>
>>
>
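In case someone wants to actually experiment with it: below is a rough,
untested sketch of how the TLSF allocator from [1] could be tried in
isolation. The header name and the functions init_memory_pool(),
tlsf_malloc(), tlsf_free() and destroy_memory_pool() are what I remember
from its tlsf.h, so please double-check them against the real sources;
the 64 MB pool size is arbitrary. Routing all of enblend's new/delete
through it would of course be a bigger change.

// Sketch only -- check the names against the tlsf.h shipped at [1].
#include <cstdio>
#include "tlsf.h"

static char pool[64 * 1024 * 1024];   // 64 MB pool handed over to TLSF

int main()
{
    // After this call, tlsf_malloc()/tlsf_free() allocate from 'pool'
    // instead of going through the system malloc().
    init_memory_pool(sizeof(pool), pool);

    void* block = tlsf_malloc(1024 * 1024);   // 1 MB test allocation
    std::printf("tlsf_malloc returned %p\n", block);
    tlsf_free(block);

    destroy_memory_pool(pool);
    return 0;
}
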
> Hi Lukáš,
>
> (You are the one person I had hoped would react. I started doing tests
> myself yesterday evening, but they take a looooooooong time).

Thanks for your confidence in me.

>
> On 32-bit Windows, 32-bit Linux and 32-bit OSX, i.e. with the 32-bit
> versions of enblend, we regularly see enblend crash with a memory
> allocation error. We see this both on hugin-ptx and in the bug tracker
> for large projects. We sometimes see it for enfuse as well, when large
> projects need to be fused.
> Until now we blamed it on memory fragmentation, but maybe it's
> something else.

It is possible that it's something else. The question is: how to find it?

> George Row is one of the people who encounter these errors on OSX. I
> received a large set (2.5GB) from George in June, some time before my
> Mac crashed. At that time I did some tests, but those results are gone
> (no backup of the test results). Yesterday I took George's set from my
> "big disk" backup server and did some tests myself, trying to stitch a
> 12000x6000 (slightly bigger) panorama in hugin-2009.4.0-beta1.
>
> My question to you now is: you recently did some "memory leak" patching
> on the hugin trunk, using cppcheck, thereby finding some "things" in
> celeste. You reported this via
> <http://groups.google.com/group/hugin-ptx/browse_thread/thread/e2b5b09e4706fb80>.
> Can you do the same for the enblend trunk?
> If you want to do this and find the time for it "in the near future",
> please be so kind as to publish the results on the hugin-ptx list.
> But please: don't feel obliged. If you don't have the time or just do
> not want to do this, just say so.

The cppcheck run is still going, but enblend.cc has already been
checked. It has not detected any (interesting) problems yet. The major
weakness of these static analysis tools is that they are not perfect,
not even tools like Coverity. It may be interesting to try valgrind.

Note: I've just read your post saying that it doesn't detect anything interesting.
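
To illustrate what I mean by "not perfect", here is a contrived example
(nothing to do with the actual enblend sources) of a leak that only
happens on a data-dependent error path. A static checker can easily miss
it, while running the program under valgrind with --leak-check=full
reports the lost block directly:

// Contrived leak on a run-time error path; not enblend code.
#include <cstddef>
#include <stdexcept>

void blend_tile(std::size_t size, bool bad_input)
{
    unsigned char* buffer = new unsigned char[size];
    if (bad_input) {
        throw std::runtime_error("bad tile");   // 'buffer' leaks here
    }
    // ... normal processing would go here ...
    delete[] buffer;
}

int main()
{
    try {
        blend_tile(1 << 20, true);   // take the leaky path on purpose
    } catch (const std::runtime_error&) {
        // the swallowed error leaves 1 MB "definitely lost" for valgrind
    }
    return 0;
}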

>
> = If you don't want to do this, you can now stop reading.  =
> If you want to do this or are at least interested: please continue reading.
>
> Below you will find my tests on OSX. They were done on the 2.5GB
> bracketed "village hotel" project from George Row.
>
> Below the "tail" of the enblend error for a very recent 32bit "Christoph
> Spiel" build:
> enblend: info: loading next image: FoyleDays_M2_040007.tif 1/1
> enblend: info: creating blend mask: 1/3enblend(11221) malloc: ***
> vm_allocate(size=580620288) failed (error code=3)
> enblend(11221) malloc: *** error: can't allocate region
> enblend(11221) malloc: *** set a breakpoint in szone_error to debug
>
> enblend: out of memory
> enblend: St9bad_alloc
> gnumake: *** [FoyleDays_M2_04.tif] Error 1
>
>
> Below the "tail" of the error for the stable 32bit enblend 3.2 build (This
> to prove it's not a recent problem. It's already there in the 3.2 stable
> build).
> enblend(50447) malloc: *** mmap(size=2097152) failed (error code=12)
> *** error: can't allocate region
> *** set a breakpoint in malloc_error_break to debug
> enblend(50447) malloc: *** mmap(size=2097152) failed (error code=12)
> *** error: can't allocate region
> *** set a breakpoint in malloc_error_break to debug
> enblend(50447) malloc: *** mmap(size=2097152) failed (error code=12)
> *** error: can't allocate region
> *** set a breakpoint in malloc_error_break to debug
> enblend(50447) malloc: *** mmap(size=2097152) failed (error code=12)
> *** error: can't allocate region
> *** set a breakpoint in malloc_error_break to debug
>
> enblend: out of memory
> St9bad_alloc
> gnumake: *** [mamaloe_exposure_00.tif] Error 1

It would be nice to find out which malloc() fails. According to the
older discussion it seems to be a problem in CachedFileImage. IMO the
best way to test this would be to build enblend without the image cache
and create a HUGE swap space so that it can't run out of memory. If the
stitch then works, we should look for the problem in the image cache.
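
A cheap way to at least see how big the failing request is (just an
idea, nothing enblend-specific) would be to link in a replacement for
the global operator new that prints the requested size before throwing
std::bad_alloc. That would show whether the final failure is one huge
block or a small request in an already exhausted address space:

// Debugging sketch: report the size of the allocation that fails.
#include <cstdio>
#include <cstdlib>
#include <new>

void* operator new(std::size_t size) throw (std::bad_alloc)
{
    void* p = std::malloc(size);
    if (p == 0) {
        std::fprintf(stderr, "operator new failed for %lu bytes\n",
                     static_cast<unsigned long>(size));
        throw std::bad_alloc();
    }
    return p;
}

void* operator new[](std::size_t size) throw (std::bad_alloc)
{
    return operator new(size);
}

void operator delete(void* p) throw ()
{
    std::free(p);
}

void operator delete[](void* p) throw ()
{
    operator delete(p);
}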

I'd try it here, but it would take ages since I don't have a very
powerful PC. I'll try to limit the RAM on my PC (there is some switch
for the Linux kernel, but IIRC it's almost undocumented). If that
doesn't work, I can replace all my RAM modules with an old 256MB module
and disable/reduce swap space. Then the problem may occur earlier and
with smaller projects.

I'm a bit afraid that it doesn't depend on how much RAM (or rather
virtual memory) is available, but on the stitch size, i.e. that when the
stitch output is big enough it exposes some weird bug where memory is
allocated even though it may not be necessary.
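
To put a very rough number on that (back-of-the-envelope only, I'm not
claiming this is exactly what enblend allocates): a single full-size
12000 x 6000 buffer with four float channels is 12000 * 6000 * 4 * 4 B,
which is about 1.1 GiB, i.e. more than a quarter of a 32-bit address
space for one allocation, before any pyramid levels, masks or working
copies are counted.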

>
> Test report for a 64-bit enblend: after 6 hours, and much further into
> the process, my system crashed. I will rerun tonight.
> (I can start and monitor remotely, but I can't restart a crashed Mac
> remotely.)
> (Note: I build 32-bit binaries by default as they run on every Mac. A
> 64-bit version only runs on Leopard and Snow Leopard on 64-bit
> hardware, and 64-bit brings hardly any performance gain or other
> benefits, except when making gigapixel panos.)
> To me this does NOT mean that the 64-bit version behaves better. IMO it
> only shows that, thanks to the huge 64-bit address space, enblend can
> (might) just keep leaving its "memory junk" behind without filling the
> address space as quickly, and that fragmentation is less of an issue
> within that huge address space. In other words: it will only crash at a
> later stage, when trying to stitch (even bigger) projects. But that's
> an assumption which I can't prove right now.
>
>
> Hoi,
> Harry
>
>

I'll try changing the RAM and running valgrind tomorrow. I hope the
smaller RAM lets me run into the problem quite early.

Lukáš
