Hello everyone,

  This is a very interesting problem, so I'm calling out to see if anyone
might have an idea on how to minimize an issue I am having on an embedded
imaging system.

  This device is performing an insane amount of image processing from two
firewire cameras.  We're talking on the order of ~120 fps.  Each frame is
processed, and various image masks are saved to the local hard drive for
later use and are served to the client side of the device via an embedded
web server within the analyzer software.

  What is happening is, once a minute, a new instance of the imagers is
launched.  These two imagers then connect to the firewire cameras, and go
to work.

  Over time, ~1-2 weeks, the imagers start to fail at an increasing rate,
as the kernel starts to kill them because the firewire stack cannot
allocate enough DMA buffers to communicate with the cameras.  Note, there
is plenty of RAM in the DMA32 zone; however, it is fragmented to the point
that the required contiguous 128k regions are not available.  The system
has tens of thousands of free 4 and 8k pages, though.
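
  In case it helps anyone picture it, the fragmentation shows up directly
in /proc/buddyinfo, which lists free blocks per order for each zone.  A
minimal sketch for counting how many free blocks of >= 128k each zone
still has (assuming the usual 4k base page size and the standard
buddyinfo layout):

#!/usr/bin/env python3
# Summarize /proc/buddyinfo: how many free blocks of >= 128k does each
# zone still have?  An order-N block is 4k * 2**N, so 128k is order 5.
PAGE_KB = 4

with open("/proc/buddyinfo") as f:
    for line in f:
        parts = line.split()
        # Lines look like: "Node 0, zone   DMA32   <order-0 count> ..."
        node, zone = parts[1].rstrip(","), parts[3]
        counts = [int(c) for c in parts[4:]]
        big = sum(c for order, c in enumerate(counts)
                  if PAGE_KB * (1 << order) >= 128)
        print("node %s zone %s: %d free blocks of >= 128k" % (node, zone, big))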

  A short term solution I have found is to simply request the kernel drop
all of its caches on the floor.  This, in turn, frees up a LOT of memory,
and subsequently, allocations can proceed without an issue, until it
happens again.
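
  For reference, the stopgap is just the vm.drop_caches knob; a minimal
sketch of the sequence (the sync beforehand is the usual advice so dirty
pages get written back before the caches are thrown away; needs root):

#!/usr/bin/env python3
# Flush dirty pages, then drop the page cache plus dentries/inodes
# (equivalent to: sync; echo 3 > /proc/sys/vm/drop_caches).  Needs root.
import os

os.sync()
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3\n")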

  I believe the issue is that the system is creating beeeelions of little
files, and the I/O caching system is using the otherwise-unused RAM, of
which there is plenty.  The caching, however, is breaking up the large
contiguous areas into 4 and 8k pieces, fragmenting the RAM significantly.

  Is there possibly a way to limit Linux's caching system so that it stays
out of a portion of the DMA32 zone entirely?  Or perhaps block off
portions of the DMA32 zone for use only by firewire and/or other DMA
transfers?
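
  A couple of knobs that look at least adjacent to this, though I have no
idea yet whether either actually helps with this workload, so treat this
strictly as a sketch: raising vm.min_free_kbytes so the kernel keeps a
bigger free reserve, and triggering compaction via
/proc/sys/vm/compact_memory on kernels built with CONFIG_COMPACTION:

#!/usr/bin/env python3
# Sketch only: two real /proc interfaces related to keeping contiguous
# memory around.  The 64 MiB value is an arbitrary example, not advice.
import os

# Keep a larger free reserve so allocation watermarks kick in sooner.
with open("/proc/sys/vm/min_free_kbytes", "w") as f:
    f.write("65536\n")

# Kernels >= 2.6.35 built with CONFIG_COMPACTION can be asked to compact
# all zones, coalescing scattered free pages into larger blocks.
compact = "/proc/sys/vm/compact_memory"
if os.path.exists(compact):
    with open(compact, "w") as f:
        f.write("1\n")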

  I'm kind of just describing the issue out loud here; I wasn't sure if
anyone had any good ideas to minimize it.

-- 
-- Thomas