Boston Linux Meeting Wednesday, October 15, 2014 - Linux runs the spectrum from High Performance to Power Efficient Computing

2014-10-10 Thread Jerry Feldman
When: October 15, 2014, 7PM (6:30PM for Q&A)
Topic: Linux runs the spectrum from High Performance to Power Efficient
Computing

Moderators: Brian DeLacey and Kurt Keville

Location: MIT Building E51, Room 325

Please note that Wadsworth Street is under construction. You can enter
Ames St from Memorial Drive, and take a right onto Amherst St.

Summary

Dart on Debian on BeagleBoneBlack, plus a Linux HPC update

Abstract

Debian + Dart + BeagleBoneBlack

Brian DeLacey introduces the recent packaging of the Dart language
for Debian on the BeagleBoneBlack. We'll look at Wheezy today and Jessie
tomorrow. Brian's demos will showcase a webserver and a variety of
sensor circuits and applications running Dart on the Debian-friendly
BeagleBoneBlack. We'll discuss how this simple setup ties into the core
ideas of The Physical Web project, recently announced by Google.

We'll revisit the crazy code churn rant heard round the community,
and why Device Tree architecture was a huge revolution for ARM
platforms. We'll also review the evolution of GPIO in the Linux kernel
for software controlled digital signals. Bring your multimeter for a
closer look at power efficient computing.
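As a taste of that sysfs side of things, here is a sketch against the legacy /sys/class/gpio interface. The pin number is a made-up example, and the base path is a parameter only so the logic can be tried without hardware; on real hardware the writes need root and a board-appropriate pin number.

```python
import os

def gpio_setup(pin, base="/sys/class/gpio"):
    """Export a GPIO pin via the legacy sysfs interface and configure it
    as an output. Returns the pin's sysfs directory."""
    exported = os.path.join(base, "gpio%d" % pin)
    if not os.path.isdir(exported):
        # Asking the kernel to expose this pin creates the gpioN directory.
        with open(os.path.join(base, "export"), "w") as f:
            f.write(str(pin))
    with open(os.path.join(exported, "direction"), "w") as f:
        f.write("out")
    return exported

def gpio_write(exported, value):
    """Drive the pin high (1) or low (0)."""
    with open(os.path.join(exported, "value"), "w") as f:
        f.write(str(value))
```

On a BeagleBoneBlack the same writes work against the real /sys/class/gpio, which is exactly the software-controlled digital signal path the talk will trace through the kernel's GPIO evolution.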

Also, Kurt Keville updates us on the current state of Linux High
Performance Computing (HPC).

For further information and directions please consult the BLU Web site
http://www.blu.org
Please note that there is usually plenty of free parking in the E-51
parking lot at 2 Amherst St, or directly on Amherst St.

After the meeting we will adjourn to the official after meeting meeting
location at The Cambridge Brewing Company
http://www.cambridgebrewingcompany.com/

-- 
Jerry Feldman g...@blu.org
Boston Linux and Unix
PGP key id:3BC1EB90
PGP Key fingerprint: 49E2 C52A FC5A A31F 8D66  C0AF 7CEA 30FC 3BC1 EB90
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Perl Tech meeting Tues Oct 14th - Shell-Shocker CGI and Perl DoS bugs

2014-10-10 Thread Bill Ricker
Boston Perl Mongers' 2nd Tuesday comes as late as possible this month,
so it falls the day before BLU's 3rd Wednesday.
   (I hear some operating system also issues patches that day; it doesn't
affect me.)

TOPIC: Shell-Shocker CGI and Perl DoS bugs
DATE: Tuesday, October 14
TIME: 7:00 – 10:00 PM
ROOM: E51-376
SPEAKER: Bill Ricker (lead)

We will examine the implications of the ShellShock Bash bug for Perl
-- it reaches much wider than Bash CGI or even Perl CGI scripts --
and also a recently discovered and fixed, but comparably long-lurking,
Perl DoS bug in a core module (the Data::Dumper stack smash,
CVE-2014-4330), and how it may be remotely triggerable.

The good news is that ShellShocker was slightly over-hyped; unlike
Heartbleed, this one does NOT generally affect the Internet of Things --
your internet-enabled toaster is likely immune. But Windows and Mac are
not entirely immune to this Linux bug.
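For anyone who wants to check a machine before the meeting, the classic environment-variable test can be wrapped in a short script. This is a sketch: it assumes a `bash` binary on the PATH and reports harmlessly either way.

```python
import shutil
import subprocess

def shellshock_check(bash="bash"):
    """Return 'vulnerable', 'patched', or 'no bash' for the classic
    CVE-2014-6271 test: a function definition smuggled in through the
    environment, with trailing code a vulnerable bash executes on startup."""
    if shutil.which(bash) is None:
        return "no bash"
    # The payload after the function body should never run; on an
    # unpatched bash it does, because the env var import executes it.
    env = {"x": "() { :;}; echo SHELLSHOCKED"}
    out = subprocess.run([bash, "-c", ":"], env=env,
                         capture_output=True, text=True).stdout
    return "vulnerable" if "SHELLSHOCKED" in out else "patched"

if __name__ == "__main__":
    print(shellshock_check())
```

The Perl angle is that `system()`, backticks, and `open "|..."` all hand the parent's environment to a shell, which is why the exposure is wider than CGI scripts written in Bash.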

[ Anyone who has examined either bug or its implications is welcome to
contribute or co-present - contacting me off-list is recommended,
although in our interactive style I'll cheerfully include ambush
collaborators. ]

Boilerplate details

Tech Meetings are held on the 2nd Tuesday of every month at MIT
building E51, Sloan School Tang Center [not the other Tang building!]
nearer to Kendall Sq than Mass Ave.
 (directions http://boston-pm.wikispaces.com/MIT+Directions).
Talk begins at 7:30.
Refreshments in the hallway prior.
RSVP for count encouraged but not required, to bill.n1...@gmail.com or
Boston-PM list, by 4pm Tuesday.


(NOTE: we're staying in the wider room 376 where we were in summer,
after being in squarish 372 for winter/spring.)

website - boston.pm.org



Linux Memory Fragmentation, ideas..

2014-10-10 Thread Thomas Charron
  Hello everyone,

  This is a very interesting problem, so I'm calling out to see if anyone
might have ideas on how to minimize an issue I'm having on an embedded
imaging system.

  This device is performing an insane amount of image processing from two
firewire cameras.  We're talking on the order of ~120 fps.  Each frame
is processed, and various image masks are saved to the local hard
drive for later use and provided to the client side of the device via
an embedded web server within the analyzer software.

  What is happening is, once a minute, a new instance of the imagers is
launched.  These two imagers then connect to the firewire cameras, and go
to work.

  Over time, ~1-2 weeks, the imagers start to fail at an increasing
rate, as the kernel starts to kill them when the firewire stack cannot
allocate enough DMA buffers to communicate with the cameras.  Note, there
is plenty of RAM in DMA32; however, it is fragmented to the point that the
required contiguous 128k areas are not available.  The system has tens
of thousands of free 4k and 8k pages, however.
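For anyone following along, this kind of fragmentation can be read straight out of /proc/buddyinfo, which lists per-zone counts of free blocks by order (order n = 2**n contiguous 4k pages, so a contiguous 128k allocation needs order 5 or higher). A small parser, as a sketch:

```python
def parse_buddyinfo(text):
    """Parse /proc/buddyinfo text into {(node, zone): [counts by order]}."""
    zones = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) < 5 or parts[0] != "Node":
            continue
        node = parts[1].rstrip(",")
        zone = parts[3]
        zones[(node, zone)] = [int(n) for n in parts[4:]]
    return zones

def blocks_at_or_above(counts, order):
    """Free blocks large enough to satisfy an allocation of this order."""
    return sum(counts[order:])
```

On the affected box, feeding it open('/proc/buddyinfo').read() shows at a glance whether the DMA32 zone still has any order-5 (128k) blocks left, or whether everything has been split down to low orders.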

  A short term solution I have found is to simply request the kernel to
drop all of its caches on the floor.  This, in turn, frees up a LOT of
memory, and subsequently allocations work without an issue, until it
occurs again.
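For reference, that workaround is driven through /proc/sys/vm/drop_caches. A sketch; the path is a parameter here only so the helper can be exercised without root, and writing the real file requires root.

```python
import os

def drop_caches(level=3, path="/proc/sys/vm/drop_caches"):
    """Ask the kernel to drop clean caches: 1 = pagecache,
    2 = dentries and inodes, 3 = both. Only clean pages are dropped,
    so sync dirty data first."""
    os.sync()
    with open(path, "w") as f:
        f.write(str(level))
```

`echo 3 > /proc/sys/vm/drop_caches` from a root shell is the equivalent one-liner; it is a blunt instrument, and the cache (and the fragmentation) rebuilds afterward.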

  I believe the issue is that the system is creating billions of little
files, and the I/O caching system is using otherwise unused RAM, of which
there is plenty.  The caching, however, is breaking up large page areas
into 4 and 8k areas, fragmenting the RAM significantly.

  Is there possibly a way to limit Linux's caching system so that it
avoids a portion of the DMA32 zone entirely?  Or perhaps to block off
portions of the DMA32 zone for use only by firewire and/or DMA transfers?
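For what it's worth, two gentler knobs than drop_caches exist, assuming a kernel with CONFIG_COMPACTION (2.6.35 or later): /proc/sys/vm/compact_memory migrates movable pages together to recreate large contiguous blocks without discarding the cache, and vm.min_free_kbytes raises the reserve from which high-order allocations are satisfied. The path parameters below exist only so the helpers can be tried without root; the real files require root.

```python
def trigger_compaction(path="/proc/sys/vm/compact_memory"):
    """Ask the kernel to compact all zones: movable pages are migrated
    together, recreating large contiguous free blocks while keeping the
    page cache warm. Needs root and CONFIG_COMPACTION."""
    with open(path, "w") as f:
        f.write("1")

def set_min_free_kbytes(kbytes, path="/proc/sys/vm/min_free_kbytes"):
    """Raise the free-memory watermark; a larger reserve gives the
    allocator more room to keep high-order blocks intact. Needs root."""
    with open(path, "w") as f:
        f.write(str(kbytes))
```

`echo 1 > /proc/sys/vm/compact_memory` is the shell equivalent; unlike dropping caches, it addresses the fragmentation directly.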

  I'm kind of describing the issue out loud here -- I wasn't sure if
anyone had any good ideas to minimize it.

-- 
-- Thomas


Re: Linux Memory Fragmentation, ideas..

2014-10-10 Thread Bill Freeman
I don't know how much visibility into the code you have, but if I were
designing from scratch, I would consider pre-allocating the buffers for the
imagers, then have the new instances connect to these buffers rather than
allocating them afresh.  This could likely be done with mmap, with the
buffers locked in memory and, if possible, not backed by the swap file.
Then the buffers are out of consideration for file system and page caches,
or things that lesser I/O might allocate and drop.  If mmap doesn't support
what you need (and I haven't looked at the capabilities of modern Linux
mmap), then a custom device driver should be able to allocate such buffers
at boot time and map them.
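As a sketch of that idea (assuming Linux; mlock can fail without root or a raised RLIMIT_MEMLOCK, so it is attempted tolerantly here):

```python
import ctypes
import ctypes.util
import mmap

def locked_buffer(size):
    """Create an anonymous shared mapping and try to lock it in RAM so it
    is never swapped out. Returns (buffer, locked_flag); locking fails
    gracefully when the process lacks the privilege."""
    buf = mmap.mmap(-1, size)  # anonymous mapping, shared with forked children
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    locked = libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(size)) == 0
    return buf, locked
```

Each new imager instance could attach to such a region instead of allocating afresh. One honest caveat: mlock pins pages but does not make them physically contiguous, so for buffers the firewire stack hands to the hardware, the boot-time driver allocation I mentioned remains the fallback.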

Bill

On Fri, Oct 10, 2014 at 4:47 PM, Thomas Charron twaf...@gmail.com wrote:

 [original message trimmed]




Re: Linux Memory Fragmentation, ideas..

2014-10-10 Thread Thomas Charron
  I have full visibility.  :-)  I've been developing the whole thing for
several years.  We don't have direct access to the buffers, though, as they
are created by libdc1394, which in turn interfaces with the firewire_core
modules.

  We're in a bind because we're only finding the issue now that we're
collecting clinical results, running constantly for days at a time.

  We've slated the 'imager daemon' for future versions, which would solve
the issue.  For the short term I'm trying to find ways to make the existing
code function from an OS standpoint.

  Thomas

On Fri, Oct 10, 2014 at 5:00 PM, Bill Freeman ke1g...@gmail.com wrote:

 [quoted message trimmed]




-- 
-- Thomas