> On Sep 22, 2022, at 6:06 PM, Claude Marinier wrote:
>
> Hello,
>
> I am considering a different approach for an old project that collects IP
> traffic addresses and counters. The old approach has serious flaws: the PCAP
> handler calls into Scheme to process each packet, with no buffering, and it
> forks at regular intervals to write statistics.
>
> In practice, the old approach consumed a lot of CPU time when it should have
> been unobtrusive. I am simplifying the data structures to avoid unnecessary
> memory allocation and GC; the result is not neat or tidy.
>
> Using the mailbox egg should provide buffering: a thread can loop on
> mailbox-receive! and accumulate the counts. On exit, the main thread can send
> a message to the mailbox to stop the worker cleanly; the remaining data can
> then be saved.
The mailbox egg has an srfi-18 reader-writer example in its tests directory
("chicken-install -r mailbox" retrieves the files), but you should also look
at the gochan egg: http://wiki.call-cc.org/eggref/5/gochan
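
A rough sketch of the consumer loop you describe, assuming the mailbox and
srfi-18 eggs (untested; count-packet! is a hypothetical accumulator and 'stop
is just one possible sentinel value):

  (import srfi-18 mailbox)

  (define box (make-mailbox))

  (define counter-thread
    (thread-start!
     (make-thread
      (lambda ()
        (let loop ()
          (let ((msg (mailbox-receive! box)))
            (if (eq? msg 'stop)
                'done                   ; exit cleanly; counts stay intact
                (begin
                  (count-packet! msg)   ; hypothetical accumulator
                  (loop)))))))))

  ;; the PCAP callback only enqueues, so it returns immediately:
  ;;   (mailbox-send! box packet-info)
  ;; on shutdown, stop the worker and wait for it to drain the queue:
  ;;   (mailbox-send! box 'stop)
  ;;   (thread-join! counter-thread)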
>
> I have never used threads; this is intimidating. Here are some questions.
>
> 1) The SRFI-18 egg has not reached version 1. Which threading egg do you
> recommend for my simple case?
That one, the srfi-18 egg; it is what the mailbox reader-writer example uses.
>
> 2) After calling process-fork, will the child still have an open PCAP handle?
> Will the thread still be running? If the thread is running, the child process
> could immediately ask it to stop and clean up. If the thread is dead, it
> could be messy. The child can wait for the thread to stop, right? The PCAP
> loop will not be running when the main thread forks.
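
On (2): note that CHICKEN 5's process-fork (in the (chicken process) module)
takes an optional thunk for the child to run and an optional killothers?
flag that terminates all other threads in the child, which bears directly on
this. My understanding is that, since CHICKEN threads are green threads, the
worker is copied into the child and keeps being scheduled there unless
killothers? is used, but do test this. A rough sketch of the fork-to-write
step, reusing the names from the sketch above (save-counts! is hypothetical):

  (import (chicken process))

  (define (write-statistics)
    (mailbox-send! box 'stop)      ; ask the counter thread to finish
    (thread-join! counter-thread)  ; wait until the queue is drained
    (save-counts!))                ; hypothetical: write the tables out

  ;; the parent continues immediately; the child evaluates the thunk
  ;; and exits when it returns
  (process-fork write-statistics)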
>
> 3) The parent will need to reset the hash tables. Which is cleaner,
> hash-table-clear! or a fresh make-hash-table (letting the GC collect the old
> one)? Which has less impact on performance?
hash-table-clear! is probably quicker, since some of the table's storage is
reused (the GC work for the key/value data is the same either way).
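
A minimal sketch of the reset, assuming the srfi-69 egg (counts and bump!
are hypothetical names):

  (import (chicken base) srfi-69)

  (define counts (make-hash-table))    ; address -> packet count

  (define (bump! addr)
    (hash-table-update!/default counts addr add1 0))

  (define (reset!)
    ;; empties the table in place, reusing its storage
    (hash-table-clear! counts))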
>
> I am willing to read and experiment. I need to know where to start.
>
> Thank you.
>
> P.S. I have asked many questions about other aspects of this before (years
> ago) and the responses were excellent. Thank you.
>
> --
> Claude Marinier