Re: 12.2: fork() causing getline() to repeat stdin endlessly

2023-10-24 Thread Jon Leonard
On Tue, Oct 24, 2023 at 03:19:43PM -0400, Greg Wooledge wrote:
> On Tue, Oct 24, 2023 at 11:01:55PM +0700, Max Nikulin wrote:
> > On 24/10/2023 12:18, tom kronmiller wrote:
> > > so I unbuffered stdin and that seemed to make it happy.
> > 
> > It might be performance killer. Even fflush(NULL) before fork() may be
> > better.

fflush(NULL) is almost certainly cheaper, and something like it is necessary.
The call to fork() does undefined things to stdio state if the relevant
files aren't flushed.  Correctness matters more than performance, though,
so "correct but might be slow" could be a fine answer.

The stdio functions and fork() come from different standards, so it's not
all that surprising that it's tricky to use both.

> > https://stackoverflow.com/questions/50110992/why-does-forking-my-process-cause-the-file-to-be-read-infinitely
> > "Why does forking my process cause the file to be read infinitely"
> 
> Ooh, that's got an *excellent* answer.

That brings up a subtlety:  I've been assuming a Linux-like system, where
stdio keeps an in-process buffer backed by a Unix-style file descriptor.
As that page points out, the relevant standards permit quite a bit more
than that, much of which falls under "undefined behavior".  That's usually
bad news, being compiler-writer code for "you're not allowed to do that, and
we make no promises whatsoever as to what happens if you try."

As this example shows, getting "undefined behavior" can be really quite
surprising.

> > glibc bug report was closed as invalid
> > https://sourceware.org/bugzilla/show_bug.cgi?id=23151
> > 
> > I am curious why macOS behaves differently.
> 
> It would have been nice if the glibc developer had explained a bit, but
> of course we aren't owed any explanations.
> 
> At this point, we can conclude that the bug is in fact in the OP's C
> program.  The underlying C compiler, C library, and Linux kernel are
> all behaving within specs.
> 
> At this point I still don't know *why* glibc rewinds stdin intermittently
> on exit().  Apparently Mac OS X doesn't, or at least didn't at the time
> the answer was written (2018).  I guess there must be some reason for it,
> and it's not just randomly pranking people, even if I don't understand
> the reason.

The standard says that exit() does cleanup such as running handlers queued
with atexit() and flushing any open streams.  There's a good reason for
that:  If a program produces some output, you'd be surprised if it just got
discarded at the end because it was still sitting in an output buffer that
hadn't been flushed.  If you want the other behavior, there is the
similarly named _exit(), which doesn't flush output, presumably for cases
where there was a fork() for some small task and flushing buffers first
wasn't viable.  I'd read the documentation carefully, or maybe design
things so you don't have to read that documentation at all.
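
A minimal sketch of that pattern (made up for illustration; it's not the
OP's program):

/* Flush pending output before fork(), and have the child leave via _exit()
 * so the stdio buffers it inherited are never flushed a second time. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    printf("hello");       /* sits in stdout's buffer for now */
    fflush(stdout);        /* write it out before the buffer gets duplicated */

    pid_t pid = fork();
    if (pid == 0) {
        /* child: do some small task, then skip atexit() handlers and
         * stdio flushing entirely */
        _exit(0);
    }

    waitpid(pid, NULL, 0);
    printf(" world\n");
    return 0;              /* the normal exit flushes stdout exactly once */
}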

What's probably going on is that the input buffer ends up in different
states under different circumstances.  Maybe the macOS libc is less prone
to read-ahead.  Without digging through the code or the nominally opaque
FILE structures, it's hard to say.  But I'm pretty sure it's
"intermittently the FILE was in a state that didn't cause problems", not
"exit() intermittently flushes".

Jon Leonard



Re: 12.2: fork() causing getline() to repeat stdin endlessly

2023-10-23 Thread Jon Leonard
On Mon, Oct 23, 2023 at 09:31:11AM -0400, Greg Wooledge wrote:
> On Mon, Oct 23, 2023 at 11:15:22AM +0200, Thomas Schmitt wrote:
> > it helps to do
> >   fflush(stdout);
> > after each printf(), or to run before the loop:
> >   setvbuf(stdout, NULL, _IONBF, 0);
> > 
> > So it is obvious that the usual output buffering of printf() causes the
> > repetitions of text.
> 
> Yes, it looks like a buffering issue to me as well.  Or, more generally,
> some kind of weird interaction between buffered stdio FILEs and the
> underlying Unix file descriptors, when new processes are being fork()ed.

More specifically, fork() does not play nicely with stdio buffering.

For performance reasons, stdio tends to read and write in chunks, reducing
the number of read() and write() calls.  Some data sits in the stdio
buffers, and whatever is in those buffers at fork() time gets copied into
both processes.

If you want to mix fork() and stdio, be sure to flush buffers before the
call to fork().  Depending on the task, it may be easier to use the
underlying read() and write() calls directly.
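
As a rough sketch of that advice (details made up; this isn't the original
poster's program): flush before each fork().  For a seekable stdin, POSIX
defines fflush() on an input stream to discard the read-ahead and re-sync
the underlying file descriptor, which is what keeps the child from seeing
the same input again.

#define _POSIX_C_SOURCE 200809L  /* for getline() */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    char *line = NULL;
    size_t cap = 0;

    while (getline(&line, &cap, stdin) != -1) {
        printf("parent read: %s", line);

        fflush(stdout);   /* don't leave output to be flushed twice */
        fflush(stdin);    /* drop read-ahead and re-sync the fd position
                             (POSIX defines this for seekable input streams) */

        pid_t pid = fork();
        if (pid == 0) {
            /* child: handle this one line, then leave without touching stdio */
            _exit(0);
        }
        waitpid(pid, NULL, 0);
    }
    free(line);
    return 0;
}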

Jon Leonard 



Re: chromium: "Your browser is managed"

2022-08-30 Thread Jon Leonard
On Tue, Aug 30, 2022 at 04:27:09PM -0700, L L wrote:
> I'm on bullseye, and installed chromium from the bullseye repos. In
> Chromium I get the message that the browser is "managed by your
> organization." I didn't do any special setup for work or school. Is the
> management part of the Debian packaging, or is something sketch going on?

There's malware that does that.  The feature is usually for things like
company-wide security policy, but if you're not expecting it, it's almost
certainly malware.  It's presumably trying to spy on you or serve you ads
or some such.

There are various web pages describing how to remove it; you'll probably need
to remove the directory where Chromium stores its data.  (Back up bookmarks
and such first.)

You'll also want to try to figure out how it got installed, and what else
might be compromised.

Jon Leonard



Re: MIDI-to-USB on Debian?

2018-02-13 Thread Jon Leonard
On Tue, Feb 13, 2018 at 03:00:20PM -0600, Nicholas Geovanis wrote:
> Does anyone have a MIDI-to-USB adapter they could recommend for Debian
> and/or linux?
> This is just for a point-to-point connection from a Yamaha keyboard to
> a laptop. Software on the laptop remains undetermined, probably some
> combination of Rhythmbox, CSound, Supercollider and god knows what
> else. Thanks..Nick

I have what lsusb reports as:
Bus 003 Device 004: ID 0763:1002 Midiman MidiSport 2x2

It does need to download some firmware to the device, but aside from that
it has been working with no issues for me.  I don't remember what it cost
new, and it looks a little different from the "Anniversary edition" that's
for sale now.  But such things do exist.

Jon Leonard



Re: Installing Linux on a Mac Mini without OSX

2014-12-04 Thread Jon Leonard
On Thu, Dec 04, 2014 at 01:25:37PM -0500, Brian Sammon wrote:
> I was recently given a Mac Mini (Intel Mid 2007) that had been wiped.
> 
> I tried to install Debian (Wheezy) on it, and the installer reported success, 
> but 
> when it came time to eject and reboot, Debian didn't boot from the hard drive.
> 
> Googling finds me various pages about installing Linux where one of the steps 
> is something like "Boot into OSX"
> 
> Is there a way to install Debian/Linux on this machine that doesn't involve 
> buying or borrowing (or "borrowing") a copy of OSX?  Is it easier to install 
> linux on a USB disk and run it off of that?
> 
> Two particular subtasks that I may need to do that seem to require OSX:
> 1) "Blessing" a partition
> 2) Checking what version of firmware it has (some versions have BIOS
>compatibility)
> 
> Any pointers/suggestions?

I saw similar behavior installing on more recent Mac Minis.  There, the issue
was that the Mac firmware's idea of the boot sequence didn't match Debian's.

I wound up solving this by installing rEFInd (from
http://www.rodsbooks.com/refind/), though not always the same way.  You'd
presumably need to install it to the EFI system partition if you don't have
a Mac partition.  I think it's also possible to get GRUB or even the Linux
kernel itself set up as the EFI boot loader, but I stopped trying that once
booting worked reliably.

It should be possible to boot the Debian installer in rescue mode, get a
shell, and do the install from there.

Jon Leonard

> I'm also looking into PureDarwin as a possible solution.





Re: atomically concurrent writes question

2010-06-22 Thread Jon Leonard
On Wed, Jun 23, 2010 at 01:19:01AM -0500, Ron Johnson wrote:
> On 06/23/2010 12:51 AM, Evuraan wrote:
>> I've a script which forks and echo's [appends] multiple long lines to
>> one plain text file.  some of the multiple threads also yank lines
>> from the same plain text file. this file is now approx 31M and has
>> about 105000 lines ( growing.)   i've been doing this for years now,
>> and have seen nothing go wrong so far. (yay!)
>>
>>
>> my question is rather theoretical - what can go wrong if multiple
>> processes are writing to the same file at multiple instances?
>>
>> i've read elsewhere, that the kernel takes care of this stuff. or is
>> it that i could just not cause finitely concurrent writes to the file
>> and have not seen a corrupt line just by luck?
>>
>
> Not only the kernel, but *bash* (or whatever other lesser shell you  
> use...) is what you'd need to look at to see what it does when it sees 
> ">>".

It's using the O_APPEND flag on the writes, which guarantees that each
write(2) system call operates atomically.  That is, the data on a given
write will be added to the end of the file, and if two or more processes
are writing to the file at the same time, the results will be consistent
with some ordering.  (Data will not be mixed or corrupted between writes.)

It's important to make sure that each line fits in a single call to
write(), though.  I'm not sure what's meant by "long lines":  If a line is
too big to be written in one call, the single-write guarantee no longer
applies and output from different processes can interleave.  But the shell
is almost certainly doing the right thing.  (In C code, I'd check it very
carefully.)
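
A minimal C sketch of that guarantee (the filename and message are made up
for illustration): open the file with O_APPEND and emit each complete line
in a single write() call.

/* Append one complete line per write(); with O_APPEND the seek-to-end and
 * the write happen as one atomic step, so concurrent writers don't
 * interleave within a line (as long as the line fits in a single write). */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int append_line(const char *path, const char *line)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, line, strlen(line));
    close(fd);
    return (n == (ssize_t)strlen(line)) ? 0 : -1;
}

int main(void)
{
    /* "shared.log" is hypothetical; many processes could call this safely */
    return append_line("shared.log", "one whole line per write\n") ? 1 : 0;
}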

Jon Leonard





Re: sharing an ipp printer with a mac client

2009-09-10 Thread Jon Leonard
On Mon, Sep 07, 2009 at 11:38:59PM +0200, jaan vaart wrote:
> >Anyway some time ago I managed to configure the printer on my wife's
> >mac. This expansive laptop is connected wirelessly (same as the linux
> >clients) via a router. Mac osx was able to detect the printer with the
> >protocol (IPP) and server address. Now it can't and upgrading cups to
> >1.4.0 broke printing from it with the old printer profile. I can print
> >in mac console specifying host address with:
> >lp -h 192.168.0.1:631 test.txt
> >but the damn wizard won't detect the printer.
> >Any hints?

I ran into something similar:  The protocol changed between Mac OS 10.5
and 10.6, and some reconfiguration is needed to talk both protocols.

The clearest explanation I found is at
http://support.apple.com/kb/HT2275?viewlocale=en_US

We worked around it following those instructions, typing
cupsctl BrowseProtocols='"cups dnssd"'
into a Terminal window.  That told the Mac to also look for printers using
the protocol that our cups install was using.  Might be related.

Jon Leonard





Re: problems with eps from grace

2007-11-16 Thread Jon Leonard
On Fri, Nov 16, 2007 at 10:52:10AM -0200, Marcelo Chiapparini wrote:
> On Fri, 2007-11-16 at 04:08 +, Kamaraju Kusumanchi wrote:
> > On Thu, 15 Nov 2007 16:48:23 -0200, Marcelo Chiapparini wrote:
> > 
> > > Hello,
> > > 
> > > I am using etch and grace for plotting. With the version of grace in
> > > etch, I am having problems when I save my graphs in eps format. Other
> > > applications doesn't recognize this eps: Latex (texlive) and oo impress
> > > are two examples. This two programas does recognize eps files from other
> > > applications. Does anyone know about a fix or where can I find another
> > > version of grace?
> > 
> > What is the output of
> > 
> > file file_produced_from_grace.eps
> > 
> > Can you see your eps file using a eps viewer like gv without any problem?
> 
> Yes! I can see the eps file with gv, evince and acroread. But in Latex,
> the epsfig package doesn't recognize the figure, and doesn't scale it.
> And oo impress give the "unknown file format" message when I try to
> include the file in a presentation. The problem is really annoying,
> mostly with Latex, because I use Latex in work...   

What do the first few lines of the .eps file look like?  There should be
a few lines at the top that start with %, indicating what sort of
PostScript file it is.

A minimal set looks something like this:

%!PS-Adobe-3.0 EPSF-3.0
%%BoundingBox: 0 0 72 72

Based on the observed behavior it sounds like it's something else, which
is presumably due to a bug in grace.

The actual rules for what the headers can look like are complicated,
so it's probably best to just paste them verbatim.

Jon Leonard





Re: Regarding tar and split

2007-10-13 Thread Jon Leonard
On Sat, Oct 13, 2007 at 11:11:35AM -0600, Paul E Condon wrote:
[snip: remote backups]
> It doesn't appear from the man page that rsync has the equivalent of
> cp --backup=t
> I use this and it is important to me. Nothing ever is deleted from my 
> backup until I do a clean-up sweep on it (which I have never yet done).

The workaround would be to rsync to the other box, and do the cp
--backup=t there.  Costs more disk space, but saves network bandwidth.
[That may or may not be advantageous, of course]

Alternatively, you could commit everything to version control;  I do
something like that for most of my files.

Jon Leonard





Re: Search for string in files

2007-08-26 Thread Jon Leonard
On Sun, Aug 26, 2007 at 07:46:01AM +0200, Jonathan Kaye wrote:
> Johannes Tax wrote:
> 
> > Hi,
> > 
> > I'm trying to figure out how to find a certain string inside a bunch of
> > files. If I, for examples, look for a certain function in a large source
> > tree, I could do
> > 
> > cat `find . -name '*.c'` | grep 'a_certain_function'
> > 
> > but this seems quite awkward, furthermore it doesn't help that much
> > because I don't know in which file the string was found. Maybe there's a
> > tool that makes it possible to find a string in a bunch of files and
> > also to list in which file the string was found? Or any modification to
> > the command given above?
> > 
> > Thanks a lot in advance,
> > 
> > Johannes
> > 
> > --
> > Johannes Tax
> > [EMAIL PROTECTED]
> Hi Johannes,
> If you don't mind a non-cli-solution you can use the Find File built into
> Konqueror. It's in the Tools menu. You just specify your filter and then go
> into the Contents tab where you can specify which text your looking for.
> You get the results in a nice clickable pane. Maybe other file managers
> have a similar feature.

And if you're interested in sticking with the command line, the invocation
you probably want is:

find . -name '*.c' | xargs grep 'a_certain_function'

The xargs command is almost essential for this sort of activity:  It takes
its standard input and uses it as additional arguments to the command.  It
also avoids the system's limits on the length of argument lists; the
version with 'cat' above could well fail if find turns up too many .c
files.

That invocation takes advantage of grep's default of prefixing each match
with the filename when there's more than one file on the command line.  I'm
more likely to use a variant like:

find . -type f | xargs grep -li pattern

That'll search all ordinary files, case insensitive, and only give the
names of the matching files.  The man pages for find and grep can be
very helpful for fine-tuning this kind of search.

Jon Leonard





Re: SpamAssassin weightings - am I missing something?

2003-03-12 Thread Jon Leonard
On Mon, Mar 10, 2003 at 03:36:09PM -0500, Alan Shutko wrote:
> Jonathan Matthews <[EMAIL PROTECTED]> writes:
> 
> > I think we're talking at cross purposes - I meant "I find it strange 
> > that the excuses cited should be taken as reasons for the mail to be 
> > less likely to be spam", not any other reading.
> 
> Spamassassin default scores are set using a genetic algorithm[1].
> Basically, there's a large corpus of spam and non-spam, the scores
> are set at some default value, then modified by the algorithm until
> they accurately categorize the spam in the test corpus.
> 
> So, yes, it may seem strange that those things are used more on
> non-spam than spam, but in their tests, that's exactly the case.  I
> get lots of legitimate email that has those phrases... random things
> I have subscribed to.  

Well, a phrase could appear more often in spam and still justifiably get a
negative score.  That would happen if the spam in their test set that
contained it already hit enough _other_ rules to be caught anyway, while
the phrase helped classify some non-spam correctly.

> If you don't feel that score reflects the true spaminess on your
> mail, that's the perfect reason to change it in your prefs.

Indeed.  Though you have to be careful about false positives:  One mailing
list that I'm on is falsely labeled as spam by the (stable) SpamAssassin,
and is unlike the rest of my email.

Jon Leonard





Re: Linksys PCMCIA network card - what module to use?

2002-10-15 Thread Jon Leonard

On Mon, Oct 14, 2002 at 10:24:55PM -0400, Neal Lippman wrote:
> I decided to have a go at getting debian installed on my Thinkpad 770ED;
> mostly I was able to get the initial module configuration to work (a few
> glitches, but I can work those out later). However, until I can get
> networking up and running, I cannot continue the install.
> 
> I have a Linksys 10/100 PC card (PCMCIA), but I cannot figure out what
> the correct driver is for this. Anyone have any information?
> 
> The card is a mode PCMPC100, for what it's worth.

It matters which version of the PCMPC100, actually.  I've had both v2 and v3
versions of the card, and they use different drivers.  Apparently the v1
used a third driver...  IIRC, the v2 used a tulip_cb driver.  I don't
remember the driver for the v3; I can look it up when I get home if it
winds up mattering.

But in any case, with a recent pcmcia_cs it 'just works'.  There is some
conflict between the kernel PCMCIA drivers and the pcmcia_cs ones.  In order
to get it to work I needed to build a kernel without pcmcia support.
(Turning off pcmcia support in the kernel means that the modules in the
pcmcia_cs package get used, and for many cards that's preferred.)

Jon Leonard






Re: "MUD" Game Engines

2002-10-07 Thread Jon Leonard

On Mon, Oct 07, 2002 at 12:21:49PM -0700, Derek Gladding wrote:
> On Monday 07 October 2002 08:00 am, Soul Computer wrote:
> > I was wondering if anyone would know where I
> > could find "MUD" game engines for Linux.
> >
> > Please "Reply All" when you reply to make certain
> > I get any information you send my way.
> >
> > Thank you very much for any help you can give me.
> >
> 
> Have a look at www.worldforge.org - it's a project to build
> both tools and games. There might be something there that
> is useful to you.

There's a surprising variety of MUDs out there, and the vast majority of
them are portable enough to be a straightforward install.  I only see
one pre-packaged for Debian (stable), in the lambdamoo package, likely
because most MUDs have licensing restrictions that keep them out of
Debian.

I'd recommend doing a web search for the specific kind of MUD you're looking
for (Tiny, MOO, Diku, LP ...?), and then download & build it.  If you're
looking for more general resources, try starting from http://www.kanga.nu/,
which has a wealth of MUD-related stuff on it.

Jon Leonard






Re: Editing and storing encrypted files

2000-09-06 Thread Jon Leonard
On Wed, Sep 06, 2000 at 10:22:44PM +0200, Wouter Hanegraaff wrote:
> Hi,
> 
> I have some files that I would like to store encrypted. Of course I can
> just type them in, encrypt them using gpg and delete the original, but
> that seems to be a bit of a kludge. It would mean the file is at some
> time readable unencrypted (after saving in the editor), and forgetting
> to turn off the backup file option in the editor when changing the file.
> 
> There must be better solutions, but I can't seem to find them. What I
> would like to have is an editor that has built-in encryption or gpg
> integration, and the option not to store any non-encrypted data on disk
> or on the clipboard.
> 
> Is something like this available?

There are several possibilities.  A great deal depends on your threat model:
What are you trying to protect against?

It sounds like you're worried about someone searching your raw disk and
recovering data.  For that, you probably want to encrypt entire partitions,
and also make sure swap and /tmp are protected.  There's good discussion and
several possibilities listed in the Encryption-HOWTO:
(http://fachschaft.physik.uni-bielefeld.de/leute/marc/Encryption-HOWTO/Encryption-HOWTO.html)

I personally would be tempted to use Matt Blaze's CFS
(ftp://research.att.com/dist/mab/cfs.announce), but I actually store all of my
sensitive files on a separate secured machine.  (no network daemons, etc.)

If you have more extreme secrecy needs, you might want to look into duress
filesystems or steganographic file storage.  Those are only really useful if
you might need to plausibly deny that you had the encrypted files at all.
I'm also not aware of any available implementations.

Jon Leonard



Re: when must I reboot?

1999-09-17 Thread Jon Leonard
On Fri, Sep 17, 1999 at 02:29:38AM -0400, William T Wilson wrote:
> On Fri, 17 Sep 1999, Aaron Solochek wrote:
> 
> > USB should be fine, it was designed so that you could just plug
> > something into it and be ready to go. However you are not supposed
> > to mess with ps/2 while the computer is on... That really doesn't stop
> > me, sometimes nothing happens, sometimes the machine reboots...
> 
> The machine will only reboot if you accidentally touch one of the signal
> or voltage pins to ground, which is very possible to do with a PS/2
> connector.  :}
> 
> It's true that you're not supposed to mess with PS/2 connectors while the
> machine is running, but I for the life of me cannot explain why.  Other
> than the keyboard being confused as to which lights (num lock, etc.) are
> supposed to be lit, I have never suffered any ill effects.  And even that
> goes away the first time you actually press one of those keys...

In contrast, I've killed a computer (requiring a new motherboard) by
hot-swapping a PS/2-style mouse.

It is a stress on the system -- most of the time you'll get away with it,
sometimes it weakens it, and sometimes stuff can break.  Don't hot-swap
stuff unless the documentation says it'll work, or you're willing to fix it.

I think the actual mechanism for damage is a mix of overcurrent leading to
electromigration and overheating.  You'd have to check with an
authority on chip reliability to be sure.

Jon Leonard


Re: Sleeping in a shell

1999-09-13 Thread Jon Leonard
On Mon, Sep 13, 1999 at 07:59:35PM +0200, Marcin Owsiany wrote:
> Low!
> Does anybody of you guys know a way to sleep for an amount of time less than
> a second in a shell (bash/sh) script?
> 
> sleep refuses arguments like: 0.5 0,5 1/2
> 
> Maybe there is a nice perl command to do this?
> 
> I really don't feel like writing a my_sleep.c ...

The usleep shell command does what you want; it takes its argument in
microseconds, so "usleep 500000" sleeps for half a second.  (Though, as has
been pointed out, you may be better off translating the script to Perl.)

Jon Leonard


Re: insert a blank page in a ps file

1999-09-09 Thread Jon Leonard
On Thu, Sep 09, 1999 at 02:05:15PM +1000, Shao Zhang wrote:
> Hi,
>   Is there any easy way to insert an extra blank page after
>   each page in a PostScript file?

I'm not aware of a general tool for doing it, but you basically want to
put an extra showpage after every page.  For DSC-compliant documents,
you should be able to put one before each line starting with %%Page:
(except the first), plus one more at the end of the file.

Sort of like this:

 stuff
showpage
%%Page: 2 2
 more stuff
showpage

Check your results with gs before wasting paper, of course.

There's probably a program that'll do this for you, if only by merging with
a document consisting of lots of blank pages.
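
If not, here's a rough sketch of such a filter in C (made up for
illustration; it's not an existing tool).  It copies stdin to stdout,
emitting an extra showpage before each %%Page: line after the first and one
more at the end, as described above.  It assumes a simple DSC-conforming
file, so check the result with gs before printing.

/* Insert an extra showpage before each %%Page: line (except the first)
 * and one at the end.  Assumes a simple DSC-conforming file on stdin;
 * makes no attempt to handle %%Trailer or other complications. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[8192];
    int pages_seen = 0;

    while (fgets(line, sizeof line, stdin) != NULL) {
        if (strncmp(line, "%%Page:", 7) == 0) {
            if (pages_seen > 0)
                puts("showpage");   /* blank page after the previous page */
            pages_seen++;
        }
        fputs(line, stdout);
    }
    if (pages_seen > 0)
        puts("showpage");           /* blank page after the last page */
    return 0;
}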

Note that in the general case this isn't possible, because PostScript files
are really programs, and can do all sorts of things -- most documents just
don't.

>   Say I have a ps file with pages: 1 2 3 4 5 6 7 8 9
>   I want to format it to a new ps file with the following pages:
>   1 blank 2 blank 3 blank 4 blank 5 blank 6 blank 7 blank
> 
>   Thanks in advance.
> 
> Shao.

Hope this helps,

Jon Leonard


Re: syslogd mail

1999-09-06 Thread Jon Leonard
On Tue, Sep 07, 1999 at 12:21:21AM +0200, Pere Camps wrote:
> Hi!
> 
>   How can I make syslogd mail somebody an incident (aka log) when it
> happens?

There isn't support for email in syslogd, though the remote logging
feature (line with @hostname as the action in syslog.conf) may be close
enough to what you want.  Be sure to enable remote messages on the
destination host, with the -r switch.
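
For example (loghost is a made-up name, and the selector and action are
traditionally separated by tabs), the sending machine's syslog.conf could
contain something like:

# forward everything at warning and above to the central log host
*.warning	@loghost

and syslogd on loghost would be started with -r so that it accepts the
forwarded messages.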

Alternatively, I've just finished a modification to sysklogd-1.3-31 that
adds calling Perl functions as an action type.  Sending email from there
should be pretty easy.  I've put it under
http://frost.slimy.com/~jleonard/syslog/
until I figure out the right way to contribute it back to the community.

If you really want syslogd to send email directly, I'd be happy to explain
how to modify it, but I wouldn't write that code myself unless paid to.

Jon Leonard