First of all let me point out that I am one of the people who sent a 'personal'
reply to Jeremy, mentioning crontab.

It was not an idle comment. In fact, a couple of years ago I was involved with
the development of an information kiosk. It was completely autonomous, with
content updated automatically by modem; it was Linux based, and we did use
crontab.

In fact, the custom code we wrote for maintaining the system was a small
script, which I will explain later, everything else being done by the normal
unix tools and facilities.

Jeremy's problem is that his way of thinking about computer systems has been
corrupted by overexposure to the limitations and idiosyncrasies of DOS/Windows.
I am not trying to start a holy war here; it's just that if you think about the
items he cited in UNIX terms, your first thought is "so where is the problem?".
Let's face it: UNIX/Linux is mostly used for server systems, many of which are
headless, and sysadmins pride themselves on being able to set up systems that
can take care of themselves for months on end.

But let us try to address Jeremy's problems directly. To start with, the idea
of an embedded shell is, by his own admission, a mistake. A command shell is a
means by which a human may interactively gain raw access to the system's
resources and facilities. His desire is a system where no human has raw access,
so the command shell has no role. If there is no 'user' (and we exclude people
accessing the system via an 'app') then there is no requirement for a command
shell. Servers normally run with no one logged in directly, and thus no command
shell is active.

So how **do** we do what Jeremy wants? Well, the best I can do is give you some
tips and tricks to get you thinking in the right way, then you go and
experiment!

Let's start with messages and errors. Jeremy only touched the surface when he
mentioned what you can do with syslog. When you launch an app (including,
generally speaking, daemons) you may decide where standard output and standard
error will go simply by redirecting them. You are not limited to files; you may
pipe the output to another program or script. But that does imply one message
handler for each app. Instead you may simply have, for example, error messages
appended to a common file, with a common script parsing it. That soon gets
messy, but Linux, as ever, has a simple solution in the form of FIFO buffers
(named pipes).

Using mknod (or mkfifo) you may create a file that is really a FIFO buffer.
Programs opening the file for append, or redirecting their standard out to it,
write lines as they would to any other file. Likewise, programs reading the
file will simply get the 'oldest' line written to it. DOS has redirection and
pipes too, but they are seldom used; in part this is because few programs use
'stdio', but also because the pipe in DOS is very clumsy. UNIX, by contrast,
handles pipes with style. I often use them for testing DSP algorithms, with
generators pumping out data on standard out which gets fed through 'filters'
and other DSP algorithms via stdin and stdout. The data flows through smoothly
and, yes, the start and end of the chain can be /dev/audio, so I can do:

cat /dev/audio | uncompander | dsp_routine | compander > /dev/audio

Neat huh?
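
To make this concrete, here is a little sketch (the paths and messages are
made up) of a FIFO acting as a common message channel: one reader drains it
while writers treat it as an ordinary file.

```shell
#!/bin/sh
# Sketch: a FIFO as a common message channel. Paths are hypothetical.
# mkfifo is the modern spelling of 'mknod <file> p'.
DIR=$(mktemp -d)
FIFO="$DIR/messages.fifo"
LOG="$DIR/messages.log"
mkfifo "$FIFO"

# One reader drains the FIFO in the background, timestamping each line.
while read -r line; do
    echo "$(date '+%H:%M:%S') $line"
done < "$FIFO" > "$LOG" &
READER=$!

# Writers simply treat the FIFO as a file. (A single open keeps the
# sketch simple; in real life each app would open it for append, and an
# app that insists on its own log file gets a symbolic link instead.)
{
    echo "app1: started"
    echo "app2: low on disk"
} > "$FIFO"

wait "$READER"
cat "$LOG"
```

The reader sees the lines in the order they were written, whoever wrote them.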

But what about programs which write messages to their own log file, with no
possibility of changing that to your common FIFO? Use a symbolic link ;-)

BTW, there is a package called netpipes which allows you to pipe the standard
out of an app on one machine to the stdin of an app on another simply by
sticking the IP address and port number in front of the pipe. Netpipes +
/dev/audio + gzip = DIY IP telephony with a single command line! (OK, it's
mega crude, but that's not to say it's any worse than MS TAPI ;-))

Now we talked about programs and scripts, and here is an area where DOS and
UNIX differ greatly. In DOS we have a 'command shell' which also processes
script files. At start-up the command shell is launched, which automatically
interprets a default file (autoexec.bat). We need the command shell to
interpret the scripts (or bat files), and these scripts must be written for the
command shell in use. In UNIX, by contrast, no command shell needs to be
running. If the kernel is asked to launch a program which is not a binary (i.e.
it is a script), it looks at the first line of the script to see which program
should interpret it, and invokes a copy of that interpreter with the script's
path as an argument. That's why shell scripts in UNIX start:

#!/bin/sh

It says 'interpret this script with the binary /bin/sh'. On Linux systems
/bin/sh is usually a symbolic link to bash, which is able (like most UNIX
shells) to interpret a common subset of shell commands. But you are by no means
obliged to use normal shell scripts. You may use Perl, and start your script
with #!/usr/bin/perl, or perhaps #!/usr/bin/python or whatever. A script
written in one language may invoke a script in another, and of course scripts
and binary executables are interchangeable: a script is to all intents and
purposes an .EXE file.
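
As a quick sketch of that interchangeability (directory and file name made
up): write a two-line script, mark it executable, and it launches exactly like
a binary. Swap the first line for another interpreter and nothing else about
launching it changes.

```shell
#!/bin/sh
# Sketch: a script behaves like any other executable once it has a
# '#!' line and the execute bit. The directory and name are made up.
DIR=$(mktemp -d)

cat > "$DIR/hello" <<'EOF'
#!/bin/sh
echo "hello from a script"
EOF
chmod +x "$DIR/hello"

# Launch it as if it were a binary; the kernel reads the '#!' line and
# invokes /bin/sh with the script's path as an argument.
"$DIR/hello"
```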

This latter point is important in embedded systems: a Perl interpreter is far
too large for a flash-based system, and even bash weighs in at hundreds of K.
You are more likely to use one of the minimalist versions of 'ash', which,
whilst being small, implements the base scripting language elements. If your
script is more complex than suits ash, then you can use an executable. Of
course, if you want to be really rigid and minimalistic, you could do away with
shell scripts altogether.

2CTIP. Suppose you have an app 'my_app' which has to carry out specific actions
depending on the contents of a configuration file. You might use something
like:

my_app <config file>

to launch the program, so that my_app starts by opening the config file and
then reading in the input. But why open the file in the app? Let it read the
commands from standard in, so you launch it by:

my_app < config_file

So why stop there? Let's add

#!/usr/bin/my_app

to the start of the config file and mark it executable, so we just 'launch'
the config as if it were an executable file (my_app merely has to skip the
'#!' line it finds at the top).
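
Here is the whole trick sketched out. 'my_app' is a hypothetical stand-in
written in shell (recent Linux kernels allow the interpreter named on the '#!'
line to itself be a script): the kernel hands it the config file's path as an
argument, and it skips the '#!' line itself.

```shell
#!/bin/sh
# Sketch of the config-file-as-executable trick. 'my_app' and the
# config contents are hypothetical stand-ins.
DIR=$(mktemp -d)

# 'my_app' interprets its config: $1 is the file the kernel hands it,
# and tail -n +2 drops the '#!' line.
cat > "$DIR/my_app" <<'EOF'
#!/bin/sh
tail -n +2 "$1" | while read -r key value; do
    echo "setting $key = $value"
done
EOF
chmod +x "$DIR/my_app"

# The config names my_app as its interpreter and is marked executable.
cat > "$DIR/kiosk.conf" <<EOF
#!$DIR/my_app
volume 7
display bright
EOF
chmod +x "$DIR/kiosk.conf"

# 'Launching' the config now runs my_app on it.
"$DIR/kiosk.conf"
```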

Now, let's talk about making sure apps stay up and running. What better than a
virtual watchdog? When I launch the app, I also launch a 'virtual watchdog'
that must receive tokens from the app via a FIFO. If I don't get the tokens, I
attempt to obliterate the app and re-launch it. The FIFO is a bit clumsy
though; I could use 'kill' instead. Now perhaps you thought kill was for
stopping apps. Actually that is how it started out, hence the name. But kill
works by sending a signal to the program, one which by default shuts it down,
but which is trappable, so that programmers can invoke their own shutdown
routine. In reality kill can send any one of the available signal numbers, so
it may be used to pass simple binary events between two processes. Of course it
is better not to use the main ones such as TERM and HUP, but there are some
seldom-used esoteric ones, and two user-defined signals (USR1 and USR2), that
are available. You could of course use IPC, but it would be overkill.
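
A sketch of signals as binary events, using the two user-defined ones (the
'app' here is just a sleeping subshell): USR1 stands in for a heartbeat, USR2
for an orderly shutdown.

```shell
#!/bin/sh
# Sketch: SIGUSR1/SIGUSR2 used as two binary events between processes.
DIR=$(mktemp -d)

# The 'app': traps the two user signals and notes each one in a file.
(
    trap 'echo heartbeat >> "$DIR/events"' USR1
    trap 'echo shutdown >> "$DIR/events"; exit 0' USR2
    while :; do sleep 1; done
) &
APP=$!

sleep 1              # give the app time to install its traps
kill -USR1 "$APP"    # event 1: heartbeat
sleep 1
kill -USR2 "$APP"    # event 2: shut down cleanly
wait "$APP"
cat "$DIR/events"
```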

Even so, you have two apps, which also have to know each other's process
number, so let's put them both in one program: a program that sends itself
signals to stay up. This program, once launched, forks into two identical
copies, one of which carries on with the app whilst the other watches it, a
bit like workers in high-risk areas working in pairs.

Now, before you say "that sounds like a terrible waste of memory", remember
that you will be using a virtual (copy-on-write) fork, so the duplicated code
is only virtual; only one physical copy exists in memory. And if that sounds
'risky', remember that the virtual fork technique is based on hardware
facilities in the memory management unit, and if they aren't working right,
everything goes up the spout anyway.

This mechanism is generic, so having implemented it at the start of one app,
you can include it in all the apps that you wish to be 'self-sustaining'.
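
In shell you cannot do the single-binary fork trick (that wants a few lines of
C around fork()), but the watchdog half is easy to sketch. Here a throwaway
stand-in 'app' dies after a second, and the watchdog relaunches it, three
times for the demo; a real watchdog would loop forever.

```shell
#!/bin/sh
# Sketch of the watchdog half in shell. The 'app' is a hypothetical
# stand-in that exits after a second.
DIR=$(mktemp -d)

run_app() {
    echo "app started" >> "$DIR/log"
    sleep 1            # pretend to work, then die
}

RESTARTS=0
while [ "$RESTARTS" -lt 3 ]; do
    run_app &
    wait $!            # returns when the app exits (or is obliterated)
    echo "watchdog: relaunching" >> "$DIR/log"
    RESTARTS=$((RESTARTS + 1))
done
cat "$DIR/log"
```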

Another 2CTIP. Virtual forks are a very simple and effective way of
multithreading. So much so that the 'real' multi-threading facilities in Linux
have been neglected. There is a caveat: virtual forks only multi-thread on a
single processor. To use SMP effectively you must use real threads, and with
SMP suddenly coming into fashion there has been a clamour to improve the
proper threading facilities. This is the motivation behind the push for libc6
(aka glibc) based systems, as the principal improvement in glibc is the
threading code. As an embedded developer it is improbable that you are
interested in SMP, so you may happily stick with the smaller libc5 and thread
the night away with your eating irons.

But what about 'the whole caboodle'? What launches your apps in the first
place, and checks up on the validity of the system as a whole? Well, here you
write your own script (or executable or whatever), which to all intents and
purposes constitutes your 'shell'. You should have realised by now that if you
know how to exploit the facilities of UNIX (which are far more extensive than
what I have sneak-previewed here), such a shell will be simple yet powerful
and elegant, and what you write yourself will be far more powerful and
**easier** to create than some generic mechanism that must be programmed and
set by a plethora of options.

And what starts your shell? Well, when the Linux kernel has booted, it invokes
/sbin/init. You **could** replace this with 'your_shell', but I would not
recommend it, as there are numerous actions to be carried out. Anyway, init
sets up the system by means of the scripts contained in /etc/rc.d (which also
contains the scripts init uses to shut down the system). I am not going to go
into all the gory details of runlevels etc., save to say that as init moves up
to the highest level (multi-user), it also launches all the other autonomous
subsystems (or daemons) such as the printer queues and the web server. This
could be a good place to start the script that implements your shell.

On the other hand, when we did the multi-media information kiosk we did not do
this. We had two subsystems, one which furnished the client side, and one
which dealt with updating the system from the server. Our 'shells' were simple
scripts which simply ensured that the apps and daemons were up and running,
re-started them if they were not, then exited. These scripts were invoked
periodically by crontab, so the apps were in fact started by crontab the first
time it launched our scripts.
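
Our scripts amounted to little more than this sketch (the names are made up,
and the 'daemon' here is a plain sleep so the example is self-contained):
check a pidfile, and if the process is gone, relaunch it and record the new
pid.

```shell
#!/bin/sh
# Sketch of a cron-invoked keep-alive script. 'my_daemon' is a
# hypothetical stand-in (a plain sleep); the pidfile path is made up.
DIR=$(mktemp -d)
PIDFILE="$DIR/my_daemon.pid"

if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "my_daemon already running"
else
    echo "starting my_daemon"
    sleep 5 &                    # stand-in for the real daemon
    echo $! > "$PIDFILE"
fi
```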

Now here is a caveat. Crontab's minimum resolution is one minute. This was not
a problem for us, as our kiosks normally ran round the clock, but in the case
of many embedded apps (e.g. an in-car MPEG player), one minute is a long time.
Embedded systems need a finer-grained crontab with, say, 1-second resolution.
How about a time_t based crontab?
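
Such a thing is itself only a few lines of shell. A sketch (the job table is
omitted; the loop just logs ticks, three of them for the demo) comparing
against date +%s, which is exactly a time_t:

```shell
#!/bin/sh
# Sketch of a 1-second-resolution scheduler loop. A real version would
# read a table of (next_time_t, command) pairs; here we just log ticks.
DIR=$(mktemp -d)
TICKS=0
while [ "$TICKS" -lt 3 ]; do
    NOW=$(date +%s)              # seconds since the epoch, i.e. a time_t
    echo "tick at $NOW" >> "$DIR/log"
    TICKS=$((TICKS + 1))
    sleep 1
done
cat "$DIR/log"
```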

Now for something completely different: real time. Most embedded systems do
not need real-time facilities, but they frequently must be real-fast. Let's
define the difference. Real-time means that a function may be invoked at a
specific time interval within a given tolerance. It need not be fast. Although
a typical RTOS will be required to invoke a function, say, 10 ms from now,
give or take 100 us, a requirement to invoke a function exactly two years from
now, within 5 seconds of tolerance, would also be a real-time system.
Basically, the time at which the function is launched is deterministic. Note
that although my computer may be mega fast, and doing virtually nothing for
that two-year period, if I get a rush of activity at the two-year timeout I
may still not be able to respect that tolerance unless I have specific
mechanisms to ensure it.

Real-fast, on the other hand, means that the system is sufficiently fast to
respond to requirements in an adequate time. Let's look at the in-car MPEG
player. The only time-critical factor is sending the audio data to the codec,
a job handled by the audio device driver under interrupt control. As far as
the rest is concerned, it must simply be fast enough to respond quickly to
user input and to decompress the audio data fast enough to keep the output
buffer adequately stocked.

The fact of the matter is that the non-real-time scheduler normally used in
OS's is designed to be good at switching tasks at optimum periods rather than
fixed periods. Thus, although it cannot assure the exact period between
invocations of a task, it is generally able to achieve a higher throughput,
and is to be preferred unless real-time is absolutely necessary.

Note that most real-time requirements are related to real-world interfaces,
and thus should be handled by a device driver that synchronises the system to
the real world as well as interfacing to it (as is the case in the MPEG
player). Of course many embedded systems require a device driver anyway, to
access special I/O. Experienced Windows users tend to be alarmed at the
prospect of writing device drivers, and with good reason. Take heart: in Linux
they are very simple. A basic device driver implemented in the form of a
loadable kernel module may be written with a few lines of code, and then
compiled and inserted into the running kernel with no requirement to even look
at the kernel source, let alone compile it, or even reset the computer.
Obviously a driver that does anything serious will require hooks into the
kernel, but it is nonetheless simple. Recommended book: Linux Device Drivers
by Alessandro Rubini (O'Reilly).

Actually, there is no escape from device drivers. If you absolutely must have
hard, fixed scheduling and the like, then you will require one of the two
real-time Linux packages. These allow real-time scheduling of tasks, but have
only limited visibility to userland, so you will need a kernel module to
interface these routines to your other apps.

Graphics. Look long and hard at X: despite its ugly looks it is very
sophisticated, and extremely modular. It may be invoked with no window
manager, and as such represents a full-screen graphics whiteboard where
objects may be created and manipulated by multiple programs simultaneously,
from anywhere within a TCP/IP network.

There is a good deal of overhead, however. If you want something simple and
light, look at the SVGA library, which is a bit like the old Borland 'BGI'
interface under DOS, but far less pedantic. It is also a bit more up to date,
with libraries that can handle things such as GIFs and JPEGs.

And BTW, UNIX/Linux has really fine-grained control over what users may or may
not do, what they may or may not modify, etc.

For example, you may define which apps a particular user may or may not use.
Applications generally have multiple configuration files, and will look first
for a config file in the user's home directory; if that is not available, they
will apply a system-wide default. This means each user may have his own
personal configuration, but that does not mean he may modify it. If the
administrator so desires, he can put personal configuration files in users'
home directories with permissions such that only he may modify them, or he can
have them modifiable by a sub-administrator, without giving that
sub-administrator the power to modify other elements such as the network
configuration. Every file, every device, every resource may be individually
defined as being readable, writable or executable by the owner, the group, or
'anyone', irrespective of its location within the hierarchy and the ownership
of the directory.
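
For instance (the file name is made up, and the chown line is shown commented
out since it needs root): a per-user config that everyone can read but only
its owner can modify.

```shell
#!/bin/sh
# Sketch: a personal config file readable by all but writable only by
# its owner. The file name is hypothetical.
DIR=$(mktemp -d)
CONF="$DIR/.my_app.rc"
echo "volume 7" > "$CONF"
chmod 644 "$CONF"            # owner: read/write; group and others: read
# chown admin "$CONF"        # (as root) now the user cannot modify it
stat -c %a "$CONF"           # shows 644
```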

Also, processes have personal and group ownerships. A script with
administrator access may invoke an app with only user access rights, for
example. The possibilities are endless, and if anything the access control and
user environment control facilities available under UNIX are **overkill** for
a typical embedded application.

