Good Bye

2015-11-09 Thread Harald Becker

Hi All,

everything comes to an end, and it looks like my active development has 
reached its end.


After a long and hard period of deliberation I am going to stop all of my 
development work, not only on Busybox but on all projects. The reasons 
for this are complicated; they have accumulated over time and are not due 
only to the recent discussions I had here.


So the last thing I want to say is thank you to everyone who responded 
and gave input, in whatever way.


This mail address will be unsubscribed from the list soon, but will stay 
valid until the end of this year. If you would like to stay in contact, 
send me a message and I will respond from a different address that will 
remain valid.


Good Bye
Harald

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: New power-saving computers (Neue stromsparende Rechner)

2015-05-23 Thread Harald Becker
Posting the previous message to this list was unintentional ... sorry 
for the noise!


--
Harald


New power-saving computers (Neue stromsparende Rechner)

2015-05-23 Thread Harald Becker

Dear Mr. Kawohl,

if you are thinking about replacing older computers with power-saving 
models, but regard those very small machines as suspect or too limited, 
you might want to keep an eye on the offers currently appearing for more 
modern motherboards:


Such as this one (there are similar offers as well):
http://www.heise.de/newsticker/meldung/Braswell-Motherboards-mit-passiv-gekuehlter-CPU-von-Asus-2663660.html

A board like this should fit into a conventional ATX case, so the power 
supply and hard disks can be kept, with significantly reduced power 
consumption and lower noise.


Only where high-performance gaming and graphics applications are needed 
would I advise against machines of this kind.


Kind regards
Harald Becker

Re: prompt display when cd to dir symbolic link

2015-04-02 Thread Harald Becker

On 02.04.2015 11:35, Ron Yorston wrote:

Harald Becker wrote:
Another similar case I've encountered is with the home directory.
The shell respects HOME in performing tilde expansion but lineedit
doesn't have access to the shell variable so uses getpwuid when
trying to display '~' in the prompt.


ACK

It can be fixed the same way as my suggestion, but needs one more 
parameter to read_line_input():


... in ash.c:

read_line_input(..., lookupvar("HOME"), ...)

... and read_line_input() takes the matching parameter and sets:

read_line_input(..., const char *homedir, ...)

home_pwd_buf = homedir ? strdup(homedir) : nullstr;


This should fix your concern, too.

--
Harald



Re: prompt display when cd to dir symbolic link

2015-04-02 Thread Harald Becker

On 02.04.2015 10:34, Harald Becker wrote:

I don't know how to fix this.


Suggesting:

adding an extra parameter to read_line_input()

read_line_input(..., char *cwd_buf, ...)

propagating this to parse_and_put_prompt()
(removing the local variable of same name)

parse_and_put_prompt(..., char *cwd_buf)


Then calling in ash:

read_line_input(..., cdopt() ? physdir : curdir, ...)

... and at the other call sites, giving the same behavior as now:

read_line_input(..., NULL, ...)


That should do the job, as expected.

--
Harald




Re: prompt display when cd to dir symbolic link

2015-04-02 Thread Harald Becker

On 02.04.2015 10:17, Ron Yorston wrote:

Harald Becker wrote:

... but I can't find the location in BB ash.c, where this \w is replaced
with the directory name :( ... (any hint?)


libbb/lineedit.c


Thanks, I stumbled on this at the same moment your message came in ... 
and the separation into a different file is the problem:


As the code for this prompt display has been moved into libbb, it no 
longer has access to the value of ash's curdir variable and instead uses 
getcwd() to get the value to display. Originally this used the same logic 
for \w (and \W) as the pwd command, accessing the variables curdir or 
physdir (depending on the shell flag).


I don't know how to fix this.

IMO this is a bug, as most users expect \w and \W to honor the shell 
flag settings and display either the physical or the logical directory. 
That was the original concern.


--
Harald


Re: prompt display when cd to dir symbolic link

2015-04-02 Thread Harald Becker

On 02.04.2015 09:28, Denys Vlasenko wrote:

Yes, bash shows that, but the *real* getcwd system call
returns "/hdd1/test" in this case.

You *can't* have a symlink as a current directory.
It's just not possible in Linux.


So let me just modify santosh's question:

Why can't we use and display the same value for \w in the prompt string 
as the pwd command does? IMO it should do exactly that.


... but I can't find the location in BB ash.c, where this \w is replaced 
with the directory name :( ... (any hint?)


--
Harald



Re: [RFC] Proof-of-concept for netlink listener for mdev -i

2015-03-20 Thread Harald Becker

On 20.03.2015 21:28, V.Krishn wrote:

The concept of a simple netlink reader daemon seems nice.
But would seem extra to having another reader that reads this reader.
The concept of whole /sys tree is being re-done, even if its just not
visible(memory). I could add/suggest few changes.
Please consider concept/flow below as theoretical.


1) Your question is about the principle of putting everything together in 
one logical process versus splitting the operation into different threads 
(processes). Modern multi-core processors will most likely benefit from 
splitting, as the parts may run on different CPU cores, but single-core 
machines gain as well, since the pipe acts as a buffer and allows more 
parallel operation.


2) The pipe is a pure memory operation which never accesses any physical 
disk, so we do memory writes, even faster than writing anything into 
/sys/...


3) The reader is started on demand; that is, the process is fired up only 
when events arrive, and it exits when idle. So the bigger part drops its 
resources, giving more back to the system for other work. Only the small 
netlink reader part stays in memory.


4) Putting small, functionally simple operations together in a pipe is 
the Unix way. netlink is a long-lived daemon, but concentrating on its 
single job (read from netlink, reformat the messages into textual form, 
write them to stdout) makes it more compatible with the Unix philosophy 
(IMO). And it makes this piece of software compatible with other textual 
Unix tools, which can be an extra benefit when you want to do neat 
things like displaying all incoming event messages on a console.


--
Harald



Re: RFD: One-shot table driven system setup

2015-03-20 Thread Harald Becker

I apologize, wrong translation, ...

On 20.03.2015 15:03, Harald Becker wrote:

Anyway shall confsh *not break* the KISS principal, Busybox is
following, *not wasting* resources otherwise useless, and *not build*
any specific system setup operation into the binary code.


s/otherwise useless/unnecessarily/



RFD: One-shot table driven system setup

2015-03-20 Thread Harald Becker

Hi !

*Intention:*

My original intention was (besides getting netlink operation) to add some 
extensions to mdev, to provide the ability to do a simple table-driven 
one-shot startup of the device file system.

The same extensions would also allow bringing up the other virtual file 
systems at no extra cost, but, as Laurent correctly stated, this would 
mix functionality into mdev that does not logically belong to the device 
file system.

So the decision was to split off the intended initial function (xdev -i) 
and build the table-driven startup feature into its own applet (confsh), 
independent of whatever device file system management is used.

This also allows extending the intended functionality, without mixing 
logical functionality, turning this into a hopefully general-purpose 
method for easy table-driven setup operations.



*Primary goal:*

The primary idea of confsh is simplification of the system setup, not 
replacing one kind of complexity with another. The novice admin shall be 
able to set up a simple system in an easy manner, and the expert shall 
gain from simplicity for the default things done a thousand times, while 
still having the ability to insert hooks (calls to helper scripts) for 
any special kind of operation he likes to do.


For this approach the focus lies on describing *what* is required, has to 
be done, or has to be set up in *your* system (a table of descriptive 
lines), instead of the usual shell script approach, which describes *how* 
the operation is done (all the commands to achieve the result).


In any case, confsh shall *not break* the KISS principle Busybox follows, 
*not waste* resources unnecessarily, and *not build* any specific system 
setup operation into the binary code.



*Planned operation:*

The planned operation is a shell pre-processor, which reads a specified 
configuration file for a table-driven system setup operation, creates a 
shell script, and passes (pipes) it to /bin/sh (or maybe better $SHELL, 
with a default of /bin/busybox ash?). This approach allows table-driven 
operation for all kinds of environment setup, either system or 
application related.


The lead-in code sent to the shell instructs it to read a configurable 
script file (set up by the system maintainer; an example is provided in 
the distribution) which defines the required shell functions for the 
operation of confsh. The generated script code never performs any 
specific system operation itself; it pre-parses and checks the conf file 
format, then creates shell-friendly function calls with a constant order 
of arguments.


e.g. generating shell script code for a mount line from the table format
(the brackets mark optional parts):

MOUNTPOINT [OWNER[:GROUP] [MODE]] @ FSTYPE[:DEVICE] [OPTION] [#PASS]

=> create this shell script line (passed to shell via pipe):

cf_mount MOUNTPOINT OWNER GROUP MODE FSTYPE DEVICE OPTION [PASS]


where cf_mount() is defined in confsh-script.sh:

# cf_mount - create mount point and mount given file system
#   $1 = MOUNTPOINT
#   $2 = OWNER or ''
#   $3 = GROUP or ''
#   $4 = MODE or ''
#   $5 = FSTYPE
#   $6 = DEVICE or ''
#   $7 = OPTIONS or ''
#   $8 = fsck PASS number or absent
#   (giving a pass number includes this entry in the generated fstab)
cf_mount()
{
  # a sketch of the required operations (details up to the maintainer):
  mkdir -p "$1"                                  # create the mount point
  [ -n "$2" ] && chown "$2${3:+:$3}" "$1"        # set owner[:group]
  [ -n "$4" ] && chmod "$4" "$1"                 # set mode
  mount -t "$5" ${7:+-o "$7"} "${6:-none}" "$1"  # mount the file system
  if [ -n "$8" ]; then
    CF_TABLE_FSTAB="${CF_TABLE_FSTAB}${6:-virtual} $1 $5 ${7:-defaults} 0 $8
"
  fi
}


*More complete example:*

/etc/devfs-setup.conf:
(this is what the system maintainer writes)

---cut-here---
#!/bin/busybox confsh
# (confsh may be used as script interpreter)
# (or called: /bin/busybox confsh CONF_FILE_NAME [ARGS])

# ... here may go other setup stuff

# setup the initial device file system:

# (create empty mount point with owner, group, mode)
# (explicit creation of mount points allow to setup any permission)
/dev  root:hotplug  0751 @

# (mount the file system with the given options)
# (if the mount point does not exist, try auto-create with root:root 0751)
/dev  root:root  0755  @  tmpfs relatime,size=10240k

# (create empty mount points in the device file system)
/dev/pts root:root  0751 @
/dev/shm root:root  0751 @
/dev/mqueue  root:root  0751 @

# (mount the virtual file systems in /dev)
/dev/pts root:root0755  @  devpts  #0
/dev/shm root:root1777  @  tmpfs   #0
/dev/mqueue  root:mqueue  1777  @  mqueue  #0

# create devfstab table
write fstab /var/run/config.d/devfstab root:admins 0644

# ... more stuff to setup
---cut-here---


=> shall create a shell script that invokes the desired operations and 
also writes the file /var/run/config.d/devfstab (with owner root:admins, 
mode 0644):


(this file is created by the write fstab command):

---cut-here---
# *** auto generated file, don't edit, for changes see ... ***
virtual /dev/pts devpts defaults 0 0
virtual /dev/shm tmpfs defaults 0 0
virtual /dev/mqueue mqueue defaults 0 0
---cut-here---

Note: The /dev virtual file system is not included in the devfstab 

Re: RFD: Rework/extending functionality of mdev

2015-03-18 Thread Harald Becker

Hi Laurent !


  Note that events can still be lost, because the pipe can be broken
while you're reading a message from the netlink, before you come
back to the selector; so the message you just read cannot be sent.
But that is a risk you have to take everytime you perform buffered IO,
there's no way around it.


To make clear which case we are talking about, it is:

- spawn conf parser / device operation process
- exit with failure
- re-spawn conf parser / device operation process
- exit with failure
- re-spawn conf parser / device operation process
- exit with failure
- ...
- detect failure loop
- spawn failure script
- exit with failure or not zero status
- giving up, close read end of pipe
- let fifosvd die

@Laurent: What would you do in that case?

Endless respawn? -> shrek!

--
Harald


Re: RFD: Rework/extending functionality of mdev

2015-03-18 Thread Harald Becker

Hi Laurent !

> On 18/03/2015 18:08, Didier Kryn wrote:

No, you must write to the pipe to detect it is broken. And you won't
try to write before you've got an event from the netlink. This event
will be lost.


On 18.03.2015 18:41, Laurent Bercot wrote:

  I skim over that discussion (because I don't agree with the design)


Why?

Did you note my last two alternatives, unexpectedly both named #3?
... but specifically the last one, "Netlink the Unix way"?

- uses a private pipe for netlink and a named pipe for the hotplug helper
  (with maximum code sharing)

- should most likely follow the flow of operation you suggested
  (as far as I understood you)

- except that I split off the pipe watcher / on-demand startup code for 
the conf parser / device operation into its own thread (process), for 
general code reusability as a separate applet for on-demand pipe consumer 
startup purposes

(you had that function as integral part of your netlink reader)

- and I'm currently going to split off that one-shot "xdev init" feature 
from xdev, creating a separate applet / command for it, as you suggested
(extending the functionality for even more general usage, as suggested by 
Isaac, independent of the device management, and maybe modifiable in its 
operation by changing functions in a shell script)


So why do you still doubt the design? ... because I moved some code into 
its own (small) helper thread?




I can't make any substantial comments, but here's a nitpick: if you
use an asynchronous event loop, your selector triggers - POLLHUP for
poll(), not sure if it's writability or exception for select()- as
soon as a pipe is broken.


This is what I expected, but the problem is that this question came up, 
and I can't find the location where this behavior is documented.



  Note that events can still be lost, because the pipe can be broken
while you're reading a message from the netlink, before you come
back to the selector; so the message you just read cannot be sent.
But that is a risk you have to take everytime you perform buffered IO,
there's no way around it.


Ok, what would you do then? Unbuffered I/O on the pipe, and then what?

... if that single additionally dropped message matters, besides the 
others not yet read from the netlink buffer (which are lost on close), 
then we shall indeed use unbuffered I/O on the pipe, and only read a 
message when there is room for one more message in the pipe:


  set non blocking I/O on stdout
  establish netlink socket
loop:
  poll for write on stdout possible, until available
  (may set an upper timeout limit, failure on timeout)
  poll for netlink read and still for write on stdout
  if write ability drops
we are in serious trouble, failure
  if netlink read possible
gather message from netlink
write message to stdout (should never block)
on EAGAIN, EINTR: do 3 write retries, then failure

... does that fit better? I don't think it makes a big difference, but I 
can live with the slightly bigger code.


My problem is not the detection of the failing pipe write, but the 
reaction to it. When that happens, the downstream side of the pipe most 
likely needs more than just a restart. That is, it should only happen on 
serious failure in the conf file or the device operations (-> manual 
action required). So I expect more loss of event messages than just that 
single message you were grumbling about. Hence, on hotplug restart we 
need to re-trigger the plug events anyway!


--
Harald



Re: RFD: Rework/extending functionality of mdev

2015-03-18 Thread Harald Becker

On 18.03.2015 18:08, Didier Kryn wrote:

Either that piece of code does it's job, or it fails and dies. When
 fifosvd dies, the read end of the pipe is closed (by kernel),
except there is still a handler process (which shall process
remaining events from the pipe). As soon as there is neither a
fifosvd, nor a handler process, the pipe is shut down by the
kernel, and nldev get error when writing to the pipe, so it knows
the other end died.


No, you must write to the pipe to detect it is broken. And you won't
try to write before you've got an event from the netlink. This event
will be lost.


Ok, that's true, but even if we catch SIGCHLD, the possibility of a race
condition (event already read from netlink but not yet written to the
pipe) is very high. So you throw in code for catching part of a
situation which is problematic anyway.

When netlink dies, the socket is closed, so the kernel throws away
further event messages until a new netlink socket is opened (which takes
some time at startup). Do you think it makes much difference if we
lose one more event?

I think when netlink has to be restarted, it's best to re-trigger the
plug events (coldplug). The handler shall test whether the device node
for an operation already exists and matches the current event; then it
may silently ignore those duplicate plug events (not redoing e.g. plug
scripts) ... but it may catch new event messages not matching an existing
device entry, and take any failure action (not currently in mdev).


You get the information immediately from SIGCLD. You get it too late
from the pipe, and you loose at least one event for sure, a whole
burst if there is.


On the technical principle you are right: we lose an event message
here, but ...

... that is one event due to the pipe failure, which means our hotplug
system has serious problems; then we die, and how many events do we lose
until we have fixed the problem and restarted the hotplug system?

Does that one event make a big difference? For what? Extra code which
doesn't fix the principal problem or allow recovery?

... and in addition, most likely fifosvd won't die before netlink has
closed the write end of the pipe. Remember, fifosvd does the failure
management for the handler process, restarting it when required and only
dying when there are serious problems, which most likely need admin
(that means manual) intervention.

Do you think it matters losing one more event?


This is fine as long as the netlink reader keeps control on its exit,
not if it's killed.


And when netlink is killed, then it is the responsibility of the higher
instance to bring the required stuff up again.


This netlink reader you describe is not the general tool we were
considering up to now, the simple data funnel.


My pseudocode described the principal operation and data flow, not
every gory detail of failure management. So the netlink described here
is what I last called "netlink the Unix way".


If the idea is to integrate such peculiarities as execing a script,
then it is not the general tool and why not integrate as well the
supervision of mdev-i instead of needing fifosvd. The reason for
fifosvd was AFAIU to associate general tools, nldev and mdev-i.


??? I don't know if I fully understand you here. And why would exec'ing a 
failure script keep netlink from being a general tool? Consider:


nldev -e /path/to/failure/script

with maybe a default of /sbin/nldev-fail.

... and that single exec, with a fixed and small number of arguments, is 
very small compared to complete failure management (supervision) for the 
handler process.


Putting this into the same code would make the netlink reader more 
complex than otherwise required, and in addition you lose the possible 
parallelism of multithreading on modern processors.




On the other hand, exiting on SIGCLD (after wait()ing the child) is
neither a major change to nldev, nor one which would preclude its use
in any other case.


The problem is the complexity which arises from this. nldev does not 
wait for any process, but it would need to, and grab the child status, 
then see if it is the process id of fifosvd ... oops, wait, where do we 
get that? ... Ok, an extra parameter to pass, so we know a bit earlier 
that our hotplug system is dying due to serious problems? Which usually 
won't vanish by simply restarting, without some kind of intervention to 
fix the problem which let the system die.


The only other situation is killing the netlink system, but this runs 
slightly differently. The kill signal usually goes to the topmost 
process in the chain. That is nldev, which will die and close socket and 
pipe, letting the handler finish the messages still pending in the pipe, 
then exit gracefully. This shall be detected by fifosvd and lets this 
helper vanish as well.


And if someone really kills fifosvd, just for fun (which needs root 
privileges), we are in trouble anyway. Does one more lost hotplug event 
matter in that case?



OK, let's assume fifosvd polls the pipe. As long as poll() blocks, it
means nlde

Re: Add a user/password interface for a Telnet and ftp connect

2015-03-18 Thread Harald Becker

On 18.03.2015 15:50, Alexis Guilloteau wrote:

After looking at the help of the ftpd function in busybox i know that

the main function is to create an anonymous ftp server so i was not
surprised with the lack but do you think there would be a solution for
that ?


Busybox ftpd is an anonymous ftpd without access restrictions. I suggest 
putting the files to be served in a separate directory, using a chroot, 
and running ftpd as a low-privileged user (not as root), so ftp access 
cannot reach system-related files.


... else you need a full ftpd package (not Busybox ftpd).


And pretty much the same thing for telnetd.


If login to telnetd is done the usual way, it should use /bin/login, 
which will ask for user name and password, but beware: all this 
information is sent as clear (readable) text over the net.



Right now the only user on the board is the root with no password.


Maybe that's your problem. Have you set up your password system 
correctly: /etc/passwd, /etc/shadow, /etc/group?


... and based on the information in your mail: is your inetd running in 
the right directory? Does it have access to the other commands 
(especially if your BB is not installed at /bin/busybox)?




Re: RFD: Rework/extending functionality of mdev

2015-03-18 Thread Harald Becker

On 18.03.2015 10:42, Didier Kryn wrote:

Long lived daemons should have both startup methods, selectable by a
parameter, so you make nobodies work more difficult than required.


 OK, I think you are right, because it is a little more than a fork:
you want to detach from the controlling terminal and start a new
session. I agree that it is a pain to do it by hand and it is OK if
there is a command-line switch to avoid all of it.



But there must be this switch.


Ack!



No, restart is not required, as netlink dies, when fifosvd dies (or
later on when the handler dies), the supervisor watching netlink may
then fire up a new netlink reader (possibly after failure management),
where this startup is always done through a central startup command
(e.g. xdev).

The supervisor, never starts up the netlink reader directly, but
watches the process it starts up for xdev. xdev does it's initial
action (startup code) then chains (exec) to the netlink reader. This
may look ugly and unnecessary complicated at the first glance, but is
a known practical trick to drop some memory resources not needed by
the long lived daemon, but required by the start up code. For the
supervisor instance this looks like a single process, it has started
and it may watch until it exits. So from that view it looks, as if
netlink has created the pipe and started the fifosvd, but in fact this
is done by the startup code (difference between flow of operation and
technical placing the code).


 I didn't notice this trick in your description. It is making more
and more sense :-).


I left it out to avoid unnecessary complication; I wanted to focus on 
the netlink / pipe operation.




 Now look, since nldev (lest's call it by its name) is execed by
xdev, it remains the parent of fifosvd, and therefore it shall receive
the SIGCLD if fifosvd dies. This is the best way for nldev to watch
fifosvd. Otherwise it should wait until it receives an event from the
netlink and tries to write it to the pipe, hence loosing the event and
the possible burst following it. nldev must die on SIGCLD (after piping
available events, though); this is the only "supervision" logic it must
implement, but I think it is critical. And it is the same if nldev is
launched with a long-lived mdev-i without a fifosvd.


The netlink reader (nldev) does not need to watch fifosvd explicitly via 
SIGCHLD.


Either that piece of code does its job, or it fails and dies. When 
fifosvd dies, the read end of the pipe is closed (by the kernel), unless 
there is still a handler process (which shall process the remaining 
events from the pipe). As soon as there is neither a fifosvd nor a 
handler process, the pipe is shut down by the kernel, and nldev gets an 
error when writing to the pipe, so it knows the other end died.


You won't gain much benefit from watching SIGCHLD and reading the 
process status. It will only tell you that the fifosvd process is either 
still running or has died (failed). You get the same information from 
the write to the pipe: when the read end has died, you get EPIPE.


Limiting the time nldev tries to write to the pipe would additionally 
allow detecting stuck operation of fifosvd / the handler (which SIGCHLD 
watching won't give you) ... but (I discussed this with Laurent in 
parallel) the question is how to react when the write to the pipe is 
stuck (but there is no failure)? We can't do much here and are in 
trouble either way, but Laurent made the argument: the netlink socket 
also contains a buffer which may hold additional events, so we do not 
lose them in case processing continues normally. When the kernel buffer 
fills up to its limit, let the kernel react to the problem.


... otherwise you are right: nldev's job is to detect failure of the 
rest of the chain (that is, to supervise it) and to react to it. The 
details of the actions taken in this case need to be, and can be, 
discussed (and maybe later adapted) without much impact on the rest of 
the operation.


This clearly means I'm open to suggestions on what kind of failure 
handling shall be done. Every action that improves the reaction and 
benefits the major purpose of the netlink reader, without blowing it up 
needlessly, is of interest (keep in mind: a long-lived daemon, trying to 
stay simple and small).


My suggestion is: let the netlink reader detect relevant errors and 
exec (not spawn) a script of a given name when there are failures. This 
is small, and it gives the invoked script full control over the failure 
management (no fixed functionality in a binary). When done, it can 
either die, letting a higher instance do the restart job, or exec back 
and restart the hotplug system (maybe with a different mechanism). When 
the script does not exist, the default action is that the netlink reader 
process exits unsuccessfully, giving a higher instance a failure 
indication and the possibility to react to it.




Not detect it? Are you sure you closed all open file descriptors for the 
write end (a common caveat)

Re: RFD: Rework/extending functionality of mdev

2015-03-17 Thread Harald Becker

Hi Didier,

On 17.03.2015 19:00, Didier Kryn wrote:

 The common practice of daemons to put themselves in background and
orphan themself is starting to become disaproved by many designers. I
tend to share this opinion. If such a behaviour is desired, it may well
be done in the script (nohup), and the "go to background" feature be
completely removed from the daemon proper. The idea behind this change
is to allow for supervisor not being process #1.


Ack, for the case where the daemon does not allow being used with an 
external supervisor.


Invoking a daemon from scripts is no problem, but did you ever come into 
a situation where you needed to maintain a system by hand? Therefore I 
personally vote for having the simple command auto-background the 
daemon, and allowing it to run under a supervisor via a simple extra 
parameter (e.g. "-n"). That is usually no problem, as the supervisor 
needs some kind of configuration anyway, where you should be able to add 
the arguments the daemon gets started with. So you have to enter that 
parameter just once for your supervisor usage, but you save extra 
parameters on manual invocation.


Long-lived daemons should support both startup methods, selectable by a 
parameter, so you make nobody's work more difficult than required.


Dropping the auto-background feature would mean saving a single function 
call to fork() and maybe an exit(). That results in a saving of roughly 
10 to 40 bytes in the binary (typical x86 32-bit). Too high a cost to 
allow both usages?




 Could you clarify, please: do you mean implementing in netlink the
logic to restart fifosvd? Previously you described it as just a data
funnel.


No, restart is not required: as netlink dies when fifosvd dies (or 
later, when the handler dies), the supervisor watching netlink may then 
fire up a new netlink reader (possibly after failure management), where 
this startup is always done through a central startup command 
(e.g. xdev).


The supervisor never starts the netlink reader directly, but watches the 
process it starts for xdev. xdev does its initial action (startup code), 
then chains (execs) to the netlink reader. This may look ugly and 
unnecessarily complicated at first glance, but it is a known practical 
trick to drop memory resources needed by the startup code but not by the 
long-lived daemon. To the supervisor this looks like a single process, 
which it started and may watch until it exits. So from that view it 
looks as if netlink created the pipe and started fifosvd, but in fact 
this is done by the startup code (the difference between the flow of 
operation and the technical placement of the code).




 Well, this is what I thought, but the manual says an empty end
causes end-of file, not mentionning the pipe being empty.


End-of-file always implies the pipe being empty. Consider a pipe which 
still has some data in it when the writer closes the write end. If the 
reader received EOF before all data had been consumed, it would 
lose data. That would be absolutely unreliable. Therefore EOF 
is only delivered at the read end once the pipe is empty.
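This drain-before-EOF behavior can be demonstrated in a few lines (a sketch using an anonymous pipe):

```python
import os

r, w = os.pipe()
os.write(w, b"queued event\n")   # data still sitting in the pipe ...
os.close(w)                      # ... when the writer closes its end

# the reader first drains the buffered data, and only then sees EOF (b"")
assert os.read(r, 64) == b"queued event\n"
assert os.read(r, 64) == b""
os.close(r)
print("EOF was delivered only after the pipe was empty")
```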



*Does anybody know the exact specification of poll behavior on this
case?*

 My experience, with select() which is roughly the same, is that it
does not detect EOF. And, since fifosvd must not read the pipe, how does
it detect that it is broken?


Not detected? Are you sure you closed all open file descriptors for the 
write end (a common caveat)? I have never been hit by such a case, 
except when someone forgot to close all file descriptors of the write end.
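On Linux the closed write end is in fact visible to poll() without reading: once the last write descriptor is gone, the read end is flagged POLLHUP. A sketch (Linux-specific behavior assumed; POSIX leaves more latitude here):

```python
import os, select

r, w = os.pipe()
p = select.poll()
p.register(r, select.POLLIN)

assert p.poll(0) == []            # writer still open, nothing to report

os.close(w)                       # the last write descriptor goes away
fd, events = p.poll(1000)[0]
assert events & select.POLLHUP    # hang-up reported without any read()
os.close(r)
print("broken pipe detected via POLLHUP")
```

POLLHUP is reported even though only POLLIN was requested; a fifosvd-style watcher can therefore notice the writer's death without consuming data.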



No, they should still be processed by the handler, which then stumbles
on the eof, when all event messages are read.

 See above. It would make sense but the manual does not tell that. I
bet the manual is wrong, in this case.


It's the working practice of pipes in the Unix world; maybe the 
specification of this goes back to the early Unix days in the 1970s.




PS. I inadvertently went out of the list. Just my habit to click
"reply". I leave it up to you to go back to the list.


I tried to set a CC to the list, but got a response that the message had 
been put on hold, so I am trying it the other way around.


--
Harald


___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: RFD: Rework/extending functionality of mdev

2015-03-16 Thread Harald Becker

On 16.03.2015 22:25, Didier Kryn wrote:

 I had not caught the point that you wanted it a general purpose
tool - sorry.


It's a lengthy and complex discussion, so it is easy to miss 
something; no trouble. Please ask when there are questions.




The netlink reader is a long-lived daemon. It shall not exit, and
shall handle failures internally where possible, but if it fails, pure
restarting, without other intervening action to control / correct the
failure reason, doesn't look like a good choice. So it needs a higher
instance to handle this, normally init or a different system
supervisor program (e.g. an inittab respawn action).


 OK, then this higher instance cannot be an ordinary supervisor,
because it must watch two intimely related processes and re-spawn both
if one of them dies. Hence, it is yet another application. This is why I
thought fifosvd was a good candidate to do that. Also because it already
contains some supervision logic to manage the operation handler.


Supervision is a system-dependent function, which differs with the 
philosophy by which the init process works and handles such things. So 
before we talk about which supervision, we need to know which 
type of supervision you are using, that is mainly which init system you use.




 So, if fifosvd is a general usable tool, it must come with a
companion general usable tool, let's call it fifosvdsvd, designed to
monitor pairs of pipe-connected daemons.


A pipe is a unidirectional thing. Having a program that sits at the 
read end of a pipe watch the other side is a logical mixing of 
functions, but ...



Whereas the device operation handler (including the conf parser) is
started on demand, when incoming events require it. The job of
fifosvd is this on-demand pipe handling, including failure management.



 2) fifosvd would never close any end of the pipe because it could
need them to re-spawn any of the other processes. Like this, no need for
a named pipe as long as fifosvd lives.


Did you look at my pseudo code? It does *not* use a named pipe (fifo)
for netlink operation, but a normal private pipe (so pipesvd may fit
its purpose better). Whereas the hotplug helper mechanism won't work
this way, and requires a named pipe (a different setup, achieved by a
slightly different startup).


 Yes, but it cannot work if the two long-lived daemons are
supervised by an ordinary supervisor. Because one end of the pipe is
lost if one of the processes die, and this kind of supervisor will
restart only the one which died.


... you are wrong. When the netlink process dies, the write end of the 
pipe is automatically closed by the kernel. This lets the handler 
process detect end-of-file when waiting for more messages, so 
that process exits. fifosvd then checks the pipe and gets an error 
telling it the pipe has shut down on the write end, so fifosvd does the 
expected thing: it exits too.


Even if that exit is delayed somewhat, it does not matter, as the 
higher instance respawns the hotplug system due to the netlink exit. The 
new pipe is established in parallel, while the old pipe (including its 
processes) vanishes after some small amount of time.


That is, your supervision chain is slightly different:

- netlink is supervised by the higher instance, but itself watches for 
failures on the pipe (in case the read end dies unexpectedly)


- the supervision of the pipe read side is a bit more complex, as we use an 
on-demand handler process, so we have two different cases, depending on 
whether a handler process is currently active or not:


- when no handler process is active, fifosvd detects a pipe failure of 
the write end immediately and just exits. So there is no need for 
supervision; only some resources have to be freed


- when there is an active handler process, that process is supervised by 
fifosvd, but itself checks for EOF on the pipe and exits. Meanwhile 
fifosvd waits for the exit of the handler process and checks the exit 
status (success or any failure). Whichever way fifosvd 
takes, in the end it detects that the write end of the pipe has gone and 
takes its hat.


so supervision chain is:

init -> netlink -> fifosvd -> handler


 At some point you considered that the operation handler might be
either long-lived or dieing on timeout. I suggest that the supervision
logic is identical in the two cases.


That was an alternative in the discussion, to show how I got to my 
solution and picked up a solution Laurent mentioned. So the 
alternatives show the steps of improvement my approach has gone through.


I highly prefer this last one (netlink reader the Unix way). It is the 
version with the most flexibility, and it even addresses the wish to use a 
private pipe and not a named pipe for netlink operation, without adding 
extra overhead for that possibility.


Indeed the alternatives are very similar and do the same principal 
operation, but move some code around a bit, for different purposes, to 
see which impact each alter

Re: RFD: Rework/extending functionality of mdev

2015-03-16 Thread Harald Becker

On 16.03.2015 21:30, Natanael Copa wrote:

Does it exist systems that are so old that they lack hotplug - but at
same time are new enough to have sysfs?


Oh, yes! Mainly embedded world.


I suppose it would make sense for kernels without CONFIG_HOTPLUG but I
would expect such systems use highly customized/specialized tools
rather than general purpose tools.


You are forgetting all those embedded devices which build their 
minimal system around BB, plus some application-specific programs.


Such systems use only BB tools for the system setup, and what if they 
want to use a specialized plug event gatherer (however that may work)?



We have different goals so I will likely not use your tools. I want a
tool for hotplug that avoids dead code.


Then build your own BB version and disable those mechanisms in the config 
that you do not use. Then there would be no dead code ... but you are free to 
use whichever tool you like.




Thanks for your patience and thanks for describing it with few words. I
think I finally got it.


Your decision, but I still think you don't do anything different, 
except moving some of the code around, without gaining any real benefit, 
but at the expense of blocking those who like to set up their system in a 
different way ... ough ... I wouldn't like to use your tools either!


[For completeness: when you want to know why I think your design is not 
the right way, see what Laurent told you about this.]


--
Harald



Re: [RFC] Proof-of-concept for netlink listener for mdev -i

2015-03-16 Thread Harald Becker

On 16.03.2015 19:49, Natanael Copa wrote:

netlink reader | tee /dev/ttyX | device operation handler


This looks good to me.

If you want avoid that this netlink reader in your example is in memory
at all times, then feel free to use my netlink socket activator to
activate it. Otherwise, please ignore it.


Your activator wouldn't be of much benefit, as the netlink reader itself 
tries to stay as small as possible. So your gain may be a single page of 
user space, but you pay for this with an extra process descriptor in the 
kernel. Not really a difference, but it needs extra CPU power to fire up 
the netlink reader. Not to mention that inactive memory may be swapped out 
by the kernel, so your approach may be a special solution for 
resource-constrained purposes.


The netlink reader does:

establish network socket
wait for incoming event
  gather event information
  write textual event message to stdout



Re: [RFC] Proof-of-concept for netlink listener for mdev -i

2015-03-16 Thread Harald Becker

On 16.03.2015 18:13, Harald Becker wrote:

This is the essential of your message, I would give *you* the expected
result, but not to everybody else.


Sorry, bad typo :( -> s /I/it/

This is the essential of your message, *it* would give *you* the 
expected result, but *not* to *everybody* else.




Re: RFD: Rework/extending functionality of mdev

2015-03-16 Thread Harald Becker

On 16.03.2015 10:15, Didier Kryn wrote:


4) netlink reader the Unix way

Why let our netlink reader bother about where it sends the event
messages? Just let it do its netlink reception job and forward the
messages to stdout.

netlink reader:
   set stdout to non blocking I/O
   establish netlink socket
   wait for event messages
 gather event information
 write message to stdout

hotplug startup code:
   create a private pipe
   spawn netlink reader, redirect stdout to write end of pipe
   spawn fifosvd - xdev parser, redirect stdin from read end of pipe
   close both pipe ends (write end open in netlink, read in fifosvd)




 1) why not let fifosvd act as the startup code? It is anyway the
supervisor of processes at both ends of the pipe and in charge of
re-spawning them in case they die. Netlink receiver should be restarted
immediately to not miss events, while event handler should be restarted
on event (see comment below).


This would make the fifosvd specific to the netlink / hotplug function. 
My intention is to get a generally usable tool.


You won't gain anything otherwise, as the startup of the daemon has to 
be done anyway. It does not matter whether you start fifosvd, which then 
forks again to bring itself into the background, and then forks again to 
start the netlink part, or do it slightly differently: start an initial 
code snippet that does the pipe creation and the forks (starting the 
daemons in the background), then steps away. This is the same operation, 
only moved around a bit, but possibly without blocking other usages.


The netlink reader is a long-lived daemon. It shall not exit, and shall 
handle failures internally where possible, but if it fails, pure 
restarting, without other intervening action to control / correct the 
failure reason, doesn't look like a good choice. So it needs a higher 
instance to handle this, normally init or a different system supervisor 
program (e.g. an inittab respawn action).


Whereas the device operation handler (including the conf parser) is 
started on demand, when incoming events require it. The job of fifosvd 
is this on-demand pipe handling, including failure management.




 2) fifosvd would never close any end of the pipe because it could
need them to re-spawn any of the other processes. Like this, no need for
a named pipe as long as fifosvd lives.


Did you look at my pseudo code? It does *not* use a named pipe (fifo) 
for netlink operation, but a normal private pipe (so pipesvd may fit 
its purpose better). Whereas the hotplug helper mechanism won't work this 
way, and requires a named pipe (a different setup, achieved by a slightly 
different startup).




 And I have a suggestion for simplicity: Let be the
timeout/no-timeout feature be a parameter only for the event handler; it
does not need to change the behaviour of fifosvd. I think it is fine to
restart event handler on-event even when it dies unexpectedly.


???

--
Harald



Re: RFD: Rework/extending functionality of mdev

2015-03-16 Thread Harald Becker

On 16.03.2015 09:19, Natanael Copa wrote:

I am only aware of reading kernel events from netlink or kernel hotplug
helper.


Whereas I'm trying to create a modular system, which allows *you* to 
set up netlink usage, *the next* person to set up hotplug helper usage 
(still with the speed improvement, not the old behavior), and ...



What is this new, third plug mechanism? I think that is the piece I am
missing to understand why fifo manager approach would be superior.


... the *ability* to set up a system with a different plug mechanism, 
not yet mentioned, using the same modular system. The system maintainer 
decides, just by putting together the functional blocks.


Think, for simplicity, of doing the event gathering from the sys file 
system with some shell script code, then forwarding the device event 
messages to the rest of the system. Looks ugly? What about older or 
small systems without the hotplug feature?


My intention is *not* to *solve your needs*; it is to give *you* the 
*tools to build* the system with *your intended functionality*, by 
putting together some commands or command parameters, without writing 
code (programs). At the only expense of some (possibly) dead code in the 
binary. Where dead code means dead for you, but used by others who 
want to set up their system in a different way (build your own BB version 
and opt out, if you dislike it).




Re: [RFC] Proof-of-concept for netlink listener for mdev -i

2015-03-16 Thread Harald Becker

On 16.03.2015 08:16, Natanael Copa wrote:

This give me exactly what I am interested in: a hotplug handler that is
fast while keep memory consumption at a minimal during long periods
with no events.


This is the essential of your message, I would give *you* the expected 
result, but not to everybody else.


What about logging and debugging? And I don't 
mean debugging netlink / mdev / xdev code, I mean debugging kernel 
device event messages. Your approach would need a separate new piece of 
code (socket / netlink aware) to handle those, whereas a slightly 
different approach gives more modularity and compatibility with other 
Unix tools, e.g.


netlink reader | tee /dev/ttyX | device operation handler

(Yes, this is a normal pipe, as used in shell scripts, to display all 
incoming event messages on a specific console).




Re: [RFC] Proof-of-concept for netlink listener for mdev -i

2015-03-16 Thread Harald Becker

Hi Timo !

On 16.03.2015 07:23, Timo Teras wrote:

My take on this is that you are designing an abstract server to be
usable by several things. However, in our case it would be used only
by one thing. Thus adding complexity by introducing one more
component than needed. Not everyone wants to use fifo supervisor.

Yes, it might make sense if everyone used it. But who's now trying
to make people do things his way?


The fact is, you will always use its functionality when you use the
netlink reader. It has either to be included in the netlink reader process
(maybe with slight optimization by choosing some defaults), or split
into a separate thread. I did the latter, making it reusable for other
purposes, at no extra cost.



I would be happier with a solution to a specific problem that is simple
to install, than a solution to everything that requires additional changes
everywhere.


Maybe it is confusing, as we talked about the functionality, but
installing / using this is no different. Ok, assuming you don't
think it is a problem to add e.g. an extra option to a command line to
activate netlink operation; otherwise you would stay with the hotplug helper.
Anything wrong with this?

The startup of the small supervisor process is done automatically, when
required by other functions, but it stays a long-lived service until
killed.



Granted the idea is nice. But what happened with inetd? No one uses
it for real servers anymore. It is usable only in special embedded
solutions. I see fifosvc the same.


inetd is a bit special; it tries to replace several other operations of
tcpsvd and udpsvd. fifosvd is just the comparable operation of tcpsvd for
pipes or named pipes, instead of incoming TCP requests. Everybody who uses
pipes or named pipes and wants to start the consumer process only on
demand may benefit from it.



It'd be usable in special embedded solutions, but not in a real
distribution. The idea of it is intriguing, though.


Why not usable in "real distributions"? I'm running on Linux Mint, a 
real distribution I assume, using pipes and named pipes for different 
purposes. Not all, but several of them could benefit from on-demand 
startup of the pipe consumer process.


It is the wider view that makes the sense, not the operation seen by 
looking only at the netlink use case. The handling / usage will be the 
same, except you will see a small extra process in your process list.




I have not followed all the discussion. But does it handle also all
the named pipe details correctly? E.g. when named pipe has been
connected, and then disconnected by reader. All writers will get
errors like EPIPE until you close and recreate it.


The trick is, this won't happen, as one of the primary jobs of that 
piece of code is to hold the pipe open, so it does not vanish when 
others close their copy of the descriptor (buffer space is only assigned 
by the kernel when required). For the read end of the pipe this is handled 
by the supervisor job, independent of private or named pipe. For the 
write end it is only done for named pipes, leaving the handling of the 
pipe to the writer process (intentional usage).


The intention of fifosvd (or maybe call it pipesvd) is to be of 
general usability, for different purposes. If I missed something, then I 
apologize. Let me know what's missing.



Re: [PATCH 2/2] mount: -T OTHERTAB support

2015-03-16 Thread Harald Becker

On 16.03.2015 06:09, Isaac Dunham wrote:

Aren't there any mount options for the filesystem type to set the default
permissions? Ahh, looks like they left such things out :(


-o mode?


Don't have any experience with mqueue; (some) other virtual file systems 
have an option to set the mode.




I see (down below) that what I wanted is more like:
mqueue root:root 1300 %mqueue
mqueue/ root:root 1775


Whichever you like, and makes sense. It is your config file and up to 
you what you put in there. I never intended any restrictions on the 
used options, just some simple format checking, then forwarding to the 
appropriate calls.




Well, honestly, my first thought on a system I've never used would be
-grep through /etc/init.d/* for "mount" and then for "/dev"
-read whatever script that points at
-eventually, if I find /etc/mdev.conf, I'd be wondering what anything
there had to do with mounting filesystems, and would have to read an
explanation of that line two or three times.


Sure, it is a different approach from the usual shell script style, but 
most of those I met who had a deeper look at the principle felt it is 
of very practical use.


... but even if the feature is included in BB, nobody is obliged to 
use it. The old script-style setup of things is still there and possible.




I was thinking of the version in Busybox.


Oops, didn't look at that one, as I already had my table-driven dev 
setup when those commands arrived ... but then the dev file system came 
into existence, and the hotplug stuff.




What I mean is not that people might start using mdev for setting
up /sys as well as /dev, but that once that happens, people may start
thinking it's a good idea for mdev to do anything and everything
that involves filesystem manipulation, and each additional feature
makes it less and less clear that mdev is for converting hotplug
events to device nodes.


This somehow meets the fears and wishes of Laurent, to split the initial 
commands / setup from the device definitions, which could be wise.


Both configs then just share the same syntax, but this is pure 
preference: whether you like to collect information in a central 
place, or vote for more separation into logical groups.


My intention for the mdev.conf extensions is mainly usage on the device 
file system and (with some fuzziness) the virtual file systems proc and 
sys. Maybe someone includes putting a tmpfs on /var/run or /run or 
something similar, but this already depends highly on preference.


Technically, there is no specific limit on what you set up with this 
feature, as there is none when providing the commands in a shell script. 
It's up to you what you do with the available tools.




In the other thread I already suggested an include feature for mdev.conf,
e.g.


Indeed, that's pretty much the reverse of what I meant.

Could the one "master file" you've referred to that controls everything
be the  *source* for mdev.conf, your fstab, and so on?


Do I understand you right, if I assume you're asking whether it would be 
possible to put the functions of fstab, and maybe some other 
system-setup-related information, into a single table-driven file as well?


If you like, sure, most of them, with some slightly extended syntax. The 
details have to be looked at, but I do not see any principal problem 
with this. I'm open to suggestions on this -> and that won't have an impact 
on the hotplug / netlink question or other operation of the device settings.


Even for other commands and functions, the table-driven approach can be 
used without any need to change their operation, as you are able to create 
the appropriate config files for the other functions from the initial 
table-driven setup. On a read-only root file system you may just set 
symlinks in /etc for those config files to a tmpfs on e.g. /var or /run, 
and then create / modify the config files there. On a few systems I have 
already done something similar with some shell / sed / awk scripting.


Let me think a bit about that: could it be wise to split things into two 
functional groups, doing all the initial stuff and the device 
descriptions? What do you think?


So we won't have an "xdev -i" and other xdev functions, but a separate 
applet (call it sysstart CONF_FILE, for now) to run the table-driven 
system startup, and xdev / mdev for the device file system operations 
... doesn't sound so bad to me!


Does that fit better your question?


Just to note: if we do that table-driven approach with its own applet, 
it can be thought of as a kind of script interpreter:


put this as your first line in /etc/my-sysstart:

#!/bin/busybox sysstart

then you could just do:

/etc/my-sysstart [ARGS]




"this" meant "the single file controlling creation of everything".
Not what the proposed applet could do, but what the configuration
format you've mentioned that you already use looks like.


The proposed format is just like the possible commands you may put into 
a script; there is no specially invented limitation. The informatio

Re: [PATCH 2/2] mount: -T OTHERTAB support

2015-03-15 Thread Harald Becker

On 16.03.2015 00:53, Isaac Dunham wrote:

Just to clarify: this is NOT a feature I came up with.
I found it documented in the util-linux mount manpage:

x-*All  options  prefixed  with "x-" are interpreted as comments or


Yes, I know that x-* mount options approach, but I don't see it as very 
widespread (yet).




My own idea was to make mount recognize "-d" as a signal that it should
create the mountpoint if needed.


AFAIK something similar is the BSD approach, but that doesn't matter; it 
can be of general use. The disadvantage is that it doesn't solve the missing 
owner, group, and permission setting.




-for a brief moment between mkdir() and mount(), while initscripts are
setting up the kernel mounts
-for a short period during shutdown, while the shutdown scripts are
unmounting everything
-and if the sysadmin does a non-recursive bind mount (such as if you
configure chroots by bind mounting /dev without using rbind) or manually
unmounts a kernel filesystem.


... you forget the case where something in system startup fails, and a 
not-so-experienced admin needs to look at what's failing in the system :(


Sorry, I know being picky about the permissions looks ugly, but you 
wouldn't believe what sorts of failures I have come across. One day I 
started to set the permissions to help me in discovering unusual 
behavior, and after some time I found it very handy to have the 
permissions of all mount points set. I found others rolling their eyes 
when they first came across this, but some time later they started 
adapting and liking it. Not everybody, but several did. Preference!




And if you care about the owner and group of a long-term mountpoint,
it's one command to create it and one more to change the owner and group.


Not every mount point is a long-term mount point. Not everybody mounts 
their file systems on startup and unmounts them before shutdown. An 
unmounted file system is safer from damage than a mounted one. Therefore 
it may be useful to set up the mount points correctly, so that as soon as 
the file system is unmounted, the mount point still shows up with some 
reasonable permissions.




Sure, that's a valid choice.  And you are free to set it up that way:


And anybody else is free to setup it the way they like.



# using permissions that Debian uses
for FS in /dev/mqueue /dev/shm; do
mkdir -p -m 1777 $FS
touch $FS/"not mounted"
chown -R noowner:nogroup $FS
done


Sure, and once again system setup information is scattered around in 
some scripts. Whereas I like to have a table of what to set up, not how 
to set it up.




It just doesn't seem like something to do with an option or configuration
line to a stock utility.


My intention is to get a few simple one-shot startup commands that get 
their system-setup-related information from some central tables.


I don't think putting this into mount / fstab is a good choice, but 
putting all device-file-system-related information into mdev.conf is, at 
least for me, beyond doubt.


Whether you widen your acceptance to proc and sys is pure preference, and 
can be done at no extra cost (to anybody who dislikes it and leaves such 
things out of their mdev.conf). All the intended extensions are required 
for some operation on the device file system (e.g. devpts, mqueue, shm, fd, 
stdin, stdout, stderr). So they are clearly device file system related.




It *would* make sense to allow creating leading directories "manually"
in mdev.conf; the rule would presumably have to be a generic one executed
unconditionally upon "mdev -s".


Now you are coming to my xdev -i. Mounting is only for things like 
devpts, but the problem is, it has to be done in the right order with 
other operations, so the easiest way is to put the mount information 
for those virtual file systems in mdev.conf and call mount with the 
right arguments, just as done in a shell script. Nothing is set up 
automatically, but you can set up all things the way you like. From a 
central place.




However, util-linux developers added x-* options as a way for external
applications to store per-device comments, and some applications are
apparently already using those.


Sure, ignoring them in BB could be wise.



/etc/fstab. Adding x-mount.mkdir support will not impose an additional
burden, except for the fact that it offers sysadmins and packagers
more reasons to use an extension that could break old code.


It makes a difference whether you actively support a not-so-beautiful 
extension, or silently ignore it without breaking things.



First, request for clarification--does that mean:
mountpoint  owner:group chmod   %fstype

as it appears to?


yes, the percent sign triggers the mount operation

Or better in addition (full intended syntax):

mountpoint owner:group mode %fstype [=device] [options]

--> mount -t fstype -o options device mountpoint

Where the default for device is the literal "virtual", but that doesn't 
matter, as it is only for human readability.


The default for options is the empty string (leaving out th

Re: RFD: Rework/extending functionality of mdev

2015-03-15 Thread Harald Becker

We were looking at alternative solutions, so here is one more:

3) netlink reader the Unix way

Why let our netlink reader bother about where it sends the event 
messages? Just let it do its netlink reception job and forward the 
messages to stdout.


netlink reader:
   set stdout to non blocking I/O
   establish netlink socket
   wait for event messages
 gather event information
 write message to stdout

hotplug startup code:
   create a private pipe
   spawn netlink reader, redirect stdout to write end of pipe
   spawn fifosvd - xdev parser, redirect stdin from read end of pipe
   close both pipe ends (write end open in netlink, read in fifosvd)

This way we can let the starting process decide which type of pipe we 
use: a private pipe for netlink, and a named pipe for the hotplug helper.
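The startup snippet above can be sketched as follows (Python stand-in; /bin/echo and /bin/cat play the netlink reader and the fifosvd/handler side, purely for illustration):

```python
import os

def hotplug_startup(writer_argv, reader_argv):
    """Create a private pipe, spawn the writer with stdout on the write
    end, spawn the reader with stdin on the read end, then close both
    ends here (a real startup would step away instead of waiting)."""
    r, w = os.pipe()

    if os.fork() == 0:                  # netlink-reader stand-in
        os.dup2(w, 1); os.close(r); os.close(w)
        os.execv(writer_argv[0], writer_argv)

    if os.fork() == 0:                  # fifosvd / handler stand-in
        os.dup2(r, 0); os.close(r); os.close(w)
        os.execv(reader_argv[0], reader_argv)

    os.close(r); os.close(w)            # startup code holds no pipe end
    os.wait(); os.wait()

if __name__ == "__main__":
    hotplug_startup(["/bin/echo", "add@/devices/demo ACTION=add"],
                    ["/bin/cat"])       # cat prints the forwarded event
```

Because the startup code closes both ends, EOF propagates exactly as described: once the writer exits, the reader drains the pipe and sees end-of-file.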


I think this is not far away from Laurent's (or Natanael's) solution, at 
the only cost of a small long-lived helper process managing the on-demand 
handler startup and checking for failures. A small general-purpose 
daemon in the sense of supervisor daemons (e.g. tcpsvd), with a generally 
reusable function for other purposes.


... better?

Ok, but this brings me to the message format in the pipe. I strongly 
think we should use a textual format, but do the required checks for 
control chars and do some (shell-compatible) quoting.


This would allow to do:

  netlink reader >/dev/ttyX
  (to display all device plug events on a console)

  netlink reader >>/tmp/uevent.log
  (append all event message to log file)

  ... and all such things.

I know the parser needs to do some checking and unquoting, but we have 
a single reader, and it doesn't matter how much data it reads from the 
pipe in a single chunk, as long as the writers ensure that they 
write a single message with one write (atomicity). The parser assumes 
reading text from stdin, with the required checking and unquoting. This way 
we get maximum compatibility and may easily replace every part with some 
kind of script.


--
Harald

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: RFD: Rework/extending functionality of mdev

2015-03-15 Thread Harald Becker

Hi James!

> 1) Why argue over something that has already been admitted?

It does not bolster your argument and it does not put you in
a good light.


Keep cool, I know the fears of Laurent.

There are too many stupid programmers out there who would try to add 
something like that into system-management-related programs. It couldn't 
get worse. Even if it works at first glance, it is error prone, and 
the next one who changes the message size of one process will break the 
chain, and possibly smash the system (as it runs with root privileges).


Laurent just overlooked my lead-in "just for curiosity". And (again) 
that's it: curiosity.


... at least until the reactions have been standardized, clearly documented 
(not the same as formerly), and guaranteed.


--
Harald


Re: RFD: Rework/extending functionality of mdev

2015-03-15 Thread Harald Becker

On 15.03.2015 20:06, Laurent Bercot wrote:

The behavior of multiple concurrent reads on the same pipe, FIFO,
or terminal device is unspecified.


That is, you can't predict which process will get the data, but each 
single read operation on a pipe (private or named) is done atomically. 
Either it completes and reads the requested number of bytes, or it is 
not done at all. It won't read half of the data, then let some data 
pass to a different process, then continue the read in the first 
process (or do a short read when there is enough data).


I don't want to introduce or use this, but please stop and think 
about it: when each writer and each reader agrees on the size of messages 
written to / read from the pipe, you can have multiple writers *and* 
multiple readers on the same pipe, due to the atomicity of the write and 
read operations.


... and I know it's not in POSIX / OpenGroup ... it's just working 
practice ... try it and you will see that it works.


... for curiosity :)  (And don't fear, I won't do this)

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: RFD: Rework/extending functionality of mdev

2015-03-15 Thread Harald Becker
So, as I'm not in write-only mode, here are some possible alternatives we 
could do (maybe this shows better how and why I arrived at my approach):



1) netlink with a private pipe to long lived handler:

  establish netlink socket
  spawn a pipe to handler process
  write a "netlink, no timeout" message to pipe
  wait for event messages
 gather event information
 write message to pipe

The initial pipe message lets the parser / handler know that we are in 
netlink operation and disables the timeout functionality, resulting in both 
processes being long lived. This won't harm the system much, as the memory 
of sleeping processes is usually swapped out, but resources still lie 
around unused.


@Laurent: You know the race conditions that force the handler process to 
be long lived here; otherwise we need complex pipe management with 
re-spawning handlers, and all that stuff. You described them.


This would indeed be the simplest solution when splitting the netlink 
reader and the handler. Other mechanisms may still create a named pipe and 
use the same handler for their purpose. With the caveat of two long 
lived processes, one of which I call big.


So, look forward to the second alternative ...


2) netlink with a private pipe but on-demand start of the handler (avoiding 
the race):


   create a pipe and hold both ends open (but never read)
   establish netlink socket
   wait for event message
      gather event information
      if no handler process running
         spawn a new handler process, redirecting stdin from read end of pipe
      write message to pipe

   with a SIGCHLD handling of:
      get status of process
      do failure management
      check for data still pending in pipe
         re-spawn a handler process, redirecting stdin from read end of pipe

The netlink reader is a long lived process; the handler is started on 
demand when required and may die after some timeout. Races won't happen 
this way, as the pipe does not vanish, and data written into the pipe 
during the exit of an old handler does not get lost (the next handler will 
get the message).


... better?

This is what I want to do, with an additional choice for more clarity: 
let the netlink reader do its job, and split off the pipe management and 
handler start into a separate thread, but otherwise exactly the same 
operation. At *no* extra cost, the pipe management and the handler 
startup may then be used for other mechanism(s).


... still afraid of using a named pipe? You would still prefer a 
private pipe for netlink?


... ok, look at the next alternative (and on this one I arrived by taking 
your fears into account).



3) netlink spawning external supervisor for on demand handler startup

netlink reader:
   establish netlink socket
   create a pipe, save write end for writing to pipe
   spawn "fifosvd - xdev parser", redirecting stdin from read end of pipe
   close read end of pipe
   wait for event messages
      gather event information
      write message to pipe

fifosvd:
   save and hold read end of pipe open (but never read)
   wait until data arrives in pipe (poll for read)
   spawn handler process, handing over the pipe read end to stdin
   wait for exit of process
   failure management

A novice may think that this way we added another process to the data 
flow, but no, the data flow is still the same: netlink -> pipe -> handler. 
The extra process is a small helper containing the code for the on-demand 
start of the handler and the failure management, but it never gets in 
contact with the data passed through the pipe.
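One supervision cycle of the sketched fifosvd might look like this in C (a sketch under the assumptions above: the supervisor only polls, never reads, and hands the pipe's read end to the handler as stdin):

```c
#include <poll.h>
#include <sys/wait.h>
#include <unistd.h>

/* One cycle of a hypothetical "fifosvd": block until the pipe becomes
 * readable, start the handler with the pipe as stdin, then wait for it
 * to exit.  The supervisor itself never reads the pipe, so event data
 * never passes through it.  Returns the handler's exit status, -1 on
 * error. */
static int fifosvd_once(int readfd, char *const argv[])
{
    struct pollfd p = { .fd = readfd, .events = POLLIN };
    if (poll(&p, 1, -1) < 0)       /* wait until data is in the pipe */
        return -1;
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        dup2(readfd, 0);           /* hand pipe read end over as stdin */
        execvp(argv[0], argv);
        _exit(127);
    }
    int st;
    if (waitpid(pid, &st, 0) < 0)
        return -1;
    return WIFEXITED(st) ? WEXITSTATUS(st) : -1;
}
```

A real fifosvd would run this in a loop and add the failure management mentioned above; holding the read end open between cycles is what prevents queued events from being lost between handlers.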



This approach allows simple reuse of the code for other mechanism(s), and 
fifosvd may be of general use: when the argument is a single dash ("-"), 
it uses the pipe from stdin; otherwise it creates and opens a named pipe. 
It may also be used for on-demand start of other jobs:


   process producing sometimes data | script to process the data

 may be changed to:

   process producing data | fifosvd - script to process data

The script will start on demand when data arrives in the pipe, and when 
the script dies, it restarts as soon as more data is in the pipe.


This is extra benefit from my approach, with no extra cost.


I hope this helps to resolve some fears.

--
Harald

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: RFD: Rework/extending functionality of mdev

2015-03-15 Thread Harald Becker

On 15.03.2015 20:06, Laurent Bercot wrote:

  What "most systems" do in practice is irrelevant. There is no guarantee
at all on what a system will do when you have multiple readers on the
same pipe; so it's a bad idea, and I don't see that there's any room for
discussion here.


My lead-in was "just for curiosity", and that's it: it works on many 
systems.


... but I never proposed doing something like that. It's what my lead-in 
says: curiosity.


--
Harald

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: [RFC] Proof-of-concept for netlink listener for mdev -i

2015-03-15 Thread Harald Becker

On 15.03.2015 14:29, Natanael Copa wrote:

You hacked a solution, for the mechanism of your preference, throwing
out those who want to use one of the other mechanisms ...
this is forcing others to do it your way?


The RFC in subject means "request for comments", not "I force you do it
my way".


I apologize ... please change my words to (which was their intention):

You hacked a solution for the mechanism of your preference, throwing 
out those who want to use one of the other mechanisms ... which would be 
bad if you try to force others to do it your way.


... sorry for my poor English.



... and what is the major difference of your hacked code and my intended
solution?


That my way does not use an anonymous pipe instead of a named pipe? (no 
mkfifo call)


What is the difference between private pipes and named pipes, except 
the possibility of letting other processes get access to the pipe 
descriptor? Which is a requirement when you want to open up and allow 
for different front ends with the same back end, using an IPC.




The example was to show *one* way to solve a problem I am interested
in. ...


That's it: you "solve the one way you are interested in", but your way 
blocks other wishes and usages, short of duplicating code or resorting to 
complex code sharing.



(I'm still not sure we try solve same problem)


I'm trying to create a tool, with not much overhead compared to the 
current mdev, which allows the system maintainer to set up the system with 
the plug mechanism he likes, but lets all benefit from resource and speed 
optimization on event bursts.


Therefore: I *know* we are not trying to solve the same problem!



I am not trying to stop you do it vise versa.
Are you trying to force me to discuss ideas your way?


Sorry if my response used the wrong words, but you could have chosen 
different words for some of the things you expressed, too ... or maybe I 
just misunderstood them ...




I think you have a point here though. It should be possible to solve it
without pipes/fifos at all.


By what? By putting everything together in one monolithic block, e.g.:

  read conf file into memory
  establish netlink socket
  wait for event
gather information
search for matching line in table
do device operation

Is this your intention? Then you run into trouble when you try to let 
the kernel hotplug helper mechanism benefit from the improvements, 
especially re-parsing the conf file for every event.




It should be possible to solve the hotplug problem by setting up a
netlink listener, waiting for events, and when an event arrives, forking a
helper and just handing over the netlink socket file descriptor to the
child. That way we avoid pipes/fifos altogether. And we avoid the
split-messages problem too.


Which would mean moving the netlink reader over to the parser process, 
making it even more complex to use the same parser back end for 
different purposes.
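Handing an open socket to an exec'd child boils down to two small steps, sketched here (a real netlink socket needs privileges, so the test uses a plain pipe as a stand-in; the helper name and fd-as-argument convention are my own, not from the thread):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* To hand an already-open socket to an exec'd helper, two things matter:
 * the descriptor must survive exec, and the child must learn its number. */

/* Make fd survive across exec by clearing the close-on-exec flag. */
static int keep_across_exec(int fd)
{
    int flags = fcntl(fd, F_GETFD);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC);
}

/* Fork and exec a helper that inherits the socket; the fd number is
 * passed as the helper's first argument (helper name is hypothetical). */
static pid_t spawn_with_fd(int fd, const char *helper)
{
    char num[16];
    snprintf(num, sizeof num, "%d", fd);
    pid_t pid = fork();
    if (pid == 0) {
        execlp(helper, helper, num, (char *)0);
        _exit(127);           /* exec failed */
    }
    return pid;
}
```

This is the mechanism Natanael describes; the objection above is not that it can't work, but that it ties the parser to one specific event source.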



That would also make it possible to replace the netlink socket with a
named pipe for those who wants that.


? I don't know what you are intending here, or how you want to achieve that.



In theory, the netlink socket (or named pipe) could be set up by a
separate process. That way we avoid the init code in the memory of the
long-running process. Not sure it's worth it though.


You mean splitting the netlink creator from the netlink waiter from 
the netlink reader ... even if I see it could be possible, to what 
end? Complexity! ... I'm trying to go the opposite way: clarity and 
modularity. Not just a solution for one group of people.


--
Harald

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: [PATCH 2/2] mount: -T OTHERTAB support

2015-03-15 Thread Harald Becker

On 15.03.2015 06:01, Isaac Dunham wrote:

the fstab approach has one big caveat, which got me away from that idea: the
format of fstab does not allow to specify the owner, group and permissions
of newly created mount points, but if you like / need to do some kind of

Do you mean as in the permissions of the root of the mount:

mount -o uid=,gid=,mode=0700


This may collide with setting the default uid / gid for mounted file 
systems, when those file systems do not use the usual Unix owner / group 
semantics.


Otherwise, changing the owner / group would mean overwriting the 
permission settings of the mounted file system's root directory, but 
still this can be of interest for virtual file systems.




or as in the permissions of the mountpoint itself, as entered in the
filesystem where it's found:

mount -o x-mount.mkdir=0700


And here you are adding a very crazy and complicated syntax for 
something required on several mount points.




which is always root:root since it is disallowed for non-root users, but
does *not* affect the permissions or ownership of the filesystem that is
mounted there.


... and here you fail! Not everybody agrees with the practice of having 
all "unused" mount points lying around as root:root 0755. It is just a 
visible indication of usage, or call it a reminder for novices (at no 
extra cost, except setting it up).


In fact, I prefer to have a visible indicator on mount points, a kind of 
response that says "the file system on this mount point is currently not 
available", rather than some scripts or commands just failing with a "not 
found". Therefore I add a single empty file named "not mounted" 
(nowner:nogroup) to mount point directories. The mount point permissions 
are set to fit their later usage. If you try to list the directory 
content, you get a "not mounted" entry displayed.


I know this is preference, and not everybody does it the same way; I 
won't try to force anybody to do something similar, but I know several 
people who would like not to be stuck with this root:root semantic.




I will readily admit that most of that may not be something that many
people can come up with off the top of their head. I happen to have
just spent a large part of the day reading up on that sort of thing.


Sure, and my reply was not criticism, just an explanation of why I moved 
away from that fstab approach.


... because any change to that file format may have consequences you 
can never anticipate. fstab is a critical beast, read and acted on by 
several other programs and (even worse) system management scripts.


Isaac, I've spent hours over hours reading up on such things over the last 
25 to 30 years. I know that it's difficult to dig into this and get 
everything right. That's the reason why I prefer planning my steps 
before I even hack a single line of code.




Note the x-mount.mkdir option isn't supported by Busybox yet; I've looked
at the code but haven't figured out how to do it yet.


Sure, we could do that ...



(We also need to ignore x-* other than that, just to handle fstab properly.)


... but here you fail. You can do that for BB and maybe all the applets 
using fstab, but what about other programs and scripts? You get into big 
trouble when you change any fstab format, as you have to expect to 
break someone else's work.


Conclusion:

- the current fstab does not offer all required possibilities (mainly for 
virtual file system and device file system management)


- changing the format of fstab risks breaking other things, as this file 
is used by many other programs / scripts


Therefore it is not wise to try using fstab for the intended purpose.


... on the other side, mdev.conf is a BB-private conf file. You may add 
some extensions to allow for extra functionality without breaking 
existing configurations, by just letting a single point of usage hop 
over some lines, as for comment lines.




mqueue  /dev/mqueue mqueue  x-mount.mkdir=1300  0 0


Long syntax, compared to the intended mdev.conf format, which is 
compatible with the format of the other entries there:


mqueue  root:root  1300  %mqueue

The first mqueue is the mount point name, which gets prefixed by /dev when 
not starting with a slash (as for other entries).


*Remember*: Those lines are just detected and ignored (like comments) by 
normal device operations.




On a typical system, this would be done by hand, with a service file
or adding "mkdir /dev/mqueue && mount -t mqueue /dev/mqueue" to a
script.


Ack ... done a thousand times ... before I stopped ... took a deep 
breath ... and started to migrate to a different approach:


Not describing how my systems are set up (which may vary between 
systems), but describing what shall be set up.

That is, a list of required mount points, mounts, symlinks, directories, 
etc., and a single uniform handler script (system dependent) reading that 
table and performing the required operations.


Which differs in the point, as 

Re: RFD: Rework/extending functionality of mdev

2015-03-15 Thread Harald Becker

On 14.03.2015 03:40, Laurent Bercot wrote:

  - for reading: having several readers on the same pipe is the land
of undefined behaviour. You definitely don't want that.


... just for the curiosity:

On most systems it is perfectly possible to have multiple readers on a 
pipe, when all readers and writers are so polite as to use the same 
message size (<= PIPE_BUF). On most (but not all) Unix systems the kernel 
guarantees not only atomicity for write operations, but also for read 
operations (not in POSIX, AFAIK).


With multiple readers you get load balancing. The first available 
reader gets the next message from the pipe and has to handle that 
message. You can't predict which process will receive a specific 
message, so every process has to handle all types of incoming 
messages. This is usually true when all readers share the same program.


With a small helper you can even fire up a new reader process when there 
is more data in the pipe, then sleep some time to let the new process 
pick up a message from the pipe, and then check the pipe again for more 
data, firing up the next reader (up to an upper limit of reader 
processes). Reader processes just die when no more data is available in 
the pipe.


This applies to pipes, and it does not matter whether they are private or 
named (fifo).


--
Harald

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: RFD: Rework/extending functionality of mdev

2015-03-15 Thread Harald Becker

On 14.03.2015 03:40, Laurent Bercot wrote:

What would you do if your kid wanted to drive a car but said he
didn't like steering wheels, would you build him a car with a
joystick ?



... [base car with wheel steering module replaceable by a joystick
module] ...


... and the next one coming around, with an automatic steering module, 
may replace the wheel-steering or joystick module, plug in his module, 
and also take advantage of your base car ...


--
Harald
___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: [PATCH 2/2] mount: -T OTHERTAB support

2015-03-14 Thread Harald Becker

Hi Isaac,

the fstab approach has one big caveat, which got me away from that 
idea: the format of fstab does not allow specifying the owner, group and 
permissions of newly created mount points. If you like / need to do 
some kind of restriction management, you always need some extra 
work to set up this information, and this scatters the information 
into separate places :(


So how does your fstab approach help to fix this problem?

... beside that, I think it's always a good idea to have the possibility 
to override the name of the default table (e.g. when your root file 
system, including /etc, is read-only).


--
Harald

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: [RFC] Proof-of-concept for netlink listener for mdev -i

2015-03-14 Thread Harald Becker

Hi Isaac !

On 14.03.2015 08:10, Isaac Dunham wrote:

Basic concept is that it creates a fifo, and treats the success of mkfifo
as a lock to determine whether it's the daemon or a writer.
If it's the daemon, it will
fork;
in the child:
exec the parser with the fifo as stdin
in the parent:
set a SIGCHLD handler that respawns the parser if the fifo is not empty,
open the fifo to write,
dump the environment into the fifo
sleep
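The quoted mkfifo-as-lock idea can be sketched in a few lines (the path and names below are illustrative): mkfifo succeeds for exactly one caller, which becomes the daemon; everyone else gets EEXIST and acts as a writer.

```c
#include <sys/stat.h>
#include <errno.h>
#include <unistd.h>

/* mkfifo(2) as a lock: creating the fifo is atomic, so exactly one
 * caller wins and becomes the daemon; all later callers see EEXIST
 * and only write events into the fifo. */
enum role { ROLE_DAEMON, ROLE_WRITER, ROLE_ERROR };

static enum role claim_fifo(const char *path)
{
    if (mkfifo(path, 0600) == 0)
        return ROLE_DAEMON;     /* we created it: act as the daemon */
    if (errno == EEXIST)
        return ROLE_WRITER;     /* already there: just write events */
    return ROLE_ERROR;
}
```

The stale-fifo case (daemon died without unlinking) is exactly the kind of failure management the rest of this thread argues about.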


Congratulations, you implemented the base functionality I intended, 
except that I use some command line parameters to distinguish between 
hotplug helper and parser. This is mainly for increased stability and 
flexibility at no extra cost (due to BB startup / option handling, less 
code than the fifo check).


... besides the not (yet) implemented failure management,

... your approach has a problem: the write side of the pipe (fifo) is 
closed as soon as you exit the hotplug helper, which lets the parser die 
immediately too (unless another hotplug helper runs at exactly the same 
moment). This way you get extra parser starts when events have slight 
delays, with the possibility of race conditions (which need complicated 
failure checks and handling).


Therefore it's better to let a small process hold the fifo open and 
available (this skips the need to recreate it on every event). At no 
extra cost this process can do the parser startup, signal management and 
failure checks, as this automatically shares code otherwise also required 
in the netlink part.


... and you've got my "fifo manager".



While I don't think this approach should replace mdev, or that a long-running
fifo supervisor is a *good* solution for hotplug, it would certainly be
possible to do what Harald proposes - likely in 200-300 LOC.


So you don't think using a supervisor daemon, like tcpsvd, is a good 
solution for waiting for incoming network requests? The intended "fifo 
manager" does exactly this: it waits for incoming plug events and fires 
up a plug service handler. ... What is wrong with this solution?


If we put the "fifo manager" into its own applet and call it fifosvd, 
giving the specific names on the command line, we get a general "fifo 
supervisor" which may be used for several different IPC operations, 
mainly when using shell-script-driven Busybox-based system operation.


--
Harald

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: RFD: Rework/extending functionality of mdev

2015-03-14 Thread Harald Becker

On 14.03.2015 03:40, Laurent Bercot wrote:

  Hm, after checking, you're right: the guarantee of atomicity
applies with nonblocking IO too, i.e. there are no short writes.
Which is a good thing, as long as you know that no message will
exceed PIPE_BUF - and that is, for now, the case with uevents, but
I still don't like to rely on it.


Named pipes are a proven IPC concept, not only in the Unix world. They 
are pipes and behave exactly like them, including non-blocking I/O, 
programming the poll loop, and failure handling. There is only one 
difference: the method of getting access to the pipe file descriptors 
(either calling pipe() or open()).




  I call "pipe" an anonymous pipe. And an anonymous pipe, created by
the netlink listener when it forks the event handler, is clearly the
right solution, because it is private to those two processes. With
a fifo, aka named pipe, any process with the appropriate file system
access may connect to the pipe, and that is a problem:


Right, any process with root access may write to this pipe, but don't 
you think such processes have the ability to do other things directly, 
like changing the device node entries in the device file system?


May processes with root access produce confusion on the pipe?

Yes, but aren't such processes able to produce any kind of confusion 
they like?


We could have (at some slight extra cost):

- create the fifo with devparser:pipegroup 0620

- run hotplug helper (if used) suid:sgid to hotplug:pipegroup
  (or drop privileges to that)

- drop netlink reader after socket creation to same user:group

- run the fifo supervisor as devparser:parsergroup

- but then we need to run the parser suid root: it needs to access the 
device file system and do some operations which require root (as far as 
I remember). Any suggestion how to avoid that suid root?




  - for writing: why would you need another process to write into the
pipe ? You have *one* authoritative source of information, which is
the netlink listener, any other source is noise, or worse, malevolent.


You are stuck on netlink usage and overlook that you are forcing others to 
do it your way. No doubt about the reasons for using netlink, but why 
force those who dislike it? Is this forcing any different from forcing 
others to rely on e.g. systemd? (provocation, not expected to be 
answered)


Whereas I'm trying to give the user (or say, system maintainer) the 
ability to choose the mechanism he likes, and even with the chance to 
flip the mechanism by just modifying one or two parameters or commands. 
Flipping the mechanism is even possible in a running system, without 
disturbance and without changing the configuration.


So why is this approach worse than forcing others to do things in a 
specific way? Apart from those known arguments why netlink is the better 
solution, where we absolutely agree.




  - for reading: having several readers on the same pipe is the land
of undefined behaviour. You definitely don't want that.


Is anyone here trying to have more than one "reader" on the pipe? The 
only reader of the pipe is the parser, and exactly because we are using 
fifos, the parser shouldn't bet on the incoming message format and content. 
It shall do sanity checks on those before usage (and here we hit the 
point where I expect some overhead, not much due to other 
changes). Isn't it good practice to do this for other pipes too (even 
if a bit paranoid)? But all with the benefit of avoiding 
re-parsing the conf for every incoming event, and an expected overall 
speed improvement. Not to mention the possibility to choose/flip the 
mechanism as the user likes.


This even includes extra possibilities for e.g. debugging and watching 
purposes. With a simple redirection of the pipe you may add event 
logging functionality and/or a live display of all event messages 
(possibly filtered by a formatting script / program). All without extra 
cost / impact for normal usage, and without creating special debug 
versions of the event handler system.


I'm just trying to make it modular, not monolithic.



  - generally speaking, fifos are tricky beasts, with weird and
unclear semantics around what happens when the last writer closes,
different behaviours wrt kernel and even *kernel version*, and more
than their fair share of bugs. Programming a poll() loop around a
fifo is a lot more complicated, counter-intuitive, and brittle, than
it should be (whereas anonymous pipes are easy as pie. Mmm... pie.)


See my statement about fifos above. I don't know what you fear about 
fifos, but their usage and functionality are more proven in the Unix 
world than you expect. Sure, you need to watch your steps, but that shall 
also be done when using pipes (even if only for paranoia, e.g. checking 
incoming data before usage instead of blind reliance).


And there may be internal differences in pipe / fifo handling between 
kernels, but likely they are internal and don't change the expected 
usage behaviour.

Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 23:33, Laurent Bercot wrote:

On 11/03/2015 08:45, Natanael Copa wrote:

With that in mind, wouldn't it be better to have the timer code in
the handler/parser? When there comes no new messages from pipe
within a given time, the handler/parser just exists.


I've thought about that a bit, to see if there really was value in
making the handler exit after a timeout. And it's a lot more complex
than it appears, because you then get respawner design issues, the
same that appear when you write a supervisor.


Which issues?


What if the handler dies too fast and there are still events in the
queue ?



Should you respawn the handler instantly ?


Spawning the handler is the job of the named-pipe supervisor. First
it checks the exit code of the dying handler and spawns a failure script
if it was not successful. Then it waits until data arrives in the pipe (or
is still there = poll for reading), and finally spawns a new handler.

The trick in this is to hold the pipe open for reading and writing in
the supervisor. This way you avoid race conditions from recreating new
pipes, and you even catch the situation where an event arrives at the
moment the handler times out and is dying. Beyond that, the supervisor
does not touch the content transferred through the pipe.


That's exactly the kind of load you're trying to avoid by having a
(supposedly) long-lived handler. Should you wait for a bit before
respawning the handler ? How long are you willing to delay your
events ?


A bit more checking is planned already. Currently I have a failure
counter and detect when the parser dies unsuccessfully several times in a
row, but maybe we can add a respawn counter which triggers a (maybe
increasing) delay on too many respawns without processing all the pipe
data; when the handler exits and the pipe is empty (poll), the respawn
counter is reset. So you get two or three fast respawns after the handler
dies (on poll timeout) while more data is in the pipe; then something
seems to be wrong, so start adding increasing delays before respawning.
The normal case is: when the handler exits due to timeout, the pipe is
empty, so we can reset the counter and have no need to delay the respawn
as soon as new data arrives in the pipe. And when the respawn counter goes
above some limit, or the handler dies unsuccessfully, a failure script is
spawned first, with arguments: program name, exit code or signal, failure
count.
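The respawn policy described in this paragraph — a few fast respawns, then growing delays, reset on a clean exit with an empty pipe — is essentially a capped backoff; a sketch (all numbers purely illustrative):

```c
/* Delay (in seconds) before the next respawn, given how many times in a
 * row the handler has died without draining the pipe.  A clean exit with
 * an empty pipe resets the counter to zero elsewhere. */
static int respawn_delay(int failures)
{
    if (failures <= 3)
        return 0;               /* a few fast respawns are allowed */
    int d = (failures - 3) * 2; /* then back off, 2s per extra failure */
    return d > 30 ? 30 : d;     /* capped so recovery stays possible */
}
```

Once the counter passes some limit, the failure script mentioned above would be spawned instead of (or before) another handler.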


It is necessary to ask these questions, and have the mechanisms in
place to handle that case - but the case should actually never
happen: it is an admin error to have the event handler die too fast.


admins don't make errors! ;)


So it's code that should be there but should never be used; and it's
already there in the supervisor program that should monitor your
netlink listener.


Ok, you expect the netlink listener to be watched by a supervisor daemon? 
Fine, so the fifo supervisor should also be watched, as it got forked 
from the same process as the netlink reader ... that means when we detect 
handler failures, we can just die and let the outer supervisor do the job :)


When that happens, the system is usually on its way to hell ... and even 
if it happens, what does it mean for the system? ... hotplug events are 
no longer handled; we lose them and may have to re-trigger the plug 
events as soon as hotplug events are processed again (however this is 
achieved) ... and in the worst case you are back at semiautomatic device 
management, calling "mdev -s" to update the device file system.


... but consider the conf file getting vandalized, or the device file 
system ... how to recover from this? ... do you expect to handle those? 
... wouldn't it be better to reboot, after counting the failure in some 
persistent storage?




So my conclusion is that it's just not worth it to allow the event
handler to die. s6-uevent-listener considers that its child should
be long-lived;


That's the problem of spawning the handler in your netlink reader. The 
netlink reader has to open the pipe for writing in non-blocking mode, 
then write a complete message as a single chunk, then failure-check the 
write (you always need this and must handle it), done. If open/write to 
the pipe is not possible, the device plug system has gone away and needs 
a restart, so let the netlink listener die (unusual condition). One 
critical condition should be watched and handled: when the pipe is full 
and the write (poll for write) hits a timeout, what then? ... but this is 
no different in your solution.


--
Harald
___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 21:43, Laurent Bercot wrote:

  Except you have to make the writes blocking, which severely limits
what the writers can do - no asynchronous event loop. For a simple
"cat"-equivalent between the netlink and a fifo, it's enough, but
it's about all it's good for. And such a "cat"-equivalent is still
useless.


I'm using non-blocking I/O and expect handling of failure situations, 
e.g. when writing into the pipe fails with EAGAIN: poll-wait until 
writing is possible, with a timeout, then redo the write.
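The described write path might look like this sketch (the function name and timeout handling are my own; for messages of at most PIPE_BUF bytes, a pipe write either transfers everything or fails, so no partial-write bookkeeping is needed):

```c
#include <errno.h>
#include <poll.h>
#include <unistd.h>

/* Non-blocking write with the retry loop described above: on EAGAIN,
 * poll until the pipe is writable again (bounded by timeout_ms), then
 * retry.  Returns 0 on success, -1 on error or timeout. */
static int write_retry(int fd, const void *buf, size_t len, int timeout_ms)
{
    for (;;) {
        ssize_t n = write(fd, buf, len);
        if (n == (ssize_t)len)
            return 0;
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            struct pollfd p = { .fd = fd, .events = POLLOUT };
            if (poll(&p, 1, timeout_ms) <= 0)
                return -1;    /* pipe stayed full: the critical case */
            continue;
        }
        return -1;            /* hard error (e.g. EPIPE: reader gone) */
    }
}
```

The timeout branch is exactly the "pipe is full, what then?" condition discussed in the previous message.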



  It's still a very bad idea to allow writes from different sources
into a single fifo when there's only one authoritative source of data,
in this case the netlink.


Oops!?

one netlink daemon -> pipe -> parser
  or
many hotplug helper -> pipe -> parser


If a process wants to read uevents from a
pipe, it can simply read from an anonymous pipe, being spawned by the
netlink listener. That's what my s6-uevent-listener and Natanael's
nldev do, and I agree with you: there's simply no need to introduce
fifos into the picture.


Laurent, you are still stuck on netlink! Using a named pipe is a 
requirement for the hotplug helper stuff; how else should the helpers get 
access to the pipe, if not via a named pipe? And what is the difference 
between fifos and pipes?


from "man 7 pipe":

---snip---
Pipes and FIFOs (also known as named pipes) provide a unidirectional
interprocess communication channel. A pipe has a read end and a write
end. Data written to the write end of a pipe can be read from the read
end of the pipe.
---snip---

You see? So why is a pipe good (your choice) and a fifo bad (mine)? They 
differ only in how to get access to the descriptors.
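The point can be shown in a few lines: the code that uses the descriptors is identical; only how they are obtained differs. (Opening a fifo O_RDWR is not specified by POSIX, but it works on Linux and avoids open() blocking until a peer appears; the path below is illustrative.)

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Anonymous pipe: both ends come from one call and are shared only by
 * fd inheritance across fork(). */
static int get_anon_pipe(int fds[2])
{
    return pipe(fds);
}

/* Named pipe: any process may open the filesystem name independently.
 * After this, read/write/poll behave exactly as on the anonymous pipe. */
static int get_named_pipe(const char *path, int fds[2])
{
    if (mkfifo(path, 0600) < 0)
        return -1;
    fds[0] = open(path, O_RDWR);   /* holder of the read end */
    fds[1] = open(path, O_RDWR);   /* holder of the write end */
    return (fds[0] < 0 || fds[1] < 0) ? -1 : 0;
}
```

Everything downstream of descriptor acquisition (the parser, the poll loop, the failure handling) can be written once and used with either variant.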


--
Harald
___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 21:32, Michael Conrad wrote:

Are you suggesting even the netlink mode will have a process reading the
netlink socket and writing the fifo, so another process and can process
the fifo?  The netlink messages are already a simple protocol, just use
it as-is.  Pass the


You got the function of the fifo manager (or supervisor) wrong. This 
little process never reads or touches the fifo data; its purpose is 
to fire up the parser (pipe consumer) when any gathering part has written 
something into the pipe and there is no running parser process. In 
addition, this supervisor may spawn a failure script when the parser 
aborts unexpectedly.


Have you ever used tcpsvd?

This piece opens a network socket, then accepts incoming connections and 
passes the socket to a spawned service process.


The fifo manager does the same for the named pipe.

The data flow is:
  netlink daemon -> pipe -> parser
or
  hotplug helper -> pipe -> parser



The new code behaves exactly as the old code. When used as a hotplug
helper, it suffers from parsing the conf for each event. My approach
splits the old mdev into two active threads, which avoids those
problems even for those who like to stay with kernel hotplug.


Then it sounds like indeed, you are introducing new configuration steps
for the old-style hotplug helper?


?

  i.e. where does the fifo live?

At a simple default: /dev/.xdev-pipe, because any reading of such 
parameters in the hotplug helper would slow down the operation. Remember, 
hotplug helpers are spawned in parallel.



who  owns it?


The only user who is allowed to do netlink operations, load modules, 
create device nodes, etc. -> root:root



what security implications does this have?


Mode of the fifo will be 0600 = rw-------


Who starts the single back-end mdev processor?


This is the job of the fifo manager (or named pipe supervisor). The 
processor, as you call it, is started on demand when data is written to 
the fifo, and it has to die when it has been idle for some time.



If started from the hotplug-helper, who ensures that only one gets started?


? Started from the hotplug helper? The helper won't ever start anything, 
just:


hotplug helper:
  gather event information
  sanitize message
  open named pipe for writing
  (ok, if this open fails seriously, we are in big trouble)
  (true for many other such operations)
  (to be discussed what's best failure handling for this)
  if pipe is open, (safe) write the event message
  (the safe means, in loop and checking for success)
  exit 0

netlink reader:
  open named pipe
  (for failures here have already added an option)
  (will spawn a given script with failure reason)
  (or otherwise retries some times, then die)
  open netlink socket
  in an endless loop
wait for messages arriving
sanitize message
(safe) write the event message into the pipe

fifosvd:
  create named pipe (fifo)
  open fifo for reading and writing in none blocking mode
  in an endless loop
wait for data arriving in pipe (poll)
spawn the parser process redirecting stdin from fifo
wait for exit of the spawned process
if not exited successfully
  spawn the given failure script with arguments
  if failure script exits unsuccessful, then die

parser:
  read conf file into memory table
  while read next message from stdin with timeout
sanity checks of message (paranoia)
lookup device entry in memory table
do required operation for the message

Is this better for you?
I really hate code hacking before I'm able to finish planning.



If people have existing systems using hotplug-helper mdev, you can't
just change the implementation on them in a way that requires extra
configuration.


Which extra configuration?


Everyone who has commented on this thread so far agrees with that.


You definitely misunderstand my approach and how it works!

--
Harald



Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

Hi,

my original intention was to replace mdev with an updated version,
but as there were several complaints from people who would like to continue 
using the old, suffering mdev, I'm thinking about an alternative.


... but the essential argument came from James: fail-safe operation.

In my last summary I used the name xdev just as a substitute, to 
distinguish it from the current mdev. I don't think the name xdev would be 
a good choice to stay with, neither is nldev, as my approach allows 
including / using different mechanisms, ...


... so what would be the best choice for a name of this updated 
implementation of the device system management command?


With a different name, we can just leave the mdev code as is, and put the 
new code in a new applet (no code sharing except the usual libbb). 
Then you can opt in to whichever version you like, or even both, and choose 
at runtime which one to use.


... but if now someone complains about the big code size overhead when 
two device managers get included without code sharing, I send him an: 
kill -s 9 -1


... and I won't later change the name of the new xdev implementation 
to mdev just because someone complains that they want to use the newer 
version but keep the name mdev.



So call the command xdev for now, to distinguish it from current mdev operation:

xdev -i
  - do the configured initial setup stuff for the device file system
(this is optional, but I like the one-shot startup idea)

xdev -f
  - starts up the fifo manager, if none running
(manual use is special purpose only)

xdev -k
  - disable hotplug handler, kill possibly running netlink daemon
(for internal and special purpose usage)
kill is not perfect yet, race condition when switching mechanisms
(needs more thinking)

xdev -p   (changed the parameter due to criticism)
  - select the kernel hotplug handler mechanism
(auto include -f and -k)

xdev -n
  - select the netlink mechanism and start daemon
(auto include -f and -k)

xdev -c
  - do the cold plug initiation (triggering uevents)
(also auto include -f)

xdev -s
  - do the cold plug as "mdev -s" does
(also auto include -f)

xdev (no parameter)
  - can be used as kernel hotplug helper

xdev netlink
  - the netlink reader daemon
(this is for internal use)

xdev parser
  - the mdev.conf parser / device operation process
(this is for internal use)

Command parsing will be stupid: if the argument is not an option, only the 
first character is checked, so "xdev pumuckl" is the same as "xdev parser".


Where each of the mentioned parts except the fifo startup can easily be 
opted out in the config, but otherwise only wastes some bytes in the 
binary. The fifo manager (named pipe supervisor) daemon itself is not 
included in this list, as a general fifosvd as a separate applet seems to 
be the better place (just used internally by -f).


current mdev -s may get either (other combinations possible)

  xdev -s= do sys file scanning as "mdev -s", but use new back end
  xdev -pc   = kernel hotplug mechanism, trigger the cold plug
   (uses xdev as hotplug handler)
  xdev -nc   = netlink mechanism, trigger the cold plug
   (starts xdev netlink reader daemon)

The only other change in the init scripts shall be to remove the old 
setting of mdev as hotplug helper in the kernel completely (done 
implicitly by -p).


All those may be combined with -i, then at first the configured setup 
operations are performed, thereafter the other requested actions.


That does *not mean* xdev -i does any binary-encoded setup stuff. It 
shall read the config file (suggestion: first try /etc/xdev-init.conf, 
then fall back to /etc/xdev.conf when the former does not exist) and invoke 
the required operations for the configured setup lines. The setup lines are 
only used in xdev -i and otherwise ignored by xdev parser (like comments).



... more brain storming:

Just for those who may need such a feature: if you start the fifo 
supervisor manually, you can arrange for it to start up a different back 
end parser. This may be used to send all device event messages to a file:


#!/bin/busybox sh
tee -a /dev/.xdev-log | exec xdev parser

When this wrapper is used as back end, it will catch and append a copy 
of each event message to /dev/.xdev-log, which could itself be a named 
pipe to put messages in a file and watch file size to rotate files when 
required.


... but I am thinking of adding an "xdev -l[LOG_FILE]", which overrides "-f" 
and sets up the fifo supervisor to do the logic of the above wrapper, but 
without invoking an extra shell.


Some neat trick: xdev -l/dev/tty9 -pc
  start fifo supervisor
  set xdev as hotplug helper
  trigger the cold plug events
  beside normal parsing a copy of all event messages is written to tty

And again: This is not for normal usage, only for debugging purposes and 
those interested in lurking at their device messages.



... as a may be:

xdev -e[FAILURE_SCRIPT]
  spawn the failure script when an operation has serious problems

Re: [OT] long-lived spawners

2015-03-13 Thread Harald Becker

Hi James !

On 13.03.2015 20:33, James Bowlin wrote:

On Fri, Mar 13, 2015 at 02:07 PM, Harald Becker said:


xdev -f
- starts up the fifo manager, if none running
  (manual use is special purpose only)

[snip]

xdev -n
- select the netlink mechanism and start daemon
  (auto include -f and -k)

[etc]


Yes, please give me run-time options so I can field test on many
machines and fall back if there is a problem.


This is the intention of my approach: let the BB builder choose which 
methods are included in the binary (size optimization), and let the user / 
system maintainer decide which of the included methods to use, but all 
methods shall share the same back end (conf parser and device operations).


That would even allow you to do the device gathering yourself (e.g. with 
some shell script) and use the config-based back end by sending event 
messages to the named pipe (which is as simple as: echo "message" 
>/dev/.xdev-pipe). That way you can even simulate setting up the device 
file system when specific events arrive in a specific order (create a 
file with your events, then cat FILENAME >/dev/.xdev-pipe).


Just as a hint, as you told you are interested in testing.

... but you gave me an interesting argument I have not thought about 
yet: when using command line options to choose the method, you can easily 
use a specific boot parameter to switch the device gathering method. As 
a fail-safe operation in e.g. a rescue system, this may be of interest.


--
Harald


RFD: Possible idea to help on solving kernel hotplug reorder problem

2015-03-13 Thread Harald Becker
Part of the information hopped to a different thread, so it is probably 
better to split this question off into a separate thread:


This is for those who want to use the kernel hotplug helper mechanism; 
the netlink-interested are not affected by it.



The Problem:

The kernel spawns a separate process for each event, which gathers the 
information for this event. The startup of those processes goes parallel 
when events arrive in bursts. This carries a race condition: reordering 
of the device node operations. To avoid such reordering, mdev uses 
a special file to check a sequence number.


More description in doc/mdev.txt


Idea:

Splitting the gathering parts from the parser / handler may allow doing 
this checking in a slightly different way, which would avoid massive 
writing / reading of a file.


When the hotplug helper (or netlink reader) gathers the event 
information, the sequence number of this event can be sent ahead of the 
message to the parser. The parser may check this sequence number as 
explained, but push the message into a backlog when the sequence is wrong.


The parser then continues reading messages, merging incoming 
messages with the backlog, until it gets a message with the right sequence 
number, or some timeout expires, in which case it hops to the nearest 
sequence number in the backlog.


When doing that carefully, we will be able to avoid actively waiting for 
the message getting into the right sequence.


Before the parser exits, it has to write the sequence number in the file 
and read it back (once) on next startup. No more polling until matching 
sequence number.


I did not dig deeper into this, but I won't lose the idea, so I put it 
into its own thread for separate discussion.


The current state is a pure idea: we could do this, but we can also stay 
at the current mdev sequence ordering solution. I did not dig into the 
details yet, but will do that when the time comes.




Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 16:53, Natanael Copa wrote:

I have the feeling (without digging into the busybox code) that making
mdev suitable for a longlived daemon will not be that easy. I suspect
there are lots of error handling that will just exit. A long lived
daemon would need log and continue instead.


The major mdev part will not be converted into a long-lived daemon. It 
is more or less code in a process doing a job like cat, but still you 
are right: it will take some time and has to be done carefully. No doubt.


One of the reasons to have that "fifo supervisor" is failure 
management. Even if the parser / handler process dies, this is caught 
and can fire up a failure script (something we do not have now).


The kernel hotplug handler stays a normal process at all; it only 
needs to write the gathered info to the pipe instead of calling a function 
to handle the event operation. So it is mainly replacing the handler 
function with a write to the named pipe, and on the other side the 
hotplug gathering part is replaced by a read from stdin (the pipe) with a 
timeout.


So the major work will go into restructuring the parser / device operation 
handler code, and I expect a parser rewrite will be needed (which is 
straightforward for that simple syntax).


Don't misunderstand, it will be an expensive piece of work, but compared 
to finding a specific bug in a 3 line of a program from somebody 
else, the mdev code is simple ... and it's not my first Busybox hacking, 
it is only the first time I do it as a public discussion on this channel / 
list. I started creating specialized BB versions around 1995, so I have 
some experience by now.


Did you note the pseudo code for the fifo supervisor? Maybe it hopped 
to the other thread. That is standard code, more or less, comparable to 
e.g. tcpsvd.




Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 15:46, Michael Conrad wrote:

I thought pseudocode would be clearer than English text, but I suppose
my pseudocode is still really just English...  Maybe some comments will
help.


You can't fix the suffering of your code with some comments ... besides 
that, it looks like you dropped an "#ifdef".



The new code would not be run like a hotplug helper, it would be run as
a daemon, probably from a supervisor.  But the old code is still there
and can still be run as a hotplug helper.


The new code behaves exactly as the old code. When used as a hotplug 
helper, it suffers from parsing the conf for each event. My approach 
splits the old mdev into two active threads, which avoids those 
problems even for those who like to stay with kernel hotplug.


... and those who like to use netlink can choose netlink and get 
netlink, using the same back end as the kernel hotplug.


So where am I wrong? What is the reason for your concern? Using a pipe 
as IPC? The fifo supervisor? What does my approach not do that you need 
(except completely staying with the old, suffering code)?


--
Harald



Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 14:20, Didier Kryn wrote:

 There are interesting technical points in this discussion, but it
turns out to be mostly about philosophy and frustration.


ACK :(



 Hotplug is KISS, it is stupid, maybe, but it is so simple that you
can probably do the job with a script. The same serialization you
propose to implement in user space by the mean of several processes, a
named pipe and still the fork bomb, has been implemented in the kernel
without the fork bomb: it is called netlink.


You mixed up some things, maybe due to my poor English:

- current mdev suffers due to parallel reparsing conf for every event

- for those who like to stay with the kernel hotplug mechanism, my approach 
gives some benefits, but will not solve every corner case; it looks 
like I could extend the approach somewhat to make serialization easier 
(this needs some more checking).


- for those who want to use netlink, it is a small long-lived netlink 
reader, pushing the event messages forward to the central back end (which 
frees resources when idle). That shall work as a netlink solution should.


So where is your concern? Using a pipe for communication from one process 
to another? This is Unix IPC / multi-threading. Nothing else.




 These people you are talking of, who would like to see hotplug
serialized but do not want netlink, do they really exist? This set of
people is most likely the empty set. In case these really exist, then
they must be idiots, and then, well, should Busybox support idiocy?


As soon as you can prove that this set of users is empty or holds only a 
droppable minority, we can set the default config for the kernel hotplug 
mechanism to off, so it will be excluded from pre-built binaries, once 
nobody complains any more. That's it: you get a netlink solution.



 I agree it's fun to have all tools in one static binary. But I dont
see any serious reason to make it an absolute condition. You speak of
*preference*, but this very one looks pretty futile. I don't see the
problem with having even a dozen applications, all static, why not, I'm
also a fan of static linking.


I explained it already in the other thread to Laurent. It is my way: I 
try to avoid forcing others to do things in a specific way, but I hate to 
be forced by others. Busybox is a public tool set and shall provide the 
tools which allow the user / admin to set up the system as he likes. My 
approach lets others use the kernel hotplug mechanism, if they like, and 
still gain a performance boost, while users who like to use netlink get a 
netlink solution. The cost is some unused bytes in the pre-built 
binaries (which may be opted out in the build config).


So where do I fail? Neither the optional event gathering parts (which 
will try to stay as fast / small as possible) nor the parser / device 
operation handler works differently than before (except some code 
reordering to avoid parsing the conf file for each event). The job mdev 
does has just been split up into different threads, using a proven 
interprocess communication (IPC) technique. Again, where do I fail?


--
Harald



Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 12:41, Michael Conrad wrote:

On 3/13/2015 3:25 AM, Harald Becker wrote:

This is splitting operation of a big process in different threads,
 using an interprocess communication method. Using a named pipe
(fifo) is the proven Unix way for this ... and it allows #2 without
blocking #1 or #0.


Multiple processes writing into the same fifo is not a valid design.


Who told you that? It is *the* proven N to 1 IPC in Unix.



Stream-writes are not atomic, and your message can theoretically get
cut in half and interleaved with another process writing the same
fifo.  (in practice, this is unlikely, but still an invalid design)


This is not completely correct:

picked out of Linux pipe manual page (man 7 pipe):

---snip---
O_NONBLOCK disabled, n <= PIPE_BUF
All n bytes are written atomically; write(2) may block if there is not
room for n bytes to be written immediately
---snip---

As long as a message written to the pipe/fifo is no longer than PIPE_BUF,
the kernel guarantees atomicity of the write; message mixing only
happens when you write single messages > PIPE_BUF, or use split
writes (e.g. fprintf without setting line buffered mode).

POSIX requires PIPE_BUF to be at least 512 bytes; on Linux it is 4096 
(the pipe's total capacity is larger, typically 64k on modern kernels).




If you want to do this you need a unix datagram socket, like they use
 for syslog.


Socket overhead is higher than writing to a pipe, not only in code size 
but even more in the CPU cost of passing the messages.



It is also a broken approximation of netlink because you don't
preserve the ordering that netlink would give you, which according to
the kernel documentation was one of the driving factors to invent
it.


Sure. You say netlink is the better solution; I say netlink it is; but
next door you may find someone who dislikes netlink usage. We are not living
in a perfect world.

Ordering is handled differently in mdev, and that shall stay as is. My 
approach can't solve every single problem of this method, but that is up 
to those who like to stay; still they should gain from the speed 
improvement, and have fewer problems from race conditions (each device 
operation is done without mixing with other device operations, as in pure 
parallelism). Additionally the hotplug helper speed is increased and it 
exits really early compared to current mdev (or your approach). This 
should reduce system pressure and event reordering, but will indeed not 
avoid it (this needs to be synchronized).

... but I got a different idea: I heard the kernel provides a sequence 
number, which is used in mdev to do synchronization. Maybe we should just 
send the messages to the pipe as fast as possible, but prefix them with 
the event sequence number. The parser reads a message and checks the 
sequence number, pushing reordered messages into a back list until the 
right message arrives (or some timeout, as done in mdev, but without the 
need of reading / writing a file).


Oh I think the sequence number info is in the docs/mdev.txt description, 
including how this is done in mdev.




If someone really wants a netlink solution they will not be happy
with a fifo approximation of one.


You missed the fact that my approach allows free selection of the 
mechanism. Choosing netlink means using netlink, as it should be. The 
event listener part is as small as possible and writes to the pipe, which 
fires up a parser / handler to consume the event messages.


Where is the approximation? The kernel hotplug helper mechanism is a 
different method, but it is also available for those who like to use it. 
Either way there will only be some unused code (if not opted out in the 
config).


The difference is that the default config can include both mechanisms in 
pre-built binaries. The user can choose and test the mechanism he wants, 
and then possibly build a specific version and opt out the unwanted stuff.


--
Harald


Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 12:29, Michael Conrad wrote:

On 3/13/2015 3:25 AM, Harald Becker wrote:

   1 - kernel-spawned hotplug helpers is the traditional way,
   2 - netlink socket daemon is the "right way to solve the forkbomb
problem"


ACK, but #2 blocks usage for those who like / need to stay at #1 / #0


In that case, I would offer this idea:


All you do, is throwing in complex code sharing and the need to chose
a mechanism ahead at build time, to allow for switching to some newer
stuff ... but what about pre-generated binary versions, which
mechanism shall be used in the default options, which mechanism shall
be offered?


Please review it again.  My solution solves both #1 and #2 in the same
binary, with no code duplication.


At first, complex code reuse, and then: how will you do it without 
suffering from the hotplug handler problem like current mdev? I don't 
see that you try to handle this problem. My solution enables the 
kernel hotplug handler mechanism to also benefit, and avoids that parallel 
parsing for each event.


... besides that, this close / open_netlink looks suspicious, like a 
possible race condition.


What is wrong with splitting a complex job into different threads? The 
splitting alone, with the named pipe (a long-proven IPC) inserted, is 
enough to let even a system based on the kernel hotplug mechanism gain a 
speed improvement (and expected lower memory usage on system startup). On 
modern multi-core machines this will also allow the split operations to 
run on different CPU cores, at no extra cost. Synchronized operation. 
Whereas your solution still holds the possibility of race conditions from 
possible parallelism.




I suggested wrapping #2 in a ifdef for the people who don't have netlink
at all, such as on BSD, and also anyone who doesn't want the extra bytes.


Therefore my approach allows opting out in the config, but this otherwise 
brings me to the idea of throwing a compiler error when netlink support is 
built on a system where it is not available, or optionally a warning that 
auto-disables netlink support (usually a 4-line snippet at the start of 
the code).


#if CONFIG_FEATURE_MDEV_NETLINK && NETLINK_NOT_AVAILABLE
  #undef CONFIG_FEATURE_MDEV_NETLINK
  #define CONFIG_FEATURE_MDEV_NETLINK 0
  #warning "This system lacks netlink support, netlink disabled"
#endif

... or something similar.

--
Harald



Re: [OT] long-lived spawners

2015-03-13 Thread Harald Becker

On 13.03.2015 11:51, Laurent Bercot wrote:


  I think that adding hotplug helper management in a hotplug helper
needlessly complicates things.


Complicate? Yes!

Needless? I doubt!

Needless in your eyes? Needless in my eyes? Sure!

... but others may answer it different.

So why bother? With a modular system you can put things together in the 
way you like.



  It's easy enough to run the following in the init sequence:
  1. start a netlink listener with a long-lived handler
  2. coldplug stuff
  3. register a hotplug helper
  4. kill the netlink listener.


??? Netlink during the cold plug phase, then switch to the kernel helper 
method ... I got a bit confused ... ok, we talked about very small 
systems; this doesn't seem to be the normal requirement, but it is 
nevertheless possible with a modular system.


And your #3 is where my inclusion of setting the hotplug helper in mdev 
comes in (even if it is only this one echo operation). Before #1, #2 or #3 
is able to send event messages to a handler, the named pipe (fifo) has to 
be created. The netlink start and cold plug may auto-start it, but if you 
do that echo in the scripts, you also need to start up the fifo. On the 
other side, adding that one echo line to mdev and letting a command option 
select the plug mechanism simplifies usage, as all mechanisms involve the 
same setup steps (just change one parameter to select the method). So 
normal system startup will be:


#0 - initial creation of the device file system
#1 - run a single device management command, setting parameter

so call the command xdev for now, to distinguish it from current mdev operation:

xdev -i
  - do the configured initial setup stuff for the device file system
(this is optional, but I like the one-shot startup idea)

xdev -f
  - starts up the fifo manager, if none running
(manual use is special purpose only)

xdev -k
  - disable hotplug handler, kill possibly running netlink daemon
(for internal and special purpose usage)
kill is not perfect yet, race condition when switching mechanisms
(needs more thinking)

xdev -p   (changed the parameter due to criticism)
  - select the kernel hotplug handler mechanism
(auto include -f and -k)

xdev -n
  - select the netlink mechanism and start daemon
(auto include -f and -k)

xdev -c
  - do the cold plug initiation (triggering uevents)
(also auto include -f)

xdev -s
  - do the cold plug as "mdev -s" does
(also auto include -f)

xdev (no parameter)
  - can be used as kernel hotplug helper

xdev netlink
  - the netlink reader daemon
(this is for internal use)

xdev parser
  - the mdev.conf parser / device operation process
(this is for internal use)

Where each of the mentioned parts except the fifo startup can easily be 
opted out in the config, but otherwise only adds some unused bytes to the 
BB binary. The fifo manager (named pipe supervisor) daemon itself is not 
included in this list, as a general fifosvd as a separate applet seems to 
be the better place (just used internally by -f).


current mdev -s may be either (other combinations possible)

  xdev -s= do old sys file scanning
  xdev -pc   = kernel hotplug mechanism, trigger the cold plug
   (uses xdev as hotplug handler)
  xdev -nc   = netlink mechanism, trigger the cold plug
   (starts xdev netlink reader daemon)

The only other change is to remove the old setting of mdev as hotplug 
helper in the kernel completely (done implicitly by -p).


All those may be combined with -i; then at first the configured setup 
operations are performed, and thereafter the other requested actions are taken.


That does *not mean* xdev -i does any binary-encoded setup stuff. It 
shall read the config file (suggestion: first try /etc/xdev-init.conf, 
then fall back to /etc/xdev.conf when the former does not exist) and invoke 
the required operations for the configured setup lines. The setup lines are 
only used in xdev -i and otherwise ignored by xdev parser (like comments).




  To avoid duplicating event handling between 3 and 4, both the
short-lived hotplug helper and the long-lived handler can take a
lock.


As an idea: when the parser holds a small table with the names from the 
last N events (names? doesn't the kernel provide an event number? I heard 
something about this), it can check for any new incoming event whether 
the same message came in before, and then just ignore it. Otherwise the 
operation is done and the message placed in the least recently used 
table slot (a simple linked LRU list).


This is only an idea to add extra safety against duplicate-event race 
conditions. It can otherwise be left out if we drop that safety. My 
suggestion: implement it, but let it depend on a "hidden" config option 
(not in the config system, but at the start of the source file). The 
default setting can be discussed later.


read_conffile(...);
if( LRU_ENABLE )
{
  setup_lru_table();
}
while( read_message(msg) )
{
   if( !LRU_ENABLE || !message_in_lru(msg) )
   {
 do_device_operation(msg);
   }
   if( 

Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 11:25, Guillermo Rodriguez Garcia wrote:

I understand your argument. You are saying that users should be able
to choose at runtime. What I say is that my impression is that most
users belong to one of the following two groups: Those who don't
really care, and those who are happy making this choice at build time.


Putting people into either-or categories is not very handy; humans are too 
different, and you won't predict the exact needs of the next person 
coming around.

So this either blocks any innovation or forces one half of your 
either-or group to do things in a specific way (whether they doubt it or 
not). Your way? Not mine!




Re: [OT] long-lived spawners

2015-03-13 Thread Harald Becker

On 13.03.2015 10:52, Natanael Copa wrote:

Crazy idea:

have a netlink listener/handler that is installed in
/proc/sys/kernel/hotplug.

On startup it will set up a netlink listener and remove itself from
/proc/sys/kernel/hotplug so all subsequent events comes via netlink.

Read events from netlink and handle those.

On timeout (no events for N seconds), restore itself in
/proc/sys/kernel/hotplug and exit.

I don't think this is possible to implement without race conditions
so I still believe a minimal forever running netlink listening
daemon is the way to go.


Crazy! ... but otherwise ACK ... you may try this on my intended
modular device management. Modify the hotplug helper to disable hotplug,
fire up netlink, but don't lose the initial hotplug handler event ...
but I don't expect mainstream stability


What happens if new event comes after netlink manager is started up but
before the netlink listener is setup up?

For example /dev/sda event comes and the user space hotplug program is
fork/executed, but before it has reached to set up netlink listener the
event for /dev/sda1 comes.

How do you prevent races? I suppose you would need kernel help for
solving that properly, like Denys suggested.


That is where the fifo (named pipe) steps in. The pipe (a few bytes of 
descriptor state in the kernel) will still be there (the point of the fifo 
manager). Any active mechanism handler just writes a message to the fifo 
once the complete event information has been gathered. Event messages are 
then consumed by the parser / handler in the order they were written 
into the pipe. If no parser is running, the fifo manager fires one up. 
Event processing may be slightly delayed, but it always happens in the 
order the messages were written to the pipe (meanwhile the kernel 
uses some buffer space to hold the messages). On modern machines the 
event gathering and handling can even run on different CPU cores, at no 
extra cost.




Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 10:30, Guillermo Rodriguez Garcia wrote:

There are many configuration options in BB that must be defined at
build time. I don't see why this one would be different.


You can activate both as the default (at the cost of a few bytes of 
code size), and let the user of the binary decide which mechanism he 
prefers, or even switch temporarily (without system interruption).




Users that want a functional solution will not probably care much
about the underlying implementation.


Exactly; that means using only one mechanism forces those users to 
do it in a specific way, with all sorts of consequences.



Those who want to tailor BB to fit their preferences most likely don't have a 
problem with building
their own BB.


OK, and what's then wrong with my intended approach? You will be able to 
opt out of most of the parts if you like, even the parser / handler (think 
of handling device management in a script without reinventing 
the event plug mechanism). It is a modular system; just tie together the 
functions you like.


A device handler could be:

#!/bin/sh
while read -t "$TIMEOUT" message
do
    # now split the received message
    ...
    # and set up your device node entries
    ...
done
exit 0

... or think vice versa: suppose someone finds a new, superior 
mechanism, but still wants to use the conf parser / device handler back 
end. Then opt out of the plug mechanisms, keep the parser in, and use a 
small external program with the new mechanism (until it is perhaps 
added as another optional mechanism in Busybox, like netlink).


A modular system: put together the required parts. The only caveat is 
that you need to fire up some kind of service daemon for the device 
management system ... otherwise the start of this service daemon (fifo 
manager is just another name for it) needs to be coupled with some other 
part ... which meets Laurent's wishes, poking me towards clarity and 
functional separation.




Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 09:18, Natanael Copa wrote:

Any program (with appropriate rights) may open the "named pipe" (fifo)
for writing. As long as data for one event is written in one big chunk,
it won't interfere with possible parallel writers. If the fifo device is
closed by a writer, this does not mean it vanishes, just reopen for
writing and write next event hunk).


What I meant was that reader needs to reopen it too.


What?



my point is that you need a minimalist daemon that is always there. Why
not let that daemon listen on netlink events instead?


... because then you need to duplicate the process fire-up code in the 
kernel hotplug helper, which adds extra cost to a process that 
should be as fast as possible. A named pipe (fifo), on the other hand, 
allows the helper to do just a simple open to get access to the pipe.




Since it is so simple as you say, why not write a short demo code?


... because I hate unnecessary code hacking before finishing the 
planning step!


... and what short demo code are you expecting? For what? To show that a 
concept proven a thousand times over will work? Examples of using named 
pipes already exist; search the net if you like.




May help me understand what you really mean.


Sorry, thinking in code is not my way!

I prefer thinking in functionalities and data flows (the way our 
teachers taught), and then to hop onto the build environment and see what 
I need to do to get the required functionalities (for C / Linux I know 
them; for others I need to dig a bit more, e.g. a different language or a 
different system). That way I'm independent of a specific language / 
compiler / environment, and can change the required algorithms early, 
without much cost, when necessary.





Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 09:04, Guillermo Rodriguez Garcia wrote:

Michael's proposal would allow you to do what you want to do, since
you are one of those "experts who know how to build their own BB
version". So what's wrong with his proposal?


It is wrong, as his solution will either throw out users who are stuck on 
the kernel hotplug mechanism (or leave them without the improvements), or 
block the spread of newer technology to the majority (who stick to 
pre-built binaries).


... whereas another proven interprocess communication technique 
(the named pipe, aka fifo) would let the user select his preferred 
mechanism (the plug mechanism, not the old suffering mdev behavior).


--
Harald



Re: [RFC] Proof-of-concept for netlink listener for mdev -i

2015-03-13 Thread Harald Becker

On 13.03.2015 08:23, Natanael Copa wrote:

I find it hard to discuss any solutions with someone who don't agree
what the problem is.


ACK



So my question to this:

What is the sense of this?
What do you want to express with this?


Show with an example the idea a possible way to solve the hotplug
forkbomb problem with a minimal long lived netlink listener + short
lived event handler.


You hacked a solution for the mechanism of your preference, throwing 
out those who want to use one of the other mechanisms ... isn't this 
forcing others to do it your way?


... and what is the major difference between your hacked code and my 
intended solution?



Instead of just using words to tell how to make a, express it with
example code.


OK, example code which shows some of the work required for the intended 
methods, but lacks the rest ... and then you start adding to and modifying 
your hacked code, before even finishing the planning, and even before 
finishing the gathering of the required functionalities? ... Shruck! ... 
I prefer doing it the other way round.


... you did nothing more than put together some well-known code 
examples for something incomplete ... I have already pushed similar code 
examples into the trash can ... sorry, I don't see much use in this :(




What was your intention to do that code hacking?


To show that a long-lived netlink daemon can be very small and simple -
probably much smaller than a fifo manager would be.


LOL ... sorry, but I can't resist ...

... you clearly do not understand the purpose of that "fifo manager": 
have you ever used supervisor daemons like tcpsvd, which accept incoming 
TCP connections to fire up a handling server process? This is the job of 
the "fifo manager": fire up a parser / device handler process when it's 
required ... and it is required, because otherwise the kernel-spawned 
hotplug helper can't access the pipe to deliver the event to the handling 
process, or would still need to spawn a full parser / handler for each 
event. So what is the benefit of your code snippet, other than forcing 
others to do it your way?




If someone have a better idea how to solve it, I would love to hear it.
(No, I don't fifos will make it smaller/simpler)


What? You use pipes and say fifos will suffer? You clearly did not 
understand the operation of named pipes (aka Linux fifos).


Have you ever thought about the problem of how a separately spawned 
process can access and write something into a pipe? The Unix solution for 
this is the named pipe (fifo): separately spawned processes get access 
to the pipe by just opening its name in the file system, receiving the 
pipe descriptor ... so what is the difference?


One may be my decision to split even more functionality into smaller 
modular blocks, instead of copying the same code snippets into different 
programs.


You will need the logic to fire up a process to handle the event, and to 
react to a failure exit. Or you need to let that process stay in the 
background forever. Right?


My intended "fifo manager" does exactly this, besides creating the named 
pipe. That's it: a supervisor comparable in its function to tcpsvd ...


... and maybe it is a good idea to add this functionality as a separate 
applet in Busybox, where other functions (e.g. Natanael's modprobe 
wishes, and others) may benefit from the same functionality:


e.g. (not a code example, but a usage example)

usage:  fifosvd [-e ERROR_SCRIPT] FIFO_NAME PROG [ARGS]

Create the named pipe FIFO_NAME, wait for any other process to write 
something into this pipe, then fire up PROG with the given ARGS, connecting 
its stdin to the read end of the fifo (stdout to /dev/null, stderr to 
the same as fifosvd's). When PROG dies, check the exit status and 
fire up ERROR_SCRIPT with the name of PROG, the exit status or failure 
reason, and a count of successive failures.


In addition, fifosvd should forward the signals SIGINT, SIGQUIT, SIGHUP, 
SIGTERM, SIGUSR1, and SIGUSR2 to the currently running PROG, or otherwise 
just die on SIGTERM (ignoring the others).


... and no, this is not an extra thing. This *is* the function of the 
intended "fifo manager", with the benefit of moving the process fire-up 
code out of possibly several different programs, replacing it with a 
single, simple open of FIFO_NAME for writing.
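As an illustration of the core trick (a sketch only: fifosvd does not exist as an applet, and the paths, the event message, and the consumer command are made up for the demo), the manager side simply blocks opening the fifo's read end, so the consumer is only started once a writer actually shows up. Here a background writer plays the kernel-spawned helper and `cat` stands in for PROG:

```shell
#!/bin/sh
# Sketch of the fifosvd idea (hypothetical applet; respawn loop and
# ERROR_SCRIPT handling omitted). Paths are made up for the demo.
fifo=/tmp/fifosvd-demo.$$
mkfifo "$fifo" || exit 1

# A writer, as the kernel hotplug helper would be:
# open the fifo, write one complete event message, close.
( echo "add /dev/sda" > "$fifo" ) &

# The manager side: opening the read end blocks until the writer above
# has opened the fifo, so the consumer only runs when there is work.
msg=$(cat < "$fifo")
echo "$msg"

wait
rm -f "$fifo"
```

A real fifosvd would loop around the spawn, apply the failure handling described above, and keep the fifo in place between consumer runs.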


... and (as an idea) when FIFO_NAME is a dash, use fifosvd's stdin as the 
pipe (skip fifo creation), but still wait until any data arrives in the 
pipe, then fire up PROG, respawning on more data, until the write end of 
the pipe gets closed ... and you gain a dynamic pipe manager:


ANY PROG | fifosvd - CONSUMER_PROG

This delays the start of CONSUMER_PROG until data arrives in the pipe, and 
allows the consumer process to exit when idle (a timeout needs to be added 
to the consumer; the rest is done by fifosvd) ... think of running a shell 
script with the shell's notable memory consumption only while data is 
being sent to the pipe, otherwise freeing the resources:


#!/bin/sh
while read -t "$TIMEOUT" line
do
    case "$line" in
        ...
    esac
done

Re: RFD: Rework/extending functionality of mdev

2015-03-13 Thread Harald Becker

On 13.03.2015 00:05, Michael Conrad wrote:

On 03/12/2015 04:32 PM, Harald Becker wrote:

On 12.03.2015 19:38, Michael Conrad wrote:

On 3/12/2015 12:04 PM, Harald Becker wrote:

but that one will only work when you either use the kernel hotplug
helper mechanism, or the netlink approach. You drop out those who
can't / doesn't want to use either.


...which I really do think could be answered in one paragraph :-) If the
netlink socket is the "right way" to solve the forkbomb problem that
happens with hotplug helpers, then why would anyone want to solve it the
wrong way?  I don't understand the need.


To clarify,


Adding #0 in here - to not forget coldplug and semi-automatic handling


   1 - kernel-spawned hotplug helpers is the traditional way,
   2 - netlink socket daemon is the "right way to solve the forkbomb
problem"


ACK, but #2 blocks usage for those who like / need to stay at #1 / #0


   3 - kernel-spawned fifo-writer, with fifo read by hotplug daemon is
"solve it the wrong way".


NO!

This splits the operation of one big process into different threads, using 
an interprocess communication method. Using a named pipe (fifo) is the 
proven Unix way for this ... and it allows #2 without blocking #1 or #0.




ohh, good question! ... ask Isaac! (answer in one paragraph?)


What I hear Isaac say is "leave #1 (traditional way) alone.  I want to
keep using it".

I agree with him that it should stay.  But I would choose to use #2 if
it were available.  I am asking the purpose of #3.


The purpose of #3 is to split the hotplug handler needed for the 
traditional #1 from the suffering part, putting the latter, with some 
rearranging, into a separate process, and to use a proven interprocess 
communication (IPC) method. Then you may use whichever mechanism you 
like, and whichever you choose, it will benefit from the overall speed 
improvement (as events tend to arrive in bursts, and there is no 
extra parsing for the 2nd and following events). That's it.


I try to provide the work for the step to #3, allowing everybody to use 
the mechanism he likes; Isaac, on the other hand, blocks any innovation, 
forcing me and others to either stay at #1 too, or choose a different / 
external program (with the consequence of code doubling or complex code 
sharing, the opposite of clarity).




So I think your answer to my original question is "the fifo design is a
way to have #1 and #2 without duplicating code".


The fifo design is a proven method to split the operation of a complex 
process into smaller threads, using an interprocess communication method.



In that case, I would offer this idea:


All you do is throw in complex code sharing and the need to choose a 
mechanism ahead of time, at build time, to allow switching to some newer 
stuff ... but what about pre-generated binary versions: which mechanism 
shall be used in the default options, which mechanism shall be offered?


With netlink active (surely the proven and better way for the job), you 
hit those like Isaac. With netlink disabled, spreading newer technology 
widely is usually blocked (not talking about the few experts who 
know how to build their own BB version).


So why not allow some innovation and let the user choose which 
mechanism to use? What is wrong with this intention?


I neither want to reinvent the wheel, nor go the udev way of creating a 
big monolithic block, but I would like the ability to set up the system 
the way I like, without blocking others from using the plug mechanism 
they like.


--
Harald



Re: [RFC] Proof-of-concept for netlink listener for mdev -i

2015-03-12 Thread Harald Becker

Hi Natanael,

I prefer finishing the planning and creating a functionally complete 
structure before hacking code, so I do not see any benefit in digging 
into your code at the moment; I don't see any question that could be 
answered this way, at least right now.


So my questions to this:

What is the sense of this?
What do you want to express with this?
What was your intention in doing that code hacking?

I don't expect you want to show that you are able to do this programming?

--
Harald



Re: [OT] long-lived spawners

2015-03-12 Thread Harald Becker

On 12.03.2015 20:57, Laurent Bercot wrote:

On 12/03/2015 20:07, Harald Becker wrote:

Don't you risk resource problems for hotplug handler processes, when
the system is under such pressure?


  No and that's my point. If you don't have swap, it's likely that
your box is embedded and you won't spend your life plugging and
unplugging USB sticks, so after the first coldplug burst, the
pressure will be pretty small.


So why use hotplug on those systems at all? Semi-automatic device 
management may be a good choice for such systems, that is, call "mdev -s" 
manually when a device update is required. This may be invoked by an 
application (either directly or indirectly).


... besides this, I think it is the user's decision which mechanism he 
wants to use ... it is his system!




Re: RFD: Rework/extending functionality of mdev

2015-03-12 Thread Harald Becker

On 12.03.2015 19:38, Michael Conrad wrote:

On 3/12/2015 12:04 PM, Harald Becker wrote:

but that one will only work when you either use the kernel hotplug
helper mechanism, or the netlink approach. You drop out those who
can't / doesn't want to use either.


...which I really do think could be answered in one paragraph :-) If the
netlink socket is the "right way" to solve the forkbomb problem that
happens with hotplug helpers, then why would anyone want to solve it the
wrong way?  I don't understand the need.


Michael,

ohh, good question! ... ask Isaac! (answer in one paragraph?)

In general: We are not living in a uniform world where every person 
handles things in the same way ... people's preferences are different ... 
for whatever reason ...


In detail: I'm not a philosopher, but it sounds to me more like a "what 
is the sense of life?" question ... which means I can't give you the 
definitive answer to it.


In practice: We can have a simpler version if we choose the netlink 
approach and Natanael's coldplug trigger function, but then others will 
complain (see Isaac).


Conclusion: As I accept different preferences, and I do not want to 
force others more than necessary, I tried to find a solution which allows 
every required mechanism (with maximum code sharing), and lets the user 
choose which one to use. In addition, BB has a config system which allows 
disabling unwanted stuff, so you can opt out of the hotplug stuff if you 
like.


... but still Isaac insists on using the old (suffering) mdev version and 
prefers a system with known errors over some improvements and innovation :(


--
Harald



Re: RFD: Rework/extending functionality of mdev

2015-03-12 Thread Harald Becker

On 12.03.2015 18:12, Laszlo Papp wrote:

It is nice that you are trying to help and I certainly appreciate it,
but why cannot you simply do that job nicely outside busybox where
*you* have to be responsible for that project? It would be an explicit
way of enforcing KISS and not putting more burden on Denys.


If you are talking about development and testing, yes ... but my need is a 
one-system, one-static-binary approach ... and I really want to put in 
that extended functionality ... so running it as a separate project would 
mean forking the Busybox project ... public or private does not matter 
here ...


... I have an idea regarding those preference questions ... maybe it is of 
general interest ... it could even take some development pressure off 
Denys ... maybe ... but let me check some details before I write a 
message in a new thread for this, to see what others think about it ...




If you can convince the busybox community to split up the
maintainership, perhaps that would be a completely different
discussion to start with, but in all honesty, I do not like these
"monolythic" projects. I still stick by that KISS is a good thing. If
I could, I would personally replace busybox with little custom tools
on my system, but I currently do not have the resource for that.
Therefore, all the complexities and non-kiss that goes in is something
I need to accept.


... and I go the other way: hook up a complete system with a single 
statically linked binary (oops, currently two: Busybox and dropbear) ... 
though, people's preferences are different ... I try not to force 
anybody else to do things in a specific way, but I don't want to be 
forced by others ... and I dislike placing that stuff in (a) different 
file(s), except for a very good reason (e.g. minimal memory usage of the 
plug helper and daemon - but that has to be discussed)



Asking for feedback is good, nothing wrong in there; putting this into
busybox this way is wrong on the other hand IMHO.


People's preferences are different, but by blocking every innovation in 
Busybox you force me to do things your way! ...


... so my options would be: either forking the Busybox project or 
dropping all my development forever ... both not very welcome! :(


And don't ask! It's preference, not technical, nor knowledge ...

And why grumble about a few bytes on the one hand, and then throw in 
other stuff? The netlink functionality has been requested by others too, 
but I see there are people who like to stay with the kernel hotplug 
mechanism, or even semi-automatic device setup, so why should we not 
enable the base functionality for all of them, with maximum code sharing, 
so the user may decide which function to use, or possibly opt out in the 
config? Apart from that, I do not want to break existing setups, but I 
was poked towards clarity, that is, splitting off functions which do 
not logically belong together into separate commands ... no trouble for 
me to do this, but then we may need to make some slight modifications to 
startup scripts (one or two lines, or a command parameter) ... everything 
has its pros & cons ... even writing lengthy mails ... which consume the 
time I could otherwise use for development ... :(


--
Harald



Re: [OT] long-lived spawners

2015-03-12 Thread Harald Becker

On 12.03.2015 13:45, Laurent Bercot wrote:

  If your system doesn't have swap, then it's probably some embedded
box and you likely won't hotplug many things, so you can use
Natanael's approach: use an event listening daemon for the coldplug,
then kill it and register a /proc/sys/kernel/hotplug helper for the
rest of the system's lifetime.


Don't you risk resource problems for the hotplug handler processes when 
the system is under such pressure? ... But I think Denys meant big 
monolithic daemons, e.g. udevd, when he talked about long-lived daemons, 
not small service-starter daemons.


--
Harald





Re: [OT] long-lived spawners

2015-03-12 Thread Harald Becker

Hi Natanael !


My point is that a long lived daemon that stays there 5 hours after
the last hotplug event is currently unavoidable unless you are ok
with one fork/exec for every event.


Why not logically split the function?

- one listener which gathers data and forwards sanitized messages

- one consumer / handler job, started when required but dying when idle
  (meanwhile that single job consumes the messages and acts on them)


... and compare that to the operation of e.g. tcpsvd ... set up a network 
socket, accept incoming connections, fire up the handling server program ... 
so what is the difference?




Crazy idea:

have a netlink listener/handler that is installed in
/proc/sys/kernel/hotplug.

On startup it will set up a netlink listener and remove itself from
/proc/sys/kernel/hotplug so all subsequent events comes via netlink.

Read events from netlink and handle those.

On timeout (no events for N seconds), restore itself in
/proc/sys/kernel/hotplug and exit.

I don't think this is possible to implement without race conditions
so I still believe a minimal forever running netlink listening
daemon is the way to go.


Crazy! ... but otherwise ACK ... you may try this on my intended 
modular device management. Modify the hotplug helper to disable hotplug, 
fire up netlink, but don't lose the initial hotplug handler event ... 
but I don't expect mainstream stability


--
Harald



Re: RFD: Rework/extending functionality of mdev

2015-03-12 Thread Harald Becker

Hi Laurent !

>   Out of curiosity: what are, to you, the benefits of this approach ?

What are the benefits of preferences? ... good question!? ;)



Does it actually save you noticeable amounts of RAM ?


Maybe a few bytes ... noticeable? What is your threshold for noticeable 
here? ... otherwise I would say *NO*



of disk space ?


Which disk space? ... which disk? ... looking around ... not seeing a 
"disk" (in your sense) ... in short: the complete system runs from 
initramfs ... a disk in this sense is some external USB data storage 
(data only, never used for system purposes).



Is it about maintenance - just copy one binary file from a system to
another ?


Hit!


(But you'd also have to copy all your scripts...)


A second file, a tar, but all architecture-independent;
a third file, a tar, with pre-configured setups.


Is it about something else ?


Yes, the most important one! ... It's my way, the way I did it for several 
commercial projects ... and the way I like to do it ... clarity / 
simplicity / purism ... preference! :)




  If it's just for the hacking value, I can totally respect that, but it's
not an argument you can use to justify architectural modifications to a
Unix tool other people are using, because it kinda goes against the Unix
philosophy: one job, one tool. Busybox gets away with it ...


Better to call Busybox a tool set; it is several commands and a library 
linked together to share some code size. Apart from the invoking logic of 
the applets (multicall), the applets have to be considered separate 
commands ... though some commands tend to forget that.


I won't try to hide fixed system functionality in a binary (better to say 
program or command here), except for fallback operation (e.g. last-resort 
handling). My usual approach is to spawn a script when it's time 
to handle system-dependent things (or things that need to be under admin 
control).


... but I like to describe in configuration what to do, not how to do it 
(as is done in scripts). So I like to have simple lists describing my 
system, let a one-shot command parse them, and call the required 
programs / commands / scripts with the configured information from the 
lists, to do the job.


e.g.

# required virtual file systems
/proc root:root  0755  %proc
/sys  root:root  0755  %sysfs
/dev  root:root  0755  %tmpfs  size=64k
/dev/pts  root:root  0755  %devpts

(this describes my system setup - a selected part - without describing how 
to get there)


... and yes, that could be done with shell scripting ... the way I have 
been doing it for years ... but things still tend to be scattered around, 
so I liked the idea of putting the setup of the virtual file systems 
(excluding /tmp, which I set up in fstab) and the preparation of the 
device file system (including the device descriptions) in one central 
place (that was mdev.conf). Currently I put those lines in comments and 
filter them out into a shell script, but this is sometimes confusing.
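As a sketch of what such a one-shot parser boils down to (the field order PATH OWNER MODE %FSTYPE [OPTIONS] is my reading of the example above, not a fixed specification; the commands are only echoed here, not executed):

```shell
#!/bin/sh
# Dry-run sketch: turn the declarative list into the commands that
# would set it up. Field order assumed: PATH OWNER MODE %FSTYPE [OPTS].
parse() {
    while read path owner mode fstype opts; do
        case "$path" in ''|'#'*) continue ;; esac
        echo "mount -t ${fstype#%}${opts:+ -o $opts} none $path"
        echo "chown $owner $path && chmod $mode $path"
    done
}
out=$(parse <<'EOF'
# required virtual file systems
/proc     root:root  0755  %proc
/sys      root:root  0755  %sysfs
/dev      root:root  0755  %tmpfs  size=64k
/dev/pts  root:root  0755  %devpts
EOF
)
echo "$out"
```

A real one-shot command would execute the mounts instead of echoing them, and would need error handling and ordering (parents before children, as in the /dev before /dev/pts case).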



(and I believe that inclusion of supervisors and super-servers is already
too much).


ACK ... but what do you think about e.g. tcpsvd (accepting incoming TCP 
connections), or a netlink reader?


My fifo watcher does a job comparable to tcpsvd (which brings me to the 
idea of creating it as a fifod applet in BB with appropriate options). We 
could use TCP connections, but the system cost of a fifo (named pipe) 
should be below the cost of running through the network stack.



  Even on a noMMU system, I think what you'd gain from having one single
userspace binary (instead of multiple small binaries, as I do on my
systems) is negligible when you're running a Linux kernel in the first
place, which needs at least several megabytes even when optimized to
the fullest.


A Linux kernel including the cpio to set up the initramfs is around 6 to 
8 MByte on modern kernel versions (complete system: kernel + system 
tools + application scripts) ... running on a system with 64 MByte, or 
even 16 to 20 MB. No disks at all. Boot from a CD-ROM drive, then turn it 
off. ... but nowadays more likely boot from a USB stick :) ... a 256 
MByte stick :) ... vintage ... 32 MB boot partition with boot loader 
files + boot images + system config; the rest of the stick is data storage.




  I know a guy who manages to run almost-POSIX systems in crazy tiny
amounts
of RAM - think 256kB, and the TCP/IP stack takes about 40 kB - but that's a
whole other world, with microcontrollers, JTAG, a specific OS in asm, and
ninja optimization techniques at the expense of maintainability and
practicalness.


Full-duplex serial bridging between a PLC bus system and a special 
synchronously clocked bus system for hazardous areas, with a bit rate of 
31 kbps and manual Manchester code detection and bit shifting, on an 
8-bit CPU at 8 to 12 MHz with 16 kB EPROM and 1 kByte RAM and a frame 
size of up to 300 bytes in each direction ... and at least one bus side 
required sending a packet checksum in the header :( ... microcontroller 
... and OS? What's that? Which OS? The first instruction executed by the 
CPU after reset at address X ... that was 

Re: RFD: Rework/extending functionality of mdev

2015-03-12 Thread Harald Becker

Hi Laszlo !


So why don't you write such a binary wrapping busybox then and other
things? I think KISS principle still ought to be alive these days.
Not to mention, Denys already cannot cope with the maintenance the
way that it would be ideal. For instance, some of my IMHO serious
bugfixes are uncommented. Putting more and more stuff into busybox
would just make the situation worse and more frustrating,


The usual way of development is:

(1) Planning the work to do (the step we are discussing here)

(2) Code hacking (what I will do next)

(3) Preliminary testing (also my job)

(4) Offer access to those who like (for further testing)

(5) fixing complains

(6) putting into main stream (or accessible by the rest)

Right? So what is your complaint?



sorry. I really do not want busybox to follow the systemd way.


Who told you I'm trying to go that way? My intention is to overcome the
mdev problems and to allow those who like it to use the netlink interface. I
dislike encoding any fixed functionality in a binary, and I don't force
anybody to use possible extensions.

Laurent poked me towards more clarity, which would mean splitting early
initialization from mdev operation, with the caveat of a slight change 
in the init scripts (maybe one more command or an extra
command parameter; it could be done automatically, but then there would 
be different functionalities in one applet - maybe more discussion is 
required on that). Besides that, it shall be up to the system maintainer 
to choose which device management mechanism to use (BB shall provide the 
tools for that: small modular tools, bound together by the admin - no big 
monolith).




On the positive side of systemd, they at least have far more resource
than what Denys can offer, at least in my opinion, so ...


You are talking about development resources? Here they are! I'm willing 
to do that job, not asking for someone else to do the work.


I'm asking which preferences other people have, so I'm able to 
make the right decisions before I start hacking code ... so what's wrong?


--
Harald



Re: RFD: Rework/extending functionality of mdev

2015-03-12 Thread Harald Becker

Hi Natanael !

> I assume that you are talking about named pipes (aka fifos)

http://en.wikipedia.org/wiki/Named_pipe


Ack, fifo device in the Linux / Unix world.


Why do you need a hotplug helper spawned by kernel when you have a
netlink listener? The entire idea with netlink listener is to avoid the
kernel spawned hotplug helper.


... because there are people who dislike using netlink and want to use
the kernel hotplug helper mechanism. That's it. People's preferences
differ. Opt out of the functions you dislike in the BB config.


... but this is vice versa for those who choose to use the kernel
hotplug mechanism.




It simply does not make sense to have both.


Both active at the same time? Sure, that was never the intention. I've
been talking about the functionalities which need to be implemented.




Every gathering part grabs the required information, sanitizes,
serializes and then write some kind of command to the fifo. The fifo
management (a minimalist daemon) starts a new parser process when
there is none running and watches it's operation (failure handling).


If you are talking about named pipes (created by mkfifo), then your
"fifo approach" will break here.

Once the writing part (e.g. a "gathering part") is done
writing and closes its end, the fifo is consumed and needs to
be re-created. No other "gathering part" will be able to write anything
to the fifo.


??? You misunderstand how named pipes operate!

Any program (with appropriate rights) may open the named pipe (fifo)
for writing. As long as the data for one event is written in one chunk
(at most PIPE_BUF bytes, which POSIX guarantees to be written
atomically), it won't interfere with possible parallel writers. If a
writer closes the fifo, the fifo does not vanish; the next writer simply
reopens it for writing and writes the next event chunk.
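To make this concrete, here is a minimal shell sketch (not the proposed implementation): the watcher end holds the fifo open read-write, so independent writers can open, write one chunk, and close, without the reading side ever seeing EOF:

```shell
tmp=$(mktemp -d)
mkfifo "$tmp/events"

# the watcher holds the fifo open for read+write, so a writer closing
# its end never delivers EOF to the reading side
exec 3<>"$tmp/events"

# two independent "gathering parts": each opens, writes one chunk, closes
echo "add sda" >"$tmp/events"
echo "add sdb" >"$tmp/events"

# the reading side still sees both events, in order
events=$(head -n 2 <&3)
echo "$events"

exec 3<&-
rm -rf "$tmp"
```

The same trick (one long-lived descriptor opened read-write on the fifo) is what spares the proposed fifo watcher from ever having to re-create the fifo between events.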




Basically what you describe as a "fifo manager" sounds more like a bus
like dbus.


It is the old and proven inter-process communication mechanism of Unix,
nothing new.


And "fifo manager" sounds really big; it is a very minimalistic daemon
with primitive operation (it never touches the data in the fifo, nor
even reads that data). Its main purpose, beside creating the fifo, is to
fire up a conf parser / device operation process when required, and to
react to failures of that process (by spawning a script with args).


--
Harald



Re: RFD: Rework/extending functionality of mdev

2015-03-12 Thread Harald Becker

Hi !

To Michael:

Don't be confused: Natanael provided an alternative way to achieve the
initial device file system setup (which isn't bad, but may have its cons
for some people on small, resource-constrained systems in the embedded
world). So I left it out for clarity ... but it may still be implemented
/ used as an alternative setup method.



To Natanael:

On 12.03.2015 10:14, Natanael Copa wrote:

- The third method (scanning the sys file system) is the operation of "mdev
-s" (initial population of the device file system; don't mix it up with
"mdev" without parameters); in addition, some users take this as
semi-automatic device management (embedded world)


I disagree here. mdev -s solves a different problem.


No.


there are 2 different problems.

1) handle hotplug events.

>


There are only 2 methods for this problem:
   A) using 1 fork per event (e.g. /sbin/mdev
 in /proc/sys/kernel/hotplug). This is simple but slow.


This is "mdev" (without any parameters).
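For reference, this method is typically wired up with a boot-script fragment like the following (a sketch; /sbin/mdev is the conventional path and must match your system):

```
# register mdev as the kernel hotplug helper (run at boot, as root);
# the kernel then forks this program once for every uevent
echo /sbin/mdev > /proc/sys/kernel/hotplug
```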



   B) reading hotplug events via netlink using a long lived daemon.
   (like udev does)


Currently not in mdev, but this shall be an alternative mechanism in my
implementation, so you may choose and use the mechanism you like.
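For background: a uevent read from such a netlink socket arrives as one datagram of NUL-separated strings, "ACTION@DEVPATH" followed by KEY=VALUE pairs. A small shell sketch of splitting one such message (the sample message is made up for illustration):

```shell
# hypothetical sample of one uevent datagram; \0 marks the NUL separators
msg='add@/devices/virtual/mem/null\0ACTION=add\0DEVNAME=null\0MAJOR=1\0MINOR=3'

# turn the NUL separators into newlines to get one field per line
fields=$(printf '%b' "$msg" | tr '\0' '\n')
echo "$fields"
```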



2) do cold plugging: at boot you need to configure the devices the
kernel already knows about.


This is the operation of "mdev -s"



B) solve problem 1 mentioned above (set up a hotplug handler)
and then trigger the hotplug events. Now you don't need scan the
sysfs at all.


But there are still people in the embedded world who like (or are
forced) to use semi-automatic device handling, that is, calling
something like "mdev -s" to scan the sys file system for new devices.


My approach is to give them all the possibility to do device management
with the mechanism they like, without maintaining different device
management systems and duplicating the code.


Three different mechanisms, with three different front ends, and one
shared back end.


... the only thing I see is a difference in the initial device system
population. You provided a different method to trigger the coldplug
events (and yes, I understand that approach), but that one will only
work when you use either the kernel hotplug helper mechanism or the
netlink approach. You drop those who can't / don't want to use either.


I left that out of the "short" summary for Michael, but didn't forget
your hint. Maybe we can have an additional alternative for the setup
part, implementing event triggering (we still do the sys file system
scan, but with different handling then) ... or you do it in scripting:
set up your hotplug handler or netlink listener and then run a script to
trigger the plug events (a nice idea otherwise, I like it, but I won't
unnecessarily drop people on the other end).
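Natanael's triggering variant, sketched as a boot-script fragment (requires root and a mounted /sys; the glob patterns are illustrative and may need adjusting per system):

```
# replay "add" events for devices the kernel already knows about;
# whichever handler is installed (kernel hotplug helper or netlink
# listener) then processes them like ordinary hotplug events
for u in /sys/bus/*/devices/*/uevent /sys/class/*/*/uevent; do
    [ -e "$u" ] && echo add > "$u"
done
```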



What I currently do is:
- for problem 1 (dealing with hotplug events) I use method A.

- for problem 2 (set up devices at boot) I use method A because my
   hotplug handler is slow due to many forks.

What I would like to do is switch to method B for both problem 1 and
problem 2. However, I want the long-lived daemon to
consume less memory, and I want to be able to use the mdev.conf syntax
for configuring the devices.


No problem: my approach shall give you the possibility to do it the way
you like, without blocking the others. You still need to do the
following steps on system startup:


- initial creation of the device file system (out of scope of mdev)

- prepare your system for hotplug events, either the kernel method or
netlink (if you go that way)


- trigger the initial device file system population (cold plug)
  (may be done in two ways: yours or the old "mdev -s")



- So how can we have all three methods without duplication of code, plus
how can we speed up the kernel hotplug method?


IMHO, there are not 3 methods so the rest of discussion is useless.


You are going to force others to do it the way you like, and everything
else is of no interest?


I'm trying to give most people the possibility to do it the way they
like, your way included! ... without blocking other methods (or maybe
external implementations with reuse of the conf / handling back end).


... as with other functions in BB, unwanted functionality may be opted
out in the build config.


... so this would mean for you: include the back end and the netlink
part in your Busybox, do the usual device file system creation, activate
the netlink handler (which shall auto-start the fifo watcher), then
trigger the coldplug events (undecided whether done with a script or
added to the binary).


Anything wrong with this?

--
Harald


Re: RFD: Rework/extending functionality of mdev

2015-03-12 Thread Harald Becker

Hi Isaac !

On 12.03.2015 02:05, Isaac Dunham wrote:

I just don't think you're quite thinking through exactly what this means.

In which sense? Let me know what I got wrong. I'm a human making
mistakes, not a perfect machine.


It seems like you want to force everyone who uses mdev to use a multi-part
setup.


Whoops, you got that definitely wrong! Who told you I want to force you
to use a different setup? My clear intention was to add some extra
flexibility without breaking existing usage.


... but criticism arose, which poked for more clarity. Only due to this,
and the wish to fit as many people's preferences as possible, there
might result a slight change: one extra operation may be required (when
not combined / hidden in the mdev system for clarity). At the location
where you set up the hotplug handler or do "mdev -s", you either need a
slight change of the command parameters or need to insert a single extra
command (details on this have not been discussed yet). But that's it.


Are you really concerned about inserting a single extra line in your
startup scripts, or having a modification to one of those lines? That
way you would block any innovation, *and* the needs of other people.




Specifically, you are proposing to *replace* the standard hotplugger design
(where the kernel spawns one program that then sets up the device itself)
with a new design where there's a hotplugger that submits each event to
a longer-lived program, which in turn submits the events to a program
that creates/sets up the devices.


You say I propose a change of the device management system?

The mdev system suffers on event bursts, and this has been grumbled
about several times in the past. What I'm now trying is to build a
solution for this into Busybox: not reinventing the wheel, but
implementing known solutions.



I am saying, don't force this design on people who want the hotplug helper
that the kernel spawns to finish things up.


The only way to solve this would be to leave mdev as it is and create an
additional netlink-mdev, which brings us to the situation where it gets
complicated (or at least complex) to share code and work between both of
them. Selection between both (when you don't want to include both hunks)
needs to be done with the BB configuration, which makes things like
pre-built binaries a mess (how many different versions shall be
maintained? Only yours? Only mine?).


To solve this conflict, and to also give the current device management
system some speed improvement on bursts, I tried to find a solution.
This solution centers on the most important problems: the parallelism,
and parsing the conf for each single hotplug event.


So I *propose* to split the function of "mdev" into two separate parts.
Part one will contain the kernel hotplug helper part (and should be as
small and fast as possible), and part two the parsing of the conf and
the device node operations ... with the requirement of communication
between any number of running part ones and the single part two. To
reduce the cost of those "part one" hunks (remember, they shall be as
fast as possible), and for the availability of failure management (which
BB currently completely lacks), the communication between the parts
needs to be watched by a small helper daemon. A very minimalistic daemon
which doesn't touch any data; it just fires up the device operation
stuff when required and waits until that process dies, then goes back to
waiting for more events to arrive (this is not like udev).


Do I wish to overcome the suffering of Busybox device management? Yes.

Does this need some changes? Yes: I propose a one- or two-line change in
your startup files, with the benefit of a speed improvement.


Do I otherwise propose to change your system setup, or to flip to using
netlink operation? *NO*


Hence, I do not force anybody to change away from using the kernel
hotplug feature. I improve that mechanism ... with the ability to add
further mechanisms without duplicating code. One shall be a netlink
reader in BB; others may be external programs, with a better interface
for using the device management functions from there.


Are you really complaining against any kind of innovation? Any step to
overcome long-standing problems, discussed several times ... but
dropped, mostly due to the amount of work required?


Do I *propose* some innovation? Yes.

Do I propose to force someone to change to a different mechanism? No.


Agreed.
But I would include "hotplug daemons" under "etc."


I used "etc.", so add any similar type of program you like, including
"hotplug daemons" ... but stop, what are "hotplug daemons"? Do you mean
daemons like udev? udev uses netlink reading. Otherwise I know about
hotplug helper programs (current mdev operation), but not about "hotplug
daemons".


That's the best description I can come up with for the FIFO reader that
you proposed that would read from a fifo and write to a pipe, monitoring
the state of "mdev -p"


Wow? 

Re: [OT] long-lived spawners

2015-03-12 Thread Harald Becker

Hi Denys !

On 12.03.2015 13:10, Denys Vlasenko wrote:

I find it suboptimal to have, say, a hotplug daemon lingering
in the system five hours after the last hotplug event happened.


IMO that highly depends on the complexity and resource constraints of
the daemon. For a big resource-eating daemon system like udev, we
absolutely agree.


On the other hand, a minimal watcher / reader process (e.g. a netlink
reader daemon) sitting in the back doesn't hurt you much. The bigger
event handling code, by contrast, shall go into an on-demand process,
only running when its operation is required.


--
Harald


Re: RFD: Rework/extending functionality of mdev

2015-03-12 Thread Harald Becker

Hi Natanael !
Hi Isaac !

Looks like you misunderstood my approach for the mdev changes ... ok,
maybe I explained it the wrong way, so let's go one step back and start
with a wider view:


IMO Busybox is a set of tools which allows setting up a working system
environment. How this setup is done is up to the distro / system
maintainer, that is, up to their preferences.


I really like the idea of having an optimized version which uses netlink
for the hotplug events and avoids unnecessary forking on each event, but
there are other people who dislike that approach and prefer to use the
hotplug handler method, or even ignore the hotplug feature completely
and set up device nodes only when required (semi-automatically: not
manual mknod, but manual invocation of mdev).


The world is not uniform, with all people sharing the same preferences,
so we need to be polite, accept those different kinds of preferences,
and not try to force someone to set up their system in a specific way.


Right? ... else we would be at the end of the discussion, and the end of
my project approach :(


... but I think you will agree:

As Busybox is the tool set being used, it shall provide the tools for
all users, and shall not try to force those users to use netlink, etc.


So how can we handle different preferences, but still allow everyone to
set up their system the way they like?


We either need different versions of the commands / applets, with the
ability to enable / disable them in the configuration, or we try to find
a modular solution which allows a maximum of code sharing but still
gives the flexibility to set up the system according to the user's
preferences.


The config approach may be the simpler solution, but it will add more
and more options to the config system, and all those decisions need to
be made before you have a working system. Many users neglect to build
their own Busybox version (for various reasons) and grab a prebuilt
binary (or are forced to stay on a specific version) ... but still, all
those users will have their own preferences for how to set up their
system. The conclusion is to have a modular tool set, which by default
includes all functionality and allows selecting the desired usage at
system setup, still with the wish to minimize resource usage and
maximize code sharing between the different functionalities.


Beside having a universal modular tool set with all functions included,
it may still be a good idea to allow disabling unwanted applets /
functionalities, for those who are picky and / or low on system
resources.


... using this wider view, I tried to find a modular solution for the
device system management (here, mdev):


Maximum code sharing means looking at the flow of data and trying not to
duplicate the same functionality across different commands. For device
management, I see the following usage scenarios:


- not using the hotplug feature; semi-automatic device node setup
  (currently the user calls "mdev -s" when required)

- using the hotplug handler approach of the kernel
  (current operation of "mdev")

- using a netlink based hotplug approach
  (currently not in Busybox, but external tools exist, e.g. nldev)


The current implementation suffers on event bursts, due to massively
forking a separate parser for each event. So one of the major design
decisions has to be avoiding that unnecessary parallelism. It does not
only suffer from resource consumption; the parallelism is
counterproductive for device event management, due to non-serialized
operation.


So how can we avoid that unwanted parallelism, but still enable all of
the above usage scenarios, *and* still have a maximum of code sharing,
*and* a minimum of memory usage, *without* delaying average event
handling too much?


The gathering parts need to acquire the device information, sanitize it,
and serialize the event operations into the right order. The device node
handling part shall receive the information from the gathering part(s)
(whichever is used) and invoke the required operations, but shall avoid
reparsing the conf on every event (speed-up) *and* drop as much memory
usage as possible when the event system is idle.


My idea is a fifo approach. This allows splitting the device management
functionalities. Whichever approach is used to gather the device
information, the parser and device handling part can be shared (even in
a mixed usage scenario).


So we have the following functional blocks for our device management:

- initial setup of the device file system environment
  (yes, can be done by shell scripting, but it is a functional block)

- starting the fifo management and automatic parser invocation
  (a long-lived minimalistic daemon)

- manual scanning of the sys file system and gathering device info

- setting up the usage of the hotplug helper system
  (check fifo availability and set the hotplug helper in the kernel)

- a hotplug helper spawned by the kernel on every event
  (should be as 

Re: RFD: Rework/extending functionality of mdev

2015-03-12 Thread Harald Becker
Interrupts ... no trouble; everybody agrees you need only one unblocked
interrupt source, but never asks for the detail of which one ... :)



Hi Laurent !


  I'm sorry if I came across as dismissive or strongly opposed to the

idea.

It was not my intent. My intent, as always, is to try and make sure that
potential new code 1. is worth writing at all, and 2. is designed the
right way. I don't object to discourage you, I object to generate
discussion.
Which has been working so far. ;)


ACK


  I understand your point. If I don't modify my mdev.conf, everything
will still work the same and I can script the functionality if I prefer;
but if you prefer centralizing the mounts and other stuff, it will also
be possible to do it in mdev.conf.


This is my primary intention.


  It is very reasonable. The only questions are:
  - what are the costs of adding your functionality to mdev.conf ?
  - are the benefits worth the costs ?


I don't know the exact cost ahead of time, but it should not be massive.
Look, we need an mkdir and a symlink plus setting owner/permissions, and
we need to set up an argument vector to call mount. The rest will be
some rework of the parser, but I don't count that as cost. So the
increase for the extended syntax shouldn't be much.
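Purely to make this cost estimate concrete, here is a hypothetical illustration of what such extended mdev.conf entries might look like; the marker characters echo the ones discussed later in this thread, but nothing here is final:

```
# hypothetical extended mdev.conf entries (syntax under discussion)
pts/        root:root 755     # trailing slash: mkdir a directory
cdrom@sr0   root:cdrom 660    # at-sign: create a symlink (ignored on hotplug)
proc%       proc /proc proc   # percent: argument vector for a mount call
```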


Ok, maybe an include option in mdev.conf has some extra cost ... but the
benefit would be for everyone who likes to split mdev.conf into separate
files. It could be a BB config option: "allow include in mdev.conf".


Some more cost will result from the possibility of using netlink, but
the benefit will be not parsing the table for every event, so some cost
is acceptable to me. Either part, hotplug handler or netlink reader, may
be excluded in the BB config. I see no trouble grouping the code and
excluding it when the option is deselected. Minor changes may persist,
but they shall not blow up the principles Busybox is based on, else I
have a big trash can ...




  As a preamble, let me just say that if you manage to make your syntax
extensions a compile-time option and I can deactivate them to get the
same mdev binary as I have today, then I have no objection, apart from
the fact that I don't think it's good design - see below.


Deactivating unwanted stuff in the BB config is my intention.
Deactivation will not result in the same binary, due to some intended
parser changes, but the cost of this shouldn't be very noticeable ...


... and yes, I know to be picky about size-restricted development. I
started my programming practice on an 8008 with 512 bytes of ROM and 128
bytes of RAM ... in addition to a Zuse Z31 computer (one of the first
computer models using transistors) with 2000 words of magnetic core
memory of 11 decimal digits each ... :)




  My point, which I didn't make clear in my previous message because I was
too busy poking fun at you (my apologies, banter should never override
clarity), is that I find it bad design to bloat the parser for a hotplug
helper configuration file in order to add functionality for a /dev
initialization program, that has *nothing to do* with hotplug helper
configuration.


Ok, here we agree. I would highly deprecate blowing up the hotplug
scanning, but here we come to the reason I stumbled and asked whether we
can avoid this extra parsing altogether, speeding everything up. So I
was back at netlink.




  The confusion between the two comes from two places:
  - the applet name, of course; I find it unfortunate that "mdev" and
"mdev -s" share the same name, and I'd rather have them called "mdev"
and "mdev-init", for instance.


ACK, this could be done for all the functionality ... but this will be a
philosophical discussion. As mdev the hotplug helper does not use the
code of "mdev -s", there is no time cost to having both in one binary.




  - the fact that "mdev"'s activity as a hotplug helper is about creating
nodes, fixing permissions, and generally doing stuff around /dev, which
is, as you say, logically close to what you need to do at initialization
time, so at first sight the configuration file looks like a tempting
place to put that initialization time work.



  But I maintain that mdev.conf should be for mdev, not for mdev -s.
mdev -s is just doing the equivalent of calling mdev for every device
that has been created since boot. If you make a special section in
mdev.conf for mdev -s, this 1. blurs the lines even more, which is not
desirable, and 2. bloats the parser for mdev with things it does not
need after the first mdev -s invocation.


To 1. -> ACK, see below

To 2. -> as I intend to avoid parsing on every event, that is, I'd like
to parse all rules into a memory table and then scan only the memory
table for each arriving event, the extra code in the parser does not
matter to your concern. Besides this, I tried to choose the syntax
carefully, to produce not much overhead. We have two checks: one on the
last char of the regex (slash means directory, at-sign means symlink ->
ignored on hotplug), and the second check on the percent sign of the
mount file s

Re: [OT] long-lived spawners

2015-03-12 Thread Harald Becker

On 11.03.2015 17:24, Laurent Bercot wrote:

On 11/03/2015 17:10, Harald Becker wrote:

And what is wrong with a long lived daemon?

Ok, what I see is brain-damaged developers writing big monolithic
long-lived daemons, which suck up tons of memory and / or CPU power
:( ...

... this is not what I understand by a carefully designed long-lived
daemon ... it is pure silliness ...

... on the other hand, long-lived daemons are no problem, as long as
they stay at low resource usage ... and don't try to do too
complicated things in a single process (multi-threaded or not).


  Yes, exactly my point.
  And exactly the reason why I think a long-lived uevent handler is
good design and that respawns are unnecessary. ;)


Ok, now we can start discussing what is big or small, complicated or
simple ;) ... a friend answered that question with "size and usage
always matter" ... but ok, that was on a different (not computer-related)
topic :)


I like to be a bit more picky about keeping things simple, and about
freeing memory when it is not used. So I try to split such things into a
listener and a consumer / handler process. Like network connection
listeners (e.g. tcpsvd), which accept incoming connections and spawn a
handler process for each incoming request.


For the netlink uevent handling stuff, this would be more like gathering
information from the netlink socket and then forwarding this message to
the handler, not including the conf parser and device node operation in
the long-lived daemon (that is, in the handler part).


--
Harald


Re: RFD: Rework/extending functionality of mdev

2015-03-11 Thread Harald Becker

On 11.03.2015 22:44, Michael Conrad wrote:

What specifically is the appeal of a third approach which tries to
re-create the kernel netlink design in user-land using a fifo written
from forked hotplug helpers?


You mix things up a bit.

My approach allows either using netlink or the kernel hotplug method,
sharing the code and invoking only one instance of the parser / handler,
even on hotplug event bursts. The third method is the initial device
file system population.


Splitting this into different processes is the Unix way of
multithreading, using a fifo (= named pipe) for inter-process
communication.





Re: RFD: Rework/extending functionality of mdev

2015-03-11 Thread Harald Becker

Hi Michael !

On 11.03.2015 22:44, Michael Conrad wrote:

I'm interested in this thread, but there is too much to read.  Can you
explain your reason in one concise paragraph?


One paragraph is a bit too short, and my English sucks, but I'll try to
summarize the intention of my approach in compact steps (a bit more than
one screen page of text):


- the current kernel hotplug based mdev suffers from parallel starts and
from reading / scanning the conf in each instance


- the known and proven solution to this is a netlink reader daemon
(a long-lived daemon which stays active all the time)


- there are still people who insist on staying with the kernel hotplug
feature (for whatever reason), so accept that we need hotplug + netlink


- the third method (scanning the sys file system) is the operation of
"mdev -s" (initial population of the device file system; don't mix it up
with "mdev" without parameters); in addition, some users take this as
semi-automatic device management (embedded world)


- so how can we have all three methods without duplicating code, and how
can we speed up the kernel hotplug method?


- the answer is to split the gathering parts from the conf file parser
and device operation, let the parser / handler accept many events within
one invocation (event bursts), and save memory when the event system is
idle (the parser process exits after being idle for some duration)


- the kernel hotplug helper could fire up the fifo and a parser /
handler when one is required, but this check adds extra delay / cost to
the first / all delivered events


- the solution is a minimalistic fifo watcher and parser startup daemon
(a proven Unix concept for on-demand N-to-1 inter-process
communication); the fifo watcher creates the fifo and holds it open, but
never touches the data in the fifo; it only starts a parser when
required, which allows failure management when the parser sucks


- now the system maintainer can decide which method to use; unwanted
methods may be opted out in the BB config; plus easier embedding of BB
based device management from external programs (include the parser, drop
the methods)


- beside the netlink code, after the rework I expect a near 1:1 average
binary size compared to the current code, but less memory usage on event
bursts (only one parser process), plus a speed improvement on event
bursts (faster system startup when using hotplug)


- other intended functional improvements are a matter of personal
preference: the ability to do a one-shot device file system startup (a
single command to set up all the device stuff, still under full control
of the admin, no hard-coded functionality in any binary)


And last: don't stick to the "mdev -..." names mentioned; look at the
intended functionalities. Implementation details (the names to use) are
still under discussion.
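The "accept many events in one invocation, exit when idle" step above can be sketched in shell (a toy model, not the proposed C implementation; the IDLE sentinel stands in for the read timeout the real daemon would use):

```shell
tmp=$(mktemp -d)
mkfifo "$tmp/events"
exec 3<>"$tmp/events"            # keep the fifo alive across writer closes

# a burst of events, each written by an independent short-lived writer
echo "add sda" >"$tmp/events"
echo "add sdb" >"$tmp/events"
echo "IDLE"    >"$tmp/events"    # stands in for the idle timeout here

handled=0
# one parser invocation drains the whole burst; the real design would
# exit on a read timeout instead of this IDLE sentinel
while IFS= read -r event <&3; do
    [ "$event" = "IDLE" ] && break
    handled=$((handled + 1))     # a real parser would act on $event here
done
echo "handled $handled events"

exec 3<&-
rm -rf "$tmp"
```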


Hope that was short enough.

--
Harald


Re: RFD: Rework/extending functionality of mdev

2015-03-11 Thread Harald Becker

Hi Isaac !

> Agreed, whole-heartedly.

I just don't think you're quite thinking through exactly what this means.


In which sense? Let me know what I got wrong. I'm a human making
mistakes, not a perfect machine.




Agreed.
But I would include "hotplug daemons" under "etc."


I used "etc.", so add any similar type of program you like, including
"hotplug daemons" ... but stop, what are "hotplug daemons"? Do you mean
daemons like udev? udev uses netlink reading. Otherwise I know about
hotplug helper programs (current mdev operation), but not about "hotplug
daemons".




The gathering parts need to acquire the device information, sanitize it,
and serialize the event operations into the right order. The device node
handling part shall receive the information from the gathering part(s)
(whichever is used) and invoke the required operations, but shall avoid
reparsing the conf on every event (speed-up) *and* drop as much memory
usage as possible when the event system is idle.


That *shall* is where you're forgetting about the first principle you
mentioned (assuming that you mean "shall" in the normative sense used
in RFCs).


??? Sorry, maybe it's because I'm not a native English speaker. Can you
explain to me what is wrong? How should it be?




Yes, some people find the hotplugger too slow.
But that doesn't mean that everyone wants a new architecture.


What do you mean by a new architecture? A different system setup?
Changing configuration files?


My first approach was not to change the usage of current systems. Except
for a slightly bigger BB, you would not have noticed my modifications.
Then Laurent came with some questions and suggestions to split some
things off for clarification, so the changes may result in slightly
modified applet names and/or parameter usage (still under discussion),
to be able to accommodate all functionality ... but otherwise you won't
need to change your setup if you do not want to.




Some people, at some points in time, would prefer to use a plain, simple,
hotplugger, regardless of whether it's slow.


??? Didn't you notice the following:

> - using the hotplug handler approach of the kernel
>(current operation of "mdev")



(Personally, I'd like a faster boot, but after that a hotplugger that
doesn't daemonize is fine by me.)


Do you really like forking a separate conf parser for each hotplug
event, even though they tend to arrive in bursts?


Wouldn't you like a faster system startup with mdev, without changing
your system setup / configuration?




So, in order to respect that preference, it would be nice if you could
let the hotplug part of mdev keep doing what it does.


What do you expect the hotplug part to be? The full hotplug handler,
including the conf file parser, spawned in parallel for each event?
Wouldn't you like to benefit from a faster system startup? Or do you
object only because there is another (minimalistic) daemon sitting in
the back? That sounds like "automobiles are dangerous, I won't use them"
... sorry if this sounds bad; I'm trying to understand what exactly you
are fearing ... I expect you misunderstood something (or I explained /
translated it wrongly).




My idea is a fifo approach. This allows splitting the device management
functionalities. Regardless of which approach is used to gather the device
information, the parser and device handling part can be shared (even
in a mixed usage scenario).


I understand that the goal here is to allow people to use netlink or hotplug
interchangeably with "mdev -p" (which I still think is a poorly named
but very desirable feature).


Please don't get stuck on those specific parameter names; think of the 
specific functionalities. The names are details under discussion ... but 
here especially I expect you misunderstand something. "mdev -p" would be 
for internal purposes, to distinguish its invocation from the usual 
"mdev" and "mdev -s" usage (current mdev). So if you don't change your 
system setup to benefit from the extra functionality, you won't ever 
need "mdev -p"; it is for internal usage and special purposes (the p 
stands for parser, a quick and dirty choice, just to use something).




As stated before, I don't think that this approach is really functional,
and would be more opposed to using it than to using netlink or a plain
hotplugger. For this reason, I'm opposed to including it in *mdev*.


??? Not functional? In which way? What do you fear?



I also think that those who *do* want to use this approach would benefit
more from a non-busybox binary, since the hotplugger needs to be as
small and minimal as possible. Hence, I suggest doing it outside busybox.


Yes, that hotplug helper may benefit from being as small and fast as 
possible, but a separate program means a separate binary, and that 
conflicts with my one-single-static-binary preference, which I share 
with others. I consider splitting off that helper into a separate binary 
to be under discussion, but otherwise it won't change anything in the 
concept (it is not much more tha

Re: RFD: Rework/extending functionality of mdev

2015-03-11 Thread Harald Becker

On 11.03.2015 16:21, Laurent Bercot wrote:

I don't understand that binary choice... You can work on your own
project without forking Busybox. You can use both Busybox and your
own project on your systems. Busybox is a set of tools, why should it
be THE set of tools ?


Sure, I know how to do this; I started creating Busybox versions adapted 
to the specific needs of a minimalistic 386SX board. Around 1995, or so ... 
wow, a long time now :)


It is neither a knowledge nor any technical problem, it is preference:
I want to have *one* static binary in the minimal system, and to be
able to run a full system setup with this (system) binary (or call it a
tool set). All I need then is the binary, some configs, some scripts
(and maybe special applications). I even go so far as to run a system
with exactly that one binary only; all other application functions are
done with scripting (ash, sed, awk). Sure, those are minimalist
(dedicated) systems, but they may be used in a comfortable manner.

I even started a project to create a file system browser (comparable to 
Midnight Commander, with no two-pane mode but up to 9 quick-switch 
directories), using only BB and scripting, all packed in a (possibly 
self-extracting) single shell script. The only requirement to run this 
is (should be) a working BB (defconfig) environment, the usual proc, sys, 
dev setup and a writable /tmp directory (e.g. on tmpfs). The work was 
half way through to a first public alpha; then Denys' reaction to a 
slight change request was so frustrating that I pushed the project 
into an otherwise unused archive corner and stopped any further 
development.




I'm not sure how heavily mdev [-s] relies on libbb and how hard it
would be to extract the source and make it into a standalone, easily
hackable project, but if you want to test architecture ideas, that's
the way to go - copy stuff and tinker with it until you have
something satisfying.


I always did it this way, and never posted untested stuff, except some
snippets when someone asked for something and I quickly hacked together
an answer (marked as untested).



... if not, you still have your harald-mdev, and you can still use it
along with Busybox - you'll have two binaries instead of one, but
even on whatever tiny noMMU you can work on, you'll be fine.


Sure, I could have two, three, four, ten, twenty, a hundred ... programs, 
but my preference is to have *one* statically linked binary for the 
complete system tool set (on minimal systems).


... That's the reason why I dislike and don't use your system approach :( 
... otherwise great work :)




That does not preclude design discussions, which I like, and which
can happen here (unless moderators object), and people like Isaac,
me, or obviously Denys, wouldn't be so defensive - because it's now
about your project, not Busybox; you have the final say, and it's
totally fine, because I don't have to use harald-mdev if I don't want
to.


One of the things I really hate is forcing someone to do something 
(especially in a specific way), only topped by someone else forcing me 
to do something in a specific way :( ...


... so I always try to do modifications in a way which lets others decide 
about usage, expecting not to break existing setups (at least not without 
asking ahead whether that is welcome). Slight changes may result from 
modifications (e.g. a different parameter notation), if unavoidable, but 
they shall not require complete changes to the system setup.


--
Harald
___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: RFD: Rework/extending functionality of mdev

2015-03-11 Thread Harald Becker

Hi William !

On 11.03.2015 17:03, William Haddon wrote:

I suppose it's time to dig out code from the secret archives of my
secret lair again. Someone named Vladimir Dronnikov called this "ndev"
and proposed it as a patch to Busybox in 2009. I dug it out and
separated it from Busybox, and probably made some other changes I don't
remember, for some nefarious agenda I again don't remember. It is a
modified version of mdev that seems to do some of the things you all
have been talking about. I'm not maintaining it in any way, so you all
are welcome to do whatever you want with it. I have no idea whether it
works or not, other than that it apparently was once useful to me.


It may be worth digging into, or not ... I'm currently in the phase of 
collecting and discussing functionality, not yet concerned with code hacking.


It would get much more interesting if you could tell in which 
functionality it differs. What was the author's intention in forking 
mdev? And maybe, why has it been neglected?


Otherwise: I saved your message and may come back to it when I start 
looking at concrete code, or when I'm searching for a specific functionality 
and how it is handled by other developers. So, thanks for the information.


--
Harald



Re: [OT] long-lived spawners

2015-03-11 Thread Harald Becker

On 11.03.2015 16:34, Laurent Bercot wrote:

On 11/03/2015 14:02, Denys Vlasenko wrote:

But that nldev process will exist for all time, right? That's not
elegant.
Ideally, this respawning logic should be in the kernel.



...

  Needing daemons to answer notifications from userspace processes
or the kernel is the Unix way. It's not Hurd's, it's not Plan 9's
(AFAIK), but it's what we have, and it's not even that ugly. The
listening and spawning logic will have to be somewhere anyway, so
why not userspace ? Userspace memory is cheaper (because it can
be swapped), userspace processes are safer, and processes are not
a scarce resource.


And what is wrong with a long-lived daemon?

Ok, what I see is brain-damaged developers writing big monolithic 
long-lived daemons, which suck up tons of memory and / or CPU power :( ...


... this is not what I understand by a carefully designed long-lived 
daemon ... it is pure silliness ...


... on the other hand, long-lived daemons are no problem, as long as 
they stay at low resource usage ... and don't try to do too complicated 
things in a single process (multi-threaded or not).


IMO daemons are the Unix way of having user-space watching, 
coordinating, and controlling instances, which may fire up the right 
handling jobs for matching events (or deliver appropriate signals, 
messages, or commands to the handling instances).


--
Harald



Re: RFD: Rework/extending functionality of mdev

2015-03-11 Thread Harald Becker

Hi Denys !

> mdev rules are complicated already. Adding more cases
> needs adding more code, and requires people to learn
> new "mini-language" for mounting filesystems via mdev.
>
> Think about it. You are succumbing to featuritis.
> ...
> This is how all bloated all-in-one disasters start.
>
> Fight that urge. That's not the Unix way.


Yes! ... my failure, to mix those things without giving an explanation. 
Laurent pointed me to the better approach and gave the explanation: the 
idea is to have a one-shot point of device system initialization ... 
that means those device-system-related operations, and maybe deep 
system-related virtual file systems (proc, sys, e.g.) ... I even exclude 
things like setting up a tmpfs for /tmp from this, although it could be 
done at no extra cost.


My previous message to Natanael and Isaac shall clarify my approach and 
should have been the starting point of this discussion ... which is, as 
told, at the early phase of brainstorming about reworking the Busybox- 
related device management system: eliminating the long-standing 
problems, giving some more flexibility, and maybe adding some more 
functionality of benefit for those who like it (not blocking others).


Though, RFD means request for discussion, not for hacking code in any 
way before we have reached a point of agreement ... and at least some 
problems of mdev have already been grumbled about, as I remember ... so a 
discussion gives the chance to do the work to overcome those issues 
... Adding (some) extra functionality is meant to enhance flexibility 
(where I focused on the parts I'm most concerned with - people's 
preferences differ), but it is only part of the work I'd like to do.


I tried to stay as close as possible to the current mdev.conf syntax, 
with the intention of not breaking existing setups ... but I would not 
refuse to create a different syntax parser, if that were the 
outcome of the discussion. I'm open to the results, but would like to get a 
solution for my preferences, too. It can't be that a few people dictate how 
the rest of the world's systems have to handle things ... the other side 
of what you called "featuritis" (I don't dismiss your statement above)!


And one point to state clearly: I do not want to go the way of forking 
the project (that is the worst case expected), but I'm at a point where 
I'd like / need Busybox to allow for some additional or modified solutions 
to fit my preferences, as others have also stated already. I'm 
currently willing and able to do (most of) that work, but if it's not 
welcome, the outcome of the discussion may also be stepping to a 
"MyBusybox" (or however it would be called).


*Again*: I don't want to start a discussion about forking the project; 
it would be the worst possible outcome of my intention ... I'd like to get 
a tool set based on BB's principles, but giving more flexibility to fit 
more people's preferences, without breaking things for others (at least 
the majority)! ... this means critical discussion of every relevant 
topic, but not blocking every new approach and functionality with the 
argument of size constraints or "featuritis", due to personal dislikes, 
and then accepting a patch which adds several hundred bytes for 
functionality I consider pure nonsense or "featuritis".


I apologize for my hard words. I don't want to hurt you or anybody else, 
but you made several decisions in the past which resulted in immense 
frustration for me (and others), with the consequence of even halting 
development of several BB-focused projects ... please consider opening 
up more to the discussion, based on topics, not on pure criticism or 
personal liking (I don't want to initiate lengthy philosophical quarrels 
with no practical outcome).


--
Harald



Re: mdev and usb device node creation

2015-03-10 Thread Harald Becker

Hi Dallas !

> Hi Harald, I was sort of expecting a mdev scan to discover and create
> device nodes for usb.
> Sounds like I need some rules defined in /etc/mdev.conf. I just have
> a default one.


What do you consider a default one? There are so many distros and systems 
out in the wild; even the Busybox snapshot archives contain two or three 
different mdev configurations (as examples). So I can't tell you 
anything specific without seeing what your configuration describes.


mdev either scans /sys/class/... for device entries or picks up hotplug 
event messages, then searches /etc/mdev.conf for a matching device 
entry. Then it creates/removes device nodes according to the given 
information. This includes moving the location (e.g. into a subdirectory), 
creating symlinks, and setting owner, group and permissions. Everything mdev 
does is controlled by the rules in mdev.conf, but it may contain catch-all 
rules to do some default action.


A short search on the net gave me the following line:

$DEVNAME=bus/usb/([0-9]+)/([0-9]+) root:plugdev 0660 =bus/usb/%1/%2

Put this in your mdev.conf; maybe it catches the names of your kernel.
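For what it's worth, the two capture groups in that rule map into the =bus/usb/%1/%2 target roughly as sketched below (expr's BRE stands in for mdev's regex matching here, and the DEVNAME value is invented):

```shell
# Sketch: how %1/%2 would resolve for a sample kernel-supplied $DEVNAME.
DEVNAME="bus/usb/001/004"
bus=$(expr "$DEVNAME" : 'bus/usb/\([0-9][0-9]*\)/[0-9][0-9]*')   # group 1
dev=$(expr "$DEVNAME" : 'bus/usb/[0-9][0-9]*/\([0-9][0-9]*\)')   # group 2
echo "node created as: /dev/bus/usb/$bus/$dev"
```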

--
Harald



Re: mdev and usb device node creation

2015-03-10 Thread Harald Becker

Hi Dallas !

> I am using an older version of busybox (1.7.0) with a newer linux
> kernel (3.10) and trying to get it to discover and dynamically create 
> usb device nodes underneath /dev/bus/usb.  This doesn't seem to work 
> when I execute mdev with the -s option.


And where is the rest of your setup? Which matching entries in 
/etc/mdev.conf apply to your concern?



> Can someone please tell me if this capability is supported in newer
> versions of busybox?


mdev -s scans the sys file system for devices and sets up the /dev 
entries. If you want dynamic setup, you should use the hotplug feature, 
but the device node handling of both depends on the entries in 
/etc/mdev.conf. So what happens, and when? What is wrong in your eyes? 
Maybe you just need to enter your correct device setup.


--
Harald



Re: RFD: Rework/extending functionality of mdev

2015-03-10 Thread Harald Becker

Hi,

getting hints and ideas from Laurent and Natanael, I found we can get 
the most flexibility when we try to do some modularization of the steps 
done by mdev.


At first, there are two different kinds of work to deal with:

1) The overall operation and usage of netlink
2) Extending the mdev.conf syntax

Both are independent, so first look at the overall operation ... and we 
are currently looking at operation/functionality. This does not mean 
they are all separate programs/applets; we may put several 
functionalities into one applet and distinguish them by options. First 
look at what to do, then decide how to implement:


mdev needs to do the following steps:

- on startup the sys file system is scanned for device entries

- as a hotplug handler, a process is forked for each event, passing 
information in environment variables


- when using netlink, a long-lived daemon reads event messages from a 
network socket to assemble the same information as the hotplug handler


- when all information for an event has been gathered, mdev needs to 
search its configuration table for the required entry, and then ...


- ... do the required operations for the device entry


That is, scanning the sys file system, the hotplug event handler, and the 
netlink event daemon all trigger operation of the mdev parser. Forking a 
conf file parser for each event is much overhead at system startup, when 
many events arrive in a short amount of time. There we would benefit from a 
single process reading from a pipe, dying when there are no more events 
and being reestablished when new events arrive.


Both the sys file system scanner and a netlink daemon could easily 
establish a pipe and then send device commands to the parser. The parser 
reads mdev.conf once and creates an in-memory table, then reads commands 
from the pipe and scans the memory table for the right entry. On EOF on 
the pipe, the parser can exit successfully.
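The "parse the conf once, then stream commands against it" split can be mimicked with awk (file names and contents here are invented for the sketch): the first input builds the in-memory rule table, the second is the command stream.

```shell
# Sketch: rule table read once, many events matched against it in memory.
cat > conf.tmp <<'EOF'
sda disk:disk
sr0 root:cdrom
EOF
cat > events.tmp <<'EOF'
add sda
add sr0
add ttyS0
EOF
result=$(awk '
  NR == FNR { owner[$1] = $2; next }   # pass 1: conf -> in-memory table
  { print $2 " -> " ($2 in owner ? owner[$2] : "root:root") }   # pass 2
' conf.tmp events.tmp)
echo "$result"
rm -f conf.tmp events.tmp
```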


The sys file system scanner (startup = mdev -s) can create the pipe, 
then scan the sysfs and send the commands to the parser. When done, the 
pipe can be closed, and after waiting for the parser process it can just exit.


The netlink daemon can establish the netlink socket, then read events 
and sanitize the messages. When there is any message for the parser, a 
pipe is created and messages can be passed to the parser. When netlink 
is idle for some amount of time, it can close the pipe and check the 
child status.


Confusion arises only in the hotplug handler part, as here a new process 
is started for every event by the kernel. Forking a pipe to send this to 
the parser would double the overhead, and leaving the parser running for 
some amount of time would only work with a named fifo, startup of the 
parser on demand, and added timeout management in the parser ...


... but ok, let us look at an alternative: consider a small long-lived 
daemon, which creates a named fifo and then polls this fifo until data gets 
available. On a hotplug event a small helper is started, which reads its 
information, serializes it, writes the command to the fifo and exits. The 
long-lived daemon sees the data (but does not read it), then forks a parser 
and gives the read end of the fifo to the parser. The parser reads 
mdev.conf once, then processes commands from the fifo. Now we are at the 
situation where the timeout needs to be checked in the parser: when 
there are no more events on the fifo, the parser just dies successfully 
(freeing used memory). This is detected by the small long-lived 
daemon, which checks the exit status and can act on failures (e.g. run a 
failure script). On successful exit of the parser, the daemon starts 
waiting again for data on the fifo (which it still holds open for reading 
and writing). This way the hotplug helper will benefit from a single-run 
parser on startup, but the memory used by the conf parser is freed during 
normal system operation. The doubling of the timeout management in the 
netlink daemon and the parser can be intentional when different timeouts 
are used: where a small duration can be chosen for the idle timeout of 
netlink, the parser itself uses a higher timeout, which only triggers 
when the hotplug helper method is used.
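The parser-side idle timeout can be illustrated in a few lines (a sketch, not mdev code; `read -t` needs bash or busybox ash, and the two-second value is arbitrary):

```shell
# Sketch of the parser's idle timeout: read serialized commands until EOF
# or until nothing arrives for 2 seconds, then exit so the memory held by
# the in-memory conf table is freed again.
parse_loop() {
  count=0
  while read -t 2 -r action dev; do   # -t 2: give up after 2 idle seconds
    count=$((count + 1))              # a real parser would match rules here
  done
  echo "$count"
}

n=$(printf 'add sda\nadd sr0\n' | parse_loop)
echo "parser handled $n commands, then exited"
```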


Yes, there are some rough corners, but we are at the phase of 
brainstorming. Besides those corners, we get a modular system, which avoids 
respawning/rereading the conf table for every event, but frees memory 
when there are no more events. Even the hotplug helper method will 
benefit, as the helper process can exit as soon as the command has been 
written to the fifo. The parser reads serialized commands from the pipe 
and processes the required actions.


Maybe we should consider using that small parser helper daemon and the 
named fifo in all cases; the sys file system scanner, hotplug helper and 
netlink daemon would then just use the fifo. This would even allow using 
the same fifo to activate the mdev parser from a user space program 
(including a single parser start for multiple events).
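The whole fifo flow can be sketched in a few lines of shell (paths and the command format are made up for illustration, this is not mdev code): several producers write one-line commands into the fifo; one consumer, standing in for the parser, reads until EOF and exits.

```shell
# Minimal end-to-end sketch of the fifo idea.
fifo=$(mktemp -u) || exit 1          # unused temp name for the fifo
mkfifo "$fifo" || exit 1

# consumer: plays the conf parser; reads serialized commands until EOF
( while read -r action dev; do
    echo "handle: $action $dev"
  done < "$fifo"
) > "$fifo.log" &
consumer=$!

# producers: sysfs scanner / hotplug helpers / netlink reader would each
# write one line per event; closing the write side ends the parser
{ echo "add sda"; echo "add sr0"; echo "remove sr0"; } > "$fifo"

wait "$consumer"
handled=$(wc -l < "$fifo.log")
echo "parser handled $handled commands, then exited on EOF"
rm -f "$fifo" "$fifo.log"
```

In the real design the daemon would reopen the fifo read+write so writers never see EOF between bursts; a plain script cannot show that part portably.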

Re: RFD: Rework/extending functionality of mdev

2015-03-10 Thread Harald Becker

Hi Laurent !

>> 1) Starting up a system with mdev usually involves the same steps to
>> mount proc, sys and a tmpfs on dev, then adding some subdirectories
>> to /dev, setting owner, group and permissions, symlinking some
>> entries and possibly mounting more virtual file systems. This work
>> need to be done by the controlling startup script, so we reinvent the
>> wheel in every system setup.


  The thing is, this work doesn't always need to be done, and when it
does, it does whether you use udevd, mdev, or anything else. /dev
preparation is one of the possible building blocks of an init routine,
and it's quite separate from mounting /proc and /sys, and it's also
separate from populating /dev with mdev -s.


You missed the fact that I said everything stays under control of the admin 
or system setup. Nobody shall be forced to do things in a specific way. 
I just want to give some extended functionality and flexibility to do this 
setup in a very easy and, IMO, straightforward way.


If you like/need to do those mounts in a specific way, just put them in 
your script and leave the mount lines out of mdev.conf; otherwise, if you 
need to set up those virtual file systems with specific options, you 
can specify them in the same line of configuration.




... I dislike the idea of integrating early init functions into mdev, 
because those functions are geographically (as in the place where 
they're performed) close to mdev, but they have nothing to do with mdev 
logically.

Sorry, I don't agree. mdev's purpose is to set up the device file system 
infrastructure, or at least to help with setting it up. Ok, at first 
leave out proc and sysfs, there you may be right, but what about devpts 
on /dev/pts, or /dev/mqueue, etc., and finally the tmpfs for /dev 
itself? Do you really say they do not belong to the device file system 
infrastructure? Or all those symlinks like /dev/fd, /dev/stdin, 
/dev/stdout, etc., do you consider them not to belong to mdev? Not to 
mention setting owner information and permissions for those 
entries in the device file system.


Sure, all that can be done with some shell scripts, scattering all that 
information around at different places, or with the need to set up and 
read/process different configuration files. My intention is to get this 
information, which depends on the system setup, into a single, relatively 
simple to use and modify location. The startup script itself just invokes 
mdev, which gets the information from /etc/mdev.conf and calls the 
necessary commands to do the operation. That is, it frees the distro 
maintainer from writing script code to parse the system-specific 
information into the startup script.


I consider Busybox to be a set of tools anybody may use to set up a 
system to their own wishes, not to force anybody to do things in the way 
any person may feel is the best way. I want to enable functionality for 
those who like to collect this information and put it in a central place - 
information usually scattered around and hidden deep in the scripts 
controlling the startup.




MOUNTPOINT  UID:GID  PERMISSIONS  %FSTYPE [=DEVICE] [OPTIONS]


  I can't help but think this somehow duplicates the work of mount with
/etc/fstab. If you really want to integrate the functionality in some
binary instead of scripting it, I think the right way would be to
patch the mount applet so it accepts any file (instead of hardcoding
/etc/fstab), so you could have a "early mount points" file, distinct
from your /etc/fstab, that you would call mount on before running
mdev -s.


Laurent, you didn't look at the examples. I do not want to hard-code any 
*functionality* in mdev. I want to add extra functionality to the current 
mdev.conf syntax, to allow doing some more stuff which is usually done 
"geographically close to calling mdev", and done in so many systems in a 
similar manner.


Look at the usual usage of /etc/fstab: on how many systems do you find 
information there about your virtual file systems? Usually fstab is used 
for the disk devices. In addition, what about creating the mount points, 
setting owner, group and permissions? This is not done by mount and not 
specified in fstab. So changing anything there would mean modifying the 
fstab syntax, possibly breaking other programs and scripts that read 
and modify fstab.


Neither do I want to code any special functionality into a binary, nor do 
I try to duplicate the operation of mount. I just want to extend the 
mdev.conf syntax to add simple configuration information for those 
"close to mdev" operations at a central place, parse this information 
and call the usual commands, e.g. mount with the right options, as shell 
scripts do.


And what else are a few lines in mdev.conf, describing those mounts, 
other than placing them in a separate "early mount points file"? ... ok, 
let's go one step further on this. Let us add an "include" option to 
mdev.conf, which allows splitting the mdev configuration into different 
files, and/

Re: RFD: Rework/extending functionality of mdev

2015-03-09 Thread Harald Becker

Hi Natanael !

> I am interested in a netlink listener too for 2 reasons:
>
> - serialize the events
> - reduce number of forks for performance reasons


My primary intentions.


That is, I want to auto fork a daemon which just open the netlink
socket. When events arrive it forks again, creating a pipe. The new
instance read mdev.conf, build a table of rules in memory, then read
hotplug operations from the pipe (send by the first instance). When
there are no more events for more then a few seconds, the first instance
closes the pipe and the second instance exits (freeing the used memory).
On next hotplug event a new pipe / second instance is created.


I have a similar idea, but slightly different. I'd like to separate the
netlink listener and the event handler.


Ack. After thinking about Laurent's message, I came to this, too: split off 
the netlink part and use a pipe for communication. That can even go 
further: split off the initial sys scanning and hotplug parts from the 
parser and also use the pipe to communicate, creating an mdev wrapper 
around this, so handling stays as is. This does not mean we need 
separate applets; maybe all can be included in one mdev applet with 
operation controlled by options. Later I will write a reply to Laurent's 
message, going into more detail.



I am thinking of using http://git.r-36.net/nldev/ which basically does the
same thing as s6-devd: minimal daemon that listens on netlink and for
each event it fork/exec mdev.


Ok, this may be a second alternative. As I do not want to reinvent the 
netlink part, I will take a deep look at the possible alternatives and 
try to adapt them for Busybox.



- the mdev pipe fd is added to the poll(2) call so we catch POLLHUP to
   detect mdev timeout. When that happens, set the mdev pipe fd to -1 so
   we know that it needs to be respawned on next kernel event.


Why do it so complicated? The mdev parser shall just read simple 
device add/remove commands from stdin until EOF, then exit. That's it. 
The netlink part can easily watch how long it has been idle and then just 
close the pipe. As soon as more events arrive, it creates a new pipe and 
forks another mdev parser. This needs time management and poll in only 
one program; all other code is simple and straightforward. The netlink 
reader, as a long-lived daemon, already needs to watch the forked 
processes and act on failures.


... but those are details of implementation and some optimization. I 
agree on the ideas/functionality behind this.



The benefits:
- the netlink listener, which needs to be running at all times, is very
   minimal.


This may be an argument for having the netlink part not linked into BB, 
but it does not otherwise change the idea behind this.



- when there are many events within a short time (e.g. coldplugging), we
   avoid the many forks and gain performance.



- when there are no events, mdev will timeout and exit.


ACK


- busybox mdev does not need to set up the netlink socket. (less intrusive
   changes in busybox)


Can be done as an alternative, so the admin may decide whether he likes to 
use netlink or the hotplug helper. That is, nobody is forced to handle 
things in a special way; BB shall just give the tools to build easy setups.
Unwanted parts/applets may be left out in the BB configuration (if size 
matters; else default all tools in and let the admin choose).



Then I'd like to do something similar with modprobe:
  - add support to read modalias from stdin and have 1 sec timeout.
  - have nldev to pipe/fork/exec modprobe --stdin on MODALIAS events.


Nice idea. Maybe with some slight modification to optimize the 
timeout handling.



That way we can also avoid the many modprobe forks during coldplug.


ACK, so the project needs a wider view. Thanks for pointing me to this.

... later more details in my Reply to Laurents message.

--
Harald



RFD: Rework/extending functionality of mdev

2015-03-08 Thread Harald Becker

Hi,

I'm currently in the phase of thinking about extending the functionality 
of mdev. As the experts working with this kind of software are here, I'd 
like to hear your ideas before I start hacking the code.


I'd like to focus on the following topics:


1) Starting up a system with mdev usually involves the same steps to 
mount proc, sys and a tmpfs on dev, then adding some subdirectories to 
/dev, setting owner, group and permissions, symlinking some entries and 
possibly mounting more virtual file systems. This work needs to be done 
by the controlling startup script, so we reinvent the wheel in every 
system setup.


I'd like to extend the syntax of mdev.conf with some extra information and 
add code to mdev to allow doing this operation in a more simplified 
way, but still under full control of the system maintainer. Those extra 
entries will only be executed with "mdev -s", not during hotplug. The 
syntax has been chosen so as not to (horribly) break existing mdev.conf 
setups.


Current major syntax of mdev.conf:

[-][envmatch]<device regex>  <uid>:<gid>  <permissions>  [...]
[envmatch]@<major,minor>     <uid>:<gid>  <permissions>  [...]
$envvar=<regex>              <uid>:<gid>  <permissions>  [...]

- Additional syntax to mount (virtual) file systems:

MOUNTPOINT  UID:GID  PERMISSIONS  %FSTYPE [=DEVICE] [OPTIONS]

This rule is triggered by the percent sign indicating the file system type.

This shall create the mount point (if it does not exist), set the 
owner/group/permissions of the mount point, and fork and exec "mount -t 
FSTYPE -o OPTIONS DEVICE MOUNTPOINT".  If DEVICE is not specified, the 
literal "virtual" shall be used.


e.g.

# mount virtual file systems
/proc     root:root 0755 %proc
/sys      root:root 0755 %sysfs
/dev      root:root 0755 %tmpfs size=64k,mode=0755
/dev/pts  root:root 0755 %devpts

This will do all the required mounting with a single "mdev -s" 
invocation, even on a system which has nothing else mounted. The old 
behavior of mounting the file systems in the calling scripts will still be 
available; just leave the mount lines out of mdev.conf.
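To make the intent concrete, here is a dry-run sketch (the helper name is invented; it only prints what mdev would execute for such %FSTYPE lines, using "virtual" as the device when no =DEVICE is given, per the proposal):

```shell
# Print the commands a %FSTYPE rule would translate to (sketch only).
expand_mount_rule() {  # args: MOUNTPOINT UID:GID PERM %FSTYPE [OPTIONS]
  mp=$1 own=$2 perm=$3 fstype=${4#%} opts=$5
  echo "mkdir -p $mp && chown $own $mp && chmod $perm $mp"
  echo "mount -t $fstype ${opts:+-o $opts }virtual $mp"
}

expand_mount_rule /proc root:root 0755 %proc
expand_mount_rule /dev  root:root 0755 %tmpfs size=64k,mode=0755
```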


- Additional syntax to add directories and set their owner information:

DIRNAME/  UID:GID  PERMISSIONS [> LINKNAME]

This rule is triggered by the slash as the last character of the match 
string. It shall create the given directory, where relative names are 
relative to the expected /dev base.

e.g.

# add required subdirectories to the device file system
loop/   root:root 0755
input/  root:root 0755

Those directories may be created automatically due to other rules, but 
then you can't control their owner information. The extra rule allows 
creating the subdirectories on startup and setting the owner information 
as you like. Later matching device rules will not change this, so you can 
tune the directory and device permissions.


- Additional syntax to add symlinks and set their owner information:

PATHNAME@  UID:GID   > LINKNAME

This rule is triggered by the at sign as the last character of the match 
string. It shall add the given PATHNAME as a symlink to LINKNAME, and set 
the owner of the link.


e.g.

# add symbolic links to the device filesystem
fd@  root:root  >/proc/fd
stdin@   root:root  >fd/0
stdout@  root:root  >fd/1
stderr@  root:root  >fd/2
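A dry-run sketch of what such PATHNAME@ rules would translate to (the helper name is invented; chown -h is used so the link itself, not its target, gets the owner):

```shell
# Print the commands a PATHNAME@ rule would translate to (sketch only).
expand_link_rule() {  # args: PATHNAME@ UID:GID >LINKNAME
  name=${1%@} own=$2 target=${3#>}
  echo "ln -sf $target /dev/$name && chown -h $own /dev/$name"
}

expand_link_rule fd@    root:root '>/proc/fd'
expand_link_rule stdin@ root:root '>fd/0'
```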

- Extending syntax for symlink handling on device nodes:

The current syntax allows you either to move the device to a different 
name/location or to move it and add a symlink. In some situations you 
need just a symlink pointing to the new device.


DEVICE_REGEX  UID:GID  PERMISSIONS  =NEW_NAME
(old) Moves the new device node to the given location.

DEVICE_REGEX  UID:GID  PERMISSIONS  >PATHNAME
(old) Will create the new device node with the name PATHNAME and create 
a symlink with DEVICE_NAME pointing to PATHNAME. Shall remove an existing 
symlink and create a new one.


DEVICE_REGEX  UID:GID  PERMISSIONS  <PATHNAME
(new) Shall create the new device under its expected device name, and 
in addition create a symlink of name PATHNAME pointing to the new 
device. Existing symlinks shall not be touched.


e.g. creating a /dev/cdrom symlink for the first cdrom drive

sr[0-9]+  root:cdrom  0775  <cdrom
Shall create /dev/sr0 and a symlink /dev/cdrom -> /dev/sr0, but will not 
overwrite the symlink for /dev/sr1, etc.


This may be combined with the move option:

DEVICE_REGEX  UID:GID  PERMISSIONS  =NEW_NAME  <PATHNAME
Shall move the device to NEW_NAME as expected and then create the 
symlink to that location.


e.g. moving sr0 into subdirectory and adding symlink

sr[0-9]+  root:cdrom  0775  =block/  <cdrom
Shall create /dev/block/sr0 and a symlink /dev/cdrom -> /dev/block/sr0, 
not changing /dev/cdrom if it already exists.



2) I'd like to use netlink to obtain hotplug information and avoid massive 
respawning of mdev as hotplug helper when several events arrive quickly. 
That is, I want to auto-fork a daemon which just opens the netlink 
socket. When events arrive it forks again, creating a pipe. The new 
instance reads mdev.conf, builds a table of rules in memory, then reads 
hotplug operations from the pipe (sent by the first instance). When 
there are no more events for more than

Re: Wrong order of operation in mdev.txt

2015-03-08 Thread Harald Becker

Hi !

On 08.03.2015 13:45, Guillermo Rodriguez Garcia wrote:

I think that it is just the numbers that are wrong as the text
explicitly says that steps 4-6 must be executed "before" the previous
code snippet (so 4-6 would actually be run before 1-3)


Yes! After studying the text further I found this description, but I 
think the average user looks only at the numbers.



But it is indeed misleading.


That's it. Not a bug, just a misleading description. The order of the 
numbers should match the order in which the steps shall be done, but 
this means some reordering of the documentation.


--
Harald

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Wrong order of operation in mdev.txt

2015-03-08 Thread Harald Becker

Hi !

I just stumbled about a disordering of operations in docs/mdev.txt:

---cut-here---
Basic Use
...
[0] mount -t proc proc /proc
[1] mount -t sysfs sysfs /sys
[2] echo /sbin/mdev > /proc/sys/kernel/hotplug
[3] mdev -s

Alternatively, without procfs the above becomes:
[1] mount -t sysfs sysfs /sys
[2] sysctl -w kernel.hotplug=/sbin/mdev
[3] mdev -s

Of course, a more "full" setup would entail executing this before the 
previous

code snippet:
[4] mount -t tmpfs -o size=64k,mode=0755 tmpfs /dev
[5] mkdir /dev/pts
[6] mount -t devpts devpts /dev/pts
---cut-here---

Look at the order of steps [3] and [4]. If you do "mdev -s" you populate 
the current /dev. In step [4] you then mount a tmpfs over the populated 
/dev directory. This results in a visibly empty /dev directory, as the 
populated directory gets hidden by the mounted tmpfs. So the order of 
steps should be 0, 1, 2, 4, 5, 6, then 3.
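
Put together, the corrected sequence reads as one boot fragment (paths 
and options exactly as in mdev.txt; untested sketch, requires root at 
early boot):

```shell
# corrected ordering: mount the tmpfs on /dev *before* populating it
mount -t proc proc /proc                           # [0]
mount -t sysfs sysfs /sys                          # [1]
echo /sbin/mdev > /proc/sys/kernel/hotplug         # [2]
mount -t tmpfs -o size=64k,mode=0755 tmpfs /dev    # [4]
mkdir /dev/pts                                     # [5]
mount -t devpts devpts /dev/pts                    # [6]
mdev -s                                            # [3] populate the now-mounted /dev
```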


It would be better if someone else changed the doc, as I'm not a native 
English speaker.


--
Harald

___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: killall behaviour

2014-09-05 Thread Harald Becker

Hi Denys !

> Unfortunately, there is no commonly agreed definition of
> "program named FOO".


... but standard usage cases should be handled correctly:

Think of:

ln -s /bin/busybox ntpd
ln -s /bin/busybox syslogd
ln -s /bin/busybox klogd

ntpd ...
syslogd ...
klogd ...

busybox killall ntpd

... will not only kill ntpd, but also syslogd and klogd ... do you think 
anyone expects this?
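
The per-name distinction argued for here is exactly what the kernel 
already records: /proc/PID/comm holds the invocation name, independent 
of the executable behind it. A small sketch on Linux (the /tmp/mysleep 
name is made up for illustration):

```shell
# the kernel stores the basename a process was started under in comm,
# even when several names are symlinks to the same binary
ln -sf "$(command -v sleep)" /tmp/mysleep    # same binary, different name
/tmp/mysleep 60 &
pid=$!
cat "/proc/$pid/comm"                        # prints "mysleep", not "sleep"
kill "$pid"
```

Matching on comm would therefore select only the ntpd instances in the 
example above, which is what users expect from killall.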



So, the proper solution, which never kills wrong processes,
is "killall should kill no processes".


Where is your upstream compatibility here???

... this is not a corner case problem!

--
Harald



___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Re: inittab: Start shell only if console is not null

2014-09-05 Thread Harald Becker

Hi Denys !

>> Problem means it may make diagnosing other problems a hell.
>
> Why is it hell? "strace -p1" works.


If you have a working strace ...

... or in other words: Do you have a working Busybox applet for this?


> You are right, I do not use init. Respawn on error is not the only,
> or the biggest problem it has.
>
> http://busybox.net/~vda/init_vs_runsv.html


Ok, this story is funny to read, but it has nothing to do with the 
problems of Busybox init. And there are still people who dislike this 
runit/runsv/runsvdir. I strongly disagree with having all that 
information about daemons scattered across so many directories, 
subdirectories and files. I tried this and got lost on even simple 
systems. That's definitely not the way I like to have my system.


... but that does not change my mind: Busybox init has a long-standing 
bug of uncontrolled, endless respawning of processes, and it's now time 
to solve this bug, anyhow!


I posted a simple change which stops this endless respawning for some 
kinds of problems, until init gets a SIGHUP (reload inittab).


- This catches open problems with the console (a case which may be 
detected in master, but it is the only case which may be handled 
there).


- Exec failures, which usually return with status -1, stop the respawning.

- Getty may return status -1 for open problems (e.g. all pre-login 
prompt problems), and for problems when exec'ing login (as was done in 
several gettys I used). This keeps the console from respawning 
endlessly, until someone can fix the cause and sends a SIGHUP.


sysinit and runonce actions are not affected, and respawn actions 
usually run after the sysinit actions have finished, so all system 
resources shall be available when respawning starts.


If a process exits with any value other than the well-known -1 status, 
or if a process is terminated by a signal, it is always restarted and 
not stopped.


... but other solutions to fix the problem are also welcome ... just 
not an unspecific redirection to use a different init mechanism.


--
Harald



Re: killall behaviour

2014-09-05 Thread Harald Becker

Hi Denys !

> killall matches by /proc/PID/exe too.
>
> Because some applets use a trick where they re-execute
> themselves by execve("/proc/self/exe").
> When you do that, the /proc/PID/comm field gets set to the string "exe" :(
>
> Thus, matching by comm will fail to find a process
> started this way.


... but here we start a program under a different name, just pointing to 
the same executable. When you do a killall, nobody expects it to kill 
other instances of the same executable that were called with a different 
name.


Maybe this /proc/self/exe is a special case: the test on /proc/PID/exe 
should only be triggered when the comm field contains "exe", else you 
highly risk killing the wrong processes.


As you can see, the comm fields contain the expected values, and would 
select the right processes here.
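
The "exe" effect described above is easy to reproduce in a shell on 
Linux:

```shell
# a normally started shell reports its own name in comm
sh -c 'cat /proc/$$/comm'
# after re-executing through /proc/self/exe, comm is the literal "exe",
# because the kernel takes comm from the basename of the executed path
sh -c 'exec /proc/self/exe -c "cat /proc/\$\$/comm"'   # prints "exe"
```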


--
Harald



Re: What's the easiest way to make Busybox keep correct time?

2014-09-02 Thread Harald Becker

Hi Joshua !

> Doesn't ntpd already use adjtimex to make that correction?

That would be great!

It was about 20 years ago that I used adjtimex to correct the clock of 
some systems with no permanent Internet connection. Since then I haven't 
looked too closely at the changes that have gone into the timekeeping 
functions. If ntpd uses adjtimex, it can do all the things I described, 
just behind the scenes.


Thx for the info.

--
Harald



Re: [Bug: Busybox 1.22.1] false return 0 instead of 1 with '--help' switch.

2014-09-02 Thread Harald Becker

Hi Joshua !

> It's also interesting that busybox "false --help" behaves completely
> differently depending on whether it's invoked from within a busybox
> shell or not:


This is wrong! You called different versions, not Busybox false.


 * invoking "false" as an ash builtin bypasses the "--help" check,
   does not print the usage message, and exits with EXIT_FAILURE
   per false_main().


Here you probably used a shell function, which overrides the applet in 
Busybox.


Look in your profile or sourced scripts, anywhere there I expect a 
definition like: "false() { return 1; }"


This case is not concerned with the question of what Busybox false shall 
return. The maintainer of your profile script is the one to ask about 
what your shell function does.



 * invoking "false" directly as "/bin/false" or "busybox false"
   bypasses false_main(), prints the usage message, and exits
   with 0 per run_applet_no_and_exit().


This is due to common handling of --help in Busybox. The common code 
displays the usage message and exits.


--
Harald



Re: What's the easiest way to make Busybox keep correct time?

2014-09-02 Thread Harald Becker

Hi Denys!

On 02.09.2014 15:52, Denys Vlasenko wrote:

$ busybox ntpd --help
BusyBox v1.22.1 (2014-02-01 19:25:19 CET) multi-call binary.

Usage: ntpd [-dnqNwl] [-S PROG] [-p PEER]...

NTP client/server

 -dVerbose
 -nDo not daemonize
 -qQuit after clock is set
 -NRun at high priority
 -wDo not set time (only query peers), implies -n
 -lRun as server on port 123
 -p PEERObtain time from PEER (may be repeated)
 -S PROGRun PROG after stepping time, stratum change, and every 11 mins
^^
 use this to periodically set the hw clock


What about this (from hwclock manpage)?

Automatic Hardware Clock Synchronization By the Kernel

You should be aware of another way that the Hardware Clock is kept 
synchronized in some systems. The Linux kernel has a mode wherein it 
copies the System Time to the Hardware Clock every 11 minutes. This is a 
good mode to use when you are using something sophisticated like ntp to 
keep your System Time synchronized. (ntp is a way to keep your System 
Time synchronized either to a time server somewhere on the network or to 
a radio clock hooked up to your system. See RFC 1305).


This mode (we'll call it "11 minute mode") is off until something turns 
it on. The ntp daemon xntpd is one thing that turns it on. You can turn 
it off by running anything, including hwclock --hctosys, that sets the 
System Time the old fashioned way.


To see if it is on or off, use the command adjtimex --print and look at 
the value of "status". If the "64" bit of this number (expressed in 
binary) equal to 0, 11 minute mode is on. Otherwise, it is off.


If your system runs with 11 minute mode on, don't use hwclock --adjust 
or hwclock --hctosys. You'll just make a mess. It is acceptable to use a 
hwclock --hctosys at startup time to get a reasonable System Time until 
your system is able to set the System Time from the external source and 
start 11 minute mode.



The question is: Does Busybox ntpd activate this 11 minute mode?
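
For reference, the status-bit check from the quoted manpage text can be 
scripted. The helper below only encodes the bit test; the commented 
usage lines are hypothetical and assume an adjtimex binary whose --print 
output contains a "status" field (the exact format differs between 
implementations):

```shell
# 11 minute mode is ON while the "64" bit of the adjtimex
# status word is 0 (see the hwclock manpage text quoted above)
eleven_min_mode_on() {
    [ $(( $1 & 64 )) -eq 0 ]
}

# hypothetical usage; the field name/format of the status line may differ:
#   status=$(adjtimex --print | awk '$1 == "status:" { print $2 }')
#   eleven_min_mode_on "$status" && echo "11 minute mode is on"
```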

--
Harald





Re: What's the easiest way to make Busybox keep correct time?

2014-09-02 Thread Harald Becker

Hi !

As you already noted, the kernel does this update of the hardware clock 
as long as ntpd keeps the clock synchronized.

Beside this, an endless loop is no problem:

#!/bin/sh

# the actual loop as a shell function
clock_update_loop()
{
  while true
  do
    sleep 3600
    hwclock -w
  done
} 0<>/dev/null 1>&0 2>&0

# and here we start it in background
clock_update_loop &

# successful exit
exit 0


Put this in a file and run it once at system startup. The script returns 
immediately but keeps the loop in memory and running, until you send it 
a SIGTERM signal (kill/killall).


The I/O redirection stuff (0<>...) is there to avoid clobbering your 
console with any output the background loop may produce (it is just 
sent to the null device).


--
Harald



Re: What's the easiest way to make Busybox keep correct time?

2014-09-02 Thread Harald Becker

Hi !

> I appreciate that you would like to know why this isn't working but
> I'm really not too keen on rebooting the device several times a day
> when my original script seems to be working fine.

This is your decision, and I fully understand your concerns. Just come 
back when you have need for this ...


> *adjtimex*: The thing I am wondering is, when you run it does it make
> a change that persists through reboots, or does it need to be run
> each time the system comes up?

I never used the Busybox version of adjtimex, but the original versions 
I used wrote a file in /etc (usually /etc/adjtime). This file contains 
all the information the system needs to correct the clock. Maybe it is 
required to run adjtimex on startup, but then just with some constant 
parameter pointing it to the file in /etc.


> Where it says:


For a machine connected to the Internet, or equipped with a precision
oscillator or radio clock, the best way is to regulate the system clock
with ntpd(8).  The kernel will automatically update the hardware clock
every eleven minutes.


This is correct: if the clock is marked synchronized, the kernel 
automatically updates the hardware clock periodically. This way you 
don't need to update the hardware clock manually in an extra loop. The 
only concern is clock drift when the Internet connection is not 
available: as soon as ntpd stops updating the clock, the kernel no 
longer writes to the hardware clock.




But that does not work in the Busybox version of adjtimex.


Give me a couple of days; I'll try to take a closer look at the Busybox 
adjtimex and send you a step-by-step description of how to use it.


--
Harald



Re: [Bug: Busybox 1.22.1] false return 0 instead of 1 with '--help' switch.

2014-09-01 Thread Harald Becker

root@luxusAsus:~ type false
false is a shell builtin


Ohps!


# thats an explanation! 8-)


Indeed! :-)


... but this tells nothing about which return code

false --help

shall give.


IMO displaying the usage information with --help is a successful 
execution and shall return 0 as other commands do, but I see the problem 
that the return value of false may be tricked by giving the --help 
parameter ... as this compatibility with the GNU utilities seems to be 
the most important argument ... besides this, I'll stand back and see 
what others think about this question.


--
Harald



Re: [Bug: Busybox 1.22.1] false return 0 instead of 1 with '--help' switch.

2014-09-01 Thread Harald Becker


> seems to be this busybox one 8-):

How can that be?


root@luxusAsus:~ which false
/bin/false

root@luxusAsus:~ which busybox
/bin/busybox

root@luxusAsus:~ ls -la /bin/false
lrwxrwxrwx1 root root 7 Aug 29 16:06 /bin/false -> busybox


Ok!


root@luxusAsus:~ false --help
root@luxusAsus:~ echo $?
1



root@luxusAsus:~ /bin/busybox false --help
BusyBox v1.22.1 (2014-08-29 10:01:39 EDT) multi-call binary.

Usage: false

Return an exit code of FALSE (1)

root@luxusAsus:~ echo $?
0


How can that be, if both false are Busybox?

Please try:

/bin/false --help; echo $?

Also you may have a look at your aliases?

--
Harald



Re: [Bug: Busybox 1.22.1] false return 0 instead of 1 with '--help' switch.

2014-09-01 Thread Harald Becker



root at box:~ busybox false --help
BusyBox v1.22.1 (2014-08-28 18:55:30 EDT) multi-call binary.

Usage: false

Return an exit code of FALSE (1)

root at box:~ echo $?
0

root at box:~ false --help
root at box:~ echo $?
1


This shows me your false is not the same as the Busybox internal false. 
You are probably using a different false executable, script, or alias.

Try "which false" and "ls -al WHAT_YOU_GET_FROM_WHICH".




Re: What's the easiest way to make Busybox keep correct time?

2014-09-01 Thread Harald Becker

Try this one ...

start() {
  echo -n "Starting ntpd: "
  /usr/sbin/ntpd -p north-america.pool.ntp.org && echo "OK" || echo "failed"
}

Are you sure your ntpd (a symlink to /bin/busybox) lives in /usr/sbin ?



Well, I had nothing to do with building this system so I have no idea why
the formal correct version doesn't seem to want to work.


It is just ugly ... and now I'm curious to see why.



Very interesting, and thank you, but it's not something I am particularly
concerned about at the moment,


Yep ... that was to give you some information on how it works / what we 
are discussing.




If nothing else, this has been educational.  I'm still hopeful that the
company that makes this device might have a better solution that will work
without any problems.


That would indeed be the better way ...

--
Harald



Re: What's the easiest way to make Busybox keep correct time?

2014-09-01 Thread Harald Becker

Hi !

> Actually, the hwclock time is what's inaccurate

:-( ... bad hardware!


> That is very interesting but since this system is always connected to the
> Internet, I'm not sure I need to be that concerned about the hardware clock.


If your system is always connected to a functioning Internet connection, 
you won't need it with a running ntpd ... but as you said, you are going 
to record satellite or terrestrial TV, so think about situations where 
the Internet connection drops (for whatever reason). As soon as ntpd 
can't contact the public time server, your clock starts to drift away 
from the real time. How much depends on how long the connection is 
missing: for short drop-outs it won't make much difference, but when the 
connection is lost for hours, the time drifts away and your recording 
times may too. So using adjtimex to correct your system clock so that it 
drifts as little as possible will help bridge such Internet drop-outs.


It is your decision whether to use it or not; I just wanted to point out 
the possibility, in case you need a solution that works without an 
Internet connection.



> Although, it might be nice if there were a way to check and see if ntpd is
> running, and if so, update the hardware clock from the system clock
> periodically.


The usual way is to check the output of ps/top to see if the process is 
running, and to ping the time server (to see if the connection works). 
Then ntpd normally does its job pretty well. After that, just check your 
time now and again the day after, and see how much clock difference you 
get ...


--
Harald


