Re: systemd services that are not equivalent to LSB init scripts

2019-07-15 Thread Ben Hutchings
On Mon, 2019-07-15 at 00:00 +0200, Martin Steigerwald wrote:
> Hello.
> 
> Theodore Ts'o - 14.07.19, 22:07:
> > So requiring support of non-systemd ecosystems is, in general, going to
> > require extra testing.  In the case of cron/systemd.timers, this
> > means testing and/or careful code inspection to make sure the
> > following cases work:
> > 
> > * systemd && cron
> > * systemd && !cron
> > * !systemd && cron
> > 
> > Support of non-systemd ecosystems is not going to be free, and in some
> > cases, it is not going to be fun, even though many have asserted it is
> > something we should be striving for.  The challenge is how we develop
> > the consensus to decide whether or not to force developers to pay this
> > cost.
> 
> I believe forcing someone who does volunteer work maintaining packages 
> for Debian is not going to work out. Even more so: I do not see how the 
> Debian project could force developers. The only effective way to force 
> anything would be to threaten loss of membership status or privileges. 
> But I would not go that route, as I see it as a destructive one.
[...]

We can't force individual developers to do anything, and yet we manage
to release with a large number of packages that mostly follow policy. 
If it's our policy that packages must support cron (where applicable)
then a failure to do so can be fixed by any developer, following the
usual process.

Ben.

-- 
Ben Hutchings
If God had intended Man to program,
we'd have been born with serial I/O ports.




signature.asc
Description: This is a digitally signed message part


Re: systemd services that are not equivalent to LSB init scripts

2019-07-15 Thread Russ Allbery
Peter Pentchev  writes:
> On Sun, Jul 14, 2019 at 12:30:16PM -0700, Russ Allbery wrote:

>> There seems to be a clear infrastructure gap for the non-systemd world
>> here that's crying out for some inetd-style program that implements the
>> equivalent of systemd socket activation and socket passing using the
>> same protocol, so that upstreams can not care whether the software is
>> started by systemd or by that inetd, and provides an easy-to-configure
>> way for Debian packages to indicate this should be used if systemd
>> isn't in play.  It doesn't seem like it would be too difficult to
>> implement such a thing, but I don't think it already exists.

> https://bugs.debian.org/922353

> https://gitlab.com/dkg/socket-activate

> In the words of Douglas Adams, "there is another theory which states
> that this has already happened" :)

Great!  So can we close the loop on the rest of the puzzle, which is how
to transparently use this facility in packaging for daemons that are
normally socket-activated with systemd on systems that don't use systemd?
That would help the sustainability of our approach here a lot, I think.

-- 
Russ Allbery (r...@debian.org)   



Re: systemd services that are not equivalent to LSB init scripts

2019-07-15 Thread Simon Richter
Hi,

On Mon, Jul 15, 2019 at 01:49:04PM +0200, Guillem Jover wrote:

> > In the same way, we could implement "service monitoring" in sysvinit by
> > adding an "inittab.d" directory, but I'm fairly sure that I'm not the first
> > person who had this idea in the last thirty years, so there is probably a
> > reason why it hasn't been done.

> Yeah, this is something that has slightly bothered me too, even though
> sysvinit is a bit poor at service monitoring TBH. I guess I might either
> file a request upstream or send a patch at some point.

That's why I put it in "scare quotes". My general thoughts on this are:

 - service monitoring is itself a service that may have an arbitrarily
   complex technology stack and dependency tree
 - except for very specific circumstances, restarting a failed service is
   the wrong thing to do:
   - files might be in an inconsistent or invalid state
   - if the service was attacked, restarting it gives more attack surface

So my guess is that there was simply no real demand for init to ever
restart daemons in a real world scenario.

I've never needed it in twenty years, at least.

   Simon



Re: systemd services that are not equivalent to LSB init scripts

2019-07-15 Thread Paul Wise
On Mon, Jul 15, 2019 at 6:48 PM Simon Richter wrote:

> The main limitation seems to be that it's not permitted to modify
> inetd.conf from maintainer scripts. We could probably "fix" this by adding
> an "inetd.conf.d" mechanism.

There is update-inetd, but it doesn't support xinetd and doesn't
appear to have debhelper integration.

-- 
bye,
pabs

https://wiki.debian.org/PaulWise



Re: systemd services that are not equivalent to LSB init scripts

2019-07-15 Thread Guillem Jover
On Mon, 2019-07-15 at 12:30:09 +0200, Simon Richter wrote:
> On Sun, Jul 14, 2019 at 07:23:31PM +0100, Simon McVittie wrote:
> > Some systemd system services are meant to start on-demand via socket
> > events (systemd.socket(5)), and can work via inetd on non-systemd-booted
> > systems. micro-httpd appears to be an example of this - I'm a bit surprised
> > there aren't more. Perhaps this indicates limitations in the infrastructure
> > around inetd services making it hard to implement "use systemd.socket(5)
> > under systemd or inetd otherwise"?
> 
> The main limitation seems to be that it's not permitted to modify
> inetd.conf from maintainer scripts. We could probably "fix" this by adding
> an "inetd.conf.d" mechanism.

Oh, but inetutils-inetd does support /etc/inetd.d/ (since 2000). The
problem is that this would need to be implemented by all inetd daemons
in Debian.

And I'd like to move forward at some point with a switch to declarative
update-inetd handling, which would cover some of this.

I also added the equivalent for inetutils-syslogd with /etc/syslog.d/
(since 2008).

> In the same way, we could implement "service monitoring" in sysvinit by
> adding an "inittab.d" directory, but I'm fairly sure that I'm not the first
> person who had this idea in the last thirty years, so there is probably a
> reason why it hasn't been done.

Yeah, this is something that has slightly bothered me too, even though
sysvinit is a bit poor at services monitoring TBH. I guess I might either
file a request upstream, or send a patch at some point.

Thanks,
Guillem



Re: systemd services that are not equivalent to LSB init scripts

2019-07-15 Thread Peter Pentchev
On Sun, Jul 14, 2019 at 12:30:16PM -0700, Russ Allbery wrote:
> Vincent Bernat  writes:
> 
> > inetd uses stdin/stdout to communicate with the daemon and has to
> > launch one instance for each connecting client. systemd.socket passes a
> > regular listening socket to the daemon on the first connection, and the
> > daemon can then serve multiple clients.
> 
> I believe the wait option for at least xinetd behaves in roughly the same
> way, although it's normally only used for UDP services.
> 
> There seems to be a clear infrastructure gap for the non-systemd world
> here that's crying out for some inetd-style program that implements the
> equivalent of systemd socket activation and socket passing using the same
> protocol, so that upstreams can not care whether the software is started
> by systemd or by that inetd, and provides an easy-to-configure way for
> Debian packages to indicate this should be used if systemd isn't in play.
> It doesn't seem like it would be too difficult to implement such a thing,
> but I don't think it already exists.

https://bugs.debian.org/922353

https://gitlab.com/dkg/socket-activate

In the words of Douglas Adams, "there is another theory which states
that this has already happened" :)

> I believe the convention in the runit/daemontools world is to decide this
> is not an important problem to solve and lots of small running daemons is
> not something that needs to be avoided, and to use tcpserver or some
> equivalent that behaves like inetd for a single service.  Even here,
> though, I'm not sure if any of those implementations use the same socket
> passing protocol as systemd, and I'm not sure if they're yet trivial to
> configure as part of Debian packaging.

tcpserver certainly does not: it implements the UCSPI protocol, which is
a good one in itself, but still a different one.

G'luck,
Peter

-- 
Peter Pentchev  roam@{ringlet.net,debian.org,FreeBSD.org} p...@storpool.com
PGP key:http://people.FreeBSD.org/~roam/roam.key.asc
Key fingerprint 2EE7 A7A5 17FC 124C F115  C354 651E EFB0 2527 DF13




Re: systemd services that are not equivalent to LSB init scripts

2019-07-15 Thread Simon Richter
Hi,

On Sun, Jul 14, 2019 at 07:23:31PM +0100, Simon McVittie wrote:

> Some systemd system services are meant to start on-demand via socket
> events (systemd.socket(5)), and can work via inetd on non-systemd-booted
> systems. micro-httpd appears to be an example of this - I'm a bit surprised
> there aren't more. Perhaps this indicates limitations in the infrastructure
> around inetd services making it hard to implement "use systemd.socket(5)
> under systemd or inetd otherwise"?

The main limitation seems to be that it's not permitted to modify
inetd.conf from maintainer scripts. We could probably "fix" this by adding
an "inetd.conf.d" mechanism.

In the same way, we could implement "service monitoring" in sysvinit by
adding an "inittab.d" directory, but I'm fairly sure that I'm not the first
person who had this idea in the last thirty years, so there is probably a
reason why it hasn't been done.

   Simon



Re: systemd services that are not equivalent to LSB init scripts

2019-07-14 Thread Russ Allbery
Vincent Bernat  writes:
>  ❦ 14 juillet 2019 12:30 -07, Russ Allbery :

>> There seems to be a clear infrastructure gap for the non-systemd world
>> here that's crying out for some inetd-style program that implements the
>> equivalent of systemd socket activation and socket passing using the
>> same protocol, so that upstreams can not care whether the software is
>> started by systemd or by that inetd, and provides an easy-to-configure
>> way for Debian packages to indicate this should be used if systemd
>> isn't in play.

> What's the point? The alternative to not using systemd socket server is
> to run the daemon as usual.

The point is to support, on sysvinit, services whose upstreams only
support socket activation.

> If an upstream decides to tie a daemon to systemd socket server by
> delegating the socket creation to systemd, why would we need to
> implement anything? Don't we have better things to do?  This init
> diversity crusade is eating our time.

I'm not saying you should write this.  I'm saying that people who want to
support alternatives to systemd should consider writing this, since it
would make it much easier to continue to support a whole class of services
on both systemd and non-systemd environments.

In other words, I think the best way forward for those who want to support
alternatives to systemd would be to collaborate on building tools that
implement the functionality of systemd unit files without being tied to
systemd.  There could then be a suite of tools that either interprets unit
files directly or uses configuration generated automatically from unit
files.  That way, we've separated the API from the implementation and
added multiple implementations, which hopefully significantly increases
the chances that packages will work on non-systemd systems even if the
maintainer doesn't test them on such systems.  One such tool would be an
inetd-style service that implements socket activation.  Another could be a
jailing wrapper program that implements the namespacing and syscall
filtering features of systemd.  And so forth.

For better or worse, the unit file syntax is becoming increasingly common
upstream, and more upstreams assume they can configure their software with
that syntax and get a consistent set of features.  Implementing that
configuration syntax and those features seems more likely to me to be
viable in the long run than maintaining multiple configuration sets for
every package.

-- 
Russ Allbery (r...@debian.org)   



Re: systemd services that are not equivalent to LSB init scripts

2019-07-14 Thread Martin Steigerwald
Hello.

Theodore Ts'o - 14.07.19, 22:07:
> So requiring support of non-systemd ecosystems is in general, going to
> require extra testing.  In the case of cron/systemd.timers, this
> means testing and/or careful code inspection to make sure the
> following cases work:
> 
> * systemd && cron
> * systemd && !cron
> * !systemd && cron
> 
> Support of non-systemd ecosystems is not going to be free, and some
> cases, it is not going to be fun, something which many have asserted
> should be something we should be striving for.  The challenge is how
> do we develop the consensus to decide whether or not we force
> developers to pay this cost.

I believe forcing someone who does volunteer work maintaining packages 
for Debian is not going to work out. Even more so: I do not see how the 
Debian project could force developers. The only effective way to force 
anything would be to threaten loss of membership status or privileges. 
But I would not go that route, as I see it as a destructive one.
 
> And if we don't, is it better to just let this rot where we allow
> developers to violate current policy with a wink and a nudge until
> it's clear that we do have consensus?  Or do we force them to do the
> work?  Or do we somehow go through the pain and effort to try to
> determine what that consensus actually is?

I do not have a solution for the divide behind all of this.

But I do have some hints that may be beneficial or constructive to 
someone:

There is a group of Debian developers in the Debian Init Diversity team 
working on exactly that: diversity regarding init systems. This is a 
group consisting of both Debian and Devuan developers who decided that 
working together is going to benefit both Debian and Devuan. Some people 
in that group did a *ton* of work to improve the sysvinit, insserv, 
startpar, runit and, I believe, also openrc packages. When I look at the 
bug tracker, I believe sysvinit in Buster is in a better state than it 
has been in a long, long time. It even has a new upstream maintainer who 
actively dug through Debian bug reports with the aim of fixing as many 
upstream bugs as possible. Another developer worked on the elogind 
package. I have been running Debian Sid with a Plasma desktop on 
sysvinit for more than a month already. It needed some minor tweaks: for 
example, PulseAudio was not started automatically, and Evolution, which 
I use for work, needs some services that are nowadays started by systemd 
user service files. To start these I am currently using a quite 
half-baked user-services frontend for runit that I worked on some time 
ago.

So I believe there is some talent to draw from when it comes to 
supporting alternative init systems within Debian packages. Both within 
Debian and Devuan communities.

So far the Debian Init Diversity team runs its publicly accessible 
mailing list on infrastructure outside Debian, partly to stay out of the 
heat of all the discussions regarding systemd and to focus on getting 
actual work done.

As for the fio package I maintain: I wrote a sysvinit script for it 
myself and am gladly willing to accept patches to support other init 
systems. During that work I was even able to improve considerably upon 
the upstream systemd service file, as I learned about an option to 
daemonize fio that I was not aware of.

Testing sysvinit stuff can be done on either Debian or Devuan inside a 
VM. Actually, for me it is the other way around: to test the systemd 
service file for the fio service I meanwhile need to use another system, 
as my main laptop has been running sysvinit and elogind for more than a 
month and I have no intention of changing it back at the moment. 
Instead I plan to switch my other laptop as well, which should be as 
easy as:

apt install elogind libpam-elogind-compat sysvinit

In addition, if you have some cron job or sysvinit script which requires 
testing, I am happy to help out as I manage to allocate time for it. I 
am not sure whether the libpam-elogind-compat package from experimental 
is still needed. The system can also easily be switched back to systemd.

That said, I believe there is no use in forcing anyone to do anything 
within the Debian project, except what is necessary to maintain the 
boundaries of a code of conduct that helps make everyone who uses Debian 
or contributes to it feel welcome.

Again, I do not have a quick or easy solution. And I may be the only 
Debian package maintainer who switched his main system to sysvinit, but… 
it may still be interesting to read about this different perspective 
here.

I did not share my reasons for switching to sysvinit, simply because I 
am not interested in triggering yet another discussion for or against 
systemd here.

Thanks,
-- 
Martin




Re: systemd services that are not equivalent to LSB init scripts

2019-07-14 Thread Vincent Bernat
 ❦ 14 juillet 2019 12:30 -07, Russ Allbery :

> There seems to be a clear infrastructure gap for the non-systemd world
> here that's crying out for some inetd-style program that implements the
> equivalent of systemd socket activation and socket passing using the same
> protocol, so that upstreams can not care whether the software is started
> by systemd or by that inetd, and provides an easy-to-configure way for
> Debian packages to indicate this should be used if systemd isn't in
> play.

What's the point? The alternative to not using systemd socket server is
to run the daemon as usual. If an upstream decides to tie a daemon to
systemd socket server by delegating the socket creation to systemd, why
would we need to implement anything? Don't we have better things to do?
This init diversity crusade is eating our time.
-- 
Make sure every module hides something.
- The Elements of Programming Style (Kernighan & Plauger)




Re: systemd services that are not equivalent to LSB init scripts

2019-07-14 Thread Theodore Ts'o
On Sun, Jul 14, 2019 at 07:23:31PM +0100, Simon McVittie wrote:
> micro-httpd appears to be an example of this - I'm a bit surprised
> there aren't more. Perhaps this indicates limitations in the infrastructure
> around inetd services making it hard to implement "use systemd.socket(5)
> under systemd or inetd otherwise"?

I'll note that it's a bit tricky even in the cron vs. systemd.timer use
case.  That's what I was referring to when I said we had to go through
some effort just to enable the "use cron" functionality, since we had
to make sure that it was inhibited in the case where both cron and
systemd are enabled on the system.

So requiring support of non-systemd ecosystems is, in general, going to
require extra testing.  In the case of cron/systemd.timers, this means
testing and/or careful code inspection to make sure the following
cases work:

* systemd && cron
* systemd && !cron
* !systemd && cron
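[A common way to handle the "systemd && cron" case is for the cron job to
exit early when systemd is booted, leaving the timer unit as the only
trigger. A minimal sketch of that check, shown in Python for illustration
(Debian cron jobs typically do the same test in shell; the function names
here are hypothetical):

```python
import os

def booted_with_systemd():
    """Detect whether systemd is the running init.

    systemd creates /run/systemd/system/ early during boot; checking
    for that directory is the documented equivalent of sd_booted(3).
    """
    return os.path.isdir("/run/systemd/system")

def cron_job_main():
    # In the "systemd && cron" case, defer to the systemd timer so the
    # periodic task does not run twice per schedule.
    if booted_with_systemd():
        return "skipped: systemd timer handles this"
    return "running periodic task via cron"
```

The same guard, inverted, can live in the service started by the timer if
the maintainer prefers cron to win instead.]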

Support of non-systemd ecosystems is not going to be free, and in some
cases, it is not going to be fun, even though many have asserted it is
something we should be striving for.  The challenge is how we develop
the consensus to decide whether or not to force developers to pay this
cost.

And if we don't, is it better to just let this rot where we allow
developers to violate current policy with a wink and a nudge until
it's clear that we do have consensus?  Or do we force them to do the
work?  Or do we somehow go through the pain and effort to try to
determine what that consensus actually is?

- Ted



Re: systemd services that are not equivalent to LSB init scripts

2019-07-14 Thread Russ Allbery
Vincent Bernat  writes:

> inetd uses stdin/stdout to communicate with the daemon and has to
> launch one instance for each connecting client. systemd.socket passes a
> regular listening socket to the daemon on the first connection, and the
> daemon can then serve multiple clients.

I believe the wait option for at least xinetd behaves in roughly the same
way, although it's normally only used for UDP services.

There seems to be a clear infrastructure gap for the non-systemd world
here that's crying out for some inetd-style program that implements the
equivalent of systemd socket activation and socket passing using the same
protocol, so that upstreams need not care whether the software is started
by systemd or by that inetd, together with an easy-to-configure way for
Debian packages to indicate that this should be used if systemd isn't in
play.  It doesn't seem like it would be too difficult to implement such a
thing, but I don't think it already exists.
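[For reference, the daemon-facing side of that protocol is small: the
manager passes already-bound sockets starting at file descriptor 3 and
sets the LISTEN_FDS and LISTEN_PID environment variables. A rough sketch
of a daemon that accepts an activated socket or falls back to binding its
own (an illustration of the protocol, not any particular package's code):

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first fd passed by the manager (sd_listen_fds(3))

def get_listen_socket(port=0):
    """Return a listening TCP socket.

    If a socket-activation-aware manager started us, LISTEN_PID names
    our pid and LISTEN_FDS counts the fds it passed, beginning at fd 3.
    Otherwise, bind our own socket as a classically started daemon would.
    """
    if (os.environ.get("LISTEN_PID") == str(os.getpid())
            and int(os.environ.get("LISTEN_FDS", "0")) >= 1):
        # Wrap the inherited, already-listening fd without rebinding.
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen(5)
    return s
```

An inetd replacement implementing the other half would simply bind the
configured sockets, set those two variables, and exec the daemon with fd 3
(and up) inherited.]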

I believe the convention in the runit/daemontools world is to decide this
is not an important problem to solve and lots of small running daemons is
not something that needs to be avoided, and to use tcpserver or some
equivalent that behaves like inetd for a single service.  Even here,
though, I'm not sure if any of those implementations use the same socket
passing protocol as systemd, and I'm not sure if they're yet trivial to
configure as part of Debian packaging.

-- 
Russ Allbery (r...@debian.org)   



Re: systemd services that are not equivalent to LSB init scripts

2019-07-14 Thread Vincent Bernat
 ❦ 14 juillet 2019 19:23 +01, Simon McVittie :

> Some systemd system services are meant to start on-demand via socket
> events (systemd.socket(5)), and can work via inetd on non-systemd-booted
> systems. micro-httpd appears to be an example of this - I'm a bit surprised
> there aren't more. Perhaps this indicates limitations in the infrastructure
> around inetd services making it hard to implement "use systemd.socket(5)
> under systemd or inetd otherwise"?

inetd uses stdin/stdout to communicate with the daemon and has to
launch one instance for each connecting client. systemd.socket passes a
regular listening socket to the daemon on the first connection, and the
daemon can then serve multiple clients. It is simple to convert an
existing daemon to systemd.socket, and it doesn't come with a performance
impact. It can even simplify some aspects of an always-running daemon,
like reloading without impacting the traffic.
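[As a concrete contrast between the two models, sketched from the
inetd.conf(5) and systemd.socket(5) formats (the service and unit names
are hypothetical):

```ini
# inetd.conf: one process per connection, socket on stdin/stdout
# service  socktype  proto  wait    user  program             args
# example  stream    tcp    nowait  root  /usr/sbin/exampled  exampled

# example.socket: systemd passes the *listening* socket to one daemon
[Socket]
ListenStream=7777
Accept=no

[Install]
WantedBy=sockets.target
```

With Accept=no (the default), the matching example.service is started once
and receives the bound socket, rather than one process per connection;
Accept=yes would reproduce inetd's per-connection behaviour.]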
-- 
Familiarity breeds contempt -- and children.
-- Mark Twain




Re: systemd services that are not equivalent to LSB init scripts

2019-07-14 Thread Simon McVittie
On Sun, 14 Jul 2019 at 09:21:37 -0400, Theodore Ts'o wrote:
> P.S.  I'm going to be adding an override in e2fsprogs for
> package-supports-alternative-init-but-no-init.d-script because it
> has a false positive, regardless of its claim:
> 
> N:Severity: important, Certainty: certain
> 
> It most *definitely* is not certain.  We went through quite a bit of
> trouble providing alternative functionality via cron, and not via
> (only) systemd timers.

Every LSB init script is equivalent to a systemd service that is
"required" or "wanted" by one of the targets that are reached during boot,
but the converse is not true. Not every systemd service is "required"
or "wanted" by those targets: some systemd services start on-demand,
which is not a concept that exists within the narrower scope of LSB
init scripts. Some of the triggering events have equivalents (at least
approximately) in a broader non-systemd ecosystem that includes things
like cron; others do not.

Putting this one first because I think it's the least ambiguous:
Some systemd system services are meant to be started on-demand via
D-Bus activation (/usr/share/dbus-1/system-services/*.service with
SystemdService= pointing to a systemd.service(5)). If the D-Bus
service has a suitable Exec= line, then dbus-daemon can launch the
daemon via traditional D-Bus activation (dbus-daemon-launch-helper)
on non-systemd-booted systems; but on a systemd system it's best if it
configures SystemdService= to delegate that job to systemd, because unlike
systemd, dbus-daemon is not really designed to be a service manager,
and is certainly not a fully-featured service manager. udisksd in the
udisks2 package is one of the canonical examples of this pattern. This
is probably not allowed by a strict reading of Policy, but in the case
where traditional activation already works (which in practice it usually
does) there's clearly no actual functional bug for non-systemd users -
the service is working as well as it ever did - so it should be allowed.

Some systemd system services are meant to be triggered by systemd timers
(systemd.timer(5)), which trigger execution of a systemd service when
the timer "goes off", and can have an analogous (ana)cron job used on
non-systemd-booted systems. It sounds as though your use-case in e2fsprogs
is a good example of this; apt is another. As with D-Bus above, this
doesn't seem to be allowed by a strict reading of Policy. I think the case
where there is an approximately equivalent cron job should certainly be
allowed. If there is no equivalent cron job, I would personally say that's
a bug but probably not RC; it would be in the spirit of previous Technical
Committee decisions on init systems to expect maintainers to apply good
patches, if someone with an interest in non-systemd inits contributes
a cron job that doesn't seem like it will harm future maintenance.
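[For illustration, the timer/cron pairing described above might look like
this (unit and job names hypothetical):

```ini
# example.timer: triggers example.service on a schedule
[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# /etc/cron.daily/example: approximate non-systemd equivalent, guarded
# so only one mechanism fires when both are installed:
#   [ -d /run/systemd/system ] && exit 0
#   exec /usr/sbin/example-maintenance
```

Persistent=true roughly corresponds to anacron's catch-up behaviour of
running a missed job at the next boot.]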

Some systemd system services are meant to start on-demand via socket
events (systemd.socket(5)), and can work via inetd on non-systemd-booted
systems. micro-httpd appears to be an example of this - I'm a bit surprised
there aren't more. Perhaps this indicates limitations in the infrastructure
around inetd services making it hard to implement "use systemd.socket(5)
under systemd or inetd otherwise"?

Some systemd system services are meant to start during suspend/resume by
hooking into targets like suspend.target, and could presumably work via
/etc/pm or /etc/acpi or whatever else non-systemd users use for power
management events on non-systemd-booted systems (I've lost track of what
that would be). tlp-sleep.service in the tlp package is an example, which
appears to hook into elogind via /lib/elogind/system-sleep/49-tlp-sleep,
a mechanism of which I was not previously aware.

Some systemd system services (I don't know of any examples in Debian)
are meant to be triggered by an inotify event (systemd.path(5)), which
could presumably have an equivalent using something like the incron
package if non-systemd users want it badly enough. I don't think the
maintainers of any systemd services that make use of that mechanism
should be expected to invent a whole parallel infrastructure that they
will not, themselves, use, but if non-systemd users build a suitable
mechanism, then it might be reasonable to expect service maintainers
to apply patches that add simple integration "glue" files analogous to
.path units for that other mechanism.

I've deliberately been saying "the non-systemd ecosystem" rather than
"the sysvinit ecosystem" because the latter would be very misleading -
sysvinit itself is really very simple, and its only contribution to any
of this is to run /etc/init.d/rc at runlevel changes. For full coverage
of equivalents of the units I described above it would be at least the
sysvinit/sysv-rc/cron/anacron/dbus-daemon-launch-helper/inetd/elogind/incron
ecosystem, 

Re: LSB init scripts

2007-05-04 Thread Tim Dijkstra
On Thu, 03 May 2007 18:13:25 -0700
Russ Allbery [EMAIL PROTECTED] wrote:

 Lars Wirzenius [EMAIL PROTECTED] writes:
  On to, 2007-05-03 at 13:39 -0700, Russ Allbery wrote:
 
  My ideal output format would just list subsystem OK
 
  While we're daydreaming, I'd like an empty screen with a timer counting
  down how long (in seconds) I have until I can actually use the machine.
  Unless there's a problem, of course, in which case I want all the info I
  need to debug things.
 
 One of the great parts of using a library to handle the output formatting
 is that people who want this sort of boot presentation (or anything else,
 really) can then develop themes that do exactly what they want.
 

Splashy is indeed already using that functionality. It installs a file 
/etc/lsb-base-logging.sh, which is there to override functions from the
LSB library. It advances a progress bar, triggered by the calls to
log_end_msg made from init scripts. That way we don't need to change the
init scripts or install extra scripts; we get our info from all scripts
that use the LSB library.

grts Tim




Re: LSB init scripts

2007-05-04 Thread Marc Haber
On Thu, 03 May 2007 13:39:32 -0700, Russ Allbery [EMAIL PROTECTED]
wrote:
I think that would give you the best of both worlds, particularly if it's
combined with logging so that the *full* output, without any
prettification, goes into a file on disk somewhere.

Agreed, with the option of having the whole blurb on the console for
debugging just in case that the box does not come up cleanly.

Greetings
Marc

-- 
-- !! No courtesy copies, please !! -
Marc Haber |Questions are the | Mailadresse im Header
Mannheim, Germany  | Beginning of Wisdom  | http://www.zugschlus.de/
Nordisch by Nature | Lt. Worf, TNG Rightful Heir | Fon: *49 621 72739834



Re: LSB init scripts

2007-05-03 Thread Russ Allbery
Lennart Sorensen [EMAIL PROTECTED] writes:
 On Thu, May 03, 2007 at 10:08:20PM +0200, Marc Haber wrote:

 This is also a sales issue. I have seen too many faces darken when
 people see a Debian system boot for the first time. People get reminded
 of the old DOS days and decide that this Linux thing is too ugly to
 use.

 So knowledge is ugly?  It is obviously so much better to not know what
 is going on so that if it doesn't work you don't have to worry about it
 because you know nothing that could help you to solve it so it isn't
 your problem of course.

 Now being able to just turn on the details when something is wrong, and
 have them not there the rest of the time, might be OK.  On the other
 hand, you boot once every few months, so who cares what the boot
 messages look like. :)

My ideal output format would just list subsystem OK (probably lined up
neatly) for each subsystem that's started successfully, so if everything
is fine, you'd see nothing but OKs.  (Special exception for startup that
needs to tell you what it's doing, in which case I'd like to see the
indented output of that subsystem before or after or bracketed by the OK
messages.)  If something fails, I want it to say it failed and show all of
the appropriate error messages immediately before or after.

I think that would give you the best of both worlds, particularly if it's
combined with logging so that the *full* output, without any
prettification, goes into a file on disk somewhere.

-- 
Russ Allbery ([EMAIL PROTECTED])   http://www.eyrie.org/~eagle/


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: LSB init scripts

2007-05-03 Thread Lars Wirzenius
On to, 2007-05-03 at 13:39 -0700, Russ Allbery wrote:
 My ideal output format would just list subsystem OK

While we're daydreaming, I'd like an empty screen with a timer counting
down how long (in seconds) I have until I can actually use the machine.
Unless there's a problem, of course, in which case I want all the info I
need to debug things.

(If I'm *really* daydreaming, I want a boot so fast I can't even see the
timer.)

-- 
C is the *wrong* language for your application.





Re: LSB init scripts

2007-05-03 Thread Russ Allbery
Lars Wirzenius [EMAIL PROTECTED] writes:
 On to, 2007-05-03 at 13:39 -0700, Russ Allbery wrote:

 My ideal output format would just list subsystem OK

 While we're daydreaming, I'd like an empty screen with a timer counting
 down how long (in seconds) I have until I can actually use the machine.
 Unless there's a problem, of course, in which case I want all the info I
 need to debug things.

One of the great parts of using a library to handle the output formatting
is that people who want this sort of boot presentation (or anything else,
really) can then develop themes that do exactly what they want.

I really want to enable that feature.

-- 
Russ Allbery ([EMAIL PROTECTED])   http://www.eyrie.org/~eagle/





Re: LSB init scripts and multiple lines of output

2006-06-02 Thread Joey Hess
Adam Borowski wrote:
   The friend muttered something about Ubuntu being as flaky as
   Windows, then rebooted and started the installation anew...

This is not an Ubuntu mailing list. It's pretty annoying to require all
us d-i developers to get this far down in the mail before we realize
that the problems you are describing are (probably) not d-i problems.

-- 
see shy jo


signature.asc
Description: Digital signature


Re: LSB init scripts and multiple lines of output

2006-06-02 Thread Thomas Viehmann
martin f krafft wrote:
 also sprach martin f krafft [EMAIL PROTECTED] [2006.06.01.2122 +0200]:
 Starting RAID device md0 ... 3 drives, done
 Starting RAID device md1 ... 3 drives, done
 Starting RAID device md2 ... 2/3 drives, degraded
 Starting RAID device md3 ... 1/3 drives, failed
 Starting RAID device md4 ... 3 drives, done
 What do people think about that?

Would combining the drives that went well make sense?
Starting RAID devices...  md0, md1, md4 done
Starting RAID device md2 ... 2/3 drives, degraded
Starting RAID device md3 ... 1/3 drives, failed

The Starting RAID devices... line could be printed before the startup.
I guess the gain I see is that on normal operation only one line is
printed while providing more detailed and still legible information in
case of a failure.
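
A rough POSIX-sh sketch of that grouping (the device/status pairs are
invented; the real script would get them from mdadm):

```shell
#!/bin/sh
# Sketch: collect the arrays that started cleanly into one combined
# line, and print a separate line for each degraded or failed one.
ok_list=''
set -- 'md0 ok' 'md1 ok' 'md2 degraded' 'md3 failed' 'md4 ok'
for entry; do
    dev=${entry%% *}
    status=${entry#* }
    if [ "$status" = ok ]; then
        ok_list="$ok_list $dev"
    else
        echo "Starting RAID device $dev ... $status"
    fi
done
echo "Starting RAID devices ...$ok_list done"
```

Unlike the example above, the combined line comes last here; printing it
first would require buffering the per-device results.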

Kind regards

T.
-- 
Thomas Viehmann, http://thomas.viehmann.net/





Re: LSB init scripts and multiple lines of output

2006-06-02 Thread Adam Borowski
On Fri, Jun 02, 2006 at 02:36:15AM -0400, Joey Hess wrote:
 Adam Borowski wrote:
The friend muttered something about Ubuntu being as flaky as
Windows, then rebooted and started the installation anew...
 
 This is not an Ubuntu mailing list. It's pretty annoying to require all
 us d-i developers to get this far down in the mail before we realize
 that the problems you are describing are (probably) not d-i problems.

My sincere apologies.  In my defense, I was ranting about why
hiding error messages is unacceptable, not about the installation
being bad; but admittedly, mentioning only a certain brand new
Debian-like distribution wasn't clear enough.


And I still haven't completed the report I once promised on the
behavior of d-i on low-end 486 with various amounts of memory...

-- 
1KB // Microsoft corollary to Hanlon's razor:
//  Never attribute to stupidity what can be
//  adequately explained by malice.





Re: LSB init scripts and multiple lines of output

2006-06-02 Thread martin f krafft
also sprach Thomas Viehmann [EMAIL PROTECTED] [2006.06.02.0847 +0200]:
 Would combining the drives that went well make sense?
 Starting RAID devices...  md0, md1, md4 done
 Starting RAID device md2 ... 2/3 drives, degraded
 Starting RAID device md3 ... 1/3 drives, failed

Nice, but is it worth the trouble? Remember, I need to do all this
with POSIX shell. Sure, it's not that hard, but...

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
ah, but a man's reach should exceed his grasp,
 or what's a heaven for?
-- robert browning




LSB init scripts and multiple lines of output

2006-06-01 Thread martin f krafft
Hi,

I am faced with the problem on how to tackle multiline output from
an init.d script, which I have just converted to LSB. Since the
package is mdadm and RAID is kinda essential to those that have it
configured, I'd rather not hide information but give the user the
entire process.

In my ideal world, this is what it would look like:

Starting RAID devices ...
  /dev/md0 has been started with 3 drives.
  /dev/md1 has been started with 3 drives.
  /dev/md2 assembled from 2 drives - need all 3 to start it
  /dev/md3 assembled from 1 drive - not enough to start the array.
  /dev/md4 has been started with 3 drives.
... done assembling RAID devices: failed.

I don't seem to be able to realise this with lsb-base, nor does it
seem that they even provide for this. The alternative -- all in one
line -- just seems rather uninviting:

  Starting RAID devices ... /dev/md0 has been started with 3 drives,
  /dev/md1 has been started with 3 drives, /dev/md2 assembled from
  2 drives - need all 3 to start it, /dev/md3 assembled from 1 drive
  - not enough to start the array, /dev/md4 has been started with
  3 drives. failed.

Generally, I would not have a problem doing something like

  Starting RAID devices ... failed (see log for details).

But the problem is quite simply that by the time the script runs,
/var may not be there, and neither is /usr/bin/logger.

So what to do?

My current approach, which is to map short terms to the long errors
is just too much of an obfuscating hack, and it runs more than 80
characters as well:

  Starting RAID devices ... md0 started, md1 started, md2
  degraded+started, md3 degraded+failed, md4 started ... failed.

Any suggestions?
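
One possible shape, as a stand-alone sketch: one short line per array,
in the style of the lsb-base helpers. log_begin_msg/log_end_msg are
stubbed here so the example runs on its own, and the statuses are
invented; a real script would source /lib/lsb/init-functions and feed
in actual mdadm results.

```shell
#!/bin/sh
# Stub the lsb-base helpers so the sketch is self-contained.
log_begin_msg() { printf '%s' "$1"; }
log_end_msg()   { if [ "$1" -eq 0 ]; then echo 'done.'; else echo 'failed.'; fi; }

# Print one line per array from a (hypothetical) per-device status.
report_device() {
    dev=$1; status=$2
    log_begin_msg "Starting RAID array $dev..."
    case $status in
        started)  log_end_msg 0 ;;
        degraded) echo 'done (degraded).' ;;
        *)        log_end_msg 1 ;;
    esac
}

report_device md0 started
report_device md2 degraded
report_device md3 failed
```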

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
drink canada dry! you might not succeed, but it *is* fun trying.




Re: LSB init scripts and multiple lines of output

2006-06-01 Thread Gustavo Franco

On 6/1/06, martin f krafft [EMAIL PROTECTED] wrote:

Hi,

I am faced with the problem on how to tackle multiline output from
an init.d script, which I have just converted to LSB. Since the
package is mdadm and RAID is kinda essential to those that have it
configured, I'd rather not hide information but give the user the
entire process.

In my ideal world, this is what it would look like:

Starting RAID devices ...
  /dev/md0 has been started with 3 drives.
  /dev/md1 has been started with 3 drives.
  /dev/md2 assembled from 2 drives - need all 3 to start it
  /dev/md3 assembled from 1 drive - not enough to start the array.
  /dev/md4 has been started with 3 drives.
... done assembling RAID devices: failed.

I don't seem to be able to realise this with lsb-base, nor does it
seem that they even provide for this. The alternative -- all in one
line -- just seems rather uninviting:

  Starting RAID devices ... /dev/md0 has been started with 3 drives,
  /dev/md1 has been started with 3 drives, /dev/md2 assembled from
  2 drives - need all 3 to start it, /dev/md3 assembled from 1 drive
  - not enough to start the array, /dev/md4 has been started with
  3 drives. failed.

Generally, I would not have a problem doing something like

  Starting RAID devices ... failed (see log for details).

But the problem is quite simply that by the time the script runs,
/var may not be there, and neither is /usr/bin/logger.

So what to do?

My current approach, which is to map short terms to the long errors
is just too much of an obfuscating hack, and it runs more than 80
characters as well:

  Starting RAID devices ... md0 started, md1 started, md2
  degraded+started, md3 degraded+failed, md4 started ... failed.

Any suggestions?


Yes.

Starting RAID devices ... md0 ok, md1 ok, md2 2/3, md3 failed, md4
ok ... failed

I would go check why just 2 out of 3 disks are OK in md2 and why
md3 failed. The only information missing from the output above is
whether md3 failed with 0 or 1 disks OK.

I really think that all those multiline messages are annoying and hard
to debug, and this isn't limited to RAID services but happens almost
everywhere.

If the service 'foo' isn't starting and you have no idea why, because
there's too much stuff to read during the boot, it's easier to just look
at 'md3 failed' and associate it with the mountpoint that hosts the files
for that service. Unfortunately it seems that common sense says
otherwise, and people keep adding more and more to the boot output;
as an admin, that isn't useful for me, really.

regards,
-- stratus





Re: LSB init scripts and multiple lines of output

2006-06-01 Thread Andreas Fester
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

martin f krafft wrote:
 I am faced with the problem on how to tackle multiline output from
 an init.d script, which I have just converted to LSB. Since the
 package is mdadm and RAID is kinda essential to those that have it
 configured, I'd rather not hide information but give the user the
 entire process.
 
 In my ideal world, this is what it would look like:
 
 Starting RAID devices ...
   /dev/md0 has been started with 3 drives.
   /dev/md1 has been started with 3 drives.
   /dev/md2 assembled from 2 drives - need all 3 to start it
   /dev/md3 assembled from 1 drive - not enough to start the array.
   /dev/md4 has been started with 3 drives.
 ... done assembling RAID devices: failed.
 
 I don't seem to be able to realise this with lsb-base, nor does it
 seem that they even provide for this. The alternative -- all in one
[...]

what about

$ cat raid
. /lib/lsb/init-functions

log_action_msg Starting RAID devices ...
log_action_msg   /dev/md0 has been started with 3 drives.
log_action_msg   /dev/md1 has been started with 3 drives.
log_action_msg   /dev/md2 assembled from 2 drives - need all 3 to start it
log_action_msg   /dev/md3 assembled from 1 drive - not enough to start the 
array.
log_action_msg   /dev/md4 has been started with 3 drives.
log_failure_msg ... done assembling RAID devices: failed.

$ ./raid
Starting RAID devices ...
  /dev/md0 has been started with 3 drives..
  /dev/md1 has been started with 3 drives..
  /dev/md2 assembled from 2 drives - need all 3 to start it.
  /dev/md3 assembled from 1 drive - not enough to start the array..
  /dev/md4 has been started with 3 drives..
* ... done assembling RAID devices: failed.


Best Regards,

Andreas


- --
Andreas Fester
mailto:[EMAIL PROTECTED]
WWW: http://www.littletux.net
ICQ: 326674288
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.3 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEfyVLZ3bQVzeW+rsRAvD/AKCJza+5KPQOZ2wMVm/5upylsdcEjgCdFmT7
kfYIlW9IyE/Lcf4gZuWtzOU=
=HR7O
-END PGP SIGNATURE-





Re: LSB init scripts and multiple lines of output

2006-06-01 Thread martin f krafft
also sprach Andreas Fester [EMAIL PROTECTED] [2006.06.01.1935 +0200]:
 log_action_msg Starting RAID devices ...

log_action_msg is supposed to be used to log an atomic message,
which is not the case the way we/you use it here.

 log_failure_msg ... done assembling RAID devices: failed.

According to /usr/share/doc/lsb-base/README.Debian, this function
does not comply with Debian policy. Whether I can actually use it or
not isn't exactly clear from the README.

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
i'd rather be riding a high speed tractor
with a beer on my lap,
and a six pack of girls next to me.




Re: LSB init scripts and multiple lines of output

2006-06-01 Thread Andreas Fester
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

martin f krafft wrote:
  also sprach Andreas Fester [EMAIL PROTECTED] [2006.06.01.1935 +0200]:
  log_action_msg Starting RAID devices ...
 
  log_action_msg is supposed to be used to log an atomic message,
  which is not the case the way we/you use it here.

why did I already expect that without having read the documentation?  :-(

module-init-tools also use the lsb functions, but they use
a raw echo to print the module list, one module per line.
Probably not the solution you are looking for ...

Another possibility:

log_begin_msg Starting RAID devices ...
log_success_msg   /dev/md0 has been started with 3 drives.
log_success_msg   /dev/md1 has been started with 3 drives.
log_failure_msg   /dev/md2 assembled from 2 drives - need all 3 to start it
log_failure_msg   /dev/md3 assembled from 1 drive - not enough to start the 
array.
log_success_msg   /dev/md4 has been started with 3 drives.
log_failure_msg ... done assembling RAID devices: failed.

but I did not have a look into the log_failure_msg and log_success_msg
constraints. I think there are no other possibilities because all
other functions internally use echo -n, and adding the line feed manually
might also not be what you want...

Regards,

Andreas

- --
Andreas Fester
mailto:[EMAIL PROTECTED]
WWW: http://www.littletux.net
ICQ: 326674288
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.3 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEfyxjZ3bQVzeW+rsRAt8qAJ4hIS/3TYtjC0UT/awzh3/TKDZPVwCggbMx
Dl+KYP3c3nYpDH+Lwxfqxt0=
=Bl++
-END PGP SIGNATURE-





Re: LSB init scripts and multiple lines of output

2006-06-01 Thread Daniel Jacobowitz
On Thu, Jun 01, 2006 at 06:51:54PM +0200, martin f krafft wrote:
 In my ideal world, this is what it would look like:
 
 Starting RAID devices ...
   /dev/md0 has been started with 3 drives.
   /dev/md1 has been started with 3 drives.
   /dev/md2 assembled from 2 drives - need all 3 to start it
   /dev/md3 assembled from 1 drive - not enough to start the array.
   /dev/md4 has been started with 3 drives.
 ... done assembling RAID devices: failed.

Have you considered:

Starting RAID device md0 ... 3 drives, done
Starting RAID device md1 ... 3 drives, done
Starting RAID device md2 ... 2/3 drives, degraded
Starting RAID device md3 ... 1/3 drives, failed
Starting RAID device md4 ... 3 drives, done

-- 
Daniel Jacobowitz
CodeSourcery





Re: LSB init scripts and multiple lines of output

2006-06-01 Thread martin f krafft
also sprach Daniel Jacobowitz [EMAIL PROTECTED] [2006.06.01.2012 +0200]:
 Starting RAID device md0 ... 3 drives, done
 Starting RAID device md1 ... 3 drives, done
 Starting RAID device md2 ... 2/3 drives, degraded
 Starting RAID device md3 ... 1/3 drives, failed
 Starting RAID device md4 ... 3 drives, done

What do people think about that?

The only problem is that I start them all at once, then parse the
output. So the above is not really true.

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
when faced with a new problem, the wise algorithmist
 will first attempt to classify it as np-complete.
 this will avoid many tears and tantrums as
 algorithm after algorithm fails.
  -- g. niruta




Re: LSB init scripts and multiple lines of output

2006-06-01 Thread martin f krafft
also sprach martin f krafft [EMAIL PROTECTED] [2006.06.01.2122 +0200]:
  Starting RAID device md0 ... 3 drives, done
  Starting RAID device md1 ... 3 drives, done
  Starting RAID device md2 ... 2/3 drives, degraded
  Starting RAID device md3 ... 1/3 drives, failed
  Starting RAID device md4 ... 3 drives, done
 
 What do people think about that?
 
 The only problem is that I start them all at once, then parse the
 output. So the above is not really true.

not true means that I am not starting md1 after md0 has started
successfully.

Right now, the script does this:

Stopping RAID devices... md6 busy; md5 busy; md3 busy; md2 busy; md0
busy; md1 busy; failed (6 busy, 1 stopped).
Starting RAID devices... md0 running; md1 running; md2 running; md3
running; md4 started (3/3); md5 running; md6 running; done (6
running, 1 started, 0 failed).

Which do people prefer? Btw: I cannot yet log which ones have been
stopped, there's no easy way to find out with only POSIX and without
/usr/bin/*.

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
what do you mean, it's not packaged in debian?




Re: LSB init scripts and multiple lines of output

2006-06-01 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

martin f krafft wrote:
 also sprach Daniel Jacobowitz [EMAIL PROTECTED] [2006.06.01.2012 +0200]:
 Starting RAID device md0 ... 3 drives, done
 Starting RAID device md1 ... 3 drives, done
 Starting RAID device md2 ... 2/3 drives, degraded
 Starting RAID device md3 ... 1/3 drives, failed
 Starting RAID device md4 ... 3 drives, done
 
 What do people think about that?
 
 The only problem is that I start them all at once, then parse the
 output. So the above is not really true.

Starting them individually just seems better IMO, more atomic.

For example, if starting md1 pukes really hard, the parent process
still exists to start md[234].

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is common sense really valid?
For example, it is common sense to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that common sense is obviously wrong.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.3 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEf0bpS9HxQb37XmcRAhoZAJ4ydDm3roD7c9iuFKach2pwMgab/gCgpvjc
HaPS22BbDcMgUKlFaEkWDbw=
=dIUY
-END PGP SIGNATURE-





Re: LSB init scripts and multiple lines of output

2006-06-01 Thread martin f krafft
also sprach Ron Johnson [EMAIL PROTECTED] [2006.06.01.2158 +0200]:
 Starting them individually just seems better IMO, more atomic.

Mh, I would have to do config file parsing in the init.d script to
figure out all available devices. mdadm already handles it; it
starts all devices that haven't been started; short of a segfault,
nothing can prevent it from starting md1 after md0 failed.

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
everyone has a little secret he keeps,
 i like the fires when the city sleeps.
  -- mc 900 ft jesus




Re: LSB init scripts and multiple lines of output

2006-06-01 Thread martin f krafft
also sprach martin f krafft [EMAIL PROTECTED] [2006.06.01.2150 +0200]:
 Stopping RAID devices... md6 busy; md5 busy; md3 busy; md2 busy; md0
 busy; md1 busy; failed (6 busy, 1 stopped).
 Starting RAID devices... md0 running; md1 running; md2 running; md3
 running; md4 started (3/3); md5 running; md6 running; done (6
 running, 1 started, 0 failed).
 
 Which do people prefer? Btw: I cannot yet log which ones have been
 stopped, there's no easy way to find out with only POSIX and without
 /usr/bin/*.

Here's the other version:

piper:~# /etc/init.d/mdadm-raid restart   [575]
Stopping RAID array md6...failed (busy).
Stopping RAID array md5...failed (busy).
Stopping RAID array md0...failed (busy).
Stopping RAID array md1...failed (busy).
Stopping RAID arrays...done (3 array(s) stopped).
Starting RAID array md0...done (already running).
Starting RAID array md1...done (already running).
Starting RAID array md2...failed (not enough devices).
Starting RAID array md3...done (started, degraded [2/3]).
Starting RAID array md4...done (started [3/3]).
Starting RAID array md5...done (already running).
Starting RAID array md6...done (already running).

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
everyone smiles as you drift past the flower
 that grows so incredibly high.
-- the beatles




Re: LSB init scripts and multiple lines of output

2006-06-01 Thread Adam Borowski
On Thu, Jun 01, 2006 at 06:51:54PM +0200, martin f krafft wrote:
 In my ideal world, this is what it would look like:
 
 Starting RAID devices ...
   /dev/md0 has been started with 3 drives.
   /dev/md1 has been started with 3 drives.
   /dev/md2 assembled from 2 drives - need all 3 to start it
   /dev/md3 assembled from 1 drive - not enough to start the array.
   /dev/md4 has been started with 3 drives.
 ... done assembling RAID devices: failed.
[...]
 Generally, I would not have a problem doing something like
 
   Starting RAID devices ... failed (see log for details).

Really, PLEASE, don't!
Hiding information is _never_ good.  For any reason.  Even if you
expect the user to be non-technical.


Let me go into a longer rant, only partially on the topic.


I witnessed a smart but non-guru user install a certain brand new
Debian-like distribution on her home box today.  That person is a
physicist doing medical research, with no sysadmin experience but
familiar with quite a bunch of Unices.
The machine was known to be good except for the disk, which in turn
was checked on an identical box before.

The trouble she faced consisted of a number of _random_ breakages
with totally no usable error messages:
* In about 1/3 of the tries, parted failed, with whatever error message
  it produced hidden by the installer's GUI.  Tell me, how was she
  supposed to figure out what was wrong?  (Skipping the fact that
  something as dangerous as parted should never be allowed to break
  in a common setup [1].)

* At some point during the main installation phase (with nothing but
  a progressbar shown), the installer coughed up and barfed a window
  full of a Python backtrace right in the user's face, then died.  I
  looked at the dialog, but couldn't figure out what went wrong,
  either.  Perhaps taking a look at the logs (HIDDEN FROM THE USER)
  would be insightful, but the friend claimed that if the
  installation is supposed to be graphical, she's not going to let me
  do everything by hand [2].

  Where's the goddamn meaningful error message?  

* Two times, a dialog simply popped up, saying: The installer has
  crashed.  Just that.  Without a single damn word about the cause,
  or even just a mention of what it was doing at the time.

  The friend muttered something about Ubuntu being as flaky as
  Windows, then rebooted and started the installation anew...

* The GUI network setup tools simply ignore any failures.  After
  choosing an SSID from a list (the list shows that the hardware and
  kernel-side stuff was working ok) then clicking Activate, the
  dialog simply shows nothing for ~20sec and then acts as if
  everything was in working order.

  What was wrong?  Can you tell me from the provided information? 
  Current vanilla text-mode Debian will give you a message after
  every action.  With Ubuntu, you need to dig deeply to force the
  system to reveal what's going on.

Finally, I had to leave; the friend hadn't succeeded yet.

[1]. A 50GB unformatted primary partition, the rest being on
extended: two 50GB unformatted ones, 50+GB ext3 on /home, 1GB swap,
7GB ext3 on /.

[2]. Get a reflex of going into mkfs/debootstrap mode at the smell of
the first installer trouble and people will start spreading evil
rumors about you :p


--End of rant--

Now, let's go into your question:
 Starting RAID devices ...
   /dev/md0 has been started with 3 drives.
   /dev/md1 has been started with 3 drives.
   /dev/md2 assembled from 2 drives - need all 3 to start it
   /dev/md3 assembled from 1 drive - not enough to start the array.
   /dev/md4 has been started with 3 drives.
 ... done assembling RAID devices: failed.

A user who has just a basic idea of what RAID is will know what's
going on instantly.  When they'll call you, you will be able to help
them right away.

 Starting RAID devices ... failed (see log for details).

Now, you'll have to explain to the user how to get to the log.  And
what if the system is inoperative (with failed RAID, it almost
certainly will be)?


You can't be too verbose, but if you hide the most important parts,
it will be a great disservice to the users.

-- 
1KB // Microsoft corollary to Hanlon's razor:
//  Never attribute to stupidity what can be
//  adequately explained by malice.





Re: LSB init scripts and multiple lines of output

2006-06-01 Thread Arjan Oosting
Op do, 01-06-2006 te 22:29 +0200, schreef martin f krafft:
 also sprach martin f krafft [EMAIL PROTECTED] [2006.06.01.2150 +0200]:
  Stopping RAID devices... md6 busy; md5 busy; md3 busy; md2 busy; md0
  busy; md1 busy; failed (6 busy, 1 stopped).
  Starting RAID devices... md0 running; md1 running; md2 running; md3
  running; md4 started (3/3); md5 running; md6 running; done (6
  running, 1 started, 0 failed).
  
  Which do people prefer? Btw: I cannot yet log which ones have been
  stopped, there's no easy way to find out with only POSIX and without
  /usr/bin/*.
 
 Here's the other version:
 
 piper:~# /etc/init.d/mdadm-raid restart   [575]
 Stopping RAID array md6...failed (busy).
 Stopping RAID array md5...failed (busy).
 Stopping RAID array md0...failed (busy).
 Stopping RAID array md1...failed (busy).
 Stopping RAID arrays...done (3 array(s) stopped).
 Starting RAID array md0...done (already running).
 Starting RAID array md1...done (already running).
 Starting RAID array md2...failed (not enough devices).
 Starting RAID array md3...done (started, degraded [2/3]).
 Starting RAID array md4...done (started [3/3]).
 Starting RAID array md5...done (already running).
 Starting RAID array md6...done (already running).

Seems OK to me. This way all boot messages look consistent.

It would be nice if log_action_end_msg supported warnings in addition
to successes and failures, so the output would clearly distinguish
a degraded array from a completely successfully started array.
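
No such helper exists in lsb-base today; as a purely hypothetical sketch
of what one might look like (the name and the "W:" prefix are invented):

```shell
#!/bin/sh
# Hypothetical log_warning_msg, in the style of log_success_msg /
# log_failure_msg from lsb-base.  Nothing like this is in the real
# library; the "W:" prefix is an arbitrary choice for this sketch.
log_warning_msg() {
    echo "W: $*"
}

log_warning_msg "Starting RAID array md3...done (started, degraded [2/3])."
```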

Greetings Arjan
 




Re: LSB init scripts and multiple lines of output

2006-06-01 Thread martin f krafft
also sprach Arjan Oosting [EMAIL PROTECTED] [2006.06.01.2307 +0200]:
 It would be nice if the log_action_end_msg would support warnings
 in addition to succes and failures, so the output would clearly
 distinguish a degraded array from a completely succesfully started
 array.

Consider filing a bug? It would be trivial, but the question is
whether we want to extend the policy in that way.

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
geld ist das brecheisen der macht.
 - friedrich nietzsche




Re: LSB init scripts and multiple lines of output

2006-06-01 Thread Matthew R. Dempsky
On Thu, Jun 01, 2006 at 06:51:31PM +0200, martin f krafft wrote:
 Any suggestions?

Submit a feature request to LSB?





Re: LSB init scripts and multiple lines of output

2006-06-01 Thread martin f krafft
also sprach Matthew R. Dempsky [EMAIL PROTECTED] [2006.06.02.0238 +0200]:
  Any suggestions?
 
 Submit a feature request to LSB?

And wait 15 years?

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
microsoft: for when quality, reliability, and security
   just aren't that important!




Re: LSB init scripts and multiple lines of output

2006-06-01 Thread Matthew R. Dempsky
On Fri, Jun 02, 2006 at 02:44:54AM +0200, martin f krafft wrote:
 also sprach Matthew R. Dempsky [EMAIL PROTECTED] [2006.06.02.0238 +0200]:
   Any suggestions?
  
  Submit a feature request to LSB?
 
 And wait 15 years?

Eh, that's only 2 or 3 Debian releases from now.

