[systemd-devel] Possible regression/ABI breakage in Xorg socket-activation support

2016-02-23 Thread Laércio de Sousa
Hi there!

Regarding https://bugs.freedesktop.org/show_bug.cgi?id=93072, I've also
observed this issue in Ubuntu 15.10 (xorg-server 1.17.2 and systemd 225)
and 16.04 alpha 2 (xorg-server 1.17.3 and systemd 229), but not in openSUSE
Leap 42.1 (xorg-server 1.17.2 and systemd 210).

Could it be a Xorg regression? A systemd regression? Or some kind of
libsystemd ABI breakage?
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] udev rules for MCS7715 USB-attached parallel port

2016-02-23 Thread Alex Henrie
2016-02-22 7:15 GMT-07:00 Lennart Poettering :
> On Sun, 21.02.16 15:26, Alex Henrie (alexhenri...@gmail.com) wrote:
>
>> Hi,
>>
>> I recently bought an MCS7715 USB-attached parallel port,[1] but there
>> seem to be a couple of problems using it with Linux:
>>
>> 1. The lp, parport, and parport_pc kernel modules are not loaded when
>> the device is plugged in.
>
> AFAIK parport_pc is the driver for old built-in parallel ports; it
> is not used if you have a USB parallel port adapter.

OK. Still, at a minimum lp needs to be loaded when the USB parallel port
adapter is plugged in.

>> 2. After manually loading the kernel modules, /dev/lp0 is not deleted
>> when the device is unplugged.
>
> /dev/lp0 is also the old built-in parallel port. USB printers and
> parallel ports show up as /dev/usb/lp0 or so.

When I plug in the device, /dev/lp0 appears; /dev/usb/lp0 does not.
Furthermore, when I unplug and plug it back in, /dev/lp1 appears
alongside /dev/lp0. A third time and I have /dev/lp0, /dev/lp1, and
/dev/lp2.

Are you sure that this is not a udev rules bug? If it isn't, where
should I report it?
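
For what it's worth, I'd expect a rule along these lines to work around
(1); the USB IDs here are from memory and would need checking:

    # /etc/udev/rules.d/99-mcs7715.rules (hypothetical)
    ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="9710", ATTRS{idProduct}=="7715", RUN+="/sbin/modprobe lp"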

-Alex
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] socket activation systemd holding a reference to an accepted socket

2016-02-23 Thread Ben Woodard
On Thu, Feb 18, 2016 at 9:08 AM, Lennart Poettering 
wrote:

> On Wed, 17.02.16 17:44, Ben Woodard (wood...@redhat.com) wrote:
>
> > Is it intentional that systemd holds a reference to a socket it has
> > just accepted even though it just handed the open socket over to a
> > socket-activated service that it has just started?
>
> Yes, we do that currently. While there's currently no strict reason to
> do this, I think we should continue to do so, as there was always the
> plan to provide a bus interface so that services can rerequest their
> activation and fdstore fds at any time. Also, for the listening socket
> case (i.e. the Accept=no case) we do the same and have to, hence it kind
> of makes sure we expose the same behaviour here on all kinds of
> sockets.
>

I'm having trouble believing that consistency of behavior is an ideal to
strive for in this case. Consistency with xinetd's behavior would seem to
be a better benchmark here.

And I'm not sure that I understand the value of being able to rerequest
fdstore fds. To me this sounds like it would be a very rarely used
feature. Could this be made an option that is only enabled when you add
the bus service that allows a service to rerequest its activation and
fdstore fds?



> Did you run into problems with this behaviour?
>

Oh yes. It breaks a large number of management tools that we use to do
various things on clusters. They rely on a kind of pseudo-concurrency.
Think of it like this:

for node in $compute_nodes; do
    rsh "$node" daemonize quick-but-not-instant-task
done

With xinetd the daemonization would close the accepted socket.  Then the
loop would nearly instantly move on to the next node. We could zip
through 8000 nodes in a couple of seconds.

With systemd holding onto the socket, the "rsh" hangs until the
quick-but-not-instant-task completes. This causes the loop to take
anywhere from 45 minutes to several hours. Because it isn't really rsh
and daemonize (I just used those to make it easy to understand what is
going on), rewriting several of our tools is non-trivial and would end up
violating all sorts of implicit logical layering within the tools and
libraries that we use to build them.

Where is this in the source code? I've been planning to send you a patch to
change the behavior but I haven't quite connected the dots from where a job
within a transaction is inserted on the run queue to where the fork for the
"ExecStart" actually happens.

-ben
Red Hat Inc.


> Lennart
>
> --
> Lennart Poettering, Red Hat
>
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] [HEADS-UP] resolved APIs fully documented

2016-02-23 Thread Lennart Poettering
Heya,

just wanted to let you know that I finished documenting resolved's bus
APIs:

https://wiki.freedesktop.org/www/Software/systemd/resolved/
https://wiki.freedesktop.org/www/Software/systemd/writing-network-configuration-managers/
https://wiki.freedesktop.org/www/Software/systemd/writing-resolver-clients/

Enjoy!

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Moving systemd-bootchart to a standalone repository

2016-02-23 Thread Lennart Poettering
On Mon, 22.02.16 10:45, Kok, Auke-jan H (auke-jan.h@intel.com) wrote:

> On Wed, Feb 17, 2016 at 4:49 PM, Zbigniew Jędrzejewski-Szmek
>  wrote:
> > On Wed, Feb 17, 2016 at 09:17:51AM -0800, Kok, Auke-jan H wrote:
> >> Splitting it out increases that potential and will allow
> >> systemd-bootchart to evolve out of cycle again, and look a bit over
> >> the fence. I've reviewed most of the changes to it, and noticed a bit
> >> of a drop in risky commits, and those are the ones that are going to
> >> be needed for this project to make it a useful tool in the future.
> >>
> >> So, I think this is a great move, one that certainly will motivate me
> >> to engage more deeply again :)
> > Hi Auke,
> >
> > what kind of big changes would you have in mind?
> > Just the fact of being in one repo with systemd should not have
> > much effect on changes to bootchart which is mostly standalone...
> 
> I have been asked on several occasions to make bootchart more
> palatable for other OS's, including ChromeOS and even Android. This
> has previously been shelved entirely, and I don't know if it's
> currently even feasible, but at least logistically it should be a lot
> easier to attempt.
> 
> The rest of the things I was looking at were items that being in-tree
> did not affect, like new ways of grouping the process bars, better IO
> visualization, etc.

Auke, could you have a look at
https://github.com/systemd/systemd/pull/2664 and give your blessing? 

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Support for large applications

2016-02-23 Thread Lennart Poettering
On Fri, 19.02.16 15:13, Tomasz Torcz (to...@pipebreaker.pl) wrote:

> On Fri, Feb 19, 2016 at 12:49:53PM +, Zbigniew Jędrzejewski-Szmek wrote:
> > On Fri, Feb 19, 2016 at 01:42:12PM +0100, Michal Sekletar wrote:
> > > On Wed, Feb 17, 2016 at 1:35 PM, Avi Kivity  wrote:
> > > 
> > > > 3. watchdog during startup
> > > >
> > > > Sometimes we need to perform expensive operations during startup (log
> > > > replay, rebuild from network replica) before we can start serving. Rather
> > > > than configure a huge start timeout, I'd prefer to have the service report
> > > > progress to systemd so that it knows that startup is still in progress.
> > > >
> > > 
> > > Did you have a look at sd_notify (man 3 sd_notify)? Basically, you can
> > > easily patch your service to report status to systemd and tell
> > > systemd exactly when it is ready to serve clients. Thus you can
> > > avoid hacks like the huge start timeout you've mentioned.
> > 
> > I don't think that helps, unless the service "lies" to systemd and
> > tells it that it has finished startup when it really hasn't (systemd would
> > ignore watchdog notifications during startup, and would do nothing if
> > they stopped coming, so the service has to tell systemd first that it
> > has started successfully, for the watchdog to be effective). Doing
> > that would fix this issue, but would mean that systemd wouldn't know
> > that the service is still starting and would for example start
> > subsequent jobs.
> > 
> > I don't think there's a way around the issue short of allowing
> > watchdog during startup. Databases which do long recovery are a bit
> > special; most programs don't exhibit this kind of behaviour, but maybe
> > this case is important enough to add support for it.
> 
>   Maybe systemd could ignore watchdog notifications during startup UNTIL the
> first WATCHDOG=1 notification comes? Then normal watchdog logic would kick in.
>   This way we retain the current logic (ignore watchdog during startup) unless
> the application WANTS to tickle the watchdog during startup.
>   Or is it too much magic?

Well, this means the time until this first WATCHDOG=1 is sent is
unprotected by the watchdog stuff. I am pretty sure this should be an
explicit configuration option, to close this gap.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] socket activation systemd holding a reference to an accepted socket

2016-02-23 Thread Lennart Poettering
On Wed, 17.02.16 17:44, Ben Woodard (wood...@redhat.com) wrote:

> Is it intentional that systemd holds a reference to a socket it has
> just accepted even though it just handed the open socket over to a
> socket-activated service that it has just started?

Yes, we do that currently. While there's currently no strict reason to
do this, I think we should continue to do so, as there was always the
plan to provide a bus interface so that services can rerequest their
activation and fdstore fds at any time. Also, for the listening socket
case (i.e. the Accept=no case) we do the same and have to, hence it kind
of makes sure we expose the same behaviour here on all kinds of
sockets.
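
In unit-file terms the two flavours look roughly like this (the port
number is just an example):

    # Accept=yes: systemd accept()s each connection itself and spawns
    # one service instance per connection, passing in the connected socket
    [Socket]
    ListenStream=10000
    Accept=yes

    # Accept=no (the default): the service receives the listening
    # socket and does the accept()ing on its own
    [Socket]
    ListenStream=10000
    Accept=no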

Did you run into problems with this behaviour?

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] test program won't stop after hitting MemoryLimit

2016-02-23 Thread Lennart Poettering
On Thu, 18.02.16 07:05, jeremy_f...@dell.com (jeremy_f...@dell.com) wrote:

> Dell - Internal Use - Confidential

Really? You sent this to a public mailing list...

>  Hi All:
> 
> I am trying to test systemd resource control by setting
> MemoryLimit= on my Debian system. Unfortunately it doesn't work in
> my testing. Maybe I am not configuring it right. Please let me know how
> to fix this.

Linux has an overcommitting memory manager. MemoryLimit= operates on
actual memory usage. With malloc() you only allocate address space,
however. It basically just gives you an address range and the promise
to keep memory around should you ever actually store something in
it. In your test program you never do that, however, hence the
allocation will never be backed by real memory.

Consider invoking memset() on the memory you allocate, so that it is
actually touched.
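
I.e. a minimal test along these lines, run under a unit with
MemoryLimit= set, should actually trip the limit:

    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
            size_t size = 1024UL * 1024 * 1024;   /* 1 GiB of address space */
            char *p = malloc(size);

            if (!p)
                    return 1;

            /* Touching the memory forces the kernel to actually back it,
             * so only now does it count against MemoryLimit=. */
            memset(p, 0xff, size);

            pause();                              /* stay around to observe */
            return 0;
    }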


Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] New "ubuntu-ci" integration tests are being added to PRs

2016-02-23 Thread Lennart Poettering
On Thu, 18.02.16 09:01, Martin Pitt (martin.p...@ubuntu.com) wrote:

> Hello all,
> 
> you might already have noticed, but from now on PRs will not only
> trigger the semaphore checks (which are essentially a "make
> distcheck"), but also trigger more comprehensive integration tests on
> Ubuntu's autopkgtest infrastructure. These build actual binary .deb
> packages, install them in an actual OS, and cover things like
> networkd, bootchart, checks that crucial services like NetworkManager
> or the window manager come up, timedatectl and friends, cryptsetup,
> systemd-sysv-install (with various combinations of SysV+systemd
> units), and the "boot-smoke" test where the whole thing has to boot
> successfully 20 times [2].
> 
> This new test can be seen at e.g.
> https://github.com/systemd/systemd/pull/2641 or /2650, which now also
> have a second "ubuntu-ci" test.
> 
> I now wrote all the glue between github and autopkgtest.ubuntu.com,
> running into gory details like firewall issues, three different ways
> to authenticate, and my own hilarious incompetence when it comes to
> web programming (but I'm learning :) ). Thanks to Daniel for his great
> help with kickstarting me into GitHub webhooks!
> 
> However, an awful lot of the runs currently fail with a linker error.
> I filed [2] and will investigate.
> 
> So please don't pay too much attention to these results yet. I want
> to enable them to see how the testing and communication holds up in
> practice, but before this we definitely need to sort out [2] first.

Excellent work! Thanks a lot to Daniel and you for setting this up! Thanks!

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Moving systemd-bootchart to a standalone repository

2016-02-23 Thread Lennart Poettering
On Wed, 17.02.16 17:03, Jóhann B. Guðmundsson (johan...@gmail.com) wrote:

> 
> 
> On 02/17/2016 04:51 PM, Daniel Mack wrote:
> >Hey,
> >
> >[I've put all people in Cc who have had more than one commit related to
> >systemd-bootchart in the past]
> >
> >As part of our spring cleaning, we've been thinking about giving
> >systemd-bootchart a new home, in a new repository of its own. I've been
> >working on this and put the result here:
> 
> What's the reason for splitting it out into its own repository, and what
> are the criteria you used to determine that, which may or may not be
> applicable to other bits of systemd?

We have been discussing splitting this out for a while, and the lines
are a bit blurry. In the case of bootchart a couple of different
factors came into play: the fact that it's still relatively loosely
intertwined with the rest of the codebase, and that it's more of a
"debugging" tool than core functionality. Also, one of the major
issues was that we had trouble testing this, simply because distro kernels
usually don't turn on the necessary kernel features to make it work
(Fedora for example turns this off, because of the performance
penalty). It's primarily an exercise in lowering the maintenance burden
of core systemd, as bootchart (in particular because it's more of a
debugging tool) doesn't really require the constant love and strict
release cycle that the rest of the project does.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Support for large applications

2016-02-23 Thread Lennart Poettering
On Wed, 17.02.16 20:47, Avi Kivity (a...@scylladb.com) wrote:

> btw I hope that with this change the service is only restarted after the
> dump is complete, or OOM is likely.

Yeah, the crashed process actually stays around as long as we don't
close the pipe the kernel passes to us, to which the coredump
serialization is written. We use that to read additional process
metadata from /proc/$PID. But this also means that systemd's service
logic will still see the process around the whole time the
coredump is being generated.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Regression in ipv6 resolutions in systemd-resolved with AF_UNSPEC

2016-02-23 Thread Lennart Poettering
On Fri, 19.02.16 15:43, Sébastien Luttringer (se...@seblu.net) wrote:

> Hello,
> 
> Since systemd v229, I have one server which no longer resolves IPv6 addresses
> when it uses nss-resolve and AF_UNSPEC.
> 
> This issue seems to be linked to the DNS resolver used on its network. This
> resolver is provided by a French ISP box (SFR).
> 
> I'm currently not able to understand precisely where the issue is, but
> resolving with AF_UNSPEC does not return IPv6 addresses while AF_INET6
> does.

Note that resolved will not look up IPv6 addresses if this isn't
explicitly requested and there are no local routable IPv6 addresses
configured. And vice versa, it won't look up IPv4 addresses if this
isn't explicitly requested and there are no local routable IPv4
addresses configured. Basically, when doing lookups without specifying
what you want, we'll return something that you can actually talk
to. If during resolving you specify what you want, however, then we'll
actually return that.
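
The difference should be easy to see with a plain getaddrinfo() loop,
e.g. with a rough sketch like this (example.org standing in for the
real name):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    /* Resolve a name with the given address family, print the results. */
    static void lookup(const char *name, int family) {
            struct addrinfo hints, *res, *r;
            char buf[64];

            memset(&hints, 0, sizeof(hints));
            hints.ai_family = family;            /* AF_UNSPEC or AF_INET6 */
            hints.ai_socktype = SOCK_STREAM;

            if (getaddrinfo(name, NULL, &hints, &res) != 0) {
                    printf("%s: lookup failed\n", name);
                    return;
            }

            for (r = res; r; r = r->ai_next)
                    if (getnameinfo(r->ai_addr, r->ai_addrlen, buf, sizeof(buf),
                                    NULL, 0, NI_NUMERICHOST) == 0)
                            printf("%s\n", buf);

            freeaddrinfo(res);
    }

    int main(void) {
            lookup("example.org", AF_UNSPEC);    /* filtered by routability */
            lookup("example.org", AF_INET6);     /* explicit: AAAA returned */
            return 0;
    }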

What precisely does your IP configuration look like? Do you use
per-interface DNS servers (i.e. configured via networkd), or do you
have global DNS servers configured via /etc/resolv.conf or via DNS= in
/etc/systemd/resolved.conf?

If you use per-interface DNS servers, do you have a routable IPv6
address on that interface? If you use global DNS servers instead, do
you have any routable IPv6 address on any interface?

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [RFC] the chopping block

2016-02-23 Thread Lennart Poettering
On Thu, 18.02.16 20:33, Mike Gilbert (flop...@gentoo.org) wrote:

> On Thu, Feb 11, 2016 at 12:06 PM, Lennart Poettering
>  wrote:
> > 1) systemd-initctl (i.e. the /dev/initctl SysV compat support). Last
> > time I checked, Debian was still using that; maybe this has changed now?
> 
> Gentoo allows switching between systemd and openrc (sysvinit) at boot
> time, and will continue to do so for the foreseeable future.
> 
> By default, sysvinit owns /sbin/initctl, /sbin/halt, etc. Users may
> swap these to symlinks to systemd and systemctl by setting a USE flag,
> but if they do so they knowingly lose the ability to switch back to
> openrc without a rebuild of the affected packages.
> 
> I would like to selfishly request that you keep this interface around
> as long as possible; if you remove it I will have to come up with some
> replacement.

So here's probably what is going to happen.

The initctl support in systemctl will be dropped and replaced by some
callout script support, i.e. when you want to use systemctl to reboot
a sysvinit system, then systemctl won't do that anymore, but it will
invoke some shell script as a fallback, where distros can place the
necessary commands if they care about this. This follows how we do
sysv script enable/disable handling (i.e. the chkconfig hookup).

We'll eventually kill /dev/initctl support. Distros should really find
their own replacement for this. They can either take the current code,
build it externally, or write some new code. You might be able to
implement this in a carefully prepared shell script that invokes
busctl to do the reboot. You could use "dd" to read the initreq
structure from STDIN with the precise size, then figure out which kind
of request it is (again, by using dd to extract the four bytes
starting at index 4 of that request structure) and then simply execute
the right busctl command to poweroff/reboot/...
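
For illustration, here's the same idea sketched in C rather than shell
(struct layout abridged from sysvinit's initreq.h; an untested sketch):

    #include <unistd.h>

    /* Abridged from sysvinit's initreq.h; the request is 384 bytes. */
    #define INIT_MAGIC      0x03091969
    #define INIT_CMD_RUNLVL 1

    struct init_request {
            int magic;        /* must be INIT_MAGIC */
            int cmd;          /* the four bytes at offset 4 mentioned above */
            int runlevel;     /* runlevel character, e.g. '0' or '6' */
            int sleeptime;
            char pad[368];    /* a union in the real header */
    };

    int main(void) {
            struct init_request req;

            if (read(STDIN_FILENO, &req, sizeof(req)) != (ssize_t) sizeof(req))
                    return 1;
            if (req.magic != INIT_MAGIC || req.cmd != INIT_CMD_RUNLVL)
                    return 1;

            if (req.runlevel == '0')
                    execlp("systemctl", "systemctl", "poweroff", (char *) NULL);
            else if (req.runlevel == '6')
                    execlp("systemctl", "systemctl", "reboot", (char *) NULL);

            return 1;
    }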

I'll not drop this right away, but let's say in 6 months or so this will
go away. This should be an ample heads-up to find a replacement and
prepare for this.

Thanks,

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Moving systemd-bootchart to a standalone repository

2016-02-23 Thread Lennart Poettering
On Thu, 18.02.16 11:32, Daniel Mack (dan...@zonque.org) wrote:

> On 02/17/2016 08:02 PM, Umut Tezduyar Lindskog wrote:
> > Hi,
> > 
> > src/shared & src/basic have very useful code that upstream have been
> > static linking to most binaries. My understanding is that we haven’t
> > been feeling comfortable about the API to make these paths a
> > standalone library (or include them in libsystemd).
> 
> That's correct.
> 
> > Now that we have started duplicating the code outside of the systemd main
> > repo, wouldn’t it be wise to make it a library, even if it was
> > something like libsystemd_onlyandonlyinternal.so?
> > 
> > For people who can follow upstream’s speed and catch up with API
> > changes we would gain:
> 
> I see your point, and that's one reason why we are not splitting out
> more packages. Downstream deviation would be cumbersome to handle, and
> providing API/ABI stability for a library is considered outside of the
> scope of the systemd project. And without this guarantee, things will
> break all the time, so that's not a win.
> 
> In the case of bootchart, however, I believe the amount of code this small
> tool shares with the rest of systemd (from src/shared and src/basic) is
> small enough to justify an exception. And things like lists, hashmaps
> and trivial file parsers could eventually even be solved differently,
> with other libraries or whatever, if the maintainer decides so.
> 
> Auke, did you have a look at the current code base of the standalone
> repo? Does it look feasible to you?

I'd be willing to explore the idea of making src/basic a somewhat
self-contained dir that could be imported as a git submodule, like gnulib
(as suggested by Armin), by other packages. Key would be that they
pin a specific revision though, as we'd not provide API compat
for this.

Of course, we should do so only if there are actually projects IRL
that are interested in this.

I am fine with making the code in src/basic more reusable; I am not
very keen on establishing a fixed API for it, though.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Support for large applications

2016-02-23 Thread Lennart Poettering
On Wed, 17.02.16 14:35, Avi Kivity (a...@scylladb.com) wrote:

> We are using systemd to supervise our NoSQL database and are generally
> happy.

Thank you for the feedback! We are always interested in good feedback
like yours.

> A few things will help even more:
> 
> 1. log core dumps immediately rather than after the dump completes
> 
> A database will often consume all memory on the machine; dumping 120GB can
> take a lot of time, especially if compression is enabled. As the situation
> is now, there is a period of time where it is impossible to know what is
> happening.
> 
> (I saw that 229 improves core dumps, but did not see this
> specifically)

With 229 the coredump hook will collect a bit of information and then
pass things off (including the pipe the coredump is streamed in on) to
a mini service that then processes the crash, extracts the stacktrace
and writes it to disk. This means you should see the coredump
processing as a normal service in "systemctl" and "systemd-cgtop" and
similar tools. You should see normal logs about this service being
started now, and you can do resource management on it.
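
For example, assuming the processing happens in the
systemd-coredump@.service template on your version, a drop-in along
these lines would apply resource limits to it:

    # /etc/systemd/system/systemd-coredump@.service.d/limits.conf
    [Service]
    Nice=10
    CPUQuota=50%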

> 2. parallel compression of core dumps
> 
> As well as consuming all of memory, we also consume all cpus.  Once we dump
> core we may as well use those cores for compressing the huge dump.

We get the stuff via a pipe from the kernel. I am not sure whether gz
or lz4 can distribute work on multiple CPUs if the data is flowing in
strictly sequentially and there's no random access to the input data.

But if the compressors support that then we should definitely make use
of it!

> 3. watchdog during startup
> 
> Sometimes we need to perform expensive operations during startup (log
> replay, rebuild from network replica) before we can start serving. Rather
> than configure a huge start timeout, I'd prefer to have the service report
> progress to systemd so that it knows that startup is still in
> progress.

Interesting. How precisely would you suggest this look? I mean,
you say "report progress", does this mean you want a textual string
like "STATUS=" in sd_notify() – which you already have really? Or do
you mean behaviour like the existing "WATCHDOG=1" logic, i.e. that
start-up is aborted if the keep-alive messages are missing?

I think adding a WatchdogMode= setting that allows optional
configuration to require regular WATCHDOG=1 notifications even in the
start and stop phase of a service certainly makes sense, if that's
what you are asking for.
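
For illustration, a service doing long recovery might then look roughly
like this (WatchdogMode= is hypothetical at this point; the sd_notify()
calls are not; build with $(pkg-config --cflags --libs libsystemd)):

    #include <systemd/sd-daemon.h>

    /* Placeholders standing in for the real recovery work. */
    static int recovery_done(void) { return 1; }
    static void replay_some_log(void) {}

    int main(void) {
            /* With a (hypothetical) WatchdogMode= requiring keep-alives
             * during start-up, these would be honoured before READY=1. */
            while (!recovery_done()) {
                    replay_some_log();
                    sd_notify(0, "WATCHDOG=1\nSTATUS=Replaying log...");
            }

            sd_notify(0, "READY=1\nSTATUS=Serving clients");
            /* ... main loop, pinging WATCHDOG=1 within WatchdogSec= ... */
            return 0;
    }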

> Hope this is useful,

Yes, it is! Thanks!

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Support for large applications

2016-02-23 Thread Lennart Poettering
On Fri, 19.02.16 12:49, Zbigniew Jędrzejewski-Szmek (zbys...@in.waw.pl) wrote:

> On Fri, Feb 19, 2016 at 01:42:12PM +0100, Michal Sekletar wrote:
> > On Wed, Feb 17, 2016 at 1:35 PM, Avi Kivity  wrote:
> > 
> > > 3. watchdog during startup
> > >
> > > Sometimes we need to perform expensive operations during startup (log
> > > replay, rebuild from network replica) before we can start serving. Rather
> > > than configure a huge start timeout, I'd prefer to have the service report
> > > progress to systemd so that it knows that startup is still in progress.
> > >
> > 
> > Did you have a look at sd_notify (man 3 sd_notify)? Basically, you can
> > easily patch your service to report status to systemd and tell
> > systemd exactly when it is ready to serve clients. Thus you can
> > avoid hacks like the huge start timeout you've mentioned.
> 
> I don't think that helps, unless the service "lies" to systemd and
> tells it that it has finished startup when it really hasn't (systemd would
> ignore watchdog notifications during startup, and would do nothing if
> they stopped coming, so the service has to tell systemd first that it
> has started successfully, for the watchdog to be effective). Doing
> that would fix this issue, but would mean that systemd wouldn't know
> that the service is still starting and would for example start
> subsequent jobs.
> 
> I don't think there's a way around the issue short of allowing
> watchdog during startup. Databases which do long recovery are a bit
> special; most programs don't exhibit this kind of behaviour, but maybe
> this case is important enough to add support for it.

Yeah, see my other mail. I am open to optionally requiring watchdog
notifications even during the start and stop operations.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] udev rules for MCS7715 USB-attached parallel port

2016-02-23 Thread Lennart Poettering
On Sun, 21.02.16 15:26, Alex Henrie (alexhenri...@gmail.com) wrote:

> Hi,
> 
> I recently bought an MCS7715 USB-attached parallel port,[1] but there
> seem to be a couple of problems using it with Linux:
> 
> 1. The lp, parport, and parport_pc kernel modules are not loaded when
> the device is plugged in.

AFAIK parport_pc is the driver for old built-in parallel ports; it
is not used if you have a USB parallel port adapter.

> 2. After manually loading the kernel modules, /dev/lp0 is not deleted
> when the device is unplugged.

/dev/lp0 is also the old built-in parallel port. USB printers and
parallel ports show up as /dev/usb/lp0 or so.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Bootchart speeding up boot time

2016-02-23 Thread Martin Townsend
I'm pretty sure they are; they are part of the Xilinx Zynq SoC platform.
From their specs:
  32 KB Level 1 4-way set-associative instruction and data caches
  (independent for each CPU)
  512 KB 8-way set-associative Level 2 cache (shared between the CPUs)

Good idea on disabling a core; this could then prove or disprove my first
theory. A bit of googling tells me that there's a kernel boot arg 'nosmp',
so I'll give this a try.

Cheers, Martin.


On Tue, Feb 23, 2016 at 3:33 PM, Umut Tezduyar Lindskog 
wrote:

> On Mon, Feb 22, 2016 at 8:51 PM, Martin Townsend
>  wrote:
> > Hi,
> >
> > Thanks for your reply.  I wouldn't really call this system stripped
> > down; it has an nginx webserver, DHCP server, postgresql-server, sftp
> > server, a few mono (C#) daemons running, loads quite a few kernel
> > modules during boot, dbus, sshd, avahi, and a bunch of other stuff I
> > can't quite remember.  I would imagine glibc will be a tiny portion
> > of what gets loaded during boot.
> > I have another ARM system which has a similar boot time with systemd;
> > it's only a single Cortex-A9 core, running a newer 4.1 kernel with a
> > newer version of systemd as it's built with the Jethro version of
> > Yocto, so probably a newer version of glibc, and this doesn't speed up
> > when using bootchart, and in fact slows down slightly (which is what I
> > would expect). So my current thinking is that it's either down to the
> > fact that it's a dual core and only one core is being used during boot
> > unless a fork/execl occurs, or it's down to the newer
> > kernel/systemd/glibc or some other component.
>
> Are you sure both cores have the same speed and the same size of L1
> data & instruction cache?
> You could try to force the OS to run systemd on the first core by A)
> making the second one unavailable, or B) playing with control groups and
> pinning systemd to the first core.
>
> Umut
>
> >
> > Is there any way of seeing what the CPU usage for each core is for
> > systemd on boot without using bootchart? Then I can rule in/out the
> > first idea.
> >
> > Many Thanks,
> > Martin.
> >
> >
> > On Mon, Feb 22, 2016 at 6:52 PM, Kok, Auke-jan H <auke-jan.h@intel.com> wrote:
> >>
> >> On Fri, Feb 19, 2016 at 7:15 AM, Martin Townsend
> >>  wrote:
> >> > Hi,
> >> >
> >> > I'm new to systemd and have just enabled it for my Xilinx based
> >> > dual core Cortex-A9 platform.  The Linux system is built using
> >> > Yocto (Fido branch), which is using version 219 of systemd.
> >> >
> >> > The main reason for moving over to systemd was to see if we could
> >> > improve boot times, and the good news was that by just moving over
> >> > to systemd we halved the boot time.  So I read that I could analyse
> >> > the boot times in detail using bootchart, so I set init=//bootchart
> >> > in my kernel command line and was really surprised to see my boot
> >> > time halved again.  Thinking some weird caching must have occurred
> >> > on the first boot, I reverted back to a normal systemd boot and the
> >> > boot time jumped back to normal (around 17/18 seconds); putting
> >> > bootchart back in again reduced it to ~9/10 seconds.
> >> >
> >> > So I created my own init using bootchart as a template that just
> >> > slept for 20 seconds using nanosleep, and this also had the same
> >> > effect of speeding up the boot time.
> >> >
> >> > So the only difference I can see is that the kernel is not starting
> >> > /sbin/init -> /lib/systemd/systemd directly but via another program
> >> > that is performing a fork and then, in the parent, an execl to run
> >> > /lib/systemd/systemd.  What I would really like to understand is
> >> > why it runs faster when started this way?
> >>
> >>
> >> systemd-bootchart is a dynamically linked binary. In order for it to
> >> run, it needs to dynamically link and load much of glibc into memory.
> >>
> >> If your system is really stripped down, then the portion of data
> >> that's loaded from disk that is coming from glibc is relatively large,
> >> as compared to the rest of the system. In an absolute minimal system,
> >> I expect it to be well over 75% of the total data loaded from disk.
> >>
> >> It seems in your system, glibc is about 50% of the stuff that needs to
> >> be paged in from disk, hence, by starting systemd-bootchart before
> >> systemd, you've "removed" 50% of the total data to be loaded from the
> >> vision of bootchart, since, bootchart cannot start logging data until
> >> it's loaded all those glibc bits.
> >>
> >> Ultimately, your system isn't likely booting faster, you're just
> >> forcing it to load glibc before systemd starts.
> >>
> >> systemd-analyze may actually be a much better way of looking at the
> >> problem: it reports CLOCK_MONOTONIC timestamps for the various parts
> >> involved, including, possibly, firmware, kernel time, etc.. In
> >> conjunction with bootchart, this should give a full picture.
> >>
> >> Auke

Re: [systemd-devel] Support for large applications

2016-02-23 Thread Umut Tezduyar Lindskog
On Wed, Feb 17, 2016 at 1:35 PM, Avi Kivity  wrote:
> We are using systemd to supervise our NoSQL database and are generally
> happy.
>
> A few things will help even more:
>
> 1. log core dumps immediately rather than after the dump completes
>
> A database will often consume all memory on the machine; dumping 120GB can
> take a lot of time, especially if compression is enabled. As the situation
> is now, there is a period of time where it is impossible to know what is
> happening.
>
> (I saw that 229 improves core dumps, but did not see this specifically)
>
> 2. parallel compression of core dumps
>
> As well as consuming all of memory, we also consume all cpus.  Once we dump
> core we may as well use those cores for compressing the huge dump.
>
> 3. watchdog during startup
>
> Sometimes we need to perform expensive operations during startup (log
> replay, rebuild from network replica) before we can start serving. Rather
> than configure a huge start timeout, I'd prefer to have the service report
> progress to systemd so that it knows that startup is still in progress.

Hi. Similar topic from the past -
https://lists.freedesktop.org/archives/systemd-devel/2015-March/028919.html.
Though, I believe this is more of an architectural problem than a real
necessity.

Umut

>
> Hope this is useful,
>
> Avi
> ___
> systemd-devel mailing list
> systemd-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/systemd-devel
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Bootchart speeding up boot time

2016-02-23 Thread Umut Tezduyar Lindskog
On Mon, Feb 22, 2016 at 8:51 PM, Martin Townsend
 wrote:
> Hi,
>
> Thanks for your reply.  I wouldn't really call this system stripped down, it
> has an nginx webserver, DHCP server, postgresql-server, sftp server, a few
> mono (C#) daemons running, loads quite a few kernel modules during boot,
> dbus, sshd, avahi, and a bunch of other stuff I can't quite remember.  I
> would imagine glibc will be a tiny portion of what gets loaded during boot.
> I have another arm system which has a similar boot time with systemd, it's
> only a single cortex A9 core, it's running a newer 4.1 kernel with a new
> version of systemd as it's built with the Jethro version of Yocto so
> probably a newer version of glibc and this doesn't speed up when using
> bootchart and in fact slows down slightly (which is what I would expect).
> So my current thinking is that it's either down to the fact that it's a
> dual core and only one core is being used during boot unless a fork/execl
> occurs? Or it's down to the newer kernel/systemd/glibc or some other
> component.

Are you sure both cores have the same speed and the same size of L1
data & instruction cache?
You could try to force the OS to run systemd on the first core by A)
making the second one unavailable, or B) playing with control groups and
pinning systemd to the first core.

Umut

>
> Is there any way of seeing what the CPU usage for each core is for systemd on
> boot without using bootchart? Then I can rule in/out the first idea.
>
> Many Thanks,
> Martin.
>
>
> On Mon, Feb 22, 2016 at 6:52 PM, Kok, Auke-jan H 
> wrote:
>>
>> On Fri, Feb 19, 2016 at 7:15 AM, Martin Townsend
>>  wrote:
>> > Hi,
>> >
>> > I'm new to systemd and have just enabled it for my Xilinx based
>> > dual core Cortex-A9 platform.  The Linux system is built using
>> > Yocto (Fido branch), which is using version 219 of systemd.
>> >
>> > The main reason for moving over to systemd was to see if we could
>> > improve boot times, and the good news was that by just moving over
>> > to systemd we halved the boot time.  So I read that I could analyse
>> > the boot times in detail using bootchart, so I set init=//bootchart
>> > in my kernel command line and was really surprised to see my boot
>> > time halved again.  Thinking some weird caching must have occurred
>> > on the first boot, I reverted back to a normal systemd boot and the
>> > boot time jumped back to normal (around 17/18 seconds); putting
>> > bootchart back in again reduced it to ~9/10 seconds.
>> >
>> > So I created my own init using bootchart as a template that just
>> > slept for 20 seconds using nanosleep, and this also had the same
>> > effect of speeding up the boot time.
>> >
>> > So the only difference I can see is that the kernel is not starting
>> > /sbin/init -> /lib/systemd/systemd directly but via another program
>> > that is performing a fork and then, in the parent, an execl to run
>> > /lib/systemd/systemd.  What I would really like to understand is
>> > why it runs faster when started this way?
>>
>>
>> systemd-bootchart is a dynamically linked binary. In order for it to
>> run, it needs to dynamically link and load much of glibc into memory.
>>
>> If your system is really stripped down, then the portion of data
>> that's loaded from disk that is coming from glibc is relatively large,
>> as compared to the rest of the system. In an absolute minimal system,
>> I expect it to be well over 75% of the total data loaded from disk.
>>
>> It seems in your system, glibc is about 50% of the stuff that needs to
>> be paged in from disk, hence, by starting systemd-bootchart before
>> systemd, you've "removed" 50% of the total data to be loaded from the
>> vision of bootchart, since, bootchart cannot start logging data until
>> it's loaded all those glibc bits.
>>
>> Ultimately, your system isn't likely booting faster, you're just
>> forcing it to load glibc before systemd starts.
>>
>> systemd-analyze may actually be a much better way of looking at the
>> problem: it reports CLOCK_MONOTONIC timestamps for the various parts
>> involved, including, possibly, firmware, kernel time, etc.. In
>> conjunction with bootchart, this should give a full picture.
>>
>> Auke
>
>
>
> ___
> systemd-devel mailing list
> systemd-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/systemd-devel
>
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Bootchart speeding up boot time

2016-02-23 Thread Martin Townsend
I'm using a physical stopwatch and running it from the moment U-Boot hands
over until I get a prompt, so I'm not taking any timing information from
systemd or even the system itself.  I'm sure that glibc does indeed take
some time to load into memory, but I can't see it being the culprit of an
8-9 second difference.  Even without a stopwatch you can easily see the
speed difference when it boots by the speed at which the systemd messages
appear, so I think it's something more fundamental.  Or am I missing
something here?

Cheers, Martin.

On Mon, Feb 22, 2016 at 9:20 PM, Kok, Auke-jan H 
wrote:

> On Mon, Feb 22, 2016 at 11:51 AM, Martin Townsend
>  wrote:
> > Hi,
> >
> > Thanks for your reply.  I wouldn't really call this system stripped
> > down; it has an nginx webserver, DHCP server, postgresql-server, sftp
> > server, a few mono (C#) daemons running, loads quite a few kernel
> > modules during boot, dbus, sshd, avahi, and a bunch of other stuff I
> > can't quite remember.  I would imagine glibc will be a tiny portion
> > of what gets loaded during boot.
> > I have another ARM system which has a similar boot time with systemd;
> > it's only a single Cortex-A9 core, running a newer 4.1 kernel with a
> > newer version of systemd as it's built with the Jethro version of
> > Yocto, so probably a newer version of glibc, and this doesn't speed up
> > when using bootchart, and in fact slows down slightly (which is what I
> > would expect). So my current thinking is that it's either down to the
> > fact that it's a dual core and only one core is being used during boot
> > unless a fork/execl occurs, or it's down to the newer
> > kernel/systemd/glibc or some other component.
> >
> > Is there any way of seeing what the CPU usage for each core is for
> > systemd on boot without using bootchart? Then I can rule in/out the
> > first idea.
>
> Not that I know of, but, to work around the issue of dynamic linking,
> one can link systemd-bootchart statically. It'll become larger, but
> you can then clearly ascertain that the impact of glibc bits being loaded
> is properly recorded by bootchart. And it's fairly trivial to link it
> statically.
>
> Auke
>
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel