[systemd-devel] [PATCH] tree-wide usage of %m specifier instead of strerror(errno)

2013-11-26 Thread Daniel Buch
Also for log_error(), except where a specific error string is specified,

e.g. errno ? strerror(errno) : "Some user specified message"
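
For reference, a minimal standalone sketch (glibc-specific, invented for
illustration and not part of this patch) showing that the %m conversion
expands to strerror(errno) inside the printf()/log_*() format machinery:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
            errno = EACCES;
            /* classic form: pass strerror(errno) as an explicit argument */
            fprintf(stderr, "open failed: %s\n", strerror(errno));

            errno = EACCES; /* reset, in case the call above touched errno */
            /* glibc extension: %m is replaced by strerror(errno) internally */
            fprintf(stderr, "open failed: %m\n");
            return 0;
    }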
---
 TODO   |  2 --
 src/core/automount.c   |  2 +-
 src/core/main.c|  2 +-
 src/core/manager.c |  2 +-
 src/core/mount-setup.c |  2 +-
 src/core/service.c |  4 ++--
 src/initctl/initctl.c  | 12 +---
 src/journal/coredumpctl.c  |  2 +-
 src/journal/journald-console.c |  4 ++--
 src/journal/journald-kmsg.c|  2 +-
 src/udev/collect/collect.c |  6 +++---
 src/udev/scsi_id/scsi_id.c |  2 +-
 src/udev/scsi_id/scsi_serial.c |  8 +++-
 src/udev/udev-rules.c  |  2 +-
 14 files changed, 23 insertions(+), 29 deletions(-)

diff --git a/TODO b/TODO
index d63e13e..653258a 100644
--- a/TODO
+++ b/TODO
@@ -824,8 +824,6 @@ Regularly:
 
 * Use PR_SET_PROCTITLE_AREA if it becomes available in the kernel
 
-* %m in printf() instead of strerror(errno);
-
 * pahole
 
 * set_put(), hashmap_put() return values check. i.e. == 0 doesn't free()!
diff --git a/src/core/automount.c b/src/core/automount.c
index 49a64b1..66e3d78 100644
--- a/src/core/automount.c
+++ b/src/core/automount.c
@@ -304,7 +304,7 @@ static int open_dev_autofs(Manager *m) {
 
         m->dev_autofs_fd = open("/dev/autofs", O_CLOEXEC|O_RDONLY);
         if (m->dev_autofs_fd < 0) {
-                log_error("Failed to open /dev/autofs: %s", strerror(errno));
+                log_error("Failed to open /dev/autofs: %m");
                 return -errno;
         }
 
diff --git a/src/core/main.c b/src/core/main.c
index dbc98db..69d3a43 100644
--- a/src/core/main.c
+++ b/src/core/main.c
@@ -131,7 +131,7 @@ _noreturn_ static void crash(int sig) {
 
         pid = fork();
         if (pid < 0)
-                log_error("Caught <%s>, cannot fork for core dump: %s", signal_to_string(sig), strerror(errno));
+                log_error("Caught <%s>, cannot fork for core dump: %m", signal_to_string(sig));
 
 else if (pid == 0) {
 struct rlimit rl = {};
diff --git a/src/core/manager.c b/src/core/manager.c
index aa4baaa..65cb73c 100644
--- a/src/core/manager.c
+++ b/src/core/manager.c
@@ -307,7 +307,7 @@ static int enable_special_signals(Manager *m) {
         } else {
                 /* Enable that we get SIGWINCH on kbrequest */
                 if (ioctl(fd, KDSIGACCEPT, SIGWINCH) < 0)
-                        log_warning("Failed to enable kbrequest handling: %s", strerror(errno));
+                        log_warning("Failed to enable kbrequest handling: %m");
         }
 
 return 0;
diff --git a/src/core/mount-setup.c b/src/core/mount-setup.c
index 73c2698..c601c97 100644
--- a/src/core/mount-setup.c
+++ b/src/core/mount-setup.c
@@ -188,7 +188,7 @@ static int mount_one(const MountPoint *p, bool relabel) {
                   p->type,
                   p->flags,
                   p->options) < 0) {
-                log_full((p->mode & MNT_FATAL) ? LOG_ERR : LOG_DEBUG, "Failed to mount %s: %s", p->where, strerror(errno));
+                log_full((p->mode & MNT_FATAL) ? LOG_ERR : LOG_DEBUG, "Failed to mount %s: %m", p->where);
                 return (p->mode & MNT_FATAL) ? -errno : 0;
 }
 
diff --git a/src/core/service.c b/src/core/service.c
index 28b1465..7c5d5d8 100644
--- a/src/core/service.c
+++ b/src/core/service.c
@@ -3507,7 +3507,7 @@ static int service_enumerate(Manager *m) {
 d = opendir(path);
 if (!d) {
 if (errno != ENOENT)
-                                log_warning("opendir(%s) failed: %s", path, strerror(errno));
+                                log_warning("opendir(%s) failed: %m", path);
 
 continue;
 }
@@ -3540,7 +3540,7 @@ static int service_enumerate(Manager *m) {
                         if (access(fpath, X_OK) < 0) {
 
                                 if (errno != ENOENT)
-                                        log_warning("access() failed on %s: %s", fpath, strerror(errno));
+                                        log_warning("access() failed on %s: %m", fpath);
 
 continue;
 }
diff --git a/src/initctl/initctl.c b/src/initctl/initctl.c
index d6c..284319f 100644
--- a/src/initctl/initctl.c
+++ b/src/initctl/initctl.c
@@ -217,7 +217,7 @@ static int fifo_process(Fifo *f) {
 if (errno == EAGAIN)
 return 0;
 
-                log_warning("Failed to read from fifo: %s", strerror(errno));
+                log_warning("Failed to read from fifo: %m");
 return -1;
 }
 
@@ -278,7 +278,7 @@ static int server_init(Server *s, unsigned n_sockets) {
         s->epoll_fd = epoll_create1(EPOLL_CLOEXEC);
         if (s->epoll_fd < 0) {
 r = 

Re: [systemd-devel] pam: Don't use loginuid [was: Re: Fix PAM module to not clobber XDG_RUNTIME_DIR with su]

2013-11-26 Thread Colin Guthrie
'Twas brillig, and Lennart Poettering at 26/11/13 04:17 did gyre and gimble:
 On Wed, 20.11.13 19:19, Colin Walters (walt...@verbum.org) wrote:
 
 So yeah, there your mix
 and match is broken: 

 I'm proposing a simple goal: XDG_RUNTIME_DIR should always be that
 matching the current uid.  I can't think of any case where you'd
 want it otherwise.
 
 That can't work. As the directory only exists when a real login session
 is around. su/sudo don't get their own login sessins, hence the dir
 doesn't necessarily exist and from the perspective of the code running
 in su/sudo the lifetime semantics of the dir wouldn't match any
 expections...


Colin W's later patch did implement these semantics for the root user's
XDG_RUNTIME_DIR. It kept it around and didn't tidy it up. Doesn't this
solve the problem for the root user nicely (which is the primary problem)?

[The rest of this mail is more asking daft questions than any kind of
complaint. I'm still struggling to see the problems of some proposed
solutions]

But regardless, why doesn't this code create the dirs if they don't
exist anyway? Sure, this doesn't deal with lifetime problem (i.e.
knowing when to delete the dirs again) which is highlighted when su'ing
to another non-root user, but I think there was some talk in this thread
of adding some kind of refcounting here to make this behave better.

Could logind not learn to track these nested mini-sessions? They all
seem to go through the PAM stack, so what is the fundamental problem in
adding some kind of tracking to them? The statement that "the
directory only exists when a real login session is around" appears to be
a self-imposed limitation inside logind. Surely we just create the
folder as needed - even on a su, track this mini-session. We don't need
to expose it as a real session - nor really track the processes
specifically (this isn't done now anyway) - but just use it internally in
any "how many sessions are there for this user" checks when nuking the
runtime dir. This would solve the lifetime issue so it's possible to
tidy up the runtime dirs properly only when the last session of the user
really does disappear.

I'm sure I'm missing something but all this just seems to be being made
more complicated than is necessary due to artificial problems that are
just a product of the current implementation! Is this a route forward or
is there a fundamental problem with that? I appreciate that not making
it a proper session and tracking the processes properly would maybe feel
nasty, but I'd still say it was a step in the right direction.

Perhaps it is a problem tracking when one of these nested sessions
actually logs out? Or perhaps a problem would be doing something
like su[mini-session]+screen+detach+logout[mini-session]? The
mini-session would be closed and the runtime dir cleaned up even tho'
processes in screen were still needing it. If this is a problem case,
does screen need to be capable of registering its own session (and I'd
say it needs to be a full session here such that it doesn't really die
when the containing session logs out)? Would logind have to learn a new
mode for creating detachable sessions for things like screen, or would
screen have to become setuid or something equally ugly here?


I'd really appreciate it if someone who groks all this stuff could write
up a "how it should be done" wiki page or something explaining what the
best practices would be to approach solutions to:

 1. starting a text shell as another user (either within a graphical env
or not, and whether to run GUI apps and play sound as the other user or
not).
 2. starting a new, detachable session (e.g. screen) as the current user
(possibly done *after* 1. above).

These are things people do and I think there are legitimate reasons for
doing them - regardless, even if some of the cases only have
illegitimate reasons, we need to be able to fail gracefully and,
ideally, present a warning to the user. Taking a stance and not
supporting certain (stupid) setups only works when you actually *tell*
the user that they can't do something and explain why and what the
alternative is.


Col

-- 

Colin Guthrie
gmane(at)colin.guthr.ie
http://colin.guthr.ie/

Day Job:
  Tribalogic Limited http://www.tribalogic.net/
Open Source:
  Mageia Contributor http://www.mageia.org/
  PulseAudio Hacker http://www.pulseaudio.org/
  Trac Hacker http://trac.edgewall.org/
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] pam: Don't use loginuid [was: Re: Fix PAM module to not clobber XDG_RUNTIME_DIR with su]

2013-11-26 Thread Colin Guthrie
'Twas brillig, and Martin Pitt at 26/11/13 06:19 did gyre and gimble:
 Hey Lennart,
 
 Lennart Poettering [2013-11-26  5:12 +0100]:
 I implemented this now, using a different approach than Martin's
 original patch (i.e. I don't think it is a good idea to involve stat()
 here, instead let's just let logind pass all information to
 pam_systemd).
 
 Thanks!

Indeed, thanks for this!

If anyone backports this fix to v208 (i.e. pre sd-bus) please share it
here. I'll likely do it just to have the upstream-blessed fix, but I
doubt I'll get around to it until later in the week.


Col


-- 

Colin Guthrie
gmane(at)colin.guthr.ie
http://colin.guthr.ie/

Day Job:
  Tribalogic Limited http://www.tribalogic.net/
Open Source:
  Mageia Contributor http://www.mageia.org/
  PulseAudio Hacker http://www.pulseaudio.org/
  Trac Hacker http://trac.edgewall.org/
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] tree-wide usage of %m specifier instead of strerror(errno)

2013-11-26 Thread David Timothy Strauss
Thanks. Applied.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd 208:trouble with inactive user sessions at non-seat0 seats

2013-11-26 Thread Laércio de Sousa
David,

Looking at the GDM debug output and the gdm-simple-slave.c source file, as
well as loginctl show-seat output, I guess that GDM only requests user session
activation for seats with CanMultiSession=yes, but currently systemd-logind
still sets CanMultiSession=no for my non-seat0 seats.

Does the seat_can_multi_session() heuristic need some improvement now that
generic multi-session has been introduced?

I've applied your last patch. I confirm it solves my problem for now.

Thanks for everything, and keep up the good work!

CANTATE DOMINO CANTICUM NOVUM
QUIA MIRABILIA FECIT

Laércio


2013/11/26 David Herrmann dh.herrm...@gmail.com

 Hi

 On Tue, Nov 26, 2013 at 11:45 AM, Laércio de Sousa lbsous...@gmail.com
 wrote:
  Hi David!
 
  I just tested your patch, and unfortunately it didn't work. However, I
 guess
  what could be the question.
 
  If I understand correctly, your patch applies for the case we have no
 active
  session at all in a non-seat0 seat. However, I do have an active session
 in
  my non-seat0 seat --- the one GDM opens to show the greeter. The
 question is
  that, when a user logs in at a non-seat0 seat, GDM (or whatever) should
  activate automatically the new user session, sending the greeter session
 to
  the background while it's in state closing, as we have for seat0. In my
  current setup, it doesn't occur: in a non-seat0 seat, the closing
 greeter
  session remains active, while the new user session starts at background
  (inactive).
 
  If I activate manually the user session (loginctl activate session),
 the
  converse occurs: when the user logs out, its closing user session
 remains
  active, while the new greeter session starts at background (inactive).
 
  Just for comparison: if I configure autologin for GDM (to avoid opening
 the
  greeter session), or use multiseat-patched KDM (which doesn't open
 greeter
  sessions), the user session starts at foreground, as expected, even
 without
  this patch.

 That gdm behavior is actually weird. It should explicitly request the
 new session to become active. There is no reason for logind to
 *assume* the new session should be activated automatically.. hmm.

 The appended patch reverts the new behavior. Can you give it a try? I
 actually cannot get gdm to pick up my additional seats.. and looking
 into the monstrosity called gdm-source-base I have no clue what it
 does. If you can confirm that this resets the behavior, I will apply
 it so we don't break existing setups.

 I will figure out something else for new multi-session capable seats.

 Thanks
 David


 diff --git a/src/login/logind-seat.c b/src/login/logind-seat.c
 index b30c4ce..ca0e8d7 100644
 --- a/src/login/logind-seat.c
 +++ b/src/login/logind-seat.c
 @@ -413,8 +413,8 @@ int seat_attach_session(Seat *s, Session *session) {
          seat_send_changed(s, "Sessions", NULL);
 
          /* On seats with VTs, the VT logic defines which session is active. On
 -         * seats without VTs, we automatically activate the first session. */
 -        if (!seat_has_vts(s) && !s->active)
 +         * seats without VTs, we automatically activate new sessions. */
 +        if (!seat_has_vts(s))
                  seat_set_active(s, session);

  return 0;

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] pam: Don't use loginuid [was: Re: Fix PAM module to not clobber XDG_RUNTIME_DIR with su]

2013-11-26 Thread Dr. Werner Fink
On Tue, Nov 26, 2013 at 10:41:36AM +, Colin Guthrie wrote:
 'Twas brillig, and Martin Pitt at 26/11/13 06:19 did gyre and gimble:
  Hey Lennart,
  
  Lennart Poettering [2013-11-26  5:12 +0100]:
  I implemented this now, using a different approach than Martin's
  original patch (i.e. I don't think it is a good idea to involve stat()
  here, instead let's just let logind pass all information to
  pam_systemd).
  
  Thanks!
 
 Indeed, thanks for this!
 
 If anyone backports this fix to v208 (i.e. pre sd-bus) please share it
 here. I'll likely do it just to have the upstream-blessed fix, but
 doubt I'll get around it it until later in the week.

I've backported it.  But during tests I've found that it does not help
if the environment variable XDG_RUNTIME_DIR already exists before doing
su.  It will not be unset, but exported as-is.

Werner

-- 
  Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool. -- Edward Burr
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Question regarding the NotifyAccess parameter

2013-11-26 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Nov 21, 2013 at 08:56:25PM +0530, salil GK wrote:
 Hello
 
I have a service in shell script in which I am sending notification to
 systemd using `systemd-notify WATCHDOG=1` command. What happens is -
 systemd-notify will be a child process and in the systemd notification will
 not be honoured if NotifyAccess is set to main. Is there any work around
 for this.
Use NotifyAccess=all?
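
For illustration only, a minimal hypothetical unit sketch (names invented)
for that setup - the watchdog ping comes from a forked systemd-notify
process, so NotifyAccess=all is needed for systemd to accept it:

    [Unit]
    Description=Shell service with watchdog pings (hypothetical example)

    [Service]
    Type=notify
    # The script is expected to send READY=1 once and then WATCHDOG=1
    # periodically via systemd-notify; since systemd-notify runs as a
    # child process, NotifyAccess=all is required for the messages to count.
    NotifyAccess=all
    WatchdogSec=30
    Restart=on-failure
    ExecStart=/usr/local/bin/my-watchdog-loop.sh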

One more issue I observed is - if I specify Restart=on-failure, if
 watchdog timer expire, it restart the service. But I can see that it create
 two processes rather than restarting the process. But if I do systemctl
 restart Myservice , it kills the previous instance of service and start a
 new service. Any pointers on why it happens so.
That would be a significant bug! Can you post a short example which shows
the bug?

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] pam: Don't use loginuid [was: Re: Fix PAM module to not clobber XDG_RUNTIME_DIR with su]

2013-11-26 Thread Colin Guthrie
'Twas brillig, and Dr. Werner Fink at 26/11/13 14:21 did gyre and gimble:
 On Tue, Nov 26, 2013 at 10:41:36AM +, Colin Guthrie wrote:
 'Twas brillig, and Martin Pitt at 26/11/13 06:19 did gyre and gimble:
 Hey Lennart,

 Lennart Poettering [2013-11-26  5:12 +0100]:
 I implemented this now, using a different approach than Martin's
 original patch (i.e. I don't think it is a good idea to involve stat()
 here, instead let's just let logind pass all information to
 pam_systemd).

 Thanks!

 Indeed, thanks for this!

 If anyone backports this fix to v208 (i.e. pre sd-bus) please share it
 here. I'll likely do it just to have the upstream-blessed fix, but
 doubt I'll get around it it until later in the week.
 
 I've backported it.

Can you link to it or attach it please?

 But during tests I've found that it does not help
 if the environment variable XDG_RUNTIME_DIR already exists before doing
 su.  It will not unset but exported.

That's expected.

su does not do any env cleaning; "su -" does, sudo does, pkexec does.

su's behaviour is to never touch the environment, so this is known and
expected and has always been a problem.

Longer term we need to solve this more holistically (hence why I've
tried to get information about how things should be done in the future,
to start helping lay the groundwork for those bits), but this is still
the best fix we have just now.

For the specific case of su'ing to root (which is the most common and
potentially problematic one), I will probably use Colin W's most recent
patch to have a static root runtime dir and for logind to set it. This
should fix XDG_RUNTIME_DIR when su'ing to root. I'm not so worried about
su'ing to other users (the damage that can be done is much more
limited), but longer term we do need to address that nicely too IMO
(which will likely need changes in su itself and a number of other places).

Col


-- 

Colin Guthrie
gmane(at)colin.guthr.ie
http://colin.guthr.ie/

Day Job:
  Tribalogic Limited http://www.tribalogic.net/
Open Source:
  Mageia Contributor http://www.mageia.org/
  PulseAudio Hacker http://www.pulseaudio.org/
  Trac Hacker http://trac.edgewall.org/
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Question regarding the NotifyAccess parameter

2013-11-26 Thread Hoyer, Marko (ADITG/SW2)
 One more issue I observed is - if I specify Restart=on-failure, if
  watchdog timer expire, it restart the service. But I can see that it
  create two processes rather than restarting the process. But if I do
  systemctl restart Myservice , it kills the previous instance of
  service and start a new service. Any pointers on why it happens so.

This part has already been reported as a bug in May:
http://lists.freedesktop.org/archives/systemd-devel/2013-May/011030.html

To the best of my knowledge, this has been fixed in systemd 203, 204, or 205 ...
Please note that the link above does not contain the final bug fix. Some 
discussions followed which led to the final solution at a certain point. Follow 
the threads and you'll find it ...


Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] pam: Don't use loginuid [was: Re: Fix PAM module to not clobber XDG_RUNTIME_DIR with su]

2013-11-26 Thread Dr. Werner Fink
On Tue, Nov 26, 2013 at 02:39:49PM +, Colin Guthrie wrote:
 'Twas brillig, and Dr. Werner Fink at 26/11/13 14:21 did gyre and gimble:
  On Tue, Nov 26, 2013 at 10:41:36AM +, Colin Guthrie wrote:
  'Twas brillig, and Martin Pitt at 26/11/13 06:19 did gyre and gimble:
  Hey Lennart,
 
  Lennart Poettering [2013-11-26  5:12 +0100]:
  I implemented this now, using a different approach than Martin's
  original patch (i.e. I don't think it is a good idea to involve stat()
  here, instead let's just let logind pass all information to
  pam_systemd).
 
  Thanks!
 
  Indeed, thanks for this!
 
  If anyone backports this fix to v208 (i.e. pre sd-bus) please share it
  here. I'll likely do it just to have the upstream-blessed fix, but
  doubt I'll get around it it until later in the week.
  
  I've backported it.
 
 Can you link to it or attach it please?

Yep ...

  But during tests I've found that it does not help
  if the environment variable XDG_RUNTIME_DIR already exists before doing
  su.  It will not unset but exported.
 
 That's expected.
 
 su does not do any env cleaning, su - does, sudo does, pkexec does.

I'm aware ;) nevertheless ...

 
 su's behaviour is to always not touch stuff and thus this is known and
 expected and has always been a problem.
 
 Longer term we need to solve this more holistically (hence why I've
 tried to get information about how things should be done in the future
 to start helping lay the ground work for thise bits), but this is still
 the best fix we have just now.
 
 For the specific case of su'ing to root, (which is the most common and
 potentially problematic one), I will probably use Colin W's most recent
 patch to have a static root runtime dir and for logind to set it. This
 should fix XDG_RUNTIME_DIR when su'ing to root. I'm not so worried about
 su'ing to other users (the damage that can be done is much more
 limited), but longer term we do need to address that nicely too IMO
 (which will likely need changes in su itself and a number of other places)

... I've a bug report here which indeed shows exactly the ``other'' problem.
If you do not like the enhancement in my backport you may replace

+        } else {
+                (void) unsetenv("XDG_RUNTIME_DIR");
+                r = pam_putenv(handle, "XDG_RUNTIME_DIR");
+                if (r != PAM_SUCCESS && r != PAM_BAD_ITEM) {
+                        pam_syslog(handle, LOG_ERR, "Failed to unset runtime dir.");
+                }

with `}'  ... otherwise both the pam_putenv() and unsetenv() are required
to make sure that XDG_RUNTIME_DIR is removed.

Werner

-- 
  Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool. -- Edward Burr
Based on upstream baae0358f349870544884e405e82e4be7d8add9f
| From: Lennart Poettering lenn...@poettering.net
| Date: Tue, 26 Nov 2013 04:05:00 +
| Subject: pam_systemd: do not set XDG_RUNTIME_DIR if the session's original user is not the same as the newly logged in one
| It's better not to set any XDG_RUNTIME_DIR at all rather than one of a
| different user. So let's do this.
--- systemd-208/src/login/logind-dbus.c
+++ systemd-208/src/login/logind-dbus.c	2013-11-26 13:37:05.730735774 +
@@ -523,6 +523,7 @@ static int bus_manager_create_session(Ma
                         DBUS_TYPE_OBJECT_PATH, &path,
                         DBUS_TYPE_STRING, &session->user->runtime_path,
                         DBUS_TYPE_UNIX_FD, &fifo_fd,
+                        DBUS_TYPE_UINT32, &session->user->uid,
                         DBUS_TYPE_STRING, &cseat,
                         DBUS_TYPE_UINT32, &vtnr,
                         DBUS_TYPE_BOOLEAN, &exists,
--- systemd-208/src/login/logind-session-dbus.c
+++ systemd-208/src/login/logind-session-dbus.c	2013-11-26 13:36:07.478236401 +
@@ -755,6 +755,7 @@ int session_send_create_reply(Session *s
                         DBUS_TYPE_OBJECT_PATH, &path,
                         DBUS_TYPE_STRING, &s->user->runtime_path,
                         DBUS_TYPE_UNIX_FD, &fifo_fd,
+                        DBUS_TYPE_UINT32, &s->user->uid,
                         DBUS_TYPE_STRING, &cseat,
                         DBUS_TYPE_UINT32, &vtnr,
                         DBUS_TYPE_BOOLEAN, &exists,
--- systemd-208/src/login/pam-module.c
+++ systemd-208/src/login/pam-module.c	2013-11-26 14:32:20.194235777 +
@@ -93,24 +93,18 @@ static int get_user_data(
 assert(ret_username);
 assert(ret_pw);
 
-        r = audit_loginuid_from_pid(0, &uid);
-        if (r >= 0)
-                pw = pam_modutil_getpwuid(handle, uid);
-        else {
-                r = pam_get_user(handle, &username, NULL);
-                if (r != PAM_SUCCESS) {
-                        pam_syslog(handle, LOG_ERR, "Failed to get user name.");
-                        return r;
-                }
-
-if 

Re: [systemd-devel] [PATCH] core/socket: we only want one event on standard sockets

2013-11-26 Thread Lennart Poettering
On Tue, 26.11.13 14:44, David Timothy Strauss (da...@davidstrauss.net) wrote:

 
 On Tue, Nov 26, 2013 at 2:35 PM, Lennart Poettering
 lenn...@poettering.net wrote:
  Not following here. What precisely does this fix, can you elaborate?
 
  We currently turn off the poll for the socket fds as soon as we queued
  the service socket, so that we don't get woken up anymore. What would
  EPOLLET do good here?
 
 I think he's been working on having the distribute pool scale up
 on-demand, which would involve systemd getting events on new
 connections coming into the listener socket. More specifically, I
 think he intends to, as each new connection comes in, check if we've
 maxed out the number of processes. If not, spin another one up.
 Presumably, we'd drop the listener once the max-size pool is running.

Well, but EPOLLET only works correctly if each time an event is
triggered we dispatch *all* possibly queued events on the fd, until
EAGAIN is returned. But we don't do that; heck, if Listen=no is used
we don't even read a single incoming connection off the socket, we leave
that to the daemon we spawn, but we cannot trust that. So I don't get
what good EPOLLET is supposed to do here?
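
For context, a minimal sketch (not systemd code; names invented) of what
correct edge-triggered use looks like - on every wakeup the listening fd
has to be drained with accept() until EAGAIN, which is exactly what systemd
does not do for listener sockets:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* listen_fd is assumed to be a non-blocking listening socket. */
    static void run_edge_triggered(int epoll_fd, int listen_fd) {
            struct epoll_event ev = { .events = EPOLLIN | EPOLLET, .data.fd = listen_fd };
            if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, listen_fd, &ev) < 0) {
                    fprintf(stderr, "epoll_ctl failed: %m\n");
                    return;
            }

            for (;;) {
                    struct epoll_event event;
                    if (epoll_wait(epoll_fd, &event, 1, -1) < 0)
                            break;

                    /* Edge-triggered: one wakeup may stand for many queued
                     * connections, so accept() until the queue is drained. */
                    for (;;) {
                            int conn = accept(listen_fd, NULL, NULL);
                            if (conn < 0) {
                                    if (errno == EAGAIN || errno == EWOULDBLOCK)
                                            break; /* drained, wait for next edge */
                                    fprintf(stderr, "accept failed: %m\n");
                                    break;
                            }
                            close(conn); /* hand the connection off here */
                    }
            }
    }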

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] pam: Don't use loginuid [was: Re: Fix PAM module to not clobber XDG_RUNTIME_DIR with su]

2013-11-26 Thread Lennart Poettering
On Tue, 26.11.13 09:53, Colin Guthrie (gm...@colin.guthr.ie) wrote:

  I'm proposing a simple goal: XDG_RUNTIME_DIR should always be that
  matching the current uid.  I can't think of any case where you'd
  want it otherwise.
  
  That can't work. As the directory only exists when a real login session
  is around. su/sudo don't get their own login sessins, hence the dir
  doesn't necessarily exist and from the perspective of the code running
  in su/sudo the lifetime semantics of the dir wouldn't match any
  expections...
 
 Colin W's later patch did implement these semantics for the root user's
 XDG_RUNTIME_DIR. It kept it around and didn't tidy it up. Doesn't this
 solve the problem for the root user nicely (which is the primary
 problem)?

I am pretty sure we shouldn't exclude the root user's XDG_RUNTIME_DIR
from the usual clean-up logic.

Also, as mentioned, if you want "su -l"/"sudo -i" to work, then please add
the force-new-session=1 switch to pam_systemd that I offered to merge.

 But regardless, why doesn't this code create the dirs if they don't
 exist anyway? Sure, this doesn't deal with lifetime problem (i.e.
 knowing when to delete the dirs again) which is highlighted when su'ing
 to another non-root user, but I think there was some talk in this thread
 of adding some kind of refcounting here to make this behave better.

We should avoid creating dirs we don't know are cleaned up again. logind
is the component that cleans up those dirs, hence it should also create
them.

 Could logind not learn to track these nested-mini-sessions? They all

That's what force-new-session=1 would do.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [RFC][PATCH] conf-parser: distinguish between multiple sections with the same name

2013-11-26 Thread Lennart Poettering
On Thu, 21.11.13 02:27, Tom Gundersen (t...@jklm.no) wrote:

  Maybe then back to labelled sections:
 
  [Address:foobar]
  Label=waldo
  Address=1.1.1.1
 
  or so, so that the suffix foobar is purely an id that is by default
  disconnected from any setting? And then maybe optionally inherit it into
  Label= if Label= is not explicitly set?
 
 I think the section name should not have any other function/meaning. I
 first tried to make 'section name'='Label', but I'm worried it might
 be confusing: If section names are required, it means we now require
 labeling, or force the admin to set Label= to disable it, which
 seems a bit weird. 

I think that would actually be OK. Doing something useful without
explicit configuration sounds like a good approach. Inheriting the
Label= from the section name if no explicit Label= is specified sounds
quite OK to me...

 Also, I find it asymmetric the way section names
 for Addresses have this extra meaning, but for Routes or other types
 of sections where there is no natural equivalent to Label=, they have
 no meaning apart from a name.

Well, we wouldn't really bind the section name only and exclusively to
the label. It's just that when fields are not explicitly set they might
inherit the section name, in some field-specific way. For Label= it
would be quite obvious, but for other things this might work too. For
example, for the ipv6 tunnel stuff we could inherit the section name
into the tunnel iface name or so, whatever comes up...

 I'm really not convinced one way or the other, but I think my
 preferred solution is: go with unnamed sections now, and if ever it
 becomes necessary, introduce named ones additionally.

Sounds OK.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] core/socket: we only want one event on standard sockets

2013-11-26 Thread David Timothy Strauss
On Wed, Nov 27, 2013 at 2:32 AM, Lennart Poettering
lenn...@poettering.net wrote:
 Well, but EPOLLET only works correctly if each time an event is
 triggered we dispatch *all* possibly queued events on the fd, until
 EAGAIN is read again. But we don't do that, heck, if Listen=no is used
 we don''t even read a single incoming connection of the scket, we leave
 that to the daemon we spawn, but we cannot trust that. So, I don't get
 what EPOLLET is supposed to do good here?

I should have more selectively quoted before answering. I was only
trying to answer why we'd continue having poll enabled for the socket fds
after queuing the initial service. I was not trying to suggest EPOLLET
is appropriate for the goal of lazy "distribute" process pool
expansion. You're clearly correct about that.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] core/socket: we only want one event on standard sockets

2013-11-26 Thread Shawn Landden
As we only receive one event at a time, and dequeue it in the same go,
yeah, ONESHOT won't change anything.

On Tue, Nov 26, 2013 at 9:33 AM, David Timothy Strauss
da...@davidstrauss.net wrote:
 On Wed, Nov 27, 2013 at 2:32 AM, Lennart Poettering
 lenn...@poettering.net wrote:
 Well, but EPOLLET only works correctly if each time an event is
 triggered we dispatch *all* possibly queued events on the fd, until
 EAGAIN is read again. But we don't do that, heck, if Listen=no is used
 we don''t even read a single incoming connection of the scket, we leave
 that to the daemon we spawn, but we cannot trust that. So, I don't get
 what EPOLLET is supposed to do good here?

 I should have more selectively quoted before answering. I was only
 trying to answer why we'd continue having poll for the socket fds
 after queuing the initial service. I was not trying to suggest EPOLLET
 is appropriate for the goal of lazy distribute process pool
 expansion. You've clearly correct about that.
I was worried that the fact that we never accept() the socket when using
distribute (now I am convinced we shouldn't use it otherwise)
would cause it to trigger multiple times for only one incoming connection,
if the spawned thread never entered accept() (or we raced it),
but reading this thread makes me think I don't fully understand the
semantics of EPOLLET.
 ___
 systemd-devel mailing list
 systemd-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/systemd-devel
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] core/socket: we only want one event on standard sockets

2013-11-26 Thread David Timothy Strauss
On Wed, Nov 27, 2013 at 6:23 AM, Shawn Landden sh...@churchofgit.com wrote:
 I was worried that the fact that we never accept() the socket when using
 distribute (now I am convinced we shouldn't use it otherwise)

I'm not sure what you mean here. Distribute-style functionality is
absolutely useful with Accept=true (to cap the number of simultaneous
connections) as well as Accept=false (to allow running of a process
pool of self-accepting, long-running workers).

 would cause it to trigger multiple times for only one incoming connection,
 if the spawned thread never entered accept() (or we raced it),
 but reading this thread makes me think I don't fully understand the
 semantics of EPOLLET.

There are some decent examples on the man page: http://linux.die.net/man/7/epoll
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] core/socket: we only want one event on standard sockets

2013-11-26 Thread Shawn Landden
On Tue, Nov 26, 2013 at 1:48 PM, David Timothy Strauss
da...@davidstrauss.net wrote:
 On Wed, Nov 27, 2013 at 6:23 AM, Shawn Landden sh...@churchofgit.com wrote:
 I was worried that the fact that we never accept() the socket when using
 distribute (now I am convinced we shouldn't use it otherwise)

 I'm not sure what you mean here. Distribute-style functionality is
 absolutely useful with Accept=true (to cap the number of simultaneous
 connections)
Are you sure applications can handle the extra file descriptor of being
passed both the sockfd and the acceptfd in this case? I don't see why
they wouldn't just do the accept() themselves.

Can you explain what you mean here, and how it differs from the
existing MaxConnections=?

 as well as Accept=false (to allow running of a process
 pool of self-accepting, long-running workers).

 would cause it to trigger multiple times for only one incoming connection,
 if the spawned thread never entered accept() (or we raced it),
 but reading this thread makes me think I don't fully understand the
 semantics of EPOLLET.

 There are some decent examples on the man page: 
 http://linux.die.net/man/7/epoll
Yeah I had read that, and seen it in the kernel source. I am just
confused and need to think about it some more.

-Shawn
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] core/socket: we only want one event on standard sockets

2013-11-26 Thread Lennart Poettering
On Wed, 27.11.13 07:48, David Timothy Strauss (da...@davidstrauss.net) wrote:

 
 On Wed, Nov 27, 2013 at 6:23 AM, Shawn Landden sh...@churchofgit.com wrote:
  I was worried that the fact that we never accept() the socket when using
  distribute (now I am convinced we shouldn't use it otherwise)
 
 I'm not sure what you mean here. Distribute-style functionality is
 absolutely useful with Accept=true (to cap the number of simultaneous
 connections) as well as Accept=false (to allow running of a process
 pool of self-accepting, long-running workers).

We already enforce a connection limit (MaxConnections=) for Accept=yes
sockets... We did this since day one.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [systemd-commits] man/systemd-socket-proxyd.xml src/socket-proxy TODO

2013-11-26 Thread Lennart Poettering
On Sun, 24.11.13 16:47, David Strauss (strau...@kemper.freedesktop.org) wrote:

  TODO |1 
  man/systemd-socket-proxyd.xml|   80 
 ++-
  src/socket-proxy/socket-proxyd.c |   63 --
  3 files changed, 121 insertions(+), 23 deletions(-)
 
 New commits:
 commit adcf4c81c58511b67644e17fa743d1729d3c9ccf
 Author: David Strauss da...@davidstrauss.net
 Date:   Mon Nov 25 10:44:48 2013 +1000
 
 socket-proxyd: Add --listener option for listener/destination
 pairs.

Could you please explain what the usecase here is? Why is this better
than having two socket units with two proxy services?

 +/usr/bin/systemd-socket-proxyd --listener=3 localhost:8080 &
 +/usr/bin/systemd-socket-proxyd --listener=4 localhost:8443 &
 +wait]]>

If our examples suggest that people should write shell scripts that fork
things, then this is usually an indication that we need to rethink what
we are doing here. In fact, running multiple manually forked-off
processes inside the same service already sounds very wrong. We should
try to keep a 1:1 mapping between the processes we fork and the services
they run in.

What's the usecase here? 

If this is about running multiple things in the same Network namespace,
then there are certainly other, better ways to achieve the same. For
example, we could introduce JoinPrivateNetwork=$SERVICE or so which
would allow one service to join the network namespace of another. That
makes this much smoother and more powerful, too. 

With that in place you could simply have three proxy instances, and make
them join the private network of your nginx instance and you should be
set?
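
Purely illustrative - JoinPrivateNetwork= is only proposed in this thread
and does not exist as a directive, so the sketch below simply assumes the
semantics described above (unit names invented, matching .socket units
omitted):

    # nginx.service (hypothetical)
    [Service]
    ExecStart=/usr/sbin/nginx -g 'daemon off;'
    PrivateNetwork=yes

    # proxy-http.service (hypothetical), activated by proxy-http.socket
    [Service]
    ExecStart=/usr/bin/systemd-socket-proxyd localhost:8080
    # Proposed directive: join the network namespace of another service
    JoinPrivateNetwork=nginx.service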


Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] core/socket: we only want one event on standard sockets

2013-11-26 Thread David Timothy Strauss
On Wed, Nov 27, 2013 at 7:57 AM, Shawn Landden sh...@churchofgit.com wrote:
 Are you sure applications can handle the extra file descriptor of
 passing both the sockfd
 and the acceptfd in this case? I don't see why they wouldn't just do
 the accept() themselves?

 Can you explain what you mean here, and how it differs from the
 existing MaxConnections.

Actually, MaxConnections was exactly what I was thinking of. I agree
with you now on Distribute=true only being useful with Accept=false.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [systemd-commits] man/systemd-socket-proxyd.xml src/socket-proxy TODO

2013-11-26 Thread David Timothy Strauss
On Wed, Nov 27, 2013 at 12:03 PM, Lennart Poettering
lenn...@poettering.net wrote:
 Could you please explain what the usecase here is? Why is this better
 than having two socket units with two proxy services?

Right now, it's because separate services cannot share a network
namespace with another service. The main motivation for this change was
a desire within CoreOS to run etcd within a network-namespaced service.
With the need to only map one port in, that would be equivalent to the
network-namespaced nginx example. But they need to map *two* ports in,
and to distinct destination ports within the namespace. That was
actually impossible before, because socket-proxyd would map all
inherited sockets to the same destination.

This change allows forcing each socket-proxyd instance to inherit only
one socket.

  +/usr/bin/systemd-socket-proxyd --listener=3 localhost:8080 &
  +/usr/bin/systemd-socket-proxyd --listener=4 localhost:8443 &
  +wait]]>

 If our examples suggest that people should write shell scripts that fork
 things, then this is usually an indication that we need to rethink what
 we are doing here. In fact, running multiple manually forked off
 processes inside the same service already sounds very wrong. We should
 try to keep a 1:1 mapping between processes we fork and services they
 run it.

Agreed. I really don't want socket-proxyd directly involved in
exec'ing things, either.

 What's the usecase here?

 If this is about running multiple things in the same Network namespace,
 then there are certainly other, better ways to achieve the same. For
 example, we could introduce JoinPrivateNetwork=$SERVICE or so which
 would allow one service to join the network namespace of another. That
 makes this much smoother and more powerful, too.

That would be wonderful. I'd love to shed the shell scripts.

 With that in place you could simply have three proxy instances, and make
 them join the private network of your nginx instance and you should be
 set?

I think that would do exactly the trick. I didn't know it was possible
to join into an existing namespace that way.

It would be my pleasure to update the documentation to use something
like that if it becomes available. I'd also enjoy dropping the
--listener option. :-)
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] fstab, rootfs on btrfs

2013-11-26 Thread Chris Murphy

In Fedora 20, by default anaconda sets fs_passno in fstab to 1 for / on btrfs. 
During offline updates, this causes systemd-fstab-generator to freak out 
because it cannot find fsck.btrfs.

https://bugzilla.redhat.com/show_bug.cgi?id=1034563

For some time I've been suggesting that fstab should use fs_passno 0 for btrfs 
file systems:
https://bugzilla.redhat.com/show_bug.cgi?id=862871

But because of this suggestion by an XFS dev, I'm wondering if that's not a 
good idea. Or if we should expect some smarter behavior on the part of systemd 
(now or in the future) when it comes to devices that take a long time to appear?
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg29231.html

It doesn't seem right to me that, for file systems that don't require an fs 
check, fstab should claim one is required just to keep other processes from 
throwing hissy fits because the device is missing. But maybe I'm 
missing something.
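
For illustration, the fstab line in question might look like this (UUID
invented); the last field is fs_passno, and setting it to 0 tells the
generator not to schedule an fsck for the btrfs root:

    # <file system>                            <mount point>  <type>  <options>  <dump>  <pass>
    UUID=0123abcd-0000-4000-8000-000000000000  /              btrfs   defaults   0       0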


Chris Murphy
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel