[systemd-devel] Incorrect use return value of mount_one in mount_setup_early/mount_setup?

2015-09-14 Thread cee1
Hi all,

mount_one returns 1 if a mount action was performed, 0 if no mount was
performed, and <0 if an error occurred. Right?

In mount_setup, we have the following logic:
"""
for (i = 0; i < ELEMENTSOF(mount_table); i ++) {
        int j;

        j = mount_one(mount_table + i, loaded_policy);
        if (r == 0)
                r = j;
}

if (r < 0)
        return r;
"""

That means the first non-zero return value determines the return value
of mount_setup: if a mount is performed successfully by mount_one
(which sets r to 1), an error in a subsequent call of mount_one will
*NOT* be detected (since r == 1 already). Is this the expected behavior?
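If it is not intended, here is a minimal sketch (just my suggestion,
not the actual systemd code) of an accumulation pattern that keeps the
first error while still remembering that a mount happened:
"""
int r = 0;

for (i = 0; i < ELEMENTSOF(mount_table); i ++) {
        int j = mount_one(mount_table + i, loaded_policy);

        if (j < 0 && r >= 0)
                r = j;          /* the first error wins */
        else if (j > 0 && r == 0)
                r = 1;          /* remember that a mount was performed */
}

if (r < 0)
        return r;
"""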



-- 
Regards,

- cee1


Re: [systemd-devel] possible message leak in bus->wqueue ?

2015-08-07 Thread cee1
2015-08-07 17:18 GMT+08:00 eshark :
> Hi, all
>   If some message went into bus->wqueue, and ioctl(KDBUS_CMD_SEND)
> failed and returned r < 0,
> I found that this message will remain in bus->wqueue. If the peer is
> killed for some reason, this message will fail to be sent and will
> remain in the wqueue forever.
>
> Because in dispatch_wqueue(), when bus_write_message() returns r < 0,
> dispatch_wqueue() will simply return this "r" to the caller.
> And the wqueue is invisible to the user application, so the user
> application also cannot remove this message to handle this error case.
>
> I wonder whether this is a problem, and if yes, should we remove this
> message in dispatch_wqueue() when r < 0?

I have the same question.

E.g. the call chain:

dispatch_wqueue()
  -> bus_write_message()
    -> bus_kernel_write_message()

"""
r = ioctl(bus->output_fd, KDBUS_CMD_SEND, &cmd);
if (r < 0) {
        ...
        else if (errno == ENXIO || errno == ESRCH) {
                ...
                if (m->header->type == SD_BUS_MESSAGE_METHOD_CALL)
                        sd_bus_error_setf(&error, SD_BUS_ERROR_SERVICE_UNKNOWN,
                                          "Destination %s not known",
                                          m->destination);
                else {
                        log_debug("Could not deliver message to %s as destination is not known. Ignoring.",
                                  m->destination);
                        return 0;
                }
        }
}
"""

If A __returns__ a result to B, but B has already died (after having
sent a "method call" message):

1. The ioctl will fail with ENXIO or ESRCH, right?
2. Since a method *return* is not a method call, it takes the else
branch above: bus_kernel_write_message(), bus_write_message() and
dispatch_wqueue() all return 0.
3. The next time dispatch_wqueue() is called, it will retry, but never
succeed - so, a deadlock?
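A minimal sketch of the removal suggested above. The types here are a
simplified stand-in model of the sd-bus write queue, not the real
structures (only the field names follow the sd-bus naming):
"""
#include <stddef.h>
#include <string.h>

typedef struct sd_bus_message sd_bus_message;

typedef struct sd_bus {
        sd_bus_message **wqueue;        /* pending outgoing messages */
        size_t wqueue_size;
} sd_bus;

extern int bus_write_message(sd_bus *bus, sd_bus_message *m);
extern void sd_bus_message_unref(sd_bus_message *m);

static void unqueue_head(sd_bus *bus) {
        sd_bus_message_unref(bus->wqueue[0]);
        memmove(bus->wqueue, bus->wqueue + 1,
                (--bus->wqueue_size) * sizeof(sd_bus_message *));
}

static int dispatch_wqueue(sd_bus *bus) {
        int r;

        while (bus->wqueue_size > 0) {
                r = bus_write_message(bus, bus->wqueue[0]);
                if (r < 0) {
                        /* Drop the undeliverable message so later
                         * calls do not retry it forever. */
                        unqueue_head(bus);
                        return r;
                }
                if (r == 0)
                        /* Cannot write now; try again later. */
                        return 0;
                /* Fully written: pop it and continue. */
                unqueue_head(bus);
        }
        return 0;
}
"""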



-- 
Regards,

- cee1


[systemd-devel] [PATCH]fopen_temporary: close fd if fail

2015-07-08 Thread cee1
-- 
Regards,

- cee1


0001-basic-util.c-fopen_temporary-close-fd-if-failed.patch
Description: Binary data


Re: [systemd-devel] Why we need to read/save random seed?

2015-06-28 Thread cee1
2015-06-15 0:43 GMT+08:00 Greg KH :
> On Sun, Jun 14, 2015 at 12:49:55PM -0300, Cristian Rodríguez wrote:
>>
>> On Jun. 14, 2015 10:21, "cee1"  wrote:
>> >
>> > Hi all,
>> >
>> > Why do we need to read/save the random seed? Can it be read from
>> > /dev/random each time?
>>
>> Because the kernel is borked and still needs to be fed entropy at
>> system startup by user space. Please read the random man page.
>>
>> I agree we shouldn't have to do this at all..
>
> Really?  And how do you suggest we "fix" the kernel when the hardware
> itself doesn't provide us with a proper random number "seed" in the
> first place?  What do you suggest we do instead?

It seems that in 4.2 the kernel will use the Jitter Entropy Random
Number Generator to seed the other random number generator(s):
http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.2-Crypto-Akcipher-PKE

And from https://www.kernel.org/doc/ols/2014/ols2014-mueller.pdf, p24:
"""
The random number generator shall not require a seeding with data from
previous instances of the random number generator.
"""

That means we could get rid of systemd-random-seed.service, starting from 4.2.



-- 
Regards,

- cee1


Re: [systemd-devel] [ANNOUNCE] systemd v221

2015-06-20 Thread cee1
2015-06-20 2:06 GMT+08:00 Lennart Poettering :
> On Fri, 19.06.15 16:06, Lennart Poettering (lenn...@poettering.net) wrote:
>
>> Heya!
>>
>> It's primarily a bugfix release, but we also make sd-bus.h and
>> sd-event.h public. (A blog story on sd-bus and how to use it will
>> follow shortly.)
>
> The blog story is online now:
>
> http://0pointer.net/blog/the-new-sd-bus-api-of-systemd.html
>
> Enjoy,
Glad to see this :)

BTW, what about libabc? Would libsystemd become part of libabc? Also,
libsystemd is a Linux-specific library; will it further port and
integrate some kernel libraries into libabc?



-- 
Regards,

- cee1


Re: [systemd-devel] Improve boot-time of systemd-based device, revisited

2015-06-20 Thread cee1
2015-06-19 15:34 GMT+08:00 Chaiken, Alison :
> cee1  writes:
>> 3.1 "consider disabling readahead collection in the shipped devices,
>> but leave readahead replay enabled."
>
>
> ceel, are you aware that readahead is deprecated in systemd and has not been
> included since about release 216?   Some of us in automotive are still
> working on it.   I have some patches here
>
> https://github.com/chaiken/systemd-hacks/tree/packfilelist
>
> against 215 that add various features.   We may soon be forward-porting
> these, along with readahead itself, to the latest version.
Glad to hear that :)

>
>> The readahead doesn't work very well on my experiment,
>
>
> I spent considerable time performing boot experiments on production
> hardware, including trying different I/O schedulers.My conclusion was
> that readahead provides benefits in boot-time only when large, monolithic
> binaries start. If these gigantic applications were rearchitected to be
> more modular and could load libraries dynamically when needed instead of all
> at once, I suspect that the speedup associated with readahead would vanish.
> Nonetheless, under the right conditions, readahead may speed up boot on real
> hardware in product-relevant conditions.
>
> The problem is actually quite complex in the case of eMMC boot devices,
> which have their own sophisticated embedded controllers.   To properly
> optimize the whole system, we need to know the behavior of that controller
> and model what happens at boot in the full system using different Linux I/O
> schedulers and readahead strategies.   Unfortunately we don't have all that
> information.   My suspicion is that we might actually boot faster from raw
> NAND flash, but then of course we have to perform our own wear-levelling and
> block sparing.
BTW, I wonder whether F2FS would help; it seems very friendly to
flash storage.

>
>> The replaying sequence: A, B, C
>> The actual requesting sequence: C, B, A
>> If we can figure out the requesting sequence, it can achieve real read
>> "ahead"[1].
>
>
> I have verified in detail that readahead worked as intended: the degree to
> which the system was I/O-bound did decrease, even in cases where there was
> no net speedup.
Any idea why?


>> 4. Get rid of systemd-cgroups-agent. This requires introduction of a
>> new kernel interface to get notifications for cgroups running empty,
>> for example via fanotify() on cgroupfs.
>> Is there any related work in processing?
>
>
> Are you aware of "JoinControllers"?   You appear to have old versions of
> software, which doesn't garner much sympathy from developers.
So this option can reduce the number of times systemd-cgroups-agent is invoked?

Note the points listed in my previous mail come from
http://freedesktop.org/wiki/Software/systemd/Optimizations/ and
https://wiki.tizen.org/wiki/Automotive_Fast_Boot_Optimization; they
seem interesting to me.


>
>> These makes it hard to use systemd in a customized system.
>
>
> The Linux services business benefits from churn in userspace code . . .
Take the kernel scheduler as an analogy: there is no kernel scheduler
specific to embedded devices, nor one specific to Linux servers, but
one scheduler for all cases. Shouldn't it be the same with systemd?

>> What I call
>> for is to make the cold boot logic "declarative", something like:
>> main.c:
>> log_to_A
>> mount_X
>> mount_Y
>
>
> Good news: you are free to choose SysVInit.
What I mean is the initialization stage of systemd, e.g. mounting the
"API filesystems", etc.

I expect a "declarative" expression of that, which would help with
customization and debugging (without digging deep into the code).

>
>> I wonder whether a property system also makes sense in systemd's world?
>
>
> systemd unit files are already declarative lists of properties, right?
The property system is something like a system preference system (i.e.
similar to a system-wide dconf); IIRC, OS X has a similar thing. My
question is: do we need a similar thing in the systemd world, since
systemd seems to aim at providing the basic infrastructure of a Linux
distribution?



-- 
Regards,

- cee1


Re: [systemd-devel] [PATCH][v1]random-seed: Save random seed as early as possible

2015-06-20 Thread cee1
Hi Lennart,

On second thought, what about splitting systemd-random-seed.service into:
1. systemd-random-seed-load.service: which only does the seed-loading job
2. systemd-random-seed-save.service: which saves a new seed, and is
"After", but not "Requires" or "Wants",
systemd-random-seed-load.service

Both services use the same binary "systemd-random-seed" with a
different argv[1] ("load" vs "save").

This seems better than patch v1 - what do you think?
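A rough sketch of the proposed split (unit names, options and the
ordering here are my assumptions, not shipped units):
"""
# systemd-random-seed-load.service
[Unit]
Description=Load Random Seed
DefaultDependencies=no
Before=sysinit.target

[Service]
Type=oneshot
ExecStart=/usr/lib/systemd/systemd-random-seed load

# systemd-random-seed-save.service
[Unit]
Description=Save Random Seed
DefaultDependencies=no
After=systemd-random-seed-load.service

[Service]
Type=oneshot
ExecStart=/usr/lib/systemd/systemd-random-seed save
"""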



2015-06-19 21:34 GMT+08:00 cee1 :
> Hi all,
>
> As discussed at
> http://lists.freedesktop.org/archives/systemd-devel/2015-June/033075.html,
> this patch saves a seed with a ** good ** random number as early as
> possible, as opposed to the original behavior, which saves a random
> number at shutdown.
>
> Note:
> 1. If seed loading fails, it will not save a new seed. Maybe not the
> proper behavior?
> 2. The STATUS sent by the second and third sd_notify() is not shown
> in "systemctl status systemd-random-seed.service"; this needs some
> kind of improvement.
>
> Please comment and give suggestions :)
>
>
>
> --
> Regards,
>
> - cee1



-- 
Regards,

- cee1


[systemd-devel] [PATCH][v1]random-seed: Save random seed as early as possible

2015-06-19 Thread cee1
Hi all,

As discussed at
http://lists.freedesktop.org/archives/systemd-devel/2015-June/033075.html,
this patch saves a seed with a ** good ** random number as early as
possible, as opposed to the original behavior, which saves a random
number at shutdown.

Note:
1. If seed loading fails, it will not save a new seed. Maybe not the
proper behavior?
2. The STATUS sent by the second and third sd_notify() is not shown
in "systemctl status systemd-random-seed.service"; this needs some kind
of improvement.

Please comment and give suggestions :)



-- 
Regards,

- cee1


0001-random-seed-Save-seed-with-a-good-random-number-earl.patch
Description: Binary data


Re: [systemd-devel] Improve boot-time of systemd-based device, revisited

2015-06-18 Thread cee1
2015-06-14 21:17 GMT+08:00 cee1 :
> Hi all,
>
> I've recently got another chance to improve the boot-time of a
> systemd-based device. I'd like to share the experience here, and some
> thoughts and questions.

Two more articles about boot optimization:
* http://freedesktop.org/wiki/Software/systemd/Optimizations/
* https://wiki.tizen.org/wiki/Automotive_Fast_Boot_Optimization

The interesting bits:
1. Improve a couple of algorithms in the unit dependency graph
calculation logic ...
Is there any ongoing work on this?

2. Add a kernel sockopt for AF_UNIX to increase the maximum datagram
queue length for SOCK_DGRAM sockets.
This would avoid/delay blocking dependent services in the socket
activation case.

3. Since boot-up tends to be IO bound, some IO optimization:
3.1 "consider disabling readahead collection in the shipped devices,
but leave readahead replay enabled."
3.2 "Update the readahead logic to also precache directories"
3.3 "Compress readahead pack files with XZ or so"
3.4 "Make use of EXT4_IOC_MOVE_EXT in systemd's readahead implementation."

Point 3.4 seems interesting to me: I wonder whether it works on SSDs?

Readahead didn't work very well in my experiment; I guess there's no
chance to read "ahead" - doing replay may even slow down the normal
boot-up procedure, e.g.:
   The replaying sequence: A, B, C
   The actual requesting sequence: C, B, A
If we could figure out the requesting sequence, it could achieve real
reading "ahead"[1].

4. Get rid of systemd-cgroups-agent. This requires the introduction of
a new kernel interface to get notifications for cgroups running empty,
for example via fanotify() on cgroupfs.
Is there any related work in progress?

BTW, Tizen seems to have managed to start a systemd-based system in
<5s (the updated figure is 3s, according to the wiki page). Going
through the bootchart, I saw several "mount" and other commands running
in the early boot stage (see the attachment).

Since exec'ing systemd-cgroups-agent is inefficient, exec'ing these
"mount"s and other commands is also inefficient, especially for a
system aiming to boot up in less than 3s.

5. Both articles suggest cleaning up services not used on the device.
In my case, dev-hugepages, systemd-binfmt, etc. are not used on the ARM
board.

It looks like systemd will evolve into a minimal Linux distribution (or
the tiny core of a Linux distribution); IMHO, it had better not ship
with too many units, or should provide some customization options -
nowadays, systemd runs not only on PCs, but also on mobile devices.

It also looks like systemd is very dynamic, but it also has a cold
boot stage with hard-coded/implicit logic, e.g.:
* Branches (and the related switches: environment variables,
/proc/cmdline, name of argv[0] ...) for "system mode" / "system mode
in container" / "system mode that's re-exec'ed from initrd" / "user
mode" ...

* The EFI stuff is not used on ARM

* Special targets, and units depending on basic.target by default

* The logging target changes during boot-up: STDERR, KMSG_OR_JOURNAL,
Console ...; if log messages get lost, it's not easy to know why.

* The fallback logic (e.g. when a new syscall is not supported) may not
be identical to the normal path, and it's not easy to figure out such
problems.

* ...

These make it hard to use systemd in a customized system. What I call
for is to make the cold boot logic "declarative", something like:
main.c:
log_to_A
mount_X
mount_Y

where log_to_A may be a macro.

This would make it friendly for customizing/debugging.
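A hypothetical sketch of such a declarative cold-boot table
(log_to_kmsg/mount_proc/mount_sys are stand-in steps, not systemd
functions):
"""
#include <stdio.h>

typedef int (*boot_step_fn)(void);

static int log_to_kmsg(void) { puts("log -> kmsg"); return 0; }
static int mount_proc(void)  { puts("mount /proc"); return 0; }
static int mount_sys(void)   { puts("mount /sys");  return 0; }

static const struct boot_step {
        const char *name;
        boot_step_fn fn;
} cold_boot[] = {
        { "log_to_kmsg", log_to_kmsg },
        { "mount_proc",  mount_proc  },
        { "mount_sys",   mount_sys   },
};

int main(void) {
        /* The whole early-boot sequence is visible in one table,
         * easy to customize and to trace when a step fails. */
        for (size_t i = 0; i < sizeof(cold_boot)/sizeof(cold_boot[0]); i++)
                if (cold_boot[i].fn() < 0)
                        fprintf(stderr, "cold boot step %s failed\n",
                                cold_boot[i].name);
        return 0;
}
"""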

6. Tizen applies service reordering for fast boot.
The boot procedure is divided into several stages; the first stage
starts up a user-interactive environment as soon as possible. They
use paths to signal that a stage is finished.

BTW, here the path unit is a bit similar to the property system on Android:
* systemd: when a path exists, the depending services fire.
* Android: when a property is set to the expected value, the depending
services fire.

I wonder whether a property system also makes sense in systemd's world?



-- 
1. A readahead method employing a special device-mapper target:
https://www.google.com/patents/CN102520884A
2. https://wiki.tizen.org/w/images/2/28/Bootchart-fastboot.pdf


Regards,

- cee1


Re: [systemd-devel] Why we need to read/save random seed?

2015-06-17 Thread cee1
2015-06-17 23:38 GMT+08:00 Reindl Harald :
>
>
> Am 17.06.2015 um 17:08 schrieb cee1:
>>
>> 2015-06-17 22:03 GMT+08:00 Lennart Poettering :
>>>
>>> On Wed, 17.06.15 20:21, cee1 (fykc...@gmail.com) wrote:
>>>>
>>>>
>>>> What I mean is:
>>>> 1. Load a saved seed into /dev/urandom.
>>>> 2. The service reads /dev/random, which will block until the kernel
>>>> thinks there's enough entropy - then the random number should be good?
>>>> 3. Save the random number returned in step 2 on disk.
>>>
>>>
>>> Blocking at boot for this doesn't really sound like an option. But the
>>> kernel does not provide us with any nice notifications about when the
>>> RNG pool is complete. If we want to do this kind of polishing, then
>>> that'd be great, but we'd need sane notifiers for that, blocking
>>> syscalls are not an option.
>>
>>
>> That doesn't mean blocking boot, but a service, let's say
>> systemd-random-seed.service:
>> 1. systemd-random-seed.service loads a seed from disk into /dev/urandom
>> 2. systemd-random-seed.service tells systemd "I'm ready" (sd_notify())
>> 3. Instead of quitting immediately, systemd-random-seed.service tries
>> to read /dev/random, and it blocks ...
>> 4. systemd-random-seed.service at last gets a 'good random number',
>> and saves it on disk
>
>
> * the purpose of systemd-random-seed.service is to seed
>   /dev/random early at boot so that other services like
>   sshd, vpn, webservers have a random source

First, it seeds /dev/urandom.
Second, seeding /dev/random will not increase the entropy estimate
without using the ioctl (please see
https://www.mail-archive.com/systemd-devel@lists.freedesktop.org/msg32555.html).

Still, some other services may read /dev/random, and the suggested
logic may exhaust what little entropy there is, hence blocking "those
other services"?

We may use getrandom() (as mentioned in http://www.2uo.de/myths-about-urandom):
"""
This syscall does the right thing: blocking until it has gathered
enough initial entropy, and never blocking after that point.
"""



-- 
Regards,

- cee1


Re: [systemd-devel] Why we need to read/save random seed?

2015-06-17 Thread cee1
2015-06-17 23:15 GMT+08:00 Lennart Poettering :
>> That doesn't mean blocking boot, but a service, let's say
>> systemd-random-seed.service:
>> 1. systemd-random-seed.service loads a seed from disk into /dev/urandom
>> 2. systemd-random-seed.service tells systemd "I'm ready" (sd_notify())
>> 3. Instead of quitting immediately, systemd-random-seed.service tries
>> to read /dev/random, and it blocks ...
>> 4. systemd-random-seed.service at last gets a 'good random number',
>> and saves it on disk
>
> i'd be willing to take a patch for such a change.

The type of this systemd-random-seed.service should be "notify", right?



-- 
Regards,

- cee1


Re: [systemd-devel] Why we need to read/save random seed?

2015-06-17 Thread cee1
2015-06-17 22:03 GMT+08:00 Lennart Poettering :
> On Wed, 17.06.15 20:21, cee1 (fykc...@gmail.com) wrote:
>>
>> What I mean is:
>> 1. Load a saved seed into /dev/urandom.
>> 2. The service reads /dev/random, which will block until the kernel
>> thinks there's enough entropy - then the random number should be good?
>> 3. Save the random number returned in step 2 on disk.
>
> Blocking at boot for this doesn't really sound like an option. But the
> kernel does not provide us with any nice notifications about when the
> RNG pool is complete. If we want to do this kind of polishing, then
> that'd be great, but we'd need sane notifiers for that, blocking
> syscalls are not an option.

That doesn't mean blocking boot, but a service, let's say
systemd-random-seed.service:
1. systemd-random-seed.service loads a seed from disk into /dev/urandom
2. systemd-random-seed.service tells systemd "I'm ready" (sd_notify())
3. Instead of quitting immediately, systemd-random-seed.service tries
to read /dev/random, and it blocks ...
4. systemd-random-seed.service at last gets a 'good random number',
and saves it on disk
(a rough sketch of such a service follows the quote below)

This saves a seed as early as possible, as suggested in the article
http://www.2uo.de/myths-about-urandom/:
"""
On Linux it isn't too bad, because Linux distributions save some
random numbers when booting up the system (but after they have
gathered some entropy, since the startup script doesn't run
immediately after switching on the machine) into a seed file that is
read next time the machine is booting.

Obviously that isn't as good as if you let the shutdown scripts write
out the seed, because in that case there would have been much more
time to gather entropy. The advantage is obviously that this does not
depend on a proper shutdown with execution of the shutdown scripts (in
case the computer crashes, for example).
"""



-- 
Regards,

- cee1


Re: [systemd-devel] Why we need to read/save random seed?

2015-06-17 Thread cee1
2015-06-17 16:40 GMT+08:00 Reindl Harald :
>
> Am 17.06.2015 um 05:06 schrieb cee1:
>>
>> 2015-06-16 0:21 GMT+08:00 Lennart Poettering :
>>>
>>> On Mon, 15.06.15 23:33, cee1 (fykc...@gmail.com) wrote:
>>>>
>>>> Hi,
>>>>
>>>> I maybe got confused.
>>>>
>>>> First, systemd-random-seed.service will save a "seed" from
>>>> /dev/urandom when shutdown, and load that "seed" to /dev/urandom when
>>>> next boot up.
>>>>
>>>> My questions are:
>>>> 1. Can we not save a seed, but load a seed that is read from **
>>>> /dev/random ** to ** /dev/urandom **?
>>>
>>>
>>> The seed is used for both. Then you'd feed the stuff you got from the
>>> RNG back into the RNG which is a pointless excercise.
>>
>>
>> systemd-random-seed.service loads the "seed on disk" into
>> /dev/urandom, and saves a "seed" to disk at shutdown, right?
>>
>> The article at http://www.2uo.de/myths-about-urandom/ suggests
>> saving the seed as soon as there is enough entropy (meaning read from
>> /dev/random? if it returns, there's enough entropy),
>
>
> well, so you read the seed and inject it into /dev/random, followed by
> reading /dev/random and overwriting the seed for the next boot - doesn't
> sound that good

What I mean is:
1. Load a saved seed into /dev/urandom.
2. The service reads /dev/random, which will block until the kernel
thinks there's enough entropy - then the random number should be good?
3. Save the random number returned in step 2 on disk.



-- 
Regards,

- cee1


Re: [systemd-devel] Why we need to read/save random seed?

2015-06-16 Thread cee1
2015-06-16 0:21 GMT+08:00 Lennart Poettering :
> On Mon, 15.06.15 23:33, cee1 (fykc...@gmail.com) wrote:
>
>> Hi,
>>
>> Maybe I got confused.
>>
>> First, systemd-random-seed.service saves a "seed" from
>> /dev/urandom at shutdown, and loads that "seed" into /dev/urandom on
>> the next boot-up.
>>
>> My questions are:
>> 1. Can we not save a seed, but instead load a seed that is read from
>> ** /dev/random ** into ** /dev/urandom **?
>
> The seed is used for both. Then you'd feed the stuff you got from the
> RNG back into the RNG which is a pointless excercise.

systemd-random-seed.service loads the "seed on disk" into
/dev/urandom, and saves a "seed" to disk at shutdown, right?

The article at http://www.2uo.de/myths-about-urandom/ suggests
saving the seed as soon as there is enough entropy (meaning read from
/dev/random? if it returns, there's enough entropy).

Saving the seed early makes it more tolerant of system crashes -
i.e. of not shutting down properly (which may be the case on some
mobile devices such as STBs).



-- 
Regards,

- cee1


Re: [systemd-devel] Why we need to read/save random seed?

2015-06-15 Thread cee1
Hi,

Maybe I got confused.

First, systemd-random-seed.service saves a "seed" from /dev/urandom at
shutdown, and loads that "seed" into /dev/urandom on the next boot-up.

My questions are:
1. Can we not save a seed, but instead load a seed that is read from
** /dev/random ** into ** /dev/urandom **?
2. If the seed is saved on disk and someone reads its content later,
will this make "urandom" predictable?

Talking about /dev/random: it consumes an internal entropy pool, and
some system events (disk reads/page faults, etc.) enlarge this pool,
am I right?

And a write to /dev/random will mix the input data into the pool, but
not enlarge it, right? What benefit is there in only mixing in data
without enlarging the entropy pool?

3.16+ will mix in data from a HWRNG; does it also enlarge the entropy pool?


2015-06-15 8:40 GMT+08:00 Dax Kelson :
>
> On Jun 14, 2015 10:11 AM, "Cristian Rodríguez"
>  wrote:
>>
>> On Sun, Jun 14, 2015 at 1:43 PM, Greg KH 
>> wrote:
>> > On Sun, Jun 14, 2015 at 12:49:55PM -0300, Cristian Rodríguez wrote:
>>
>>
>> Last time I checked, it required this userspace help even when the
>> machine has rdrand/rdseed or when a virtual machine is fed from the
>> host using the virtio-rng driver (it may take up to 60 seconds to
>> report "random: nonblocking pool is initialized"). Any other possible
>> solution that I imagined involves either blocking and/or changes in
>> the behaviour visible to userspace, and that is probably
>> unacceptable.
>
> I added the following text to Wikipedia's /dev/random page.
>
> "With Linux kernel 3.16 and newer, the kernel itself mixes data from
> hardware random number generators into/dev/random on a sliding scale based
> on the definable entropy estimation quality of the HWRNG. This means that no
> userspace daemon, such as rngd from rng-toolsis needed to do that job. With
> Linux kernel 3.17+, the VirtIO RNG was modified to have a default quality
> defined above 0, and as such, is currently the only HWRNG mixed into
> /dev/random by default."
>



-- 
Regards,

- cee1


[systemd-devel] Why we need to read/save random seed?

2015-06-14 Thread cee1
Hi all,

Why do we need to read/save the random seed? Can it be read from /dev/random each time?



-- 
Regards,

- cee1


[systemd-devel] Improve boot-time of systemd-based device, revisited

2015-06-14 Thread cee1
Hi all,

I've recently got another chance to improve the boot-time of a
systemd-based device. I'd like to share the experience here, and some
thoughts and questions.

The first time I tried to improve the boot-time of systemd:
http://lists.freedesktop.org/archives/systemd-devel/2011-March/001707.html,
after that, we have systemd-bootchart and systemd-analyze, which help
a lot.

It seems the biggest challenge in reducing the boot-time of the ARM
board at hand is taking care of the poor I/O performance:
* A single fgets() call may randomly take 200-300ms
* A (big) service may spend 2-3s completing its .so loading - with
only ~100ms spent on CPU.

I first tried to delay the less important services, to save I/O
bandwidth in the early stage, and to raise the priority of important
services to SCHED_RR/IOPRIO_CLASS_RT (see the sketch below):
1. I need to find the "top I/O-hungry" processes (and then delay them
if not important), but it's not straightforward to figure this out in
bootchart, so adding an *** iotop feature *** to bootchart seems very
useful.

2. I think raising the CPU scheduling priority works because it
reduces the chances of other processes issuing I/O requests. Some
thoughts:
* The priority feature of the I/O scheduler (CFQ) seems not to work
very well - IDLE I/O can still slow down Normal/RT I/O [1]
* I don't know the details of CFQ, but I wonder whether a "rate limit"
would help - it might reduce the latency between issuing an I/O
command and fulfilling it?
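A sketch of the priority raising mentioned above (SCHED_RR for CPU,
IOPRIO_CLASS_RT for I/O; ioprio_set has no glibc wrapper, so it goes
through syscall(2), and the IOPRIO_* constants are copied from the
kernel headers):
"""
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#define IOPRIO_CLASS_RT         1
#define IOPRIO_WHO_PROCESS      1
#define IOPRIO_CLASS_SHIFT      13
#define IOPRIO_PRIO_VALUE(c, d) (((c) << IOPRIO_CLASS_SHIFT) | (d))

int main(void) {
        struct sched_param sp = { .sched_priority = 1 };

        /* CPU: real-time round-robin scheduling. */
        if (sched_setscheduler(0, SCHED_RR, &sp) < 0)
                perror("sched_setscheduler");

        /* I/O: real-time class, highest level (0). */
        if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_RT, 0)) < 0)
                perror("ioprio_set");

        /* ... exec the important service here ... */
        return 0;
}
"""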

Last, I tried some readahead (ureadahead), but it did not do the
magic; I guess that's because I/O is busy in the early stage, so
there's simply no chance to read "ahead".
What would help readahead, IMHO, is a snapshot of the disk blocks
accessed during boot-up, in the order they're requested. A linear
readahead against that snapshot would then always read ahead of the
actually requested blocks.

BTW, systemd-bootchart has an option to chart entropy; how is entropy
involved in the boot-up procedure?



---
1. 
http://linux-kernel.vger.kernel.narkive.com/0FC8rduf/ioprio-set-idle-class-doesn-t-work-as-its-name-suggests


Regards,

- cee1


Re: [systemd-devel] In what case will debugfs be mounted multi-times?

2015-06-13 Thread cee1
2015-06-09 18:10 GMT+08:00 Lennart Poettering :
> On Thu, 04.06.15 23:41, cee1 (fykc...@gmail.com) wrote:
>> So why the Debug File System is mounted multi-times here? Any idea?
>
> Hmm, my suspicion is that the file system might actually already be
> mounted by the kernel the second time we look at it, but systemd is
> doesn't notice that or so.
>
> Is it possible that your kernel has been built without
> name_to_handle_at() or so?
Yes, it returns ENOSYS...

The strange thing is that this only happens when a service (with
DefaultDependencies=no) fails to start.



-- 
Regards,

- cee1


[systemd-devel] In what case will debugfs be mounted multi-times?

2015-06-04 Thread cee1
Hi all,

I'm running systemd v219 on an ARM board, and found the following
suspicious log messages:
"""
Jan 01 08:00:01 localhost unknown: c3 1 (systemd) systemd[1]: Mounting Debug File System...

Jan 01 08:00:01 localhost unknown: c3 1 (systemd) systemd[1]: Starting Remount Root and Kernel File Systems...

Jan 01 08:00:01 localhost unknown: c3 1 (systemd) systemd[1]: Starting Foo Service...

Jan 01 08:00:01 localhost unknown: c3 1 (systemd) systemd[1]: Mounted Debug File System.

Jan 01 08:00:01 localhost systemd[1]: foo.service: main process exited, code=exited, status=127/n/a

Jan 01 08:00:02 localhost systemd[1]: foo.service holdoff time over, scheduling restart.

Jan 01 08:00:02 localhost systemd[1]: Started Remount Root and Kernel File Systems.

Jan 01 08:00:02 localhost systemd[1]: Reached target Local File Systems (Pre).

Jan 01 08:00:02 localhost systemd[1]: Starting Local File Systems (Pre).

Jan 01 08:00:02 localhost systemd[1]: sys-kernel-debug.mount: Directory /sys/kernel/debug to mount over is not empty, mounting anyway.

Jan 01 08:00:02 localhost systemd[1]: Mounting Debug File System...

Jan 01 08:00:02 localhost systemd[1]: sys-kernel-debug.mount mount process exited, code=exited status=32

Jan 01 08:00:02 localhost systemd[1]: Failed to mount Debug File System.
"""


foo.service is a service with DefaultDependencies=no, and with
Conflicts= and Before= on shutdown.target.

So why is the Debug File System mounted multiple times here? Any idea?



-- 
Regards,

- cee1


[systemd-devel] How many times is the root mounted in boot up?

2015-06-01 Thread cee1
Hi all,

In the case of no initrd and mounting the root by specifying
"root=/dev/sdaN" on the kernel command line, how many times is the
root mounted by systemd?

I find:
1. systemd will generate a "-.mount" unit from /proc/self/mountinfo
2. systemd will generate a "-.mount" unit via systemd-fstab-generator

Q:
* Which one takes priority?
* For 1, it will not perform the mount action, but 2 will. Am I right?
If so, why do we mount the root here (again)?

And systemd-remount-fs.service will remount the root again, thus
applying the options in fstab?

BTW, where are the units generated by generators placed?



-- 
Regards,

- cee1


Re: [systemd-devel] What does udevd do

2015-05-30 Thread cee1
2015-05-31 5:52 GMT+08:00 Lennart Poettering :
>> Then udevd is only responsible for
>> 1) Making nodes, and doing other device initialization stuffs.
>
> udev does not create device nodes. That's the job of devtmpfs in the
> kernel. udev will however apply access modes/acls/fix ownership create
> symlinks and so on.

Does that mean ** all ** dev nodes are there when devtmpfs is mounted?
I thought devtmpfs only made a limited number of nodes for early
boot-up.



-- 
Regards,

- cee1


[systemd-devel] What does udevd do

2015-05-30 Thread cee1
Hi all,

If a service wants to be notified when a device is plugged in, it
invokes libudev routines, which actually:
* receive notifications from the kernel via a NETLINK socket;
* query the detailed info from /sys/...

Am I right?
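A minimal libudev monitor sketch illustrating that flow (subscribe to
udev events over netlink, then query device details per event; link
with -ludev):
"""
#include <poll.h>
#include <stdio.h>
#include <libudev.h>

int main(void) {
        struct udev *udev = udev_new();
        struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");
        struct pollfd pfd;

        udev_monitor_enable_receiving(mon);
        pfd.fd = udev_monitor_get_fd(mon);
        pfd.events = POLLIN;

        for (;;) {
                /* Wait for the next device event on the netlink socket. */
                if (poll(&pfd, 1, -1) <= 0)
                        continue;

                struct udev_device *dev = udev_monitor_receive_device(mon);
                if (!dev)
                        continue;

                /* The detailed info comes from sysfs. */
                printf("%s: %s\n",
                       udev_device_get_action(dev),
                       udev_device_get_syspath(dev));
                udev_device_unref(dev);
        }
}
"""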

Then udevd is only responsible for:
1) making nodes, and doing other device initialization stuff;
2) notifying systemd, letting systemd start the related daemons.

Is that right?



-- 
Regards,

- cee1


[systemd-devel] Questions about socket activated services

2015-05-30 Thread cee1
Hi all,

Which service type should a socket-activated service be?
1. systemd-udevd.service and systemd-journald.service are of the notify type
2. dbus.service is of the simple type

Does socket activation handle the timeout case?
E.g. A.service connects to B.socket, but B.service takes a long time
to become ready - may that cause A.service to receive ETIMEDOUT?

When the service is activated, systemd will still listen on the socket
but do nothing with incoming data, right?

BTW, netstat -lp shows only systemd listening on a socket, but not the
one who is also listening on it, e.g.
"""
unix  2  [ ACC ] SEQPACKET  LISTENING  326  1/systemd  /run/udev/control
"""
Curious why?



-- 
Regards,

- cee1


Re: [systemd-devel] Reduce unit-loading time

2015-05-24 Thread cee1
2015-05-20 1:01 GMT+08:00 Martin Pitt :
> Hey cee1,
>
> cee1 [2015-05-18 23:52 +0800]:
>> At the first glance, I find ureadahead has some difference compared
>> with the readahead once in systemd, IIRC:
>
> Yes, for sure. systemd's was improved quite a bit. ureadahead is
> mostly unmaintained, but it works well enough so we didn't bother to
> put work into it until someone actually complains :-)
>
>> 1. ureadahead.service is in default.target, which means ureadahead
>> starts later than systemd's?
>
> That mostly means that it's not started with e. g. emergency or
> rescue. It still starts early (DefaultDependencies=false).
>
>> 2. The original systemd readahead has "collect" and "replay" two
>> services, and these are done in ureadahead.service?
>
> Yes.
>
>> 3. IIRC, The original systemd readahead uses madvise(); ureadahead
>> uses readahead()
>> 4. The original systemd readahead uses fanotify() to get the list of
>> accessed files; ureadahead use fsnotify
>
> I haven't verified these, but this sounds correct. ureadahead is
> really old, presumably the newer features like fanotify weren't
> available back then.

I tried ureadahead, but got the following error:

"""write(2, "ureadahead: Error while tracing:"..., 59ureadahead: Error
while tracing: No such file or directory"""

Does it need an out-of-tree kernel patch?



-- 
Regards,

- cee1


Re: [systemd-devel] Problem when m->finish_timestamp is set before running manager_loop

2015-05-23 Thread cee1
2015-05-22 3:36 GMT+08:00 Lennart Poettering :
>
> Should be fixed in git. Please verify!

Confirmed, thanks!



-- 
Regards,

- cee1


Re: [systemd-devel] fsckd needs to go

2015-05-22 Thread cee1
2015-05-22 20:23 GMT+08:00 Martin Pitt :
> Hello Lennart,
>
> sorry for the late answer, got stuck in different things in the past
> two weeks..
>
> Lennart Poettering [2015-04-28 17:33 +0200]:
>> On Fri, 03.04.15 14:58, Lennart Poettering (lenn...@poettering.net) wrote:
>>
>> > systemd-fsckd would try to connect to some AF_UNIX/SOCK_STREAM socket
>> > in the fs, after forking and before execing fsck in the child, and
>> > pass the connected socket to fsck via the -C switch. If the socket is
>> > not connectable it would avoid any -C switch. With this simple change
>> > you can make this work for you: simply write a daemon (outside of
>> > systemd) that listens on that sockets and reads the progress data from
>> > it. Using SO_PEERCRED you can query which fsck PID this is from and
>> > use it to kill it. You could even add this to ply natively if you
>> > wish, since it's kinda strange to bump this all off another daemon in
>> > the middle, unnecessarily.
>>
>> I implemented this now, and removed fsckd in the progress. The
>> progress data is now available on /run/systemd/fsck.progress which
>> should be an AF_UNIX/SOCK_STREAM socket.
>
> Great, thanks! This works fine, it's very similar to what Didier did
> before. I. e. fsckd essentially works almost unmodified (except for
> adjusting the socket path).
>
> So we'll maintain that patch downstream now. It makes maintaining
> translations harder, but so be it.
>
>> Please test this, I only did some artifical testing myself, since I
>> don't use file systems that require fsck anymore myself.
>
> Neither do I, but there's always test/mocks/fsck which works very
> nicely.
>
> Thanks,
>
> Martin
>
> --
> Martin Pitt| http://www.piware.de
> Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)

Hey,

Just to mention it: we implemented a similar fsck progress report in
Loonux3[1] several years ago.

FYI:
* http://lists.freedesktop.org/archives/systemd-devel/2011-June/002654.html
* patch for systemd:
https://github.com/cee1/systemd/commit/c04c709880f0619434ff58580609300d892f281b
* patch for plymouth:
https://github.com/cee1/plymouth/commit/5be1bb7751b547fe5c125a42c3f2fe607568fa0f



--
1. http://dev.lemote.com/category/loonux3



Regards,

- cee1


[systemd-devel] Idea of splitting boot up logic from core

2015-05-20 Thread cee1
Hi all,

It seems there are too many branches in core:
* system mode on the host
* system mode in a container
* user mode

IMHO, the central concept of systemd is "units", hence it seems more
sensible to keep the unit-related logic in core, and to split the
boot-up logic into a separate "manager" for each of:
1. system mode on the host
2. system mode in a container
3. user mode

This would make the boot-up procedure ** more declarative ** (no or
fewer branches; think of Android's init.rc)

And the central unit logic would only be for:
1. Tracking the dependencies among units
2. Scheduling the launch of units: constraining them to meet the
dependencies, and *** making the best scheduling decisions ***, e.g.
if A is a unit many others depend on, do not start A together with too
many units in parallel



-- 
Regards,

- cee1


Re: [systemd-devel] Problem when m->finish_timestamp is set before running manager_loop

2015-05-19 Thread cee1
2015-05-20 1:27 GMT+08:00 Lennart Poettering :
> Hmm, can you provide a backtrace of the call chain when this happens,
> please?

The call chain is:
0xb6df1acc : 0xe1a07000
0xb6e9f744 : 0xe3a0
0xb6df4bb0 : 0xe59a0004
0xb6ded8e8 : 0xe2509000
0xb6de6080 : 0xe371

Related logs:
"""
Job mnt-data.mount/stop finished, result=canceled
"""


>
> I have now commited a patch to git, that might fix the issue, but I am
> not entirely sure, given the little information I have:
>
> http://cgit.freedesktop.org/systemd/systemd/commit/?id=aad1976ffa25fa6901f72c300b5980ada0ef44c5
>
> Would be cool if you could check if this patch already fixes the issue
> for you.

It doesn't work :-(



-- 
Regards,

- cee1


[systemd-devel] Problem when m->finish_timestamp is set before running manager_loop

2015-05-18 Thread cee1
Hi all,

I found a "Startup finished in 155ms (userspace) = 155ms"(
which is of course incorrect) log on the board at hand, which is
caused by something likes:

"Job cache.mount/stop finished, result=canceled"

Following the code, I find m->finish_timestamp is set in
manager_check_finished(), which is in turn invoked in
job_finish_and_invalidate() -- All these happens before the
manager_loop running.


-- 
Regards,

- cee1


Re: [systemd-devel] Reduce unit-loading time

2015-05-18 Thread cee1
Hi Martin,

At first glance, I find ureadahead has some differences compared with
the readahead once shipped in systemd, IIRC:

1. ureadahead.service is in default.target, which means ureadahead
starts later than systemd's did?
2. The original systemd readahead had separate "collect" and "replay"
services; in ureadahead both are done in ureadahead.service?
3. IIRC, the original systemd readahead used madvise(); ureadahead
uses readahead()
4. The original systemd readahead used fanotify() to get the list of
accessed files; ureadahead uses fsnotify
5. ureadahead has different readahead strategies for SSDs and HDDs:
5.1 For the former, it initiates multiple threads to perform
readahead, running at the lowest I/O priority.
5.2 For the latter, it performs readahead for both inodes and file
contents at a very high CPU/I/O priority (and only supports extN
filesystems?)


2015-05-18 18:40 GMT+08:00 Martin Pitt :
> Hello cee1,
>
> cee1 [2015-05-18 18:24 +0800]:
>> Does the readahead-*.service shipped with systemd work for you?
>
> systemd dropped the builtin readahead in 217. It's reasonably easy to
> get back by reverting the "drop readahead" patches, but carrying that
> patch in packages is fairly intrusive. In Ubuntu we've had
> "ureadahead" [1] for many years which is "good enough" for things like
> phones or other ARM boards with slow MMC storage, so I just added
> systemd units to that. It's a separate project so that we don't need
> to install ureadahead everywhere, just where/when we actually need it.
>
> Martin
>
> [1] https://launchpad.net/ubuntu/+source/ureadahead
> --
> Martin Pitt| http://www.piware.de
> Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)



-- 
Regards,

- cee1


Re: [systemd-devel] Reduce unit-loading time

2015-05-18 Thread cee1
2015-05-17 17:45 GMT+08:00 Martin Pitt :
> Hello cee,
>
> cee1 [2015-05-16  0:46 +0800]:
>> Thanks for the suggestion, it was other processes running in parallel
>> which presumably consuming lots of IO, after sending SIGSTOP at the
>> first (and SIGCONT later), the unit loading time is decreased to
>> ~100ms.
>
> You probably want to use some readahead solution. We found that it
> makes a significant improvement on ARM boards with slow MMC cards.
>
> Martin
> --
> Martin Pitt| http://www.piware.de
> Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)

Hey,

Thanks for the suggestion. IIRC, sequential reads are also beneficial
for flash storage.

Does the readahead-*.service shipped with systemd work for you?


BTW, some suggestions and questions :)

Suggestion:
I use the following command to figure out why my service is scheduled
at that particular time:
"systemctl list-dependencies --after target.service"
and would expect it to also output the "timing info (unit starting
time and started time, etc.)" of the dependent units.

Question:
How does systemd schedule two services that can be launched in parallel?

I found that, given services A and B, if B specifies "Before=A", B
will start first; otherwise, B will start at a very late time.


-- 
Regards,

- cee1


Re: [systemd-devel] Reduce unit-loading time

2015-05-15 Thread cee1
Hey,

Thanks for the suggestion. It was other processes running in parallel,
presumably consuming lots of I/O; after sending them SIGSTOP at first
(and SIGCONT later), the unit-loading time decreased to ~100ms.


2015-05-13 19:38 GMT+08:00 Hoyer, Marko (ADITG/SW2) :
> Hi,
>
>> -Original Message-
>> From: systemd-devel [mailto:systemd-devel-
>> boun...@lists.freedesktop.org] On Behalf Of cee1
>> Sent: Wednesday, May 13, 2015 11:52 AM
>> To: systemd Mailing List
>> Subject: [systemd-devel] Reduce unit-loading time
>>
>> Hi all,
>>
>> We're trying systemd to boot up an ARM board, and find systemd uses
>> more than one second to load units.
>
> This sounds a fair bit too long to me. Our systemd comes up in an
> ARM-based system in about 200-300ms, from executing init up to the
> first unit being executed.
>
>>
>> Comparing with the init of Android on the same board, it manages to
>> boot the system very fast.
>>
>> We guess following factors are involved:
>> 1. systemd has a much bigger footprint than the init of Android: the
>> latter is static linked, and is about 1xxKB (systemd is about 1.4MB,
>> and is linked with libc/libcap/libpthread, etc)
>>
>> 2. systemd spends quiet a while to read/parse unit files.
>
> This depends on the number of units involved in the startup (finally
> connected in the dependency tree that ends in the default.target
> root). In our case, a comparably large unit set takes about 40-60ms -
> not so long, I'd say.
>
>>
>>
>> Any idea to reduce the unit-loading time?
>> e.g. one-single file contains all units descriptions, or a "compiled
>> version"(containing resolved dependencies, or even the boot up
>> sequence)
>
> Could you provide me some additional information about your system setup?
>
> - Version of systemd
> - Are you starting something in parallel to systemd which might take 
> significant IO?
> - Version of the kernel
> - The kernel ticker frequency
> - The enabled cgroups controllers
>
> The last three points might sound a bit far away, but I've an idea in mind ...
>
> Best regards
>
> Marko Hoyer
> Software Group II (ADITG/SW2)
>
> Tel. +49 5121 49 6948
>



-- 
Regards,

- cee1


[systemd-devel] Reduce unit-loading time

2015-05-13 Thread cee1
Hi all,

We're trying systemd to boot up an ARM board, and find that systemd
uses more than one second to load units.

Compare this with Android's init on the same board, which manages to
boot the system very fast.

We guess the following factors are involved:
1. systemd has a much bigger footprint than Android's init: the latter
is statically linked and is about 1xxKB (systemd is about 1.4MB, and
is linked against libc/libcap/libpthread, etc.)

2. systemd spends quite a while reading/parsing unit files.


Any ideas for reducing the unit-loading time?
E.g. a single file containing all unit descriptions, or a "compiled
version" (containing resolved dependencies, or even the boot-up
sequence)



-- 
Regards,

- cee1


[systemd-devel] Callback of sd_bus_track: when it will be invoked

2015-01-29 Thread cee1
Hi all,

I notice that in sd_bus_track_new() a callback can be specified, but
when will it be invoked?

It seems it is not triggered when a name in the track object is removed.


-- 
Regards,

- cee1


Re: [systemd-devel] [PATCH] util.c: ignore pollfd.revent for loop_read/loop_write

2013-09-28 Thread cee1
2013/9/26 Zbigniew Jędrzejewski-Szmek :
> On Sun, Sep 22, 2013 at 09:10:47PM +0800, cee1 wrote:
>> Let read()/write() report any error/EOF.
> This look OK, but can you provide a bit of motivation?
It's a re-sent patch; the original thread is at
http://lists.freedesktop.org/archives/systemd-devel/2013-September/013092.html



-- 
Regards,

- cee1


[systemd-devel] [PATCH] util.c: ignore pollfd.revent for loop_read/loop_write

2013-09-22 Thread cee1
Let read()/write() report any error/EOF.
---
 src/shared/util.c | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/src/shared/util.c b/src/shared/util.c
index 2009553..3c08650 100644
--- a/src/shared/util.c
+++ b/src/shared/util.c
@@ -2186,8 +2186,10 @@ ssize_t loop_read(int fd, void *buf, size_t nbytes, bool do_poll) {
                                 return n > 0 ? n : -errno;
                         }
 
-                        if (pollfd.revents != POLLIN)
-                                return n > 0 ? n : -EIO;
+                        /* We knowingly ignore the revents value here,
+                         * and expect that any error/EOF is reported
+                         * via read()/write()
+                         */
 
                         continue;
                 }
@@ -2234,8 +2236,10 @@ ssize_t loop_write(int fd, const void *buf, size_t nbytes, bool do_poll) {
                                 return n > 0 ? n : -errno;
                         }
 
-                        if (pollfd.revents != POLLOUT)
-                                return n > 0 ? n : -EIO;
+                        /* We knowingly ignore the revents value here,
+                         * and expect that any error/EOF is reported
+                         * via read()/write()
+                         */
 
                         continue;
                 }
-- 
1.8.3.1


Re: [systemd-devel] Some thoughts about loop_read/loop_write in util.c

2013-09-12 Thread cee1
2013/9/12 Lennart Poettering :
> On Thu, 12.09.13 09:43, cee1 (fykc...@gmail.com) wrote:
>
>> What about the following patch? It simply do read/write again if poll
>> returns, and let read/write report error if something is wrong.
>
> I guess that patch makes sense, but could you change it to not just
> comment but delete the old lines? Also, could you add a comment there:
>
> /* We knowingly ignore the revents value here, and expect that any
>error/EOF is reported via read()/write() */
OK, see the attachment.


-- 
Regards,

- cee1


0001-util.c-ignore-pollfd.revent-for-loop_read-loop_write.patch
Description: Binary data
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Some thoughts about loop_read/loop_write in util.c

2013-09-11 Thread cee1
2013/9/11 Lennart Poettering :
>> loop_read/loop_write:
>> http://cgit.freedesktop.org/systemd/systemd/tree/src/shared/util.c#n2179
>>
>> In a scenario of pipes, loop_read on read side, if the write side is
>> closed, loop_read will return 0 if do_poll is false(let's assume no
>> data available to read). When do_poll is true, it will return:
>> 1) 0, if write side is closed while loop_read is just doing a read
>> 2) or -EIO when poll returns pollfd.revent with POLLHUP flag set
>>
>> The behavior is not very consistent.
>> IMHO, it's preferred loop_read follows read behavior as much as
>> possible -- returns 0 to indicate end of a file here, e.g. We can try
>> to read 0 bytes when pollfd.revents != POLLIN.
>
> EOF and temporarily not being able to read more data is something very
> different.
>
> It might make sense to return EAGAIN if POLLHUP is set though.
>
> (But note that POLLHUP has more complex semantics when combined with
> shutdown() and half-open connections...)
>
>> The same with loop_write.
>
> EOF doesn't exist for loop_write(), so this is even weirder
Sorry for not making myself clear; I meant EPIPE.

What about the following patch? It simply does the read/write again
when poll returns, and lets read()/write() report the error if
something is wrong.

From 3b83e839ebfc161565db76ce8d0e1dd4da1b0afc Mon Sep 17 00:00:00 2001
From: Chen Jie 
Date: Thu, 12 Sep 2013 09:21:41 +0800
Subject: [PATCH] util.c: not check pollfd.revent for loop_read/loop_write

If an error occurred, let read/write report it.
---
 src/shared/util.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/src/shared/util.c b/src/shared/util.c
index 1dde8af..e08ec44 100644
--- a/src/shared/util.c
+++ b/src/shared/util.c
@@ -2206,8 +2206,10 @@ ssize_t loop_read(int fd, void *buf, size_t nbytes, bool do_poll) {
                                 return n > 0 ? n : -errno;
                         }
 
+/*
                         if (pollfd.revents != POLLIN)
                                 return n > 0 ? n : -EIO;
+ */
 
                         continue;
                 }
@@ -2254,8 +2256,10 @@ ssize_t loop_write(int fd, const void *buf, size_t nbytes, bool do_poll) {
                                 return n > 0 ? n : -errno;
                         }
 
+/*
                         if (pollfd.revents != POLLOUT)
                                 return n > 0 ? n : -EIO;
+ */
 
                         continue;
                 }


-- 
Regards,

- cee1


[systemd-devel] Some thoughts about loop_read/loop_write in util.c

2013-09-11 Thread cee1
Hi all,

loop_read/loop_write:
http://cgit.freedesktop.org/systemd/systemd/tree/src/shared/util.c#n2179

In a pipe scenario, with loop_read on the read side: if the write side
is closed, loop_read will return 0 when do_poll is false (let's assume
no data is available to read). When do_poll is true, it will return:
1) 0, if the write side is closed while loop_read is just doing a read;
2) or -EIO, when poll returns pollfd.revents with the POLLHUP flag set.

The behavior is not very consistent.
IMHO, it's preferable that loop_read follow read()'s behavior as much
as possible -- returning 0 to indicate end of file here; e.g. we can
try to read 0 bytes when pollfd.revents != POLLIN.

The same goes for loop_write. (A small demo of the inconsistency is
below.)
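A self-contained demo of the inconsistency (after the writer closes,
poll() reports POLLHUP on the read side, yet read() itself would still
cleanly return 0 for EOF):
"""
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
        int p[2];
        char buf[16];
        struct pollfd pfd;

        if (pipe(p) < 0)
                return 1;
        close(p[1]);                    /* close the write side */

        pfd.fd = p[0];
        pfd.events = POLLIN;
        poll(&pfd, 1, 0);
        printf("POLLIN=%d POLLHUP=%d\n",
               !!(pfd.revents & POLLIN), !!(pfd.revents & POLLHUP));

        /* read() reports EOF consistently: */
        printf("read() returned %zd\n", read(p[0], buf, sizeof(buf)));
        return 0;
}
"""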


-- 
Regards,

- cee1


Re: [systemd-devel] How to debug crash of systemd

2013-02-13 Thread cee1
On Wednesday, 13 February 2013, Lennart Poettering wrote:

> On Tue, 12.02.13 13:43, cee1 (fykc...@gmail.com ) wrote:
>
> > Hi all,
> >
> > systemd will call crash() if received fatal signals. It will produce a
> > core dump for analysis.
> > However, it seems signal handler has a separated stack, so can't back
> > trace the place where exactly the fatal signal triggered.
>
> Nowadays gdb should be good enough to follow the stack trace further up
> than just the signal handler. Which arch is this? If it's not x86 this
> sounds like something to fix in gdb for that arch. I have multiple times
> used the coredumps and they always worked fine, and where not confused
> by the signal stack...
>

It's on a MIPS machine; it seems I have to figure it out manually.
Thanks for the advice.



Regards,
-- cee1


-- 
Regards,

- cee1


[systemd-devel] How to debug crash of systemd

2013-02-11 Thread cee1
Hi all,

systemd will call crash() if it receives a fatal signal. This will
produce a core dump for analysis.
However, it seems the signal handler has a separate stack, so one
can't backtrace to the place where the fatal signal was actually
triggered.

Any idea?

BTW,
* It would be helpful if it could print /proc/1/maps; more:
https://github.com/cee1/systemd/commit/89d049507734746f6f1100218ca97cc829b05e0a
* Has anyone tried the crash shell? I added a custom sysrq which sends
SEGV to init, hence triggering crash(). According to the log, the
crash shell was called but exited immediately.



-- 
Regards,

- cee1


[systemd-devel] Idea about improving boot-up status report

2013-02-11 Thread cee1
Hi all,

systemd reports "Starting/Started ..." for units during boot-up, but
it doesn't tell users what's happening when the boot-up process blocks
(e.g. waiting for a timeout; my personal story was that the swap
partition changed but I forgot to modify fstab, which caused a hang on
every boot-up).

It would be better if systemd could report in a timely manner which
units are still starting.
E.g. we could limit the maximum number of units starting at a time
(which may also improve boot performance). When blocked, tell plymouth
which units of the current batch have started and which are still
starting.

Any ideas?

-- 
Regards,

- cee1


[systemd-devel] Question about plymouth-quit.service

2011-10-10 Thread cee1
Hi all,

It seems plymouth-quit.service hasn't ever been activated on my Fedora 15:
systemctl status plymouth-quit.service
plymouth-quit.service - Terminate Plymouth Boot Screen
  Loaded: loaded (/lib/systemd/system/plymouth-quit.service)
  Active: inactive (dead)
  CGroup: name=systemd:/system/plymouth-quit.service


Then how is the plymouth boot splash notified to quit?

Also, I noticed prefdm.service has these relations with plymouth-quit.service:
Conflicts=plymouth-quit.service
After=plymouth-quit.service
What do they mean?



-- 
Regards,

- cee1