[SyncEvolution] syncevo-dbus-server + forking processes

2011-11-16 Thread Patrick Ohly
Hello!

Chris is going to work on running syncs inside a process forked from the
main syncevo-dbus-server. Before diving into the planning and
implementation, let me outline the background.

At the moment, syncevo-dbus-server is one process (when ignoring local
sync for a second) which handles D-Bus calls and syncs at the same time.
A sync involves operations which can block for a long time, so keeping
the D-Bus part alive and responsive is challenging. It involves hacks
like entering and leaving the main event loop, checking it inside
libsoup, etc.

Local sync already uses a second forked process so that it can run
client and server engines concurrently.

Getting rid of the current hacks is one motivation for running syncs in
a forked process. The main process would enter the main loop once and
never return unless it was asked to shut down.

The other is that (eventually) it'll allow running a "clean" main process
which never loads any of the backend modules. That may help to keep the
permanent memory footprint smaller than it is today. Right now this is
not possible because merely registering a backend requires loading it.
Fixing that is outside of the current scope.

Resource leaks inside a sync become less problematic when the process
running the sync can shut down.

Here are some requirements for "sync in forked process":
  * should be disabled when running outside of the D-Bus daemon
(syncevolution command line in non-daemon mode, client-test
testing)
  * interactive password requests through D-Bus must continue to
work
  * abort/suspend requests via D-Bus must work
  * the forked process should shut down cleanly, so that "valgrind
--trace-children=yes --leak-check=yes" provides sane results (the
forked process in a local sync currently fails to do that,
because it is forked in a less controlled environment and
therefore just calls _exit())
  * logging:
  * the main syncevo-dbus-server process should use syslog
logging for its own messages, without redirecting
stdout/stderr
  * the forked process should start redirecting
stdout/stderr into the normal logging mechanism (so that
library error messages are visible when running the
syncevolution command line as client of the D-Bus server
and, later, they appear in log files) and switch to a
session log when the session is ready (see the sketch
after this list)
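
As a sketch of that last logging bullet (assumptions: a GLib main loop, and the onOutput handler standing in for the real logging code), the forked process could capture its own stdout/stderr through a pipe it watches itself:

    #include <glib.h>
    #include <unistd.h>
    #include <stdio.h>

    static int origErr = -1; // dup of the real stderr, stand-in for the logger

    // Forward each captured line to the logging code (here: the saved stderr).
    static gboolean onOutput(GIOChannel *source, GIOCondition condition, gpointer data)
    {
        gchar *line = NULL;
        if (g_io_channel_read_line(source, &line, NULL, NULL, NULL) == G_IO_STATUS_NORMAL) {
            dprintf(origErr, "[log] %s", line);
            g_free(line);
            return TRUE;
        }
        return FALSE; // EOF or error: remove the watch
    }

    int main()
    {
        GMainLoop *loop = g_main_loop_new(NULL, FALSE);
        int fds[2];
        if (pipe(fds) != 0)
            return 1;
        origErr = dup(STDERR_FILENO);
        dup2(fds[1], STDOUT_FILENO); // from here on, stdout/stderr feed the pipe
        dup2(fds[1], STDERR_FILENO);
        close(fds[1]);
        GIOChannel *ch = g_io_channel_unix_new(fds[0]);
        g_io_add_watch(ch, (GIOCondition)(G_IO_IN | G_IO_HUP), onOutput, NULL);
        printf("library output ends up in the log\n");
        fflush(stdout);
        g_main_loop_run(loop); // a real daemon decides when to quit
        return 0;
    }

Switching to the session log would then just mean pointing the handler at a different sink once the session is ready.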

Somewhat related to this work is support for push sync. I suspect that
this will also use forked processes to monitor remote and local
databases without interfering with the main process. It would be good if
the mechanism for forking and communication between processes was generic
and not tied to syncing alone.


-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.




Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-11-16 Thread Chris Kühl
On Wed, Nov 16, 2011 at 12:22 PM, Patrick Ohly  wrote:
> Hello!
>
> Chris is going to work on running syncs inside a process forked from the
> main syncevo-dbus-server. Before diving into the planning and
> implementation, let me outline the background.
>
> At the moment, syncevo-dbus-server is one process (when ignoring local
> sync for a second) which handles D-Bus calls and syncs at the same time.
> A sync involves operations which can block for a long time, so keeping
> the D-Bus part alive and responsive is challenging. It involves hacks
> like entering and leaving the main event loop, checking it inside
> libsoup, etc.
>
> Local sync already uses a second forked process so that it can run
> client and server engines concurrently.
>
> Getting rid of the current hacks is one motivation for running syncs in
> a forked process. The main process would enter the main loop once and
> never return unless it was asked to shut down.
>
> The other is that (eventually) it'll allow running a "clean" main process
> which never loads any of the backend modules. That may help to keep the
> permanent memory footprint smaller than it is today. Right now this is
> not possible because merely registering a backend requires loading it.
> Fixing that is outside of the current scope.
>
> Resource leaks inside a sync become less problematic when the process
> running the sync can shut down.
>
> Here are some requirements for "sync in forked process":
>      * should be disabled when running outside of the D-Bus daemon
>        (syncevolution command line in non-daemon mode, client-test
>        testing)
>      * interactive password requests through D-Bus must continue to
>        work
>      * abort/suspend requests via D-Bus must work
>      * the forked process should shut down cleanly, so that "valgrind
>        --trace-children=yes --leak-check=yes" provides sane results (the
>        forked process in a local sync currently fails to do that,
>        because it is forked in a less controlled environment and
>        therefore just calls _exit())
>      * logging:
>              * the main syncevo-dbus-server process should use syslog
>                logging for its own messages, without redirecting
>                stdout/stderr
>              * the forked process should start redirecting
>                stdout/stderr into the normal logging mechanism (so that
>                library error messages are visible when running the
>                syncevolution command line as client of the D-Bus server
>                and, later, they appear in log files) and switch to a
>                session log when the session is ready
>

Thanks for the list of requirements. I'll take these into
consideration. There are some in there that we hadn't talked about before.

> Somewhat related to this work is support for push sync. I suspect that
> this will also use forked processes to monitor remote and local
> databases without interfering with the main process. It would be good if
> the mechanism for forking and communication between processes was generic
> and not tied to syncing alone.
>

I'm not really well informed on the plans for push sync other than
what I found in [1], but the idea that I had in mind was for the
forking process to initiate a d-bus peer-to-peer ("one-to-one" in the
d-bus spec) connection by forking the process with the d-bus address
as a command line argument. The client then uses that for
communication. This seems pretty generic to me. The only difference
would be the d-bus interface used. ...but maybe I'm misunderstanding
what you mean.

[1] https://bugs.meego.com/show_bug.cgi?id=23762
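
To make the proposal concrete, here is a minimal sketch of the child's side using GIO GDBus; the argv convention and the function name are illustrative, not an agreed interface:

    #include <gio/gio.h>

    // Child: argv[1] carries the D-Bus address chosen by the parent.
    static GDBusConnection *childConnect(int argc, char **argv)
    {
        if (argc < 2)
            return NULL;
        GError *error = NULL;
        GDBusConnection *conn = g_dbus_connection_new_for_address_sync(
            argv[1],
            G_DBUS_CONNECTION_FLAGS_AUTHENTICATION_CLIENT, // peer-to-peer client
            NULL /* observer */, NULL /* cancellable */, &error);
        if (!conn) {
            g_warning("connecting to %s failed: %s", argv[1], error->message);
            g_error_free(error);
        }
        return conn;
    }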


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-11-16 Thread Patrick Ohly
On Mi, 2011-11-16 at 13:55 +0100, Chris Kühl wrote:
> On Wed, Nov 16, 2011 at 12:22 PM, Patrick Ohly  wrote:
> > Here are some requirements for "sync in forked process":
> >  * should be disabled when running outside of the D-Bus daemon
> >(syncevolution command line in non-daemon mode, client-test
> >testing)
> >  * interactive password requests through D-Bus must continue to
> >work
> >  * abort/suspend requests via D-Bus must work
> >  * the forked process should shut down cleanly, so that "valgrind
> >--trace-children=yes --leak-check=yes" provides sane results (the
> >forked process in a local sync currently fails to do that,
> >because it is forked in a less controlled environment and
> >therefore just calls _exit())
> >  * logging:
> >  * the main syncevo-dbus-server process should use syslog
> >logging for its own messages, without redirecting
> >stdout/stderr
> >  * the forked process should start redirecting
> >stdout/stderr into the normal logging mechanism (so that
> >library error messages are visible when running the
> >syncevolution command line as client of the D-Bus server
> >and, later, they appear in log files) and switch to a
> >session log when the session is ready
> >
> 
> Thanks for the list of requirements. I'll take these into
> consideration. There are some in there that we hadn't talked about before.

I know, that's why I'd rather discuss this upfront. Some of these
requirements are implied by "change must not cause
regressions" (interactive passwords, abort/suspend), others are open
questions (startup/shutdown of the forked process, logging).

> > Somewhat related to this work is support for push sync. I suspect that
> > this will also use forked processes to monitor remote and local
> > databases without interfering with the main process. It would be good if
> > the mechanism for forking and communication between processes was generic
> > and not tied to syncing alone.
> >
> 
> I'm not really well informed on the plans for push sync other than
> what I found in [1],

There are no more specific plans yet.

>  but the idea that I had in mind was for the
> forking process to initiate a d-bus peer-to-peer ("one-to-one" in the
> d-bus spec) connection by forking the process with the d-bus address
> as a command line argument. The client then uses that for
> communication. This seems pretty generic to me. The only difference
> would be the d-bus interface used. ...but maybe I'm misunderstanding
> what you mean.

No, that's what I was looking for. When you say "command line argument",
does that mean that there will be an exec() involved? Would that start
the same binary or something else?

The advantage of the exec() is that the new process is much cleaner. The
downside is that it is harder to implement (How does the forking process
find the "right" executable? How does injecting valgrind into the new
binary work?) and a bit more expensive (must load all libraries again).

It might be easier to avoid the exec() and instead return to main() of
the syncevo-dbus-server in the forked process (including freeing of all
parent process resources, which must be done anyway), then go down a
different path as if the binary had just been started.

-- 
Best Regards, Patrick Ohly



Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-11-28 Thread Chris Kühl
On Wed, Nov 16, 2011 at 2:39 PM, Patrick Ohly  wrote:
> On Mi, 2011-11-16 at 13:55 +0100, Chris Kühl wrote:
>> On Wed, Nov 16, 2011 at 12:22 PM, Patrick Ohly  
>> wrote:
>> > Here are some requirements for "sync in forked process":
>> >      * should be disabled when running outside of the D-Bus daemon
>> >        (syncevolution command line in non-daemon mode, client-test
>> >        testing)
>> >      * interactive password requests through D-Bus must continue to
>> >        work
>> >      * abort/suspend requests via D-Bus must work
>> >      * the forked process should shut down cleanly, so that "valgrind
>> >        --trace-children=yes --leak-check=yes" provides sane results (the
>> >        forked process in a local sync currently fails to do that,
>> >        because it is forked in a less controlled environment and
>> >        therefore just calls _exit())
>> >      * logging:
>> >              * the main syncevo-dbus-server process should use syslog
>> >                logging for its own messages, without redirecting
>> >                stdout/stderr
>> >              * the forked process should start redirecting
>> >                stdout/stderr into the normal logging mechanism (so that
>> >                library error messages are visible when running the
>> >                syncevolution command line as client of the D-Bus server
>> >                and, later, they appear in log files) and switch to a
>> >                session log when the session is ready
>> >
>>
>> Thanks for the list of requirements. I'll take these into
>> consideration. There are some in there that we hadn't talked about before.
>
> I know, that's why I'd rather discuss this upfront. Some of these
> requirements are implied by "change must not cause
> regressions" (interactive passwords, abort/suspend), others are open
> questions (startup/shutdown of the forked process, logging).
>
>> > Somewhat related to this work is support for push sync. I suspect that
>> > this will also use forked processes to monitor remote and local
>> > databases without interfering with the main process. It would be good if
>> > the mechanism for forking and communication between processes was generic
>> > and not tied to syncing alone.
>> >
>>
>> I'm not really well informed on the plans for push sync other than
>> what I found in [1],
>
> There are no more specific plans yet.
>
>>  but the idea that I had in mind was for the
>> forking process to initiate a d-bus peer-to-peer ("one-to-one" in the
>> d-bus spec) connection by forking the process with the d-bus address
>> as a command line argument. The client then uses that for
>> communication. This seems pretty generic to me. The only difference
>> would be the d-bus interface used. ...but maybe I'm misunderstanding
>> what you mean.
>
> No, that's what I was looking for. When you say "command line argument",
> does that mean that there will be an exec() involved? Would that start
> the same binary or something else?
>

Yes this was going to be my proposal.

> The advantage of the exec() is that the new process is much cleaner. The
> downside is that it is harder to implement (How does the forking process
> find the "right" executable?

I believe the place for the "helper" sync binary would be in the
libexec directory: /usr/libexec/syncevolution/sync-session for
example.

> How does injecting valgrind into the new
> binary work?)

Valgrind can follow child processes with --trace-children=yes and
create a log-file per process by inserting %p in the log-file name.
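
For example, a plausible invocation (log path arbitrary):

    valgrind --trace-children=yes --leak-check=yes \
             --log-file=/tmp/valgrind-%p.log syncevo-dbus-server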

> and a bit more expensive (must load all libraries again).
>

This will definitely use a bit more resources and take slightly longer
to start, but these are resources that will mostly be loaded from
memory (and probably cache) after the first sync session has started,
and they are freed when the process disappears. Also, the extra time a
fork-exec pair takes over just a fork is insignificant compared to how
long a sync session takes.

> It might be easier to avoid the exec() and instead return to main() of
> the syncevo-dbus-server in the forked process (including freeing of all
> parent process resources, which must be done anyway),

The potential problem I see is that we may not be able to free some of
the resources because they are not under our control: libraries, etc.

> then go down a
> different path as if the binary had just been started.
>

This is how I originally intended to implement this till I started
looking into it a little more and asking around. To me, the potential
problems seem to outweigh the advantages.

Cheers,
Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-11-28 Thread Patrick Ohly
On Mo, 2011-11-28 at 18:00 +0100, Chris Kühl wrote:
> On Wed, Nov 16, 2011 at 2:39 PM, Patrick Ohly  wrote:
> > No, that's what I was looking for. When you say "command line argument",
> > does that mean that there will be an exec() involved? Would that start
> > the same binary or something else?
> >
> 
> Yes this was going to be my proposal.
> 
> > The advantage of the exec() is that the new process is much cleaner. The
> > downside is that it is harder to implement (How does the forking process
> > find the "right" executable?
> 
> I believe the place for the "helper" sync binary would be in the
> libexec directory: /usr/libexec/syncevolution/sync-session for
> example.

Please also add an env variable which overrides that default location.
The nightly testing needs that because it runs with an installation that
was made with DESTDIR and thus doesn't have the files in the hard-coded
places.
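
A minimal sketch of such a lookup; the variable name SYNCEVOLUTION_LIBEXEC_DIR and the helper name are assumptions, not settled API:

    #include <cstdlib>
    #include <string>

    // Prefer an environment override so that DESTDIR-based test installations
    // can point at their own copy of the helper.
    static std::string helperPath()
    {
        const char *dir = std::getenv("SYNCEVOLUTION_LIBEXEC_DIR"); // assumed name
        return std::string(dir ? dir : "/usr/libexec/syncevolution") + "/sync-session";
    }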

> > How does injecting valgrind into the new
> > binary work?)
> 
> Valgrind can follow child processes with --trace-children=yes and
> create a log-file per process by inserting %p in the log-file name.

Duh, of course.

[fork + function call from main()]

> This is how I originally intended to implement this till I started
> looking into it a little more and asking around. To me, the potential
> problems seem to outweigh the advantages.

Your concerns mirror my own, so agreed, let's do the fork+exec.

-- 
Best Regards, Patrick Ohly



Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-11-29 Thread Chris Kühl
On Tue, Nov 29, 2011 at 8:40 AM, Patrick Ohly  wrote:
> On Mo, 2011-11-28 at 18:00 +0100, Chris Kühl wrote:
>> On Wed, Nov 16, 2011 at 2:39 PM, Patrick Ohly  wrote:
>> > No, that's what I was looking for. When you say "command line argument",
>> > does that mean that there will be an exec() involved? Would that start
>> > the same binary or something else?
>> >
>>
>> Yes this was going to be my proposal.
>>
>> > The advantage of the exec() is that the new process is much cleaner. The
>> > downside is that it is harder to implement (How does the forking process
>> > find the "right" executable?
>>
>> I believe the place for the "helper" sync binary would be in the
>> libexec directory: /usr/libexec/syncevolution/sync-session for
>> example.
>
> Please also add an env variable which overrides that default location.
> The nightly testing needs that because it runs with an installation that
> was made with DESTDIR and thus doesn't have the files in the hard-coded
> places.
>
>> > How does injecting valgrind into the new
>> > binary work?)
>>
>> Valgrind can follow child processes with --trace-children=yes and
>> create a log-file per process by inserting %p in the log-file name.
>
> Duh, of course.
>
> [fork + function call from main()]
>
>> This is how I originally intended to implement this till I started
>> looking into it a little more and asking around. To me, the potential
>> problems seem to outweigh the advantages.
>
> Your concerns mirror my own concerns, so agreed, let's do the fork+exec.
>

Ok, I'll proceed in that direction and post a more detailed plan once
I do a little more poking.

Cheers,
Chris


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-05 Thread Chris Kühl
On Wed, Nov 16, 2011 at 12:22 PM, Patrick Ohly  wrote:
> Hello!
>
> Chris is going to work on running syncs inside a process forked from the
> main syncevo-dbus-server. Before diving into the planning and
> implementation, let me outline the background.
>
> At the moment, syncevo-dbus-server is one process (when ignoring local
> sync for a second) which handles D-Bus calls and syncs at the same time.
> A sync involves operations which can block for a long time, so keeping
> the D-Bus part alive and responsive is challenging. It involves hacks
> like entering and leaving the main event loop, checking it inside
> libsoup, etc.
>
> Local sync already uses a second forked process so that it can run
> client and server engines concurrently.
>
> Getting rid of the current hacks is one motivation for running syncs in
> a forked process. The main process would enter the main loop once and
> never return unless it was asked to shut down.
>
> The other is that (eventually) it'll allow running a "clean" main process
> which never loads any of the backend modules. That may help to keep the
> permanent memory footprint smaller than it is today. Right now this is
> not possible because merely registering a backend requires loading it.
> Fixing that is outside of the current scope.
>
> Resource leaks inside a sync become less problematic when the process
> running the sync can shut down.
>
> Here are some requirements for "sync in forked process":
>      * should be disabled when running outside of the D-Bus daemon
>        (syncevolution command line in non-daemon mode, client-test
>        testing)

This seems pretty straightforward because when in non-daemon mode none
of the src/dbus/server code is touched anyway. I'm hoping to disturb
the code in src/syncevo as little as possible. But I've not touched
src/syncevolution.cpp much so maybe I'm missing something.

>      * interactive password requests through D-Bus must continue to
>        work
>      * abort/suspend requests via D-Bus must work
>      * the forked process should shut down cleanly, so that "valgrind
>        --trace-children=yes --leak-check=yes" provides sane results (the
>        forked process in a local sync currently fails to do that,
>        because it is forked in a less controlled environment and
>        therefore just calls _exit())

Using fork then exec should mostly relieve us of this issue.

>      * logging:
>              * the main syncevo-dbus-server process should use syslog
>                logging for its own messages, without redirecting
>                stdout/stderr
>              * the forked process should start redirecting
>                stdout/stderr into the normal logging mechanism (so that
>                library error messages are visible when running the
>                syncevolution command line as client of the D-Bus server
>                and, later, they appear in log files) and switch to a
>                session log when the session is ready
>

I plan to do this work in 3 steps:

1) The first step, which I've started, is to decouple the session and
server from each other and also modify objects that require both the
session and server objects as needed. Once this is done the server
should function as it does now and all tests should pass in order to
move on to step 2.
2) Move session into separate binary using a one-to-one dbus
connection for communicating between the 2 processes. At this stage
only one session is run at a time and all tests should pass as before.
3) Enable running multiple concurrent sync sessions. At this stage
we'll probably need to add/modify some tests in order to deal with the
new functionality.

Cheers,
Chris


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-05 Thread Patrick Ohly
On Mon, 2011-12-05 at 12:12 +0100, Chris Kühl wrote:
> On Wed, Nov 16, 2011 at 12:22 PM, Patrick Ohly  wrote:
> > Here are some requirements for "sync in forked process":
> >  * should be disabled when running outside of the D-Bus daemon
> >(syncevolution command line in non-daemon mode, client-test
> >testing)
> 
> This seems pretty straightforward because when in non-daemon mode none
> of the src/dbus/server code is touched anyway. I'm hoping to disturb
> the code in src/syncevo as little as possible. But I've not touched
> src/syncevolution.cpp much so maybe I'm missing something.

Yes, this should work. libsyncevolution (aka src/syncevo) communicates
with the front-end via ConfigUserInterface (implemented differently for
command line and D-Bus server) and several virtual methods in
SyncContext (display*(), for example displaySyncProgress()).

In particular the password request mechanism is very convoluted. I
haven't written that part myself and each time I look at it, I need to
trace through the various methods before I can piece together how they
interact.

I suspect that this will have to be cleaned up. Perhaps SyncContext
should be parameterized with instances of ConfigUserInterface and
SyncUserInterface (new class, would hold the display* functionality)
instead of deriving from it?
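
A rough sketch of that parameterization; SyncUserInterface is hypothetical and the method signatures are simplified compared to the real classes:

    #include <string>

    class ConfigUserInterface {
    public:
        virtual ~ConfigUserInterface() {}
        virtual std::string askPassword(const std::string &descr) = 0; // simplified
    };

    // Hypothetical new class holding the display* functionality.
    class SyncUserInterface {
    public:
        virtual ~SyncUserInterface() {}
        virtual void displaySyncProgress(int percent) = 0; // simplified
    };

    class SyncContext {
        ConfigUserInterface &m_config;
        SyncUserInterface &m_ui;
    public:
        // Command line and D-Bus server inject their own implementations
        // instead of SyncContext deriving from the interfaces.
        SyncContext(ConfigUserInterface &config, SyncUserInterface &ui) :
            m_config(config), m_ui(ui) {}
    };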

> >  * interactive password requests through D-Bus must continue to
> >work
> >  * abort/suspend requests via D-Bus must work
> >  * the forked process should shut down cleanly, so that "valgrind
> >--trace-children=yes --leak-check=yes" provides sane results (the
> >forked process in a local sync currently fails to do that,
> >because it is forked in a less controlled environment and
> >therefore just calls _exit())
> 
> Using fork then exec should mostly relieve us of this issue.

Agreed.

> >  * logging:
> >  * the main syncevo-dbus-server process should use syslog
> >logging for its own messages, without redirecting
> >stdout/stderr
> >  * the forked process should start redirecting
> >stdout/stderr into the normal logging mechanism (so that
> >library error messages are visible when running the
> >syncevolution command line as client of the D-Bus server
> >and, later, they appear in log files) and switch to a
> >session log when the session is ready
> >
> 
> I plan to do this work in 3 steps:
> 
> 1) The first step, which I've started, is to decouple the session and
> server from each other and also modify objects that require both the
> session and server objects as needed. Once this is done the server
> should function as it does now and all tests should pass in order to
> move on to step 2.
> 2) Move session into separate binary using a one-to-one dbus
> connection for communicating between the 2 processes. At this stage
> only one session is run at a time and all tests should pass as before.

Please try to keep this flexible. I'd like to use the separate binary
not only for running sessions, but also for monitoring local and remote
databases for changes. The main D-Bus server then would get a signal
that a specific config requires a sync run and would kick off that
operation when suitable.

> 3) Enable running multiple concurrent sync sessions. At this stage
> we'll probably need to add/modify some tests in order to deal with the
> new functionality.

Multiple concurrent sync sessions are a can of worms, so careful here.
The main question is: which sync sessions are allowed to run in
parallel?

For example, syncing against the same local database in parallel is
asking for (unnecessary) trouble. If the instance controlling sync
sessions wants to prevent that, how does it identify the "same
database"? Backend "foo" + database "xxx" might access the same data as
backend "bar" + database "yyy" (in theory, at least).

The original idea was that each database would only be configured once,
so that source "addressbook" in context "@default" would be unique. But
this has never been enforced (it can't be, due to the problem above) and
has not been clearly documented either.

-- 
Best Regards, Patrick Ohly



Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-05 Thread Patrick Ohly
On Mon, 2011-12-05 at 12:12 +0100, Chris Kühl wrote:
> On Wed, Nov 16, 2011 at 12:22 PM, Patrick Ohly  wrote:
> >  * logging:
> >  * the main syncevo-dbus-server process should use syslog
> >logging for its own messages, without redirecting
> >stdout/stderr
> >  * the forked process should start redirecting
> >stdout/stderr into the normal logging mechanism (so that
> >library error messages are visible when running the
> >syncevolution command line as client of the D-Bus server
> >and, later, they appear in log files) and switch to a
> >session log when the session is ready
> >
> 
> I plan to do this work in 3 steps:
> 
> 1) The first step, which I've started, is to decouple the session and
> server from each other and also modify objects that require both the
> session and server objects as needed. Once this is done the server
> should function as it does now and all tests should pass in order to
> move on to step 2.
> 2) Move session into separate binary using a one-to-one dbus
> connection for communicating between the 2 processes. At this stage
> only one session is run at a time and all tests should pass as before.

One more question: what effect should SIGINT and/or SIGTERM have on the
syncevo-dbus-server?

I think both should be treated as an urgent request to shut down
operation. If a sync is running, it should be aborted and only after
that has worked should the main syncevo-dbus-server process quit. The
return code should be 0 in all cases, to distinguish from errors which
force the daemon to quit.

This is slightly different from the current behavior where SIGINT
requests a suspend and, if repeated quickly, only then forces an abort.
Users should never have relied on that behavior in the
syncevo-dbus-server and preserving it is simply not worth the extra
complication.

It should only be preserved in the command line tool, where the user
also gets immediate feedback on the first CTRL-C ("suspend requested,
CTRL-C again to abort").
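
A minimal sketch of that shutdown behavior with GLib's Unix signal sources (g_unix_signal_add needs GLib >= 2.30; abortRunningSync() is a placeholder for the real abort logic):

    #include <glib.h>
    #include <glib-unix.h>
    #include <signal.h>

    static void abortRunningSync()
    {
        // placeholder: abort the active session and wait for it to wind down
    }

    static gboolean onQuitSignal(gpointer data)
    {
        abortRunningSync();                  // abort first ...
        g_main_loop_quit((GMainLoop *)data); // ... then leave the main loop
        return FALSE;                        // one-shot source
    }

    int main()
    {
        GMainLoop *loop = g_main_loop_new(NULL, FALSE);
        g_unix_signal_add(SIGINT, onQuitSignal, loop);
        g_unix_signal_add(SIGTERM, onQuitSignal, loop);
        g_main_loop_run(loop);
        return 0; // 0 in all cases, per the proposal
    }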


-- 
Best Regards, Patrick Ohly



Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-05 Thread Chris Kühl
On Mon, Dec 5, 2011 at 2:12 PM, Patrick Ohly  wrote:
> On Mon, 2011-12-05 at 12:12 +0100, Chris Kühl wrote:
>> On Wed, Nov 16, 2011 at 12:22 PM, Patrick Ohly  
>> wrote:
>> > Here are some requirements for "sync in forked process":
>> >      * should be disabled when running outside of the D-Bus daemon
>> >        (syncevolution command line in non-daemon mode, client-test
>> >        testing)
>>
>> This seems pretty straightforward because when in non-daemon mode none
>> of the src/dbus/server code is touched anyway. I'm hoping to disturb
>> the code in src/syncevo as little as possible. But I've not touched
>> src/syncevolution.cpp much so maybe I'm missing something.
>
> Yes, this should work. libsyncevolution (aka src/syncevo) communicates
> with the front-end via ConfigUserInterface (implemented differently for
> command line and D-Bus server) and several virtual methods in
> SyncContext (display*(), for example displaySyncProgress()).
>
> In particular the password request mechanism is very convoluted. I
> haven't written that part myself and each time I look at it, I need to
> trace through the various methods before I can piece together how they
> interact.
>
> I suspect that this will have to be cleaned up. Perhaps SyncContext
> should be parameterized with instances of ConfigUserInterface and
> SyncUserInterface (new class, would hold the display* functionality)
> instead of deriving from it?
>
>> >      * interactive password requests through D-Bus must continue to
>> >        work
>> >      * abort/suspend requests via D-Bus must work
>> >      * the forked process should shut down cleanly, so that "valgrind
>> >        --trace-children=yes --leak-check=yes" provides sane results (the
>> >        forked process in a local sync currently fails to do that,
>> >        because it is forked in a less controlled environment and
>> >        therefore just calls _exit())
>>
>> Using fork then exec should mostly relieve us of this issue.
>
> Agreed.
>
>> >      * logging:
>> >              * the main syncevo-dbus-server process should use syslog
>> >                logging for its own messages, without redirecting
>> >                stdout/stderr
>> >              * the forked process should start redirecting
>> >                stdout/stderr into the normal logging mechanism (so that
>> >                library error messages are visible when running the
>> >                syncevolution command line as client of the D-Bus server
>> >                and, later, they appear in log files) and switch to a
>> >                session log when the session is ready
>> >
>>
>> I plan to do this work in 3 steps:
>>
>> 1) The first step, which I've started, is to decouple the session and
>> server from each other and also modify objects that require both the
>> session and server objects as needed. Once this is done the server
>> should function as it does now and all tests should pass in order to
>> move on to step 2.
>> 2) Move session into separate binary using a one-to-one dbus
>> connection for communicating between the 2 processes. At this stage
>> only one session is run at a time and all tests should pass as before.
>
> Please try to keep this flexible. I'd like to use the separate binary
> not only for running sessions, but also for monitoring local and remote
> databases for changes. The main D-Bus server then would get a signal
> that a specific config requires a sync run and would kick off that
> operation when suitable.
>

When you say 'flexible' I'm assuming you are referring to both the
means by which a new process is spawned and how the communication
channel (peer-to-peer dbus connection) is set up. I'll try to wrap
this up in a way that you can easily make use of it.

>> 3) Enable running multiple concurrent sync sessions. At this stage
>> we'll probably need to add/modify some tests in order to deal with the
>> new functionality.
>
> Multiple concurrent sync sessions are a can of worms, so careful here.
> The main question is: which sync sessions are allowed to run in
> parallel?
>

Yes, I was wondering how such resource contention should be dealt with.

> For example, syncing against the same local database in parallel is
> asking for (unnecessary) trouble. If the instance controlling sync
> sessions wants to prevent that, how does it identify the "same
> database"? Backend "foo" + database "xxx" might access the same data as
> backend "bar" + database "yyy" (in theory, at least).
>
> The original idea was that each database would only be configured once,
> so that source "addressbook" in context "@default" would be unique. But
> this has never been enforced (it can't be, due to the problem above) and
> has not been clearly documented either.
>

Ideally we'd be able to detect a unique source and build a source
blacklist based on the already running syncs. If a sync wants to use a
source that is already being used *and* that source is not able to
handle concurrent syncs reliably, then it gets placed in a queue.

Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-05 Thread Chris Kühl
On Mon, Dec 5, 2011 at 2:49 PM, Patrick Ohly  wrote:
> On Mon, 2011-12-05 at 12:12 +0100, Chris Kühl wrote:
>> On Wed, Nov 16, 2011 at 12:22 PM, Patrick Ohly  
>> wrote:
>> >      * logging:
>> >              * the main syncevo-dbus-server process should use syslog
>> >                logging for its own messages, without redirecting
>> >                stdout/stderr
>> >              * the forked process should start redirecting
>> >                stdout/stderr into the normal logging mechanism (so that
>> >                library error messages are visible when running the
>> >                syncevolution command line as client of the D-Bus server
>> >                and, later, they appear in log files) and switch to a
>> >                session log when the session is ready
>> >
>>
>> I plan to do this work in 3 steps:
>>
>> 1) The first step, which I've started, is to decouple the session and
>> server from each other and also modify objects that require both the
>> session and server objects as needed. Once this is done the server
>> should function as it does now and all tests should pass in order to
>> move on to step 2.
>> 2) Move session into separate binary using a one-to-one dbus
>> connection for communicating between the 2 processes. At this stage
>> only one session is run at a time and all tests should pass as before.
>
> One more question: what effect should SIGINT and/or SIGTERM have on the
> syncevo-dbus-server?
>
> I think both should be treated as an urgent request to shut down
> operation. If a sync is running, it should be aborted and only after
> that has worked should the main syncevo-dbus-server process quit.

Yes, cleanly exiting the running sync sessions before the server exits
is essential.

I'll also be handling SIGCHLD in the server to make sure the server
can log a sync session ending unexpectedly and do any cleanup that's
needed.

> The
> return code should be 0 in all cases, to distinguish from errors which
> force the daemon to quit.
>

Ok.

> This is slightly different from the current behavior where SIGINT
> requests a suspend and, if repeated quickly, only then forces an abort.
> Users should never have relied on that behavior in the
> syncevo-dbus-server and preserving it is simply not worth the extra
> complication.
>

Ok.

> It should only be preserved in the command line tool, where the user
> also gets immediate feedback on the first CTRL-C ("suspend requested,
> CTRL-C again to abort").
>

Ok, noted.

Cheers,
Chris


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-13 Thread Patrick Ohly
On Mo, 2011-12-05 at 12:12 +0100, Chris Kühl wrote:
> I plan to do this work in 3 steps:
> 
> 1) The first step, which I've started, is to decouple the session and
> server from each other and also modify objects that require both the
> session and server objects as needed. Once this is done the server
> should function as it does now and all tests should pass in order to
> move on to step 2.
> 2) Move session into separate binary using a one-to-one dbus
> connection for communicating between the 2 processes. At this stage
> only one session is run at a time and all tests should pass as before.

I hate to interrupt your ongoing work, but can we reverse the steps so
that we first introduce the helper executable and a way to start it,
including the direct D-Bus communication with it?

I discovered end of last week that ActiveSync no longer works:
apparently fork without exec leaves the forked process in a state where
GIO gdbus is just blocking the caller without ever reaching activesyncd
over D-Bus. That confirms that fork+exec is indeed needed.

My plan is to use the helper binary as soon as possible for local sync.
Thus the desire to have it up and running rather sooner than later,
independent of refactoring the syncevo-dbus-server to use it.

You mentioned catching SIGCHLD. Please don't do that. It has the
disadvantage that there can be only one signal handler. Instead rely on
the specific connection to a helper process to detect premature exits.
My expectation is that this will be reported as a "peer has
disconnected" error at the D-Bus level (for open calls) or as a signal
(for watching the peer).

-- 
Best Regards, Patrick Ohly



Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-14 Thread Chris Kühl
On Tue, Dec 13, 2011 at 4:04 PM, Patrick Ohly  wrote:
> On Mo, 2011-12-05 at 12:12 +0100, Chris Kühl wrote:
>> I plan to do this work in 3 steps:
>>
>> 1) The first step, which I've started, is to decouple the session and
>> server from each other and also modify objects that require both the
>> session and server objects as needed. Once this is done the server
>> should function as it does now and all tests should pass in order to
>> move on to step 2.
>> 2) Move session into separate binary using a one-to-one dbus
>> connection for communicating between the 2 processes. At this stage
>> only one session is run at a time and all tests should pass as before.
>
> I hate to interrupt your ongoing work, but can we reverse the steps so
> that we first introduce the helper executable and a way to start it,
> including the direct D-Bus communication with it?
>

Ok, I'm looking into this. The original plan was that Step 1 was a
dependency of Step 2 but I've been reconsidering my approach. I'll
look into what you're asking and get back to you soon.

> I discovered end of last week that ActiveSync no longer works:
> apparently fork without exec leaves the forked process in a state where
> GIO gdbus is just blocking the caller without ever reaching activesyncd
> over D-Bus. That confirms that fork+exec is indeed needed.
>

If this is only happening with this combination, maybe it should be
disallowed for a very brief time until this is sorted.

> My plan is to use the helper binary as soon as possible for local sync.
> Thus the desire to have it up and running rather sooner than later,
> independent of refactoring the syncevo-dbus-server to use it.
>
> You mentioned catching SIGCHLD. Please don't do that. It has the
> disadvantage that there can be only one signal handler. Instead rely on
> the specific connection to a helper process to detect premature exits.
> My expectation is that this will be reported as a "peer has
> disconnected" error at the D-Bus level (for open calls) or as a signal
> (for watching the peer).
>

Yes, there is a closed signal for GDBusConnections.
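
For reference, hooking that signal could look like this (GIO; the callback signature is GDBusConnection's "closed" signal):

    #include <gio/gio.h>

    // remote_peer_vanished is TRUE when the peer went away (helper crashed or
    // exited) rather than the connection being closed locally.
    static void onClosed(GDBusConnection *conn, gboolean remote_peer_vanished,
                         GError *error, gpointer user_data)
    {
        g_message("helper disconnected%s",
                  remote_peer_vanished ? " unexpectedly" : "");
    }

    // after creating the peer-to-peer connection:
    //     g_signal_connect(connection, "closed", G_CALLBACK(onClosed), NULL);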

Cheers,
Chris


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-14 Thread Patrick Ohly
On Mi, 2011-12-14 at 13:27 +0100, Chris Kühl wrote:
> On Tue, Dec 13, 2011 at 4:04 PM, Patrick Ohly  wrote:
> > On Mo, 2011-12-05 at 12:12 +0100, Chris Kühl wrote:
> >> I plan to do this work in 3 steps:
> >>
> >> 1) The first step, which I've started, is to decouple the session and
> >> server from each other and also modify objects that require both the
> >> session and server objects as needed. Once this is done the server
> >> should function as it does now and all tests should pass in order to
> >> move on to step 2.
> >> 2) Move session into separate binary using a one-to-one dbus
> >> connection for communicating between the 2 processes. At this stage
> >> only one session is run at a time and all tests should pass as before.
> >
> > I hate to interrupt your ongoing work, but can we reverse the steps so
> > that we first introduce the helper executable and a way to start it,
> > including the direct D-Bus communication with it?
> >
> 
> Ok, I'm looking into this. The original plan was that Step 1 was a
> dependency of Step 2 but I've been reconsidering my approach. I'll
> look into what you're asking and get back to you soon.

One more thought about the various classes in the server and their
inter-dependencies: would it make sense to introduce signals+slots as a
way to connect the classes?

Boost.Signals would be the obvious candidate.
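
A minimal example of the idea (shown with Boost.Signals2, the thread-safe successor; the signal name mirrors the existing D-Bus SessionChanged signal and is only illustrative):

    #include <boost/signals2.hpp>
    #include <iostream>
    #include <string>

    static void onSessionChanged(const std::string &path, bool active)
    {
        std::cout << path << (active ? " is active" : " is idle") << std::endl;
    }

    int main()
    {
        // The server exposes a signal; other classes subscribe to it instead
        // of holding direct references to the server.
        boost::signals2::signal<void (const std::string &, bool)> sessionChanged;
        sessionChanged.connect(&onSessionChanged);
        sessionChanged("/org/syncevolution/Session/1", true);
        return 0;
    }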

> > I discovered end of last week that ActiveSync no longer works:
> > apparently fork without exec leaves the forked process in a state where
> > GIO gdbus is just blocking the caller without ever reaching activesyncd
> > over D-Bus. That confirms that fork+exec is indeed needed.
> >
> 
> If this is only happening with this combination, maybe it should be
> disallowed for a very brief time until this is sorted.

It's the usage of GIO gdbus in activesyncd which is causing that, not
the GIO gdbus usage in SyncEvolution. activesyncd and its client
libraries switched unconditionally, so now SyncEvolution must adapt.

-- 
Best Regards, Patrick Ohly



Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-14 Thread Patrick Ohly
On Mi, 2011-12-14 at 13:27 +0100, Chris Kühl wrote:
> On Tue, Dec 13, 2011 at 4:04 PM, Patrick Ohly  wrote:
> > On Mo, 2011-12-05 at 12:12 +0100, Chris Kühl wrote:
> >> I plan to do this work in 3 steps:
> >>
> >> 1) The first step, which I've started, is to decouple the session and
> >> server from each other and also modify objects that require both the
> >> session and server objects as needed. Once this is done the server
> >> should function as it does now and all tests should pass in order to
> >> move on to step 2.
> >> 2) Move session into separate binary using a one-to-one dbus
> >> connection for communicating between the 2 processes. At this stage
> >> only one session is run at a time and all tests should pass as before.
> >
> > I hate to interrupt your ongoing work, but can we reverse the steps so
> > that we first introduce the helper executable and a way to start it,
> > including the direct D-Bus communication with it?
> >
> 
> Ok, I'm looking into this. The original plan was that Step 1 was a
> dependency of Step 2 but I've been reconsidering my approach. I'll
> look into what you're asking and get back to you soon.

I can also do that part myself, if you give me some pointers to relevant
technical documentation. I was looking for information on bootstrapping
a D-Bus connection over a pair of connected file descriptors, without
much luck.

-- 
Best Regards, Patrick Ohly



Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-14 Thread Chris Kühl
On Wed, Dec 14, 2011 at 2:26 PM, Patrick Ohly  wrote:
> On Mi, 2011-12-14 at 13:27 +0100, Chris Kühl wrote:
>> On Tue, Dec 13, 2011 at 4:04 PM, Patrick Ohly  wrote:
>> > On Mo, 2011-12-05 at 12:12 +0100, Chris Kühl wrote:
>> >> I plan to do this work in 3 steps:
>> >>
>> >> 1) The first step, which I've started, is to decouple the session and
>> >> server from each other and also modify objects that require both the
>> >> session and server objects as needed. Once this is done the server
>> >> should function as it does now and all tests should pass in order to
>> >> move on to step 2.
>> >> 2) Move session into separate binary using a one-to-one dbus
>> >> connection for communicating between the 2 processes. At this stage
>> >> only one session is run at a time and all tests should pass as before.
>> >
>> > I hate to interrupt your ongoing work, but can we reverse the steps so
>> > that we first introduce the helper executable and a way to start it,
>> > including the direct D-Bus communication with it?
>> >
>>
>> Ok, I'm looking into this. The original plan was that Step 1 was a
>> dependency of Step 2 but I've been reconsidering my approach. I'll
>> look into what you're asking and get back to you soon.
>
> One more thought about the various classes in the server and their
> inter-dependencies: would it make sense to introduce signals+slots as a
> way to connect the classes?
>
> Boost.Signals would be the obvious candidate.
>

Well, that wouldn't work for what I was trying to do. I was attempting
to move the entire session along with its dbus interface into the
helper binary. This was not going very well as the coupling was too
great. I'm now looking at doing the fork in a DBusTransportAgent, similar
to how it's done in LocalTransportAgent but with the exec of course.
This looks more likely to work out and makes the helper leaner.

>> > I discovered end of last week that ActiveSync no longer works:
>> > apparently fork without exec leaves the forked process in a state where
>> > GIO gdbus is just blocking the caller without ever reaching activesyncd
>> > over D-Bus. That confirms that fork+exec is indeed needed.
>> >
>>
>> If this is only happening with this combination, maybe it should be
>> disallowed for a very brief time until this is sorted.
>
> It's the usage of GIO gdbus in activesyncd which is causing that, not
> the GIO gdbus usage in SyncEvolution. activesyncd and its client
> libraries switched unconditionally, so now SyncEvolution must adapt.
>

Ah, ok.

Cheers,
Chris


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-14 Thread Patrick Ohly
On Mi, 2011-12-14 at 15:44 +0100, Chris Kühl wrote:
> >> Ok, I'm looking into this. The original plan was that Step 1 was a
> >> dependency of Step 2 but I've been reconsidering my approach. I'll
> >> look into what you're asking and get back to you soon.
> >
> > I can also do that part myself, if you give me some pointers to
> > relevant
> > technical documentation. I was looking for information on
> > bootstrapping
> > a D-Bus connection over a pair of connected file descriptors,
> > without
> > much luck.
> >
> 
> Well, in GIO GDBus you can create a connection using a GIOStream using
> g_dbus_connection_new_sync(). Not sure if that would really work the
> way you want, though.

Looks promising.

How about the same thing for libdbus?

> I've no problem doing this, though. 

I don't care either way, as long as we can get it done soon, like this
week ;-)

-- 
Best Regards, Patrick Ohly



Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-14 Thread Chris Kühl
On Wed, Dec 14, 2011 at 4:25 PM, Patrick Ohly  wrote:
> On Mi, 2011-12-14 at 15:44 +0100, Chris Kühl wrote:
>> >> Ok, I'm looking into this. The original plan was that Step 1 was a
>> >> dependency of Step 2 but I've been reconsidering my approach. I'll
>> >> look into what you're asking and get back to you soon.
>> >
>> > I can also do that part myself, if you give me some pointers to
>> > relevant
>> > technical documentation. I was looking for information on
>> > bootstrapping
>> > a D-Bus connection over a pair of connected file descriptors,
>> > without
>> > much luck.
>> >
>>
>> Well, in GIO GDBus you can create a connection using a GIOStream using
>> g_dbus_connection_new_sync(). Not sure if that would really work the
>> way you want, though.
>
> Looks promising.
>
> How about the same thing for libdbus?
>

I just looked but don't see anything. If you are referring to setting
up a peer-to-peer dbus connection, that's done using the (G)DBusServer
class. There is an example at [1].

Also, I intend to use g_spawn [2] for the fork and exec'ing. It should
offer enough control and make for less code to maintain.
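
A sketch of that spawn-plus-watch combination; helper path and argument handling are placeholders:

    #include <glib.h>

    static void onChildExited(GPid pid, gint status, gpointer data)
    {
        g_message("helper %d exited with status %d", (int)pid, status);
        g_spawn_close_pid(pid);
    }

    static gboolean spawnHelper(const char *helper, const char *dbusAddress)
    {
        gchar *argv[] = { (gchar *)helper, (gchar *)dbusAddress, NULL };
        GPid pid;
        GError *error = NULL;
        // G_SPAWN_DO_NOT_REAP_CHILD is required for g_child_watch_add().
        if (!g_spawn_async(NULL /* inherit cwd */, argv, NULL /* inherit env */,
                           G_SPAWN_DO_NOT_REAP_CHILD, NULL, NULL, &pid, &error)) {
            g_warning("spawning %s failed: %s", helper, error->message);
            g_error_free(error);
            return FALSE;
        }
        g_child_watch_add(pid, onChildExited, NULL);
        return TRUE;
    }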

>> I've no problem doing this, though.
>
> I don't care either way, as long as we can get it done soon, like this
> week ;-)
>

That might be a little tight. Tomorrow is a short day for me; Xmas
party at the Kindergarten my kids attend. So, maybe it's better you
tackle this if you need it that fast.

Cheers,
Chris

[1] http://developer.gnome.org/gio/2.28/GDBusServer.html#GDBusServer.signals
[2] http://developer.gnome.org/glib/2.28/glib-Spawning-Processes.html


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-14 Thread Patrick Ohly
On Mi, 2011-12-14 at 16:52 +0100, Chris Kühl wrote:
> On Wed, Dec 14, 2011 at 4:25 PM, Patrick Ohly  wrote:
> > On Mi, 2011-12-14 at 15:44 +0100, Chris Kühl wrote:
> >> >> Ok, I'm looking into this. The original plan was that Step 1 was a
> >> >> dependency of Step 2 but I've been reconsidering my approach. I'll
> >> >> look into what you're asking and get back to you soon.
> >> >
> >> > I can also do that part myself, if you give me some pointers to
> >> > relevant
> >> > technical documentation. I was looking for information on
> >> > bootstrapping
> >> > a D-Bus connection over a pair of connected file descriptors,
> >> > without
> >> > much luck.
> >> >
> >>
> >> Well, in GIO GDBus you can create a connection using a GIOStream using
> >> g_dbus_connection_new_sync(). Not sure if that would really work the
> >> way you want, though.
> >
> > Looks promising.
> >
> > How about the same thing for libdbus?
> >
> 
> I just looked but don't see anything.

We need it in the case where GIO GDBus is not yet available.

dbus_server_listen() and dbus_connection_open_private() probably can be
used.
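
A rough sketch along those lines (error handling trimmed; the server additionally has to be hooked into the main loop before it accepts connections):

    #include <dbus/dbus.h>
    #include <stddef.h>

    static void onNewConnection(DBusServer *server, DBusConnection *conn, void *data)
    {
        dbus_connection_ref(conn); // keep it; register object paths/filters here
    }

    // Parent side: listen on a private unix address, hand it to the child.
    static char *listenForChild(DBusServer **serverOut)
    {
        DBusError err;
        dbus_error_init(&err);
        DBusServer *server = dbus_server_listen("unix:tmpdir=/tmp", &err);
        if (!server)
            return NULL;
        dbus_server_set_new_connection_function(server, onNewConnection, NULL, NULL);
        *serverOut = server;
        return dbus_server_get_address(server); // free with dbus_free()
    }

    // Child side: connect back using the address passed on its command line.
    static DBusConnection *connectToParent(const char *address)
    {
        DBusError err;
        dbus_error_init(&err);
        return dbus_connection_open_private(address, &err);
    }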

> If you are referring to setting
> up a peer-to-peer dbus connection, that's done using the (G)DBusServer
> class. There is an example at [1].

That's a starting point, although it still needs to be done differently
for libdbus+GDBusCXX.

> Also, I intend to use g_spawn [2] for the fork and exec'ing. Should
> offer enough control and make for less code to maintain.

Okay.

> >> I've no problem doing this, though.
> >
> > I don't care either way, as long as we can get it done soon, like this
> > week ;-)
> >
> 
> That might be a little tight. Tomorrow is a short day for me; Xmas
> party at the Kindergarten my kids attend. So, maybe it's better you
> tackle this if you need it that fast.

Yes, let me see how far I get. That'll allow you to continue with your
original plan, too.

-- 
Best Regards, Patrick Ohly



Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-16 Thread Patrick Ohly
On Mi, 2011-12-14 at 16:52 +0100, Chris Kühl wrote:
> I just looked but don't see anything. If you are referring to setting
> up a peer-to-peer dbus connection, that's done using the (G)DBusServer
> class. There is an example at [1].

I have that same example working based on GDBus + libdbus. What I don't
like about it is that the connection is based on listen() + connect().
This makes it harder for the parent to detect that the child died: if
the parent did the usual trick (create socket pair, fork+exec child,
close its own side of the socket pair), then a closed socket would
indicate that the child died. With listen() + connect() it has to check
for existence of the child periodically. SIGCHLD does not integrate
into the main loop well, but I see that g_spawn() has a GChildWatch -
that should do.
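
For comparison, a sketch of the socket-pair variant with GIO GDBus (parent side only; guid/flag handling simplified, and the child would call g_dbus_connection_new_sync() on its own end with the CLIENT flag):

    #include <gio/gio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Parent: create the pair, fork+exec the child with fds[1], then bring up
    // D-Bus on its own end. The child's death now shows up as a closed stream.
    static GDBusConnection *connectionToChild(GError **error)
    {
        int fds[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
            return NULL;
        // ... fork+exec the child here, passing fds[1] along; then:
        close(fds[1]);
        GSocket *sock = g_socket_new_from_fd(fds[0], error);
        if (!sock)
            return NULL;
        GSocketConnection *stream =
            g_socket_connection_factory_create_connection(sock);
        gchar *guid = g_dbus_generate_guid();
        GDBusConnection *conn = g_dbus_connection_new_sync(
            G_IO_STREAM(stream), guid,
            G_DBUS_CONNECTION_FLAGS_AUTHENTICATION_SERVER,
            NULL /* observer */, NULL /* cancellable */, error);
        g_free(guid);
        return conn;
    }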

While working on this I got unhappy with some aspects of the current
D-Bus C++ wrapper. The result is the "dbus-cleanup" branch. Chris, what
do you think about that? Note that your example code closed the
connection while the rest of SyncEvolution didn't. This might be
something that still needs to be fixed.

While testing that with GDBus GIO I noticed that I get segfaults when
compiling on Debian Testing with --with-gio-dbus. That happens with and
without my clean up patches - Chris, can you check that? If you cannot
reproduce it, then please let me know and I will check it. It's a bit
strange that I haven't seen that in the nightly testing.

The signal callback seems to get an invalid std::string:

0x724c479b in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
(gdb) where
#0  0x724c479b in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x006e0d11 in __gnu_cxx::new_allocator<std::string>::construct (this=0x7fffdf08, __p=0xe28550, __val=...) at /usr/include/c++/4.6/ext/new_allocator.h:108
#2  0x006e0e95 in std::vector<std::string, std::allocator<std::string> >::_M_insert_aux (this=0x7fffdf08, __position=..., __x=...) at /usr/include/c++/4.6/bits/vector.tcc:335
#3  0x006db20c in std::vector<std::string, std::allocator<std::string> >::push_back (this=0x7fffdf08, __x=...) at /usr/include/c++/4.6/bits/stl_vector.h:834
#4  0x006d3ee6 in SyncEvo::RemoteDBusServer::updateSessions (this=0x7fffdec0, session=..., active=true) at /home/pohly/syncevolution/syncevolution/src/syncevolution.cpp:956
#5  0x006d26b0 in SyncEvo::RemoteDBusServer::sessionChangedCb (this=0x7fffdec0, object=..., active=true) at /home/pohly/syncevolution/syncevolution/src/syncevolution.cpp:720
#6  0x006f170b in boost::_mfi::mf2<...>::operator() (this=0x7fffdf78, p=0x7fffdec0, a1=..., a2=true) at /usr/include/boost/bind/mem_fn_template.hpp:280
#7  0x006ef722 in boost::_bi::list3<..., boost::arg<1>, boost::arg<2> >::operator()<...> (this=0x7fffdf88, f=..., a=...) at /usr/include/boost/bind/bind.hpp:392
#8  0x006ecf74 in boost::_bi::bind_t<..., boost::_bi::list3<..., boost::arg<1>, boost::arg<2> > >::operator() (this=0x7fffdf78, a1=..., a2=@0x7fffd5df: true) at /usr/include/boost/bind/bind_template.hpp:102
#9  0x006ea742 in boost::detail::function::void_function_obj_invoker2<..., void, GDBusCXX::DBusObject_t const&, bool const&>::invoke (function_obj_ptr=..., a0=..., a1=@0x7fffd5df: true) at /usr/include/boost/function/function_template.hpp:153
#10 0x006e5db4 in boost::function2<...>::operator() (this=0x7fffdf70, a0=..., a1=@0x7fffd5df: true) at /usr/include/boost/function/function_template.hpp:1013
#11 0x006def3b in GDBusCXX::SignalWatch2<...>::internalCallback (conn=0xe1b810, sender=0x7fffe80011c0 ":1.534", path=0x7fffe8000ee0 "/org/syncevolution/Server", interface=0x7fffe8000af0 "org.syncevolution.Server", signal=0x7fffe80010a0 "SessionChanged", params=0xe2ab00, data=0x7fffdf58) at /home/pohly/syncevolution/syncevolution/src/gdbusxx/gdbus-cxx-bridge.h:4221
#12 0x733299ae in emit_signal_instance_in_idle_cb (data=0x7fffe8002330) at /tmp/buildd/glib2.0-2.30.2/./gio/gdbusconnection.c:3613
#13 0x7276c0cf in g_main_dispatch (context=0xe16670) at /tmp/buildd/glib2.0-2.30.2/./glib/gmain.c:2442
#14 g_main_context_dispatch (context=0xe16670) at /tmp/buildd/glib2.0-2.30.2/./glib/gmain.c:2998
#15 0x7276c8c8 in g_main_context_iterate (context=0xe16670, block=<optimized out>, dispatch=1, self=<optimized out>) at /tmp/buildd/glib2.0-2.30.2/./glib/gmain.c:3076
#16 0x7276ce02 in g_main_loop_run (loop=0xe16540) at /tmp/buildd/glib2.0-2.30.2/./glib/gmain.c:3284
#17 0x006d296b in SyncEvo::RemoteDBusServer::execute (this=0x7fffdec0, args=..., peer=..., runSync=false) at /home/pohly/syncevolution/syncevolution/src/syncevolution.cpp:773
#18 0x006d1254 in Syn

Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-19 Thread Chris Kühl
On Fri, Dec 16, 2011 at 3:41 PM, Patrick Ohly  wrote:
> On Mi, 2011-12-14 at 16:52 +0100, Chris Kühl wrote:
>> I just looked but don't see anything. If you are referring to setting
>> up a peer-to-peer dbus connection, that's done using the (G)DBusServer
>> class. There is an example at [1].
>
> I have that same example working based on GDBus + libdbus. What I don't
> like about it is that the connection is based on listen() + connect().
> This makes it harder for the parent to detect that the child died: if
> the parent did the usual trick (create socket pair, fork+exec child,
> close its own side of the socket pair), then a closed socket would
> indicate that the child died. With listen() + connect() it has to check
> for existence of the child periodically. SIGCHLD does not integrate
> into the main loop well, but I see that g_spawn() has a GChildWatch -
> that should do.
>

Yes, looking at the source for the GChildWatch related code it blocks
SIGCHLD. When the mainloop's worker routine is run it checks if any of
the blocked signals are pending and then iterates over the
GChildWatchSources checking if the process has exited. So by checking
the return value of g_spawn_async_with_pipes and setting up the
GChildWatch I don't see a way to miss the child's exit, either.
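
A minimal sketch of that pattern (the helper path is a placeholder, not
SyncEvolution code): spawn with G_SPAWN_DO_NOT_REAP_CHILD so that the
child watch gets to report the exit status through the main loop:

#include <glib.h>

static void onChildExit(GPid pid, gint status, gpointer data)
{
    /* runs from the main loop once the child exited; with
       G_SPAWN_DO_NOT_REAP_CHILD we have to close the pid ourselves */
    g_spawn_close_pid(pid);
    g_main_loop_quit(static_cast<GMainLoop *>(data));
}

int main()
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    gchar *argv[] = { const_cast<gchar *>("/bin/true"), NULL }; /* placeholder helper */
    GPid pid;
    GError *error = NULL;

    if (!g_spawn_async_with_pipes(NULL, argv, NULL,
                                  G_SPAWN_DO_NOT_REAP_CHILD,
                                  NULL, NULL, &pid,
                                  NULL, NULL, NULL, &error)) {
        g_printerr("spawn failed: %s\n", error->message);
        g_error_free(error);
        return 1;
    }
    /* no SIGCHLD handler needed: the watch fires inside the main loop */
    g_child_watch_add(pid, onChildExit, loop);
    g_main_loop_run(loop);
    g_main_loop_unref(loop);
    return 0;
}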

> While working on this I got unhappy with some aspects of the current
> D-Bus C++ wrapper. The result is the "dbus-cleanup" branch. Chris, what
> do you think about that?

This looks fine. Moving the connection info to DBusObject definitely
makes sense and reduces the code a bit. And, yes, I should have made
better use of DBusConnectionPtr.

> Note that your example code closed the
> connection while the rest of SyncEvolution didn't. This might be
> something that still needs to be fixed.
>

It seems that this is dependent on how the connection was created. Let
me take a look at this more closely.

> While testing that with GDBus GIO I noticed that I get segfaults when
> compiling on Debian Testing and --with-gio-dbus. That happens with and
> without my clean up patches - Chris, can you check that? If you cannot
> reproduce it, then please let me know and I will check it. It's a bit
> strange that I haven't seen that in the nightly testing.
>
> The signal callback seems to get an invalid std::string:
>

My testing involved running the test-dbus.py script. I guess this
didn't cover all cases. I'll look into this as well.

Cheers,
Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-19 Thread Patrick Ohly
On Mo, 2011-12-19 at 10:57 +0100, Chris Kühl wrote:
> On Fri, Dec 16, 2011 at 3:41 PM, Patrick Ohly  wrote:
> > Note that your example code closed the
> > connection while the rest of SyncEvolution didn't. This might be
> > something that still needs to be fixed.
> >
> 
> It seems that this is dependent on how the connection was created. Let
> me take a look at this more closely.

Private connections need to be closed, shared must not be closed (?).

Note that the libdbus based code uses private connections for
everything. I think that this was done to avoid some interoperability
issues with EDS using GIO - but my memory is hazy on that. Going forward
the old-style code should continue to use private connections while the
new GIO code can be tested with shared connections.

So perhaps the parameter for that can be removed and made a static
choice in the dbus_get_bus_connection() implementations?
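
For illustration, the rule in both APIs (a fragment, not SyncEvolution
code; error handling omitted):

/* GIO: shared singleton - never close it, just drop the reference */
GDBusConnection *shared = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, NULL);
g_object_unref(shared);

/* libdbus: private connection - must be closed before unref'ing */
DBusError err;
dbus_error_init(&err);
DBusConnection *priv = dbus_bus_get_private(DBUS_BUS_SESSION, &err);
dbus_connection_close(priv);
dbus_connection_unref(priv);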

> > While testing that with GDBus GIO I noticed that I get segfaults when
> > compiling on Debian Testing and --with-gio-dbus. That happens with and
> > without my clean up patches - Chris, can you check that? If you cannot
> > reproduce it, then please let me know and I will check it. It's a bit
> > strange that I haven't seen that in the nightly testing.
> >
> > The signal callback seems to get an invalid std::string:
> >
> 
> My testing involved running the test-dbus.py script. I guess this
> didn't cover all cases. I'll look into this as well.

Right, another test-dbus.py run passed fine this weekend.

So let me be more precise: a "syncevolution --version" invocation
segfaults.

This is indeed a gap in the nightly testing. I need to add real tests
for syncevolution <-> syncevo-dbus-server.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-19 Thread Chris Kühl
On Mon, Dec 19, 2011 at 12:08 PM, Patrick Ohly  wrote:
> On Mo, 2011-12-19 at 10:57 +0100, Chris Kühl wrote:
>> On Fri, Dec 16, 2011 at 3:41 PM, Patrick Ohly  wrote:
>> > Note that your example code closed the
>> > connection while the rest of SyncEvolution didn't. This might be
>> > something that still needs to be fixed.
>> >
>>
>> It seems that this is dependent on how the connection was created. Let
>> me take a look at this more closely.
>
> Private connections need to be closed, shared must not be closed (?).

Yes, this is what the libdbus docs tell us.

>
> Note that the libdbus based code uses private connections for
> everything. I think that this was done to avoid some interoperability
> issues with EDS using GIO - but my memory is hazy on that. Going forward
> the old-style code should continue to use private connections while the
> new GIO code can be tested with shared connections.
>

I was wondering why these are all private. Now I've at least got a hint.

> So perhaps the parameter for that can be removed and made a static
> choice in the dbus_get_bus_connection() implementations?
>

I'm about half way through test-dbus.py and things are looking fine
using the singleton connection acquired by g_bus_get_sync(). So, if my
interpretation of what you're saying is correct, we can just remove
the unshared (btw, using a negative for a boolean value is confusing.)
option and make the gio gdbus always be a shared, singleton connection
and the libdbus connection always be private. Correct?

Cheers,
Chris

>> > While testing that with GDBus GIO I noticed that I get segfaults when
>> > compiling on Debian Testing and --with-gio-dbus. That happens with and
>> > without my clean up patches - Chris, can you check that? If you cannot
>> > reproduce it, then please let me know and I will check it. It's a bit
>> > strange that I haven't seen that in the nightly testing.
>> >
>> > The signal callback seems to get an invalid std::string:
>> >
>>
>> My testing involved running the test-dbus.py script. I guess this
>> didn't cover all cases. I'll look into this as well.
>
> Right, another test-dbus.py run passed fine this weekend.
>
> So let me be more precise: a "syncevolution --version" invocation
> segfaults.
>

Hmm. I get SyncEvolution 1.2.1+20111218+SE+3f4f71a (pre-release) when
running that on my Debian Testing install. But we have had different
results in the past on Debian Testing, so I'm becoming less and less
surprised by this.

Cheers,
Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-19 Thread Patrick Ohly
On Mo, 2011-12-19 at 16:47 +0100, Chris Kühl wrote:
> On Mon, Dec 19, 2011 at 12:08 PM, Patrick Ohly  wrote:
> > So perhaps the parameter for that can be removed and made a static
> > choice in the dbus_get_bus_connection() implementations?
> >
> 
> I'm about half way through test-dbus.py and things are looking fine
> using the singleton connection acquired by g_bus_get_sync(). So, if my
> interpretation of what you're saying is correct, we can just remove
> the unshared (btw, using a negative for a boolean value is confusing.)
> option and make the gio gdbus always be a shared, singleton connection
> and the libdbus connection always be private. Correct?

Right.

> > So let me be more precise: a "syncevolution --version" invocation
> > segfaults.
> >
> 
> Hmm. I get SyncEvolution 1.2.1+20111218+SE+3f4f71a (pre-release) when
> running that on my Debian Testing install. But we have had different
> results in the past on Debian Testing, so I'm becoming less and less
> surprised by this.

Well, Testing is a moving target ;-) We might also be compiling
differently, or there is a 32/64 bit difference (I'm using 64 bit).

Anyway, as I am able to reproduce it, finding the root cause wasn't that
difficult:

commit b7b17159d9e89c0452e3df99a35668a468e347d7
Author: Patrick Ohly
Date:   Mon Dec 19 17:57:20 2011 +0100

    GIO GDBus: fixed memory corruption

    g_variant_get() does not work for C++ bool directly because at least
    on x86_64, C++ bool uses 1 byte while gboolean uses 4. This caused
    random stack corruption when unpacking four bytes into an address
    which only had room for one. Caused "syncevolution --version" to segfault
    when using a corrupted std::string later.

diff --git a/src/gdbusxx/gdbus-cxx-bridge.h b/src/gdbusxx/gdbus-cxx-bridge.h
index 106b216..568642b 100644
--- a/src/gdbusxx/gdbus-cxx-bridge.h
+++ b/src/gdbusxx/gdbus-cxx-bridge.h
@@ -1170,12 +1170,37 @@
     static std::string getReply() { return ""; }
 };
 
-template<> struct dbus_traits<bool> :
-    public basic_marshal< bool, VariantTypeBoolean >
+template<> struct dbus_traits<bool>
+// cannot use basic_marshal because VariantTypeBoolean packs/unpacks
+// a gboolean, which is not a C++ bool (4 bytes vs 1 on x86_64)
+// public basic_marshal< bool, VariantTypeBoolean >
 {
     static std::string getType() { return "b"; }
     static std::string getSignature() {return getType(); }
     static std::string getReply() { return ""; }
+
+    typedef bool host_type;
+    typedef bool arg_type;
+
+    static void get(GDBusConnection *conn, GDBusMessage *msg,
+                    GVariantIter &iter, bool &value)
+    {
+        GVariant *var = g_variant_iter_next_value(&iter);
+        if (var == NULL || !g_variant_type_equal(g_variant_get_type(var), VariantTypeBoolean::getVariantType())) {
+            throw std::runtime_error("invalid argument");
+        }
+        gboolean buffer;
+        g_variant_get(var, g_variant_get_type_string(var), &buffer);
+        value = buffer;
+        g_variant_unref(var);
+    }
+
+    static void append(GVariantBuilder &builder, bool value)
+    {
+        const gchar *typeStr = g_variant_type_dup_string(VariantTypeBoolean::getVariantType());
+        g_variant_builder_add(&builder, typeStr, (gboolean)value);
+        g_free((gpointer)typeStr);
+    }
 };

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-20 Thread Chris Kühl
On Mon, Dec 19, 2011 at 6:02 PM, Patrick Ohly  wrote:
> On Mo, 2011-12-19 at 16:47 +0100, Chris Kühl wrote:
>> On Mon, Dec 19, 2011 at 12:08 PM, Patrick Ohly  
>> wrote:
>> > So perhaps the parameter for that can be removed and made a static
>> > choice in the dbus_get_bus_connection() implementations?
>> >
>>
>> I'm about half way through test-dbus.py and things are looking fine
>> using the singleton connection acquired by g_bus_get_sync(). So, if my
>> interpretation of what you're saying is correct, we can just remove
>> the unshared (btw, using a negative for a boolean value is confusing.)
>> option and make the gio gdbus always be a shared, singleton connection
>> and the libdbus connection always be private. Correct?
>
> Right.
>
>> > So let me be more precise: a "syncevolution --version" invocation
>> > segfaults.
>> >
>>
>> Hmm. I get SyncEvolution 1.2.1+20111218+SE+3f4f71a (pre-release) when
>> running that on my Debian Testing install. But we have had different
>> results in the past on Debian Testing, so I'm becoming less and less
>> surprised by this.
>
> Well, Testing is a moving target ;-) We might also be compiling
> differently, or there is a 32/64 bit difference (I'm using 64 bit).
>
> Anyway, as I am able to reproduce it, finding the root cause wasn't that
> difficult:
>
> commit b7b17159d9e89c0452e3df99a35668a468e347d7
> Author: Patrick Ohly
> Date:   Mon Dec 19 17:57:20 2011 +0100
>
>    GIO GDBus: fixed memory corruption
>
>    g_variant_get() does not work for C++ bool directly because at least
>    on x86_64, C++ bool uses 1 byte while gboolean uses 4. This caused
>    random stack corruption when unpacking four bytes into an address
>    which only had room for one. Caused "syncevolution --version" to segfault
>    when using a corrupted std::string later.
>
> diff --git a/src/gdbusxx/gdbus-cxx-bridge.h b/src/gdbusxx/gdbus-cxx-bridge.h
> index 106b216..568642b 100644
> --- a/src/gdbusxx/gdbus-cxx-bridge.h
> +++ b/src/gdbusxx/gdbus-cxx-bridge.h
> @@ -1170,12 +1170,37 @@
>     static std::string getReply() { return ""; }
>  };
>
> -template<> struct dbus_traits<bool> :
> -    public basic_marshal< bool, VariantTypeBoolean >
> +template<> struct dbus_traits<bool>
> +// cannot use basic_marshal because VariantTypeBoolean packs/unpacks
> +// a gboolean, which is not a C++ bool (4 bytes vs 1 on x86_64)
> +// public basic_marshal< bool, VariantTypeBoolean >
>  {
>     static std::string getType() { return "b"; }
>     static std::string getSignature() {return getType(); }
>     static std::string getReply() { return ""; }
> +
> +    typedef bool host_type;
> +    typedef bool arg_type;
> +
> +    static void get(GDBusConnection *conn, GDBusMessage *msg,
> +                    GVariantIter &iter, bool &value)
> +    {
> +        GVariant *var = g_variant_iter_next_value(&iter);
> +        if (var == NULL || !g_variant_type_equal(g_variant_get_type(var), VariantTypeBoolean::getVariantType())) {
> +            throw std::runtime_error("invalid argument");
> +        }
> +        gboolean buffer;
> +        g_variant_get(var, g_variant_get_type_string(var), &buffer);
> +        value = buffer;
> +        g_variant_unref(var);
> +    }
> +
> +    static void append(GVariantBuilder &builder, bool value)
> +    {
> +        const gchar *typeStr = g_variant_type_dup_string(VariantTypeBoolean::getVariantType());
> +        g_variant_builder_add(&builder, typeStr, (gboolean)value);
> +        g_free((gpointer)typeStr);
> +    }
>  };
>

Oops, I recall removing that code.

Cheers,
Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-20 Thread Patrick Ohly
On Mo, 2011-12-19 at 10:57 +0100, Chris Kühl wrote:
> On Fri, Dec 16, 2011 at 3:41 PM, Patrick Ohly  wrote:
> > I have that same example working based on GDBus + libdbus. What I don't
> > like about it is that the connection is based on listen() + connect().
> > This makes it harder for the parent to detect that the child died: if
> > the parent did the usual trick (create socket pair, fork+exec child,
> > close its own side of the socket pair), then a closed socket would
> > indicate that the child died. With listen() + connect() it has to check
> > for existence of the child periodically. SIGCHLD does not integrate
> > into the main loop well, but I see that g_spawn() has a GChildWatch -
> > that should do.
> >
> 
> Yes, looking at the source for the GChildWatch related code it blocks
> SIGCHLD. When the mainloop's worker routine is run it checks if any of
> the blocked signals are pending and then iterates over the
> GChildWatchSources checking if the process has exited. So by checking
> the return value of g_spawn_async_with_pipes and setting up the
> GChildWatch I don't see a way to miss the child's exit, either.

Attached is a first API proposal for the fork/exec helper class and how
it could be used in a small test program. Does that look sane?

I haven't started implementing any of the new methods, but it should be
fairly straightforward (famous last words).

> > While working on this I got unhappy with some aspects of the current
> > D-Bus C++ wrapper. The result is the "dbus-cleanup" branch. Chris, what
> > do you think about that?
> 
> This looks fine. Moving the connection info to DBusObject definitely
> makes sense and reduces the code a bit.

Okay, merged into master.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.

/*
 * Copyright (C) 2011 Intel Corporation
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) version 3.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
 * 02110-1301  USA
 */

#ifndef INCL_FORK_EXEC
# define INCL_FORK_EXEC

#ifdef HAVE_CONFIG_H
# include "config.h"
#endif

#ifdef HAVE_GLIB

#include <gdbus-cxx-bridge.h>
#include <boost/shared_ptr.hpp>
#include <boost/signals2.hpp>

#include <syncevo/declarations.h>

SE_BEGIN_CXX

/**
 * Utility class which starts a specific helper binary in a second
 * process. The helper binary is identified via its base name like
 * "syncevo-dbus-helper", exact location is then determined
 * automatically, or via an absolute path.
 *
 * Direct D-Bus communication is set up automatically. For this to
 * work, the helper must use ForkExecChild::connect().
 *
 * Progress (like "client connected") and failures ("client disconnected")
 * are reported via boost::signals2 signals. To make progress, the user of
 * this class must run a glib event loop.
 *
 * Note that failures encountered inside the class methods themselves
 * will be reported via exceptions. Only asynchronous errors encountered
 * inside the event loop are reported via the failure signal.
 */

class ForkExec : private boost::noncopyable {
 public:
/**
 * the glib main loop passed to create() or connect() in one of
 * the derived classes
 */
GMainLoopPtr getLoop() const;

/**
 * Called when the D-Bus connection is up and running. It is ready
 * to register objects that the peer might need. It is
 * guaranteed that any objects registered now will be ready before
 * the helper gets a chance to make D-Bus calls.
 */
typedef boost::signals2::signal<void (const GDBusCXX::DBusConnectionPtr &)> OnConnect;
OnConnect m_onConnect;

 protected:
ForkExec(const GMainLoopPtr &loop);
};

/**
 * The parent side of a fork/exec.
 */
class ForkExecParent : public ForkExec
{
 public:
/**
 * A ForkExecParent instance must be created via this factory
 * method and then be tracked in a shared pointer. This method
 * will not start the helper yet: first connect your slots, then
 * call start().
 */
static boost::shared_ptr<ForkExecParent> create(const GMainLoopPtr &loop,
                                                const std::string &helper);

/**
 * the helper string passed to create()
 */
std::string getHelper() const;

/**
 * run the helper executable in the parent process
 */
void start();

/**
 * request 
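
For orientation, a minimal usage sketch based only on the declarations
above; the onConnect callback is a hypothetical placeholder:

boost::shared_ptr<ForkExecParent> helper =
    ForkExecParent::create(loop, "syncevo-dbus-helper");
helper->m_onConnect.connect(&onConnect); // register D-Bus objects for the peer here
helper->start();                         // fork+exec the helper binary
// then iterate the main loop; progress and failures arrive as signals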

Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-20 Thread Chris Kühl
On Mon, Dec 19, 2011 at 6:02 PM, Patrick Ohly  wrote:
> On Mo, 2011-12-19 at 16:47 +0100, Chris Kühl wrote:
>> On Mon, Dec 19, 2011 at 12:08 PM, Patrick Ohly  
>> wrote:
>> > So perhaps the parameter for that can be removed and made a static
>> > choice in the dbus_get_bus_connection() implementations?
>> >
>>
>> I'm about half way through test-dbus.py and things are looking fine
>> using the singleton connection acquired by g_bus_get_sync(). So, if my
>> interpretation of what you're saying is correct, we can just remove
>> the unshared (btw, using a negative for a boolean value is confusing.)
>> option and make the gio gdbus always be a shared, singleton connection
>> and the libdbus connection always be private. Correct?
>
> Right.
>
>> > So let me be more precise: a "syncevolution --version" invocation
>> > segfaults.
>> >
>>
>> Hmm. I get SyncEvolution 1.2.1+20111218+SE+3f4f71a (pre-release) when
>> running that on my Debian Testing install. But we have had different
>> results in the past on Debian Testing, so I'm becoming less and less
>> surprised by this.
>
> Well, Testing is a moving target ;-) We might also be compiling
> differently, or there is a 32/64 bit difference (I'm using 64 bit).
>
> Anyway, as I am able to reproduce it, finding the root cause wasn't that
> difficult:
>
> commit b7b17159d9e89c0452e3df99a35668a468e347d7
> Author: Patrick Ohly 
> Date:   Mon Dec 19 17:57:20 2011 +0100
>
>    GIO GDBus: fixed memory corruption
>
>    g_variant_get() does not work for C++ bool directly because at least
>    on x86_64, C++ bool uses 1 byte while gboolean uses 4. This caused
>    random stack corruption when unpacking four bytes into an address
>    which only had room for one. Caused "syncevolution --version" to segfault
>    when using a corrupted std::string later.
>
> diff --git a/src/gdbusxx/gdbus-cxx-bridge.h b/src/gdbusxx/gdbus-cxx-bridge.h
> index 106b216..568642b 100644
> --- a/src/gdbusxx/gdbus-cxx-bridge.h
> +++ b/src/gdbusxx/gdbus-cxx-bridge.h
> @@ -1170,12 +1170,37 @@
>     static std::string getReply() { return ""; }
>  };
>
> -template<> struct dbus_traits<bool> :

Should this not be the following?

template<> struct dbus_traits<bool> : public dbus_traits_base

Cheers,
Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-20 Thread Patrick Ohly
On Di, 2011-12-20 at 15:16 +0100, Chris Kühl wrote:
> Should this not be the following?
> 
> template<> struct dbus_traits<bool> : public dbus_traits_base

Hmm, it compiled okay as I had it in my patch. Probably no code was ever
expanded which checked for dbus_traits_base::asynchronous. Will fix
that.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-21 Thread Chris Kühl
On Tue, Dec 20, 2011 at 2:23 PM, Patrick Ohly  wrote:
> On Mo, 2011-12-19 at 10:57 +0100, Chris Kühl wrote:
>> On Fri, Dec 16, 2011 at 3:41 PM, Patrick Ohly  wrote:
>> > I have that same example working based on GDBus + libdbus. What I don't
>> > like about it is that the connection is based on listen() + connect().
>> > This makes it harder for the parent to detect that the child died: if
>> > the parent did the usual trick (create socket pair, fork+exec child,
>> > close its own side of the socket pair), then a closed socket would
>> > indicate that the child died. With listen() + connect() it has to check
>> > for existence of the child periodically. SIGCHLD does not integrate
>> > into the main loop well, but I see that g_spawn() has a GChildWatch -
>> > that should do.
>> >
>>
>> Yes, looking at the source for the GChildWatch related code it blocks
>> SIGCHLD. When the mainloop's worker routine is run it checks if any of
>> the blocked signals are pending and then iterates over the
>> GChildWatchSources checking if the process has exited. So by checking
>> the return value of g_spawn_async_with_pipes and setting up the
>> GChildWatch I don't see a way to miss the child's exit, either.
>
> Attached is a first API proposal for the fork/exec helper class and how
> it could be used in a small test program. Does that look sane?
>
> I haven't started implementing any of the new methods, but it should be
> fairly straightforward (famous last words).
>

Overall it looks like it makes sense. A couple of questions, though.

* Where would you be using g_child_watch here? I'm assuming it would
be set up internally in ForkExecParent and then trigger the OnQuit
signal.
* Should the forkexec argument set opt_fork_exec rather than opt_server?
* Is there a way to avoid passing in the MainLoop?

Cheers,
Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-21 Thread Patrick Ohly
On Mi, 2011-12-21 at 14:17 +0100, Chris Kühl wrote:
> Overall it looks like it makes sense.

Good. I already went ahead and have a working implementation now, too.

> A couple questions though.
> 
> * Where would you be using g_child_watch here? I'm assuming it would
> be set up internally in ForkExecParent and then trigger the OnQuit
> signal.

Right.

> * Should the forkexec argument set opt_fork_exec rather than opt_server?

Yes.

> * Is there a way to avoid passing in the MainLoop?

Hmm, yes. After looking at the current implementation I see that the
loop is never actually used. The *context* of the loop is used at one
point, and NULL would be valid for that. Even that could be avoided by
using the default context. I'll remove all of that until there is a
need.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2011-12-21 Thread Patrick Ohly
On Mi, 2011-12-21 at 14:47 +0100, Patrick Ohly wrote:
> On Mi, 2011-12-21 at 14:17 +0100, Chris Kühl wrote:
> > Overall it looks like it makes sense.
> 
> Good. I already went ahead and have a working implementation now, too.

I pushed the complete implementation to the "fork-exec" branch. If (or
when ;-) you are working, please have a look at the changes. I'll not
rebase that branch anymore; feel free to use it as the base of your work.

In the meantime I'll start rewriting the local sync to use D-Bus for
communication instead of the current fork+pipes. The downside will be
that local sync starts to depend on glib and D-Bus, which wasn't the
case before. More of a theoretical problem: I don't think anyone really
needed local sync without also having (and using) the other two.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-09 Thread Patrick Ohly
On Mi, 2011-12-21 at 15:44 +0100, Patrick Ohly wrote:
> On Mi, 2011-12-21 at 14:47 +0100, Patrick Ohly wrote:
> > On Mi, 2011-12-21 at 14:17 +0100, Chris Kühl wrote:
> > > Overall it looks like it makes sense.
> > 
> > Good. I already went ahead and have a working implementation now, too.
> 
> I pushed the complete implementation to the "fork-exec" branch. If (or
> when ;-) you are working, please have a look at the changes. I'll not
> rebase that branch anymore; feel free to use it as the base of your work.

More changes pushed to "fork-exec":

commit 075003d907a94e821189591bf36bff2b52a0d9ca
Author: Patrick Ohly 
Date:   Mon Jan 9 14:40:34 2012 +0100

fork/exec: added state tracking

ForkExecParent is now keeping track of the current state:
        IDLE,           /**< instance constructed, but start() not called yet */
        STARTING,       /**< start() called */
        CONNECTED,      /**< child has connected, D-Bus connection established */
        TERMINATED      /**< child has quit */

Was needed for local sync and might be useful also elsewhere, thus
implemented in the core fork/exec code itself.

commit bf293d0b10e60d3c269a41b4f2b51aea0c54943b
Author: Patrick Ohly 
Date:   Mon Jan 9 14:38:49 2012 +0100

fork/exec: implemented stop() and kill()

Sending the signals was missing.

> In the meantime I'll start rewriting the local sync to use D-Bus for
> communication instead of the current fork+pipes. The downside will be
> that local sync starts to depend on glib and D-Bus, which wasn't the
> case before. More of a theoretical problem: I don't think anyone really
> needed local sync without also having (and using) the other two.

I pushed my initial local sync with fork+exec to the "fork-local-sync"
branch.

Chris, do you want me to take a stab at adapting GDBus GIO to the recent
GDBus libdbus changes (local D-Bus connection, API
changes/improvements)?

We also need to finalize the support for launching helper binaries.
Here's what I do at the moment:

commit 25db81431a7ac5c63ff40548c4b77ee68cfa9c8a
Author: Patrick Ohly 
Date:   Mon Jan 9 16:36:28 2012 +0100

local sync: start 'syncevolution' as helper binary

Must be done manually with a symlink called 'syncevo-local-sync'.
A bit hackish at the moment and incomplete: how do we find the
right "syncevolution"? Do we actually install symlinks or copies
of the binary?

Advantage of using 'syncevolution' instead of linking a separate
helper: linking during compilation is slow. When static linking,
the binaries become quite large (contain libsyncevolution).

Disadvantage of using 'syncevolution': the D-Bus client code in
syncevolution.cpp is dead code when running the helper. Minor issue?!

What do you think about this?

I prefer installing 'syncevolution' only once (i.e. no changes in the
installation + packaging) and adding a "run binary xyz with argv[0] set
to abc" feature to fork/exec. The actual binary would be found by
looking in these places:
- SYNCEVOLUTION_BINARY env variable
- argv[0] basedir of parent + "./syncevolution"
- standard PATH

That should take care of nightly testing (can set env variable if the
other approaches fail), running inside "src" (because that's where all
binaries reside in the build dir) and full installation.
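
A sketch of that lookup order (the function name and details are mine,
not the actual implementation):

#include <string>
#include <cstdlib>
#include <unistd.h>

// hypothetical helper: resolve 'syncevolution' using the order above;
// returning the plain name falls back to the normal PATH search in exec()
static std::string findSyncevolutionBinary(const char *argv0)
{
    if (const char *env = getenv("SYNCEVOLUTION_BINARY")) {
        return env; // explicit override, e.g. in nightly testing
    }
    std::string self(argv0);
    std::string::size_type slash = self.rfind('/');
    if (slash != std::string::npos) {
        std::string candidate = self.substr(0, slash + 1) + "syncevolution";
        if (access(candidate.c_str(), X_OK) == 0) {
            return candidate; // sibling of the parent binary (build dir case)
        }
    }
    return "syncevolution"; // rely on the standard PATH
}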

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-09 Thread Chris Kühl
On Mon, Jan 9, 2012 at 4:54 PM, Patrick Ohly  wrote:
> On Mi, 2011-12-21 at 15:44 +0100, Patrick Ohly wrote:
>> On Mi, 2011-12-21 at 14:47 +0100, Patrick Ohly wrote:
>> > On Mi, 2011-12-21 at 14:17 +0100, Chris Kühl wrote:
>> > > Overall it looks like it makes sense.
>> >
>> > Good. I already went ahead and have a working implementation now, too.
>>
>> I pushed the complete implementation to the "fork-exec" branch. If (or
>> when ;-) you are working, please have a look at the changes. I'll not
>> rebase that branch anymore; feel free to use it as the base of your work.
>
> More changes pushed to "fork-exec":
>
> commit 075003d907a94e821189591bf36bff2b52a0d9ca
> Author: Patrick Ohly 
> Date:   Mon Jan 9 14:40:34 2012 +0100
>
>    fork/exec: added state tracking
>
>    ForkExecParent is now keeping track of the current state:
>            IDLE,           /**< instance constructed, but start() not called yet */
>            STARTING,       /**< start() called */
>            CONNECTED,      /**< child has connected, D-Bus connection established */
>            TERMINATED      /**< child has quit */
>
>    Was needed for local sync and might be useful also elsewhere, thus
>    implemented in the core fork/exec code itself.
>

I've got a commit which adds a getChildPid() method to ForkExecParent
in my branch. I'm using this as a quick-and-dirty unique ID that the
server and child session share right now.

> commit bf293d0b10e60d3c269a41b4f2b51aea0c54943b
> Author: Patrick Ohly 
> Date:   Mon Jan 9 14:38:49 2012 +0100
>
>    fork/exec: implemented stop() and kill()
>
>    Sending the signals was missing.
>
>> In the meantime I'll start rewriting the local sync to use D-Bus for
>> communication instead of the current fork+pipes. The downside will be
>> that local sync starts to depend on glib and D-Bus, which wasn't the
>> case before. More of a theoretical problem: I don't think anyone really
>> needed local sync without also having (and using) the other two.
>
> I pushed my initial local sync with fork+exec to the "fork-local-sync"
> branch.

For my part, I've been moving forward with attempting to place the
session and connection objects into the helper binary. The public D-Bus
interface still must reside in the server process in order to keep the
paths and objects under the same bus name (org.syncevolution). So, most
session information is relayed from the helper process to the server
process via the p2p D-Bus connection.

My thinking is that this simplifies the server and would make it
easier to eventually get rid of the g_main_loop manipulation that's
prevalent now. I'm not 100% convinced this will work in the end. Do
you see any show-stopper to this approach?

>
> Chris, do you want me to take a stab at adapting GDBus GIO to the recent
> GDBus libdbus changes (local D-Bus connection, API
> changes/improvements)?
>

In order to avoid distraction, I was intending to do this only once
concurrent sessions are working with the libdbus implementation.
However, I can see that if you want to go ahead and merge your local
sync code into master, that would need to be done first. So, feel free
to do that.

> We also need to finalize the support for launching helper binaries.
> Here's what I do at the moment:
>
> commit 25db81431a7ac5c63ff40548c4b77ee68cfa9c8a
> Author: Patrick Ohly 
> Date:   Mon Jan 9 16:36:28 2012 +0100
>
>    local sync: start 'syncevolution' as helper binary
>
>    Must be done manually with a symlink called 'syncevo-local-sync'.
>    A bit hackish at the moment and incomplete: how do we find the
>    right "syncevolution"? Do we actually install symlinks or copies
>    of the binary?
>
>    Advantage of using 'syncevolution' instead of linking a separate
>    helper: linking during compilation is slow.

On the other hand, if you make changes only in code that the helper
depends on, then the server lib doesn't need to be recompiled.

> When static linking,
>    the binaries become quite large (contain libsyncevolution).
>

To give this some numbers...

using --disable-shared --enable-static the binary sizes are:
-rwxrwxr-x.  1 chris chris  27M Jan  9 18:16 syncevo-dbus-helper
-rwxrwxr-x.  1 chris chris  32M Jan  9 18:16 syncevo-dbus-server

using --enable-shared --disable-static it's:
-rwxrwxr-x.  1 chris chris 8.7K Jan  9 18:28 syncevo-dbus-helper
-rwxrwxr-x.  1 chris chris 8.6K Jan  9 18:28 syncevo-dbus-server

>    Disadvantage of using 'syncevolution': the D-Bus client code in
>    syncevolution.cpp is dead code when running the helper. Minor issue?!
>

Yes, minor I'd say.

> What do you think about this?
>
> I prefer installing 'syncevolution' only once (i.e. no changes in the
> installation + packaging) and adding a "run binary xyz with argv[0] set
> to abc" feature to fork/exec. The actual binary would be found by
> looking in these places:
> - SYNCEVOLUTION_BINARY env variable
> - argv[0] basedir of parent + "./syncevolution"
> - standard PATH
>

I like the cleanliness of the separate binary myself. And act

Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-09 Thread Patrick Ohly
On Mo, 2012-01-09 at 18:59 +0100, Chris Kühl wrote:
> On Mon, Jan 9, 2012 at 4:54 PM, Patrick Ohly  wrote:
> > commit bf293d0b10e60d3c269a41b4f2b51aea0c54943b
> > Author: Patrick Ohly 
> > Date:   Mon Jan 9 14:38:49 2012 +0100
> >
> >fork/exec: implemented stop() and kill()
> >
> >Sending the signals was missing.
> >
> >> In the meantime I'll start rewriting the local sync to use D-Bus as
> >> communication instead of the current fork+pipes. The downside will be
> >> that local sync starts to depend on glib and D-Bus, which wasn't the
> >> case before. More a theoretic problem, I don't think anyone really
> >> needed local sync without also having (and using) the other two.
> >
> > I pushed my initial local sync with fork+exec to the "fork-local-sync"
> > branch.
> 
> For my part, I've been moving forward with attempting to place the
> session and connection objects into the helper binary. The public D-Bus
> interface still must reside in the server process in order to keep the
> paths and objects under the same bus name (org.syncevolution). So, most
> session information is relayed from the helper process to the server
> process via the p2p D-Bus connection.
> 
> My thinking is that this simplifies the server and would make it
> easier to eventually get rid of the g_main_loop manipulation that's
> prevalent now. I'm not 100% convinced this will work in the end. Do
> you see any show-stopper to this approach?

The main syncevo-dbus-server definitely should get rewritten so that it is
completely asynchronous and runs g_main_loop() only once. Anything that
runs a sync session (i.e. the helper) will have to continue doing the
g_main_loop dance or become multithreaded, because the libsynthesis APIs
are not asynchronous.

I'm not sure which of the two is the lesser evil: multithreading or
entering/leaving the main loop repeatedly.

> > Chris, do you want me to take a stab at adapting GDBus GIO to the recent
> > GDBus libdbus changes (local D-Bus connection, API
> > changes/improvements)?
> >
> 
> In order to avoid distraction, I was intending to do this only once
> concurrent sessions are working with the libdbus implementation.
> However, I can see that if you want to go ahead and merge your local
> sync code into master, that would need to be done first. So, feel free
> to do that.

Okay, will do.

> > We also need to finalize the support for launching helper binaries.
> > Here's what I do at the moment:
> >
> > commit 25db81431a7ac5c63ff40548c4b77ee68cfa9c8a
> > Author: Patrick Ohly 
> > Date:   Mon Jan 9 16:36:28 2012 +0100
> >
> >local sync: start 'syncevolution' as helper binary
> >
> >Must be done manually with a symlink called 'syncevo-local-sync'.
> >A bit hackish at the moment and incomplete: how do we find the
> >right "syncevolution"? Do we actually install symlinks or copies
> >of the binary?
> >
> >Advantage of using 'syncevolution' instead of linking a separate
> >helper: linking during compilation is slow.
> 
> On the other hand, if you make changes only in code that the helper
> depends on then the server lib doesn't need to be recompiled.

True.

> > When static linking,
> >    the binaries become quite large (contain libsyncevolution).
> >
> 
> To give this some numbers...
> 
> using --disable-shared --enable-static the binary sizes are:
> -rwxrwxr-x.  1 chris chris  27M Jan  9 18:16 syncevo-dbus-helper
> -rwxrwxr-x.  1 chris chris  32M Jan  9 18:16 syncevo-dbus-server

I wouldn't want that kind of code duplication in a production version,
but as this is just for development I don't think it matters. My concern
about link times is also unfounded - at least on my new laptop ;-) 3
seconds to link is fine. It used to be longer.

> > What do you think about this?
> >
> > I prefer installing 'syncevolution' only once (i.e. no changes in the
> > installation + packaging) and adding a "run binary xyz with argv[0] set
> > to abc" feature to fork/exec. The actual binary would be found by
> > looking in these places:
> > - SYNCEVOLUTION_BINARY env variable
> > - argv[0] basedir of parent + "./syncevolution"
> > - standard PATH
> >
> 
> I like the cleanliness of the separate binary myself. And actually I'm
> already using a separate binary in my branch[1]. I guess when you said
> "I'd like to use the separate binary..."[2] I took that as a green
> light to build a separate binary. ;) I can change that if you insist,
> though.

No, let's do it with separate binaries. I'll follow suit with local
sync.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-11 Thread Chris Kühl
On Tue, Jan 10, 2012 at 8:31 AM, Patrick Ohly  wrote:
> On Mo, 2012-01-09 at 18:59 +0100, Chris Kühl wrote:
>> On Mon, Jan 9, 2012 at 4:54 PM, Patrick Ohly  wrote:
>> > commit bf293d0b10e60d3c269a41b4f2b51aea0c54943b
>> > Author: Patrick Ohly 
>> > Date:   Mon Jan 9 14:38:49 2012 +0100
>> >
>> >    fork/exec: implemented stop() and kill()
>> >
>> >    Sending the signals was missing.
>> >

I've run into an issue when trying to use signals between the child
and parent. When activating the signal it tries to use
dbus_bus_add_match which tries in turn to call AddMatch on the
org.freedesktop.DBus interface, which is not available. I've modified
the dbus-client-connection.cpp example to expose this[1] and am
looking into fixing it.

I'm thinking this will not be an issue with the GIO GDBus wrapper.

[1] 
https://meego.gitorious.org/meego-middleware/syncevolution/commit/2b39a0410d5b8f362e3453ede17f980f494e0c10

Cheers,
Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-11 Thread Chris Kühl
On Wed, Jan 11, 2012 at 1:28 PM, Chris Kühl  wrote:
> On Tue, Jan 10, 2012 at 8:31 AM, Patrick Ohly  wrote:
>> On Mo, 2012-01-09 at 18:59 +0100, Chris Kühl wrote:
>>> On Mon, Jan 9, 2012 at 4:54 PM, Patrick Ohly  wrote:
>>> > commit bf293d0b10e60d3c269a41b4f2b51aea0c54943b
>>> > Author: Patrick Ohly 
>>> > Date:   Mon Jan 9 14:38:49 2012 +0100
>>> >
>>> >    fork/exec: implemented stop() and kill()
>>> >
>>> >    Sending the signals was missing.
>>> >
>
> I've run into an issue when trying to use signals between the child
> and parent. When activating the signal it tries to use
> dbus_bus_add_match which tries in turn to call AddMatch on the
> org.freedesktop.DBus interface, which is not available. I've modified
> the dbus-client-connection.cpp example to expose this[1] and am
> looking into fixing it.
>
> I'm thinking this will not be an issue with the GIO GDBus wrapper.

This will very much be an issue with GIO GDBus as it uses the same
mechanism. Looking at the source of libdbus and gdbus leads me to
believe that using signals on a non-bus connection doesn't really make
sense. I'll just use method calls in this case.

Cheers,
Chris

>
> [1] 
> https://meego.gitorious.org/meego-middleware/syncevolution/commit/2b39a0410d5b8f362e3453ede17f980f494e0c10
>
> Cheers,
> Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-11 Thread Patrick Ohly
On Mi, 2012-01-11 at 14:59 +0100, Chris Kühl wrote:
> This will very much be an issue with GIO GDBus as it uses the same
> mechanism. Looking at the source of libdbus and gdbus leads me to
> believe that using signals on a non-bus connection doesn't really make
> sense. I'll just use method calls in this case.

Indeed, registering signal subscribers isn't very useful when there is
always exactly one recipient.

Support for one-way method invocations (= don't care about return value)
would be almost identical to such a signal. We don't support those at
the moment, as far as I remember: the caller must use synchronous method
calls (which wait for a result) or asynchronous calls with a callback. It
could use a callback which doesn't do anything.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-13 Thread Patrick Ohly
On Mo, 2012-01-09 at 18:59 +0100, Chris Kühl wrote:
> > Chris, do you want me to take a stab at adapting GDBus GIO to the
> recent
> > GDBus libdbus changes (local D-Bus connection, API
> > changes/improvements)?
> >
> 
> In order to avoid distraction, I was intending to do this only once
> concurrent sessions are working with the libdbus implementation.
> However, I can see that if you want to go ahead and merge your local
> sync code into master, that would need to be done first. So, feel free
> to do that.

I got distracted this week with SyncEvolution maintenance work (there
might be a 1.2.2 soon) and battled with a cold - I'm finally winning :-0

Today I finally got around to it. See updated "fork-exec" branch and in
particular this commit:

commit 22b8e3286451b43ac9914eafde725e5d8a45fe27
Author: Patrick Ohly 
Date:   Fri Jan 13 14:19:51 2012 +0100

GDBus GIO: implemented client/server

This pretty much follows the example from:
http://developer.gnome.org/gio/2.28/GDBusServer.html#GDBusServer.description

However, there seems to be a race condition, perhaps related to
multithreading (messages are processed in a second thread by GIO):
"valgrind test/dbus-client-server --forkexec" works, whereas without
valgrind the call either times out or fails with "No methods
registered with this name" in the server's MethodHandler::handler()
(occurs rarely).

Not usable at the moment.

I'm stuck on that. Unless someone has a better idea, I'll have to start
compiling a debug version of glib and look into GIO GDBus.

"strace -f" (yes, you need to watched more than one process) on the
server side shows that the method call is received in thread #1, but it
never shows up in thread #0 (the main one where the connection was
established).


-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-15 Thread Chris Kühl
On Wed, Jan 11, 2012 at 3:08 PM, Patrick Ohly  wrote:
> On Mi, 2012-01-11 at 14:59 +0100, Chris Kühl wrote:
>> This will very much be an issue with GIO GDBus as it uses the same
>> mechanism. Looking at the source of libdbus and gdbus leads me to
>> believe that using signals on a non-bus connection doesn't really make
>> sense. I'll just use method calls in this case.
>
> Indeed, registering signal subscribers isn't very useful when there is
> always exactly one recipient.
>

I've got SignalWatch activation working with one-to-one connections.
Just have to forgo adding the match rule. See
https://meego.gitorious.org/meego-middleware/syncevolution/commit/7dc817671409e670dfa4819dcb23d8fb2cba5a6c

Cheers,
Chris

> Support for one-way method invocations (= don't care about return value)
> would be almost identical to such a signal. We don't support those at
> the moment, as far as I remember: the caller must use synchronous method
> calls (which wait for a result) or asynchronous calls with a callback. It
> could use a callback which doesn't do anything.
>
> --
> Best Regards, Patrick Ohly
>
> The content of this message is my personal opinion only and although
> I am an employee of Intel, the statements I make here in no way
> represent Intel's position on the issue, nor am I authorized to speak
> on behalf of Intel on this matter.
>
>
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-16 Thread Patrick Ohly
On So, 2012-01-15 at 12:09 +0100, Chris Kühl wrote:
> On Wed, Jan 11, 2012 at 3:08 PM, Patrick Ohly  wrote:
> > On Mi, 2012-01-11 at 14:59 +0100, Chris Kühl wrote:
> >> This will very much be an issue with GIO GDBus as it uses the same
> >> mechanism. Looking at the source of libdbus and gdbus leads me to
> >> believe that using signals on a non-bus connection doesn't really make
> >> sense. I'll just use method calls in this case.
> >
> > Indeed, registering signal subscribers isn't very useful when there is
> > always exactly one recipient.
> >
> 
> I've got SignalWatch activation working with one-to-one connections.
> Just have to forgo adding the match rule.

Is that also going to be possible with GDBus GIO? If not, then we will
have a problem once code depends on signals and we want to switch to
GDBus GIO.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-16 Thread Chris Kühl
On Mon, Jan 16, 2012 at 9:50 AM, Patrick Ohly  wrote:
> On So, 2012-01-15 at 12:09 +0100, Chris Kühl wrote:
>> On Wed, Jan 11, 2012 at 3:08 PM, Patrick Ohly  wrote:
>> > On Mi, 2012-01-11 at 14:59 +0100, Chris Kühl wrote:
>> >> This will very much be an issue with GIO GDBus as it uses the same
>> >> mechanism. Looking at the source of libdbus and gdbus leads me to
>> >> believe that using signals on a non-bus connection doesn't really make
>> >> sense. I'll just use method calls in this case.
>> >
>> > Indeed, registering signal subscribers isn't very useful when there is
>> > always exactly one recipient.
>> >
>>
>> I've got SignalWatch activation working with one-to-one connections.
>> Just have to forgo adding the match rule.
>
> Is that also going to be possible with GDBus GIO? If not, then we will
> have a problem once code depends on signals and we want to switch to
> GDBus GIO.
>

I've not gone and implemented it yet, but I'd say most definitely. I
think it's as simple as not calling
g_dbus_connection_signal_subscribe.

All messages sent from the interface provider are received by the
interface consumer. The signal is simply a message of type
G_DBUS_MESSAGE_TYPE_SIGNAL which is checked in the isMatch method. So
things should work pretty easily.

Cheers,
Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-16 Thread Chris Kühl
On Mon, Jan 16, 2012 at 10:17 AM, Chris Kühl  wrote:
> On Mon, Jan 16, 2012 at 9:50 AM, Patrick Ohly  wrote:
>> On So, 2012-01-15 at 12:09 +0100, Chris Kühl wrote:
>>> On Wed, Jan 11, 2012 at 3:08 PM, Patrick Ohly  
>>> wrote:
>>> > On Mi, 2012-01-11 at 14:59 +0100, Chris Kühl wrote:
>>> >> This will very much be an issue with GIO GDBus as it uses the same
>>> >> mechanism. Looking at the source of libdbus and gdbus leads me to
>>> >> believe that using signals on a non-bus connection doesn't really make
>>> >> sense. I'll just use method calls in this case.
>>> >
>>> > Indeed, registering signal subscribers isn't very useful when there is
>>> > always exactly one recipient.
>>> >
>>>
>>> I've got SignalWatch activation working with one-to-one connections.
>>> Just have to forgo adding the match rule.
>>
>> Is that also going to be possible with GDBus GIO? If not, then we will
>> have a problem once code depends on signals and we want to switch to
>> GDBus GIO.
>>
>
> I've not gone and implemented it yet, but I'd say most definitely. I
> think it's as simple as not calling
> g_dbus_connection_signal_subscribe.
>

Actually, it's not quite that easy: you'll have to set up a filter and
fire the callback yourself. But it's still straightforward.
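
A sketch of what that could look like (interface and member names are
made up; note that GDBus runs filters in its worker thread, so a real
implementation would bounce back into the main context):

static GDBusMessage *signalFilter(GDBusConnection *conn,
                                  GDBusMessage *msg,
                                  gboolean incoming,
                                  gpointer user_data)
{
    if (incoming &&
        g_dbus_message_get_message_type(msg) == G_DBUS_MESSAGE_TYPE_SIGNAL &&
        !g_strcmp0(g_dbus_message_get_interface(msg), "org.example.Interface") &&
        !g_strcmp0(g_dbus_message_get_member(msg), "SomeSignal")) {
        /* matched: fire the C++ callback here, ideally via
           g_main_context_invoke() to get back into the main thread */
        g_object_unref(msg); /* the filter owns the message; drop it */
        return NULL;         /* consumed */
    }
    return msg; /* pass everything else through */
}

/* registration on the peer-to-peer connection: */
/* g_dbus_connection_add_filter(conn, signalFilter, data, NULL); */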

> All messages sent from the interface provider are received by the
> interface consumer. The signal is simply a message of type
> G_DBUS_MESSAGE_TYPE_SIGNAL which is checked in the isMatch method. So
> things should work pretty easily.
>
> Cheers,
> Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-16 Thread Patrick Ohly
On Fr, 2012-01-13 at 14:29 +0100, Patrick Ohly wrote:
> commit 22b8e3286451b43ac9914eafde725e5d8a45fe27
> Author: Patrick Ohly 
> Date:   Fri Jan 13 14:19:51 2012 +0100
> 
> GDBus GIO: implemented client/server
> 
> This pretty much follows the example from:
> 
> http://developer.gnome.org/gio/2.28/GDBusServer.html#GDBusServer.description
> 
> However, there seems to be a race condition, perhaps related to
> multithreading (messages are processed in a second thread by GIO):
> "valgrind test/dbus-client-server --forkexec" works, whereas without
> valgrind the call either times out or fails with "No methods
> registered with this name" in the server's MethodHandler::handler()
> (occurs rarely).
> 
> Not usable at the moment.
> 
> I'm stuck on that. Unless someone has a better idea, I'll have to start
> compiling a debug version of glib and look into GIO GDBus.

Duh! Turned out that I wasn't returning a proper boolean from the
"new-connection" handler. The method returned void, so the caller ended
up seeing a random value - that explains why it worked only sometimes.

Fixed now. After also fixing the handling of asynchronous method
implementation in GDBus GIO, local sync works with it.
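
For the record, the shape of a correct handler (sketch, hypothetical
name): the "new-connection" signal of GDBusServer expects a gboolean,
and returning TRUE claims the connection:

    static gboolean onNewConnection(GDBusServer *server,
                                    GDBusConnection *connection,
                                    gpointer userData)
    {
        g_object_ref(connection); // keep a reference for later use
        // ... store the connection, register objects, etc. ...
        return TRUE;              // TRUE = connection claimed
    }

    // g_signal_connect(server, "new-connection",
    //                  G_CALLBACK(onNewConnection), NULL);

With a void handler, whatever happens to be in the return register is
what the caller sees, which is why it only worked sometimes.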

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-17 Thread Patrick Ohly
On Mo, 2012-01-16 at 17:19 +0100, Patrick Ohly wrote:
> After also fixing the handling of asynchronous method
> implementation in GDBus GIO, local sync works with it.

After fixing some more regressions the D-Bus tests ran well
(http://syncev.meego.com/2012-01-17-13-27_testing_gio-gdbus_dbus/), so I
pushed all of the changes for gdbus C++ and fork/exec to the master
branch.

Chris, this is different from the previous fork/exec branch, so please
do a "git rebase -i" and pick just your own changes when moving to the
latest code.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-17 Thread Chris Kühl
On Tue, Jan 17, 2012 at 4:03 PM, Patrick Ohly  wrote:
> On Mo, 2012-01-16 at 17:19 +0100, Patrick Ohly wrote:
>> After also fixing the handling of asynchronous method
>> implementation in GDBus GIO, local sync works with it.
>
> After fixing some more regressions the D-Bus tests ran well
> (http://syncev.meego.com/2012-01-17-13-27_testing_gio-gdbus_dbus/), so I
> pushed all of the changes for gdbus C++ and fork/exec to the master
> branch.
>
> Chris, this is different from the previous fork/exec branch, so please
> do a "git rebase -i" and pick just your own changes when moving to the
> latest code.
>

Ok, I'll take a look after I've pushed the changes I'm working on now
to get the full one-to-one dbus interface implemented.

Btw, similar to the way the code in syncevolution.cpp makes a D-Bus
method call and then runs a main loop until the callback has returned,
I had to create such a hack[1], just a slightly different one. I've
implemented it by polling the D-Bus connection's fd; on any fd
activity I do a context iteration and then check whether the callback
has finished. Like I said, it's a hack and I'll see how much mileage I
get out of it, but I'm not convinced it's any worse than what's being
done in syncevolution.cpp. Feel free to convince me, though. ;)
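
The gist, for anyone not following the link (simplified sketch with
made-up names; the actual commit watches the connection's fd instead
of blocking inside the context):

    struct CallState {
        bool done;
    };

    static void onReply(GObject *source, GAsyncResult *result,
                        gpointer userData)
    {
        CallState *state = static_cast<CallState *>(userData);
        // ... pick up the reply with g_dbus_connection_call_finish() ...
        state->done = true;
    }

    void callAndBlock(GDBusConnection *conn)
    {
        CallState state = { false };
        g_dbus_connection_call(conn,
                               NULL,                    // peer-to-peer: no bus name
                               "/org/example/Object",   // made-up path
                               "org.example.Interface", // made-up interface
                               "Method",
                               NULL, NULL,
                               G_DBUS_CALL_FLAGS_NONE,
                               -1, NULL,
                               onReply, &state);
        // pump the default context until the callback has fired
        while (!state.done) {
            g_main_context_iteration(NULL, TRUE);
        }
    }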

Cheers,
Chris

[1] 
https://meego.gitorious.org/meego-middleware/syncevolution/commit/7623d38b047c86b68b1481db7c8620d9352e5d72
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes

2012-01-18 Thread Chris Kühl
On Tue, Jan 17, 2012 at 5:18 PM, Chris Kühl  wrote:
> On Tue, Jan 17, 2012 at 4:03 PM, Patrick Ohly  wrote:
>> On Mo, 2012-01-16 at 17:19 +0100, Patrick Ohly wrote:
>>> After also fixing the handling of asynchronous method
>>> implementation in GDBus GIO, local sync works with it.
>>
>> After fixing some more regressions the D-Bus tests ran well
>> (http://syncev.meego.com/2012-01-17-13-27_testing_gio-gdbus_dbus/), so I
>> pushed all of the changes for gdbus C++ and fork/exec to the master
>> branch.
>>
>> Chris, this is different from the previous fork/exec branch, so please
>> do a "git rebase -i" and pick just your own changes when moving to the
>> latest code.
>>
>
> Ok, I'll take a look after I've pushed the changes I'm working on now
> to get the full one-to-one dbus interface implemented.

I've renamed my branch to concurrent-sync-sessions and rebased onto
master. I'm now going through and making required changes to get tests
to work.

Cheers,
Chris

>
> Btw, similar to the way the code in syncevolution.cpp makes a D-Bus
> method call and then runs a main loop until the callback has returned,
> I had to create such a hack[1], just a slightly different one. I've
> implemented it by polling the D-Bus connection's fd; on any fd
> activity I do a context iteration and then check whether the callback
> has finished. Like I said, it's a hack and I'll see how much mileage I
> get out of it, but I'm not convinced it's any worse than what's being
> done in syncevolution.cpp. Feel free to convince me, though. ;)
>
> Cheers,
> Chris
>
> [1] 
> https://meego.gitorious.org/meego-middleware/syncevolution/commit/7623d38b047c86b68b1481db7c8620d9352e5d72
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes + signal handling

2012-01-19 Thread Patrick Ohly
On Mi, 2012-01-18 at 16:55 +0100, Chris Kühl wrote:
> I've renamed my branch to concurrent-sync-sessions and rebased onto
> master. I'm now going through and making required changes to get tests
> to work.

Note that I pushed another change onto a new "signal-handling" branch.
This affects you because syncevo-dbus-server will have to relay
suspend/abort requests. The helper process will have to deal with
signals similar to syncevo-local-sync helper in that branch.

Do these changes make sense?

commit 9b4e87732ef4e9f540016022473e5ab381a21534
Author: Patrick Ohly 
Date:   Thu Jan 19 16:11:22 2012 +0100

rewrote signal handling

Having the signal handling code in SyncContext created an unnecessary
dependency of some classes (in particular the transports) on
SyncContext.h. Now the code is in its own SuspendFlags.cpp/h files.

Cleaning up when the caller is done with signal handling is now part
of the utility class (removed automatically when guard instance is
freed).

The signal handlers now push one byte for each caught signal into a
pipe. That byte tells the rest of the code which message it needs to
print, which cannot be done in the signal handlers (because the
logging code is not reentrant and thus not safe to call from a signal
handler).
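
In code, the pattern is roughly this (sketch with hypothetical names,
not the actual SuspendFlags implementation):

    static int sigPipe[2]; // [1] written by handlers, [0] watched

    extern "C" void signalHandler(int sig)
    {
        // async-signal-safe: only write() one byte identifying the
        // signal; result intentionally ignored in this sketch
        unsigned char which = (sig == SIGINT) ? 1 : 2;
        (void)write(sigPipe[1], &which, 1);
    }

    static gboolean onSignalByte(GIOChannel *channel,
                                 GIOCondition condition,
                                 gpointer data)
    {
        unsigned char which;
        (void)read(g_io_channel_unix_get_fd(channel), &which, 1);
        // normal context: logging and state changes are safe here
        g_message("caught %s", which == 1 ? "SIGINT" : "SIGTERM");
        return TRUE; // keep the watch installed
    }

    // setup:
    //   pipe(sigPipe);
    //   signal(SIGINT, signalHandler);
    //   signal(SIGTERM, signalHandler);
    //   g_io_add_watch(g_io_channel_unix_new(sigPipe[0]),
    //                  G_IO_IN, onSignalByte, NULL);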

Compared to the previous solution, this solves several problems:
- no more race condition between setting and printing the message
- the pipe can be watched in a glib event loop, thus removing
  the need to poll at regular intervals; polling is still possible
  (and necessary) in those transports which do not integrate with
  the event loop (CurlTransport) while it can be removed from
  others (SoupTransport, OBEXTransport)

A boost::signal is emitted when the global SuspendFlags change.
Automatic connection management is used to disconnect instances which
are managed by boost::shared_ptr. For example, the current transport's
cancel() method is called when the state changes to "aborted".
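
The connection management looks roughly like this (sketch using
boost::signals2's track(); names invented, not the actual code):

    #include <boost/signals2.hpp>
    #include <boost/shared_ptr.hpp>
    #include <boost/bind.hpp>

    class Transport {
    public:
        static const int ABORTED = 2; // stand-in for the real enum
        void onStateChange(int state) { if (state == ABORTED) cancel(); }
        void cancel() { /* tear down pending I/O */ }
    };

    typedef boost::signals2::signal<void (int)> StateSignal_t;
    StateSignal_t stateChanged; // emitted when SuspendFlags changes

    void watchTransport(const boost::shared_ptr<Transport> &transport)
    {
        // track() ties the slot to the shared_ptr: once the last
        // reference to the transport is dropped, the slot is
        // disconnected automatically - no manual disconnect needed.
        stateChanged.connect(
            StateSignal_t::slot_type(&Transport::onStateChange,
                                     transport.get(), _1)
            .track(transport));
    }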

The early connection phase of the OBEX transport now also can be
aborted (required cleaning up that transport!).

Currently watching for aborts via the event loop only works for real
Unix signals, but not for "abort" flags set in derived SyncContext
instances. The plan is to change that by allowing a "set abort" on
SuspendFlags and thus making
SyncContext::checkForSuspend/checkForAbort() redundant.

The new class is used as follows:
- syncevolution command line without daemon uses it to control
  suspend/abort directly
- syncevolution command line as client of syncevo-dbus-server
  connects to the state change signal and relays it to the
  syncevo-dbus-server session via D-Bus; now all operations
  are protected like that, not just syncing
- syncevo-dbus-server installs its own handlers for SIGINT
  and SIGTERM and tries to shut down when either of them
  is received. SuspendFlags then doesn't activate its own
  handler. Instead that handler is invoked by the
  syncevo-dbus-server niam() handler, to suspend or abort
  a running sync. Once syncs run in a separate process, the
  syncevo-dbus-server should request that these processes
  suspend or abort before shutting down itself.
- The syncevo-local-sync helper ignores SIGINT after a sync
  has started. It would receive that signal when forked by
  syncevolution in non-daemon mode and the user presses
  CTRL-C. Now the signal is only handled in the parent
  process, which suspends as part of its own side of
  the SyncML session and aborts by sending a SIGTERM+SIGINT
  to syncevo-local-sync. SIGTERM in syncevo-local-sync is
  handled by SuspendFlags and is meant to abort whatever
  is going on there at the moment (see below).

Aborting long-running operations like import/export or communication
via CardDAV or ActiveSync still needs further work. The backends need
to check the abort state and return early instead of continuing.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes + signal handling

2012-01-20 Thread Chris Kühl
On Thu, Jan 19, 2012 at 4:19 PM, Patrick Ohly  wrote:
> On Mi, 2012-01-18 at 16:55 +0100, Chris Kühl wrote:
>> I've renamed my branch to concurrent-sync-sessions and rebased onto
>> master. I'm now going through and making required changes to get tests
>> to work.
>
> Note that I pushed another change onto a new "signal-handling" branch.
> This affects you because syncevo-dbus-server will have to relay
> suspend/abort requests. The helper process will have to deal with
> signals similar to syncevo-local-sync helper in that branch.
>
> Do these changes make sense?
>

Yes, this looks fine.

Btw, I noticed when looking through this that the copyright dates for
SuspendFlags.cpp are old.

Cheers,
Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes + signal handling

2012-01-20 Thread Patrick Ohly
On Fr, 2012-01-20 at 10:53 +0100, Chris Kühl wrote:
> On Thu, Jan 19, 2012 at 4:19 PM, Patrick Ohly  wrote:
> > On Mi, 2012-01-18 at 16:55 +0100, Chris Kühl wrote:
> >> I've renamed my branch to concurrent-sync-sessions and rebased onto
> >> master. I'm now going through and making required changes to get tests
> >> to work.
> >
> > Note that I pushed another change onto a new "signal-handling" branch.
> > This affects you because syncevo-dbus-server will have to relay
> > suspend/abort requests. The helper process will have to deal with
> > signals similar to syncevo-local-sync helper in that branch.
> >
> > Do these changes make sense?
> >
> 
> Yes, this looks fine.
> 
> Btw, I noticed when looking through this that the copyright dates for
> SuspendFlags.cpp are old.

Already updated ;-)

BTW, more complete nightly testing found some autotools issue in some
configurations related to the new libsyncevolution->libgdbussyncevo
dependency. Fixes will be in master soon.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes + signal handling

2012-01-20 Thread Patrick Ohly
On Fr, 2012-01-20 at 10:53 +0100, Chris Kühl wrote:
> On Thu, Jan 19, 2012 at 4:19 PM, Patrick Ohly  wrote:
> > On Mi, 2012-01-18 at 16:55 +0100, Chris Kühl wrote:
> >> I've renamed my branch to concurrent-sync-sessions and rebased onto
> >> master. I'm now going through and making required changes to get tests
> >> to work.
> >
> > Note that I pushed another change onto a new "signal-handling" branch.
> > This affects you because syncevo-dbus-server will have to relay
> > suspend/abort requests. The helper process will have to deal with
> > signals similar to syncevo-local-sync helper in that branch.
> >
> > Do these changes make sense?
> >
> 
> Yes, this looks fine.

They are on their way towards master. Running one more test now.

After looking into what it would take to have all backends react to
abort requests in a timely manner I have come to the conclusion that it
will be much simpler to just let the process running them die when
receiving a signal. Teaching libs with a synchronous API, like libneon
and the activesyncd client libraries, how to watch additional events
will be hard.

For syncevo-local-sync I have implemented that and intend to merge it
soon, see for-master/signal-handling branch.

Another patch modifies the syncevo-dbus-server. It's a step towards
getting rid of SyncContext::checkForAbort/Suspend(). This is not
absolutely necessary at the moment and probably conflicts with your
server rewrite, so I intend to keep it on a branch for now.

Comments welcome, though:

Author: Patrick Ohly   2012-01-20 18:20:29
Committer: Patrick Ohly   2012-01-20 20:43:31
Parent: 1fa8a5d94e8352f6ccd7574d275bc88102a09ed3 (local sync: better error 
reporting when child finds problem)
Branches: remotes/origin/signal-handling, signal-handling
Follows: syncevolution-1-2-2
Precedes: 

D-Bus server: set global suspend/abort state

The D-Bus server implemented suspend/abort of a session only in its
checkForSuspend/Abort() implementation by looking at the session's
current state. The downside is that polling is necessary to detect
that state change.

This commit introduces the possibility of temporarily changing the
global SuspendFlags state to abort or suspend. The duration of that
change is determined by the caller, which needs to keep a shared
pointer alive as long as it wants to remain in that
state. SuspendFlags itself doesn't keep the shared pointer; it merely
checks via weak pointers whether the shared ones still exist.

This approach ensures that if the caller gets destructed, the global
state automatically returns to what it was before. This is necessary in
the D-Bus server because it runs one session after the other and
aborting one session should never affect another.

However, sessions stay around longer than they are active (= own the
global state). Therefore it is necessary to never modify the global
state before the session is active and restore it as soon as the
session becomes inactive again. The redundancy between session state
and global state is encapsulated in the new SyncStateOwner class which
replaces a mere enum. Now the state assignment triggers additional
code which mirrors the session's state in SuspendFlags.

It can also do additional sanity checks, like preventing a transition
from "abort" to "running". Previously, an abort or suspend request on
a session that was not yet running was most likely ignored, because
run() overwrote it with SYNC_RUNNING.
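
In a nutshell (hypothetical sketch, not the actual class):

    #include <boost/shared_ptr.hpp>
    #include <boost/weak_ptr.hpp>

    class SuspendFlags
    {
    public:
        enum State { NORMAL, SUSPEND, ABORT };

        // Token without behavior of its own; whoever holds it keeps
        // the override alive.
        class StateBlocker
        {
        public:
            StateBlocker(State state) : m_state(state) {}
            State state() const { return m_state; }
        private:
            State m_state;
        };

        // Temporarily override the global state; the override lasts
        // only as long as the caller keeps the returned shared pointer.
        boost::shared_ptr<StateBlocker> activate(State state)
        {
            boost::shared_ptr<StateBlocker> blocker(new StateBlocker(state));
            m_override = blocker; // weak reference, no ownership
            return blocker;
        }

        State getState() const
        {
            // Reverts to NORMAL automatically once the caller's shared
            // pointer is gone, even if the caller was destructed early.
            boost::shared_ptr<StateBlocker> blocker = m_override.lock();
            return blocker ? blocker->state() : NORMAL;
        }

    private:
        boost::weak_ptr<StateBlocker> m_override;
    };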

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes + signal handling

2012-01-21 Thread Patrick Ohly
On Fr, 2012-01-20 at 21:21 +0100, Patrick Ohly wrote:
> On Fr, 2012-01-20 at 10:53 +0100, Chris Kühl wrote:
> > On Thu, Jan 19, 2012 at 4:19 PM, Patrick Ohly  
> > wrote:
> > > On Mi, 2012-01-18 at 16:55 +0100, Chris Kühl wrote:
> > >> I've renamed my branch to concurrent-sync-sessions and rebased onto
> > >> master. I'm now going through and making required changes to get tests
> > >> to work.
> > >
> > > Note that I pushed another change onto a new "signal-handling" branch.
> > > This affects you because syncevo-dbus-server will have to relay
> > > suspend/abort requests. The helper process will have to deal with
> > > signals similar to syncevo-local-sync helper in that branch.
> > >
> > > Do these changes make sense?
> > >
> > 
> > Yes, this looks fine.
> 
> They are on their way towards master. Running one more test now.
> 
> After looking into what it would take to have all backends react to
> abort requests in a timely manner I have come to the conclusion that it
> will be much simpler to just let the process running them die when
> receiving a signal. Teaching libs with a synchronous API, like libneon
> and the activesyncd client libraries, how to watch additional events
> will be hard.
> 
> For syncevo-local-sync I have implemented that and intend to merge it
> soon, see for-master/signal-handling branch.
> 
> Another patch modifies the syncevo-dbus-server. It's a step towards
> getting rid of SyncContext::checkForAbort/Suspend(). This is not
> absolutely necessary at the moment and probably conflicts with your
> server rewrite, so I intend to keep it on a branch for now.

After looking at the only regression reported by testing for the
for-master/fork-local-sync branch it turned out that this final patch
*is* needed: when not applied, TestLocalSync.testConcurrency fails
because the Session.Abort() call has no effect while the local transport
still waits for the child to start. session.cpp sets the SYNC_ABORT
status and quits the main loop with the goal of getting the SyncContext to
react to the abort. Instead the fork/exec local transport restarts the
loop and the sync session starts normally.

This is partly a failure in the local transport (it would have to do a
checkForAbort()), partly in the session (as mentioned in the commit
message for my latest commit, it allowed a session to go from "aborted"
to "running").

Either way, with the latest commit applied, the transport gets canceled
as soon as Abort() is called and the session aborts as intended.

Chris, I hope I'm not disrupting your work too much by merging the
entire "signal-handling" into master.


-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes + signal handling

2012-01-23 Thread Chris Kühl
On Sat, Jan 21, 2012 at 9:23 PM, Patrick Ohly  wrote:
> On Fr, 2012-01-20 at 21:21 +0100, Patrick Ohly wrote:
>> On Fr, 2012-01-20 at 10:53 +0100, Chris Kühl wrote:
>> > On Thu, Jan 19, 2012 at 4:19 PM, Patrick Ohly  
>> > wrote:
>> > > On Mi, 2012-01-18 at 16:55 +0100, Chris Kühl wrote:
>> > >> I've renamed my branch to concurrent-sync-sessions and rebased onto
>> > >> master. I'm now going through and making required changes to get tests
>> > >> to work.
>> > >
>> > > Note that I pushed another change onto a new "signal-handling" branch.
>> > > This affects you because syncevo-dbus-server will have to relay
>> > > suspend/abort requests. The helper process will have to deal with
>> > > signals similar to syncevo-local-sync helper in that branch.
>> > >
>> > > Do these changes make sense?
>> > >
>> >
>> > Yes, this looks fine.
>>
>> They are on their way towards master. Running one more test now.
>>
>> After looking into what it would take to have all backends react to
>> abort requests in a timely manner I have come to the conclusion that it
>> will be much simpler to just let the process running them die when
>> receiving a signal. Teaching libs with a synchronous API, like libneon
>> and the activesyncd client libraries, how to watch additional events
>> will be hard.
>>
>> For syncevo-local-sync I have implemented that and intend to merge it
>> soon, see for-master/signal-handling branch.
>>
>> Another patch modifies the syncevo-dbus-server. It's a step towards
>> getting rid of SyncContext::checkForAbort/Suspend(). This is not
>> absolutely necessary at the moment and probably conflicts with your
>> server rewrite, so I intend to keep it on a branch for now.
>
> After looking at the only regression reported by testing for the
> for-master/fork-local-sync branch it turned out that this final patch
> *is* needed: when not applied, TestLocalSync.testConcurrency fails
> because the Session.Abort() call has no effect while the local transport
> still waits for the child to start. session.cpp sets the SYNC_ABORT
> status and quits the main loop with the goal of getting the SyncContext to
> react to the abort. Instead the fork/exec local transport restarts the
> loop and the sync session starts normally.
>
> This is partly a failure in the local transport (it would have to do a
> checkForAbort()), partly in the session (as mentioned in the commit
> message for my latest commit, it allowed a session to go from "aborted"
> to "running").
>
> Either way, with the latest commit applied, the transport gets canceled
> as soon as Abort() is called and the session aborts as intended.
>
> Chris, I hope I'm not disrupting your work too much by merging the
> entire "signal-handling" into master.
>
>

No, please go ahead and merge. This fixes issues I was seeing on the
horizon anyway. I'd also rather have any fixes and changes you've made
sooner rather than later.

Also, I was reminded by the TestDBusServerTerm.testTermConnection test
that I still needed to implement the Connection interface in the
helper. I've pushed some code to that end although I'm still working
out some kinks in the dbus communication between the server and helper
at the moment.

Cheers,
Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes + signal handling

2012-01-24 Thread Chris Kühl
On Mon, Jan 23, 2012 at 12:55 PM, Chris Kühl  wrote:
> On Sat, Jan 21, 2012 at 9:23 PM, Patrick Ohly  wrote:
>> On Fr, 2012-01-20 at 21:21 +0100, Patrick Ohly wrote:
>>> On Fr, 2012-01-20 at 10:53 +0100, Chris Kühl wrote:
>>> > On Thu, Jan 19, 2012 at 4:19 PM, Patrick Ohly  
>>> > wrote:
>>> > > On Mi, 2012-01-18 at 16:55 +0100, Chris Kühl wrote:
>>> > >> I've renamed my branch to concurrent-sync-sessions and rebased onto
>>> > >> master. I'm now going through and making required changes to get tests
>>> > >> to work.
>>> > >
>>> > > Note that I pushed another change onto a new "signal-handling" branch.
>>> > > This affects you because syncevo-dbus-server will have to relay
>>> > > suspend/abort requests. The helper process will have to deal with
>>> > > signals similar to syncevo-local-sync helper in that branch.
>>> > >
>>> > > Do these changes make sense?
>>> > >
>>> >
>>> > Yes, this looks fine.
>>>
>>> They are on their way towards master. Running one more test now.
>>>
>>> After looking into what it would take to have all backends react to
>>> abort requests in a timely manner I have come to the conclusion that it
>>> will be much simpler to just let the process running them die when
>>> receiving a signal. Teaching libs with a synchronous API, like libneon
>>> and the activesyncd client libraries, how to watch additional events
>>> will be hard.
>>>
>>> For syncevo-local-sync I have implemented that and intend to merge it
>>> soon, see for-master/signal-handling branch.
>>>
>>> Another patch modifies the syncevo-dbus-server. It's a step towards
>>> getting rid of SyncContext::checkForAbort/Suspend(). This is not
>>> absolutely necessary at the moment and probably conflicts with your
>>> server rewrite, so I intend to keep it on a branch for now.
>>
>> After looking at the only regression reported by testing for the
>> for-master/fork-local-sync branch it turned out that this final patch
>> *is* needed: when not applied, TestLocalSync.testConcurrency fails
>> because the Session.Abort() call has no effect while the local transport
>> still waits for the child to start. session.cpp sets the SYNC_ABORT
>> status and quits the main loop with the goal of getting the SyncContext to
>> react to the abort. Instead the fork/exec local transport restarts the
>> loop and the sync session starts normally.
>>
>> This is partly a failure in the local transport (it would have to do a
>> checkForAbort()), partly in the session (as mentioned in the commit
>> message for my latest commit, it allowed a session to go from "aborted"
>> to "running").
>>
>> Either way, with the latest commit applied, the transport gets canceled
>> as soon as Abort() is called and the session aborts as intended.
>>
>> Chris, I hope I'm not disrupting your work too much by merging the
>> entire "signal-handling" into master.
>>
>>
>
> No, please go ahead and merge. This fixes issues I was seeing on the
> horizon anyway. I'd also rather have any fixes and changes you've made
> sooner rather than later.
>
> Also, I was reminded by the TestDBusServerTerm.testTermConnection test
> that I still needed to implement the Connection interface in the
> helper. I've pushed some code to that end although I'm still working
> out some kinks in the dbus communication between the server and helper
> at the moment.
>

This was done yesterday (or 2 days ago now). It may be good to give a
little more detailed update at this point.

The last couple of days I've been fixing various issues when running
test-dbus.py. I quickly found an issue in how the session's D-Bus
interface was being initialized: the interface was not immediately
available after Server.StartSessionWithFlags returned, because it was
only made available once the onConnect callback had been invoked. I've
now changed this to spawn the helper and then wait for the helper to
indicate that its D-Bus interface is ready to be used. Included in
this change is a move of all the ForkExec code in Session and
Connection out of the constructors and into an init() method.

In order to provide the helper binary with additional information I've
added an addEnvVar(name, value) method to ForkExecParent which, as the
name says, adds an environment variable. I'm using this to indicate
whether the helper should start the Session or Connection D-Bus
interface.
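
Usage is along these lines (sketch, assuming the existing
ForkExecParent create()/start() API; the environment variable name is
invented for illustration):

    boost::shared_ptr<ForkExecParent> helper =
        ForkExecParent::create("syncevo-dbus-helper");
    helper->addEnvVar("SYNCEVO_HELPER_IFACE", "connection");
    helper->start();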

With the above changes, the initialization of and communication with
the helper binary seems to work well. I'm continuing to work my way
through the tests. Right now I'm dealing with getting autosync to work
properly. This is requiring some additional communication between the
helper and server binaries as well as reworking some code I changed
when removing the priority queues in the server.

On the more distant horizon is reworking logging to use syslog in the
server binary and providing adequate debugging in the helper binary.
The other items on your initial list should be worked out while going
through the tests.

Cheers,
Chris
___

Re: [SyncEvolution] syncevo-dbus-server + forking processes + signal handling

2012-01-25 Thread Patrick Ohly
On Mi, 2012-01-25 at 02:32 +0100, Chris Kühl wrote:
> This was done yesterday (or 2 days ago now). It may be good to give a
> little more detailed update at this point.

Thanks, that's very useful.

> With the above changes, the initialization of and communication with
> the helper binary seems to work well. I'm continuing to work my way
> through the tests. Right now I'm dealing with getting autosync to work
> properly. This is requiring some additional communication between the
> helper and server binaries as well as reworking some code I changed
> when removing the priority queues in the server.

I wonder whether we should just leave it broken for now. I find the
whole autosync implementation unintuitive and already planned to rewrite
it from scratch when adding "sync when changes are detected". The
problem is that I don't know yet when I will have time for that. If I
don't find the time soon, it would block merging your work because
regressions in the master branch are not acceptable.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution


Re: [SyncEvolution] syncevo-dbus-server + forking processes + signal handling

2012-01-25 Thread Chris Kühl
On Wed, Jan 25, 2012 at 11:11 AM, Patrick Ohly  wrote:
> On Mi, 2012-01-25 at 02:32 +0100, Chris Kühl wrote:
>> This was done yesterday (or 2 days ago now). It may be good to give a
>> little more detailed update at this point.
>
> Thanks, that's very useful.
>
>> With the above changes, the initialization of and communication with
>> the helper binary seems to work well. I'm continuing to work my way
>> through the tests. Right now I'm dealing with getting autosync to work
>> properly. This is requiring some additional communication between the
>> helper and server binaries as well as reworking some code I changed
>> when removing the priority queues in the server.
>
> I wonder whether we should just leave it broken for now. I find the
> whole autosync implementation unintuitive and already planned to rewrite
> it from scratch when adding "sync when changes are detected". The
> problem is that I don't know yet when I will have time for that. If I
> don't find the time soon, it would block merging your work because
> regressions in the master branch are not acceptable.
>

Ok, I'll keep that in mind. Looking at the AutoSync stuff is leading
me to fix other related issues (shutdown) so I'll see how far this
takes me.

Cheers,
Chris
___
SyncEvolution mailing list
SyncEvolution@syncevolution.org
http://lists.syncevolution.org/listinfo/syncevolution