Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-24 Thread Kohei KaiGai
2012/11/21 Alvaro Herrera :
> Alvaro Herrera escribió:
>> FWIW I have pushed this to github; see
>> https://github.com/alvherre/postgres/compare/bgworker
>>
>> It's also attached.
>>
>> The UnBlockSig stuff is the main stumbling block as I see it because it
>> precludes compilation on Windows.  Maybe we should fix that by providing
>> another function that the module is to call after initialization is done
>> and before it gets ready to work ... but having a function that only
>> calls PG_SETMASK() feels somewhat useless to me; and I don't see what
>> else we could do in there.
>
> I cleaned up some more stuff and here's another version.  In particular
> I added wrapper functions to block and unblock signals, so that this
> doesn't need exported UnBlockSig.
>
> I also changed ServerLoop so that it sleeps until a crashed bgworker
> needs to be restarted -- if a worker terminates, and it has requested
> (say) 2s restart time, don't have it wait until the full 60s postmaster
> sleep time has elapsed.
>
> The sample code has been propped up somewhat, too.
>
I checked the v7 patch.

I didn't find any major problems remaining in this version.
Even though the timing of unblocking signals is still under discussion,
BackgroundWorkerUnblockSignals() seems to me a reasonable solution.
At least, there is no tangible requirement to override signal handlers
other than the SIGTERM & SIGINT handlers supported by the framework.

Some minor comments here:

If we provide background workers a support routine that wraps the main
loop in a sigsetjmp() block with transaction rollback, does it make
sense to mimic the block in PostgresMain()? That block reports the
raised error and calls AbortCurrentTransaction() to release all resources.

How about dropping the auth_counter module? I don't think we need
two different example modules in contrib.
My preference is worker_spi over auth_counter, because it also
provides an example of handling transactions.

At SubPostmasterMain(), the code checks whether argv[1] is
"--forkbgworker" by applying strncmp() to the first 14 bytes. It should
also include the "=" character in the 15th byte, to avoid matching
unrelated options that merely share the prefix.

Also, the "cookie" is extracted with atoi(). If "--forkbgworker="
is followed by non-numeric characters, atoi() returns 0. Even though it
might be paranoia, the initial value of BackgroundWorkerCookie should
therefore be 1 instead of 0, so that 0 never identifies a valid worker.

At BackgroundWorkerInitializeConnection(),
+   /* XXX is this the right errcode? */
+   if (!(worker->bgw_flags & BGWORKER_BACKEND_DATABASE_CONNECTION))
+   ereport(FATAL,
+   (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+errmsg("database connection requirement not indicated during registration")));

This can only happen when an extension calls the function in an
incorrect execution mode. So it is a situation for Assert, not ereport.

So, I'd like to hand this patch over to committers in the near future.

Thanks,
-- 
KaiGai Kohei 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-23 Thread Tom Lane
Alvaro Herrera  writes:
> If the bgworker developer gets really tense about this stuff (or
> anything at all, really), they can create a completely new sigmask and
> do sigaddset() etc.  Since this is all C code, we cannot keep them from
> doing anything, really; I think what we need to provide here is just a
> framework to ease development of simple cases.

An important point here is that if a bgworker does need to do its own
signal manipulation --- for example, installing custom signal handlers
--- it would be absolutely catastrophic for us to unblock signals before
reaching worker-specific code; signals might arrive before the process
had a chance to fix their handling.  So I'm against Heikki's auto-unblock
proposal.

regards, tom lane




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-22 Thread Alvaro Herrera
Heikki Linnakangas escribió:
> On 22.11.2012 19:18, Alvaro Herrera wrote:
> >Heikki Linnakangas escribió:
> >>On 21.11.2012 23:29, Alvaro Herrera wrote:
> >>>Alvaro Herrera escribió:
> The UnBlockSig stuff is the main stumbling block as I see it because it
> precludes compilation on Windows.  Maybe we should fix that by providing
> another function that the module is to call after initialization is done
> and before it gets ready to work ... but having a function that only
> calls PG_SETMASK() feels somewhat useless to me; and I don't see what
> else we could do in there.
> >>>
> >>>I cleaned up some more stuff and here's another version.  In particular
> >>>I added wrapper functions to block and unblock signals, so that this
> >>>doesn't need exported UnBlockSig.
> >>
> >>Could you just unblock the signals before calling into the
> >>background worker's main() function?
> >
> >Yes, but what if a daemon wants to block/unblock signals later?
> 
> Ok. Can you think of an example of a daemon that would like to do that?

Not really, but I don't know what crazy stuff people might be able to
come up with.  Somebody was talking about using a worker to do parallel
computation (of queries?) using FPUs, or something along those lines.  I
don't have enough background on that sort of thing, but it wouldn't
surprise me if they wanted to block signals for a while when the
daemons are busy talking to the FPU, say.

> Grepping the backend for "BlockSig", the only thing it seems to be
> currently used for is to block nested signals in the SIGQUIT handler
> (see bg_quickdie() for an example). The patch provides a built-in
> SIGQUIT handler for the background workers, so I don't think you
> need BlockSig for that. Or do you envision that it would be OK for a
> background worker to replace the SIGQUIT handler with a custom one?

I wasn't really considering that the SIGQUIT handler would be replaced.
Not really certain that the SIGTERM handler needs to fiddle with the
sigmask ...

> Even if we provide the BackgroundWorkerBlock/UnblockSignals()
> functions, I think it would still make sense to unblock the signals
> before calling the bgworker's main loop. One less thing for the
> background worker to worry about that way. Or are there some
> operations that can't be done safely after unblocking the signals?

Yes, that's probably a good idea.  I don't see anything that would need
to run with signals blocked in the supplied sample code (but then they
are pretty simplistic).

> Also, I note that some worker processes call sigdelset(&BlockSig,
> SIGQUIT); that remains impossible to do in a background worker on
> Windows, the BackgroundWorkerBlock/UnblockSignals() wrapper
> functions don't help with that.

Hmm.  Not really sure about that.  Maybe we should keep SIGQUIT
unblocked at all times, so postmaster.c needs to remove it from BlockSig
before invoking the bgworker's main function.

The path of least resistance seems to be to export BlockSig and
UnBlockSig, but I hesitate to do it.

> Some documentation on what a worker is allowed to do would be helpful
> here..

I will see about it.

If the bgworker developer gets really tense about this stuff (or
anything at all, really), they can create a completely new sigmask and
do sigaddset() etc.  Since this is all C code, we cannot keep them from
doing anything, really; I think what we need to provide here is just a
framework to ease development of simple cases.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-22 Thread Heikki Linnakangas

On 22.11.2012 19:18, Alvaro Herrera wrote:

Heikki Linnakangas escribió:

On 21.11.2012 23:29, Alvaro Herrera wrote:

Alvaro Herrera escribió:

The UnBlockSig stuff is the main stumbling block as I see it because it
precludes compilation on Windows.  Maybe we should fix that by providing
another function that the module is to call after initialization is done
and before it gets ready to work ... but having a function that only
calls PG_SETMASK() feels somewhat useless to me; and I don't see what
else we could do in there.


I cleaned up some more stuff and here's another version.  In particular
I added wrapper functions to block and unblock signals, so that this
doesn't need exported UnBlockSig.


Could you just unblock the signals before calling into the
background worker's main() function?


Yes, but what if a daemon wants to block/unblock signals later?


Ok. Can you think of an example of a daemon that would like to do that?

Grepping the backend for "BlockSig", the only thing it seems to be 
currently used for is to block nested signals in the SIGQUIT handler 
(see bg_quickdie() for an example). The patch provides a built-in 
SIGQUIT handler for the background workers, so I don't think you need 
BlockSig for that. Or do you envision that it would be OK for a 
background worker to replace the SIGQUIT handler with a custom one?


Even if we provide the BackgroundWorkerBlock/UnblockSignals() functions, 
I think it would still make sense to unblock the signals before calling 
the bgworker's main loop. One less thing for the background worker to 
worry about that way. Or are there some operations that can't be done 
safely after unblocking the signals? Also, I note that some worker 
processes call sigdelset(&BlockSig, SIGQUIT); that remains impossible 
to do in a background worker on Windows, the 
BackgroundWorkerBlock/UnblockSignals() wrapper functions don't help with 
that.


Some documentation on what a worker is allowed to do would be helpful here..

- Heikki




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-22 Thread Alvaro Herrera
Heikki Linnakangas escribió:
> On 21.11.2012 23:29, Alvaro Herrera wrote:
> >Alvaro Herrera escribió:
> >>FWIW I have pushed this to github; see
> >>https://github.com/alvherre/postgres/compare/bgworker
> >>
> >>It's also attached.
> >>
> >>The UnBlockSig stuff is the main stumbling block as I see it because it
> >>precludes compilation on Windows.  Maybe we should fix that by providing
> >>another function that the module is to call after initialization is done
> >>and before it gets ready to work ... but having a function that only
> >>calls PG_SETMASK() feels somewhat useless to me; and I don't see what
> >>else we could do in there.
> >
> >I cleaned up some more stuff and here's another version.  In particular
> >I added wrapper functions to block and unblock signals, so that this
> >doesn't need exported UnBlockSig.
> 
> Could you just unblock the signals before calling into the
> background worker's main() function?

Yes, but what if a daemon wants to block/unblock signals later?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-22 Thread Heikki Linnakangas

On 21.11.2012 23:29, Alvaro Herrera wrote:

Alvaro Herrera escribió:

FWIW I have pushed this to github; see
https://github.com/alvherre/postgres/compare/bgworker

It's also attached.

The UnBlockSig stuff is the main stumbling block as I see it because it
precludes compilation on Windows.  Maybe we should fix that by providing
another function that the module is to call after initialization is done
and before it gets ready to work ... but having a function that only
calls PG_SETMASK() feels somewhat useless to me; and I don't see what
else we could do in there.


I cleaned up some more stuff and here's another version.  In particular
I added wrapper functions to block and unblock signals, so that this
doesn't need exported UnBlockSig.


Could you just unblock the signals before calling into the background 
worker's main() function?


- Heikki




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-19 Thread Tom Lane
Alvaro Herrera  writes:
> Kohei KaiGai escribió:
>> StartOneBackgroundWorker always scans BackgroundWorkerList from
>> the head. Wouldn't it be possible to save the current position in a static variable?
>> If someone tries to manage thousands of bgworkers, this becomes a busy loop. :(

> Seems messy; we would have to get into the guts of slist_foreach (unroll
> the macro and make the iterator static).  I prefer not to go that path,
> at least not for now.

Thousands of bgworkers seems like a pretty unsupportable scenario anyway
--- it'd presumably have most of the same problems as thousands of
backends.

regards, tom lane




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-19 Thread Alvaro Herrera
Kohei KaiGai escribió:
> 2012/10/22 Alvaro Herrera :
> > Here's an updated version of this patch, which also works in
> > an EXEC_BACKEND environment.  (I haven't tested this at all on Windows,
> > but I don't see anything that would create a portability problem there.)
> >
> I also tried to check the latest patch "briefly".
> Let me comment on several points randomly.
> 
> Once a bgworker process crashes, the postmaster tries to restart it about 60
> seconds later. My preference is for this interval to be configurable via an
> initial registration parameter, including a "never restart again" option.
> Some extensions probably don't want to be restarted after an unexpected crash.

I changed this.

> Stopping is another significant event for a bgworker, not only start-up.
> This interface has no way to specify when we want a bgworker to stop.
> We could probably offer two options. Currently, bgworkers are terminated
> simultaneously with the other regular backends. In addition, I want an option
> to ensure a bgworker is terminated only after all the regular backends have
> exited. This is a critical issue for me, because I am trying to implement a
> parallel calculation server with a bgworker, so it must not be terminated
> earlier than the regular backends.

I am not really sure about this.  Each new behavior we want to propose
requires careful attention, because we need to change postmaster's
shutdown sequence in pmdie(), and also reaper() and
PostmasterStateMachine().  After looking into it, I am hesitant to
change this too much unless we can reach a very detailed agreement of
exactly what we want to happen.

Also note that there are two types of workers: those that require a
database connection, and those that do not.  The former are signalled
via SignalSomeChildren(BACKEND_TYPE_BGWORKER); the latter are signalled
via SignalUnconnectedWorker().  Would this distinction be enough for
you?

Delivering more precise shutdown signals is tricky; we would have to
abandon SignalSomeChildren() and code something new to scan the workers
list checking the stop time for each one.  (Except in emergency
situations, of course, in which we would continue to rely on
SignalSomeChildren).

> How about moving the bgw_name field to the tail of the BackgroundWorker
> structure? It would simplify the logic in RegisterBackgroundWorker() if it
> were laid out as:
> typedef struct BackgroundWorker
> {
> int bgw_flags;
> BgWorkerStartTime bgw_start_time;
> bgworker_main_type  bgw_main;
> void   *bgw_main_arg;
> bgworker_sighdlr_type bgw_sighup;
> bgworker_sighdlr_type bgw_sigterm;
> charbgw_name[1];  <== (*)
> } BackgroundWorker;

This doesn't work; or rather, it works and it makes
RegisterBackgroundWorker() code somewhat nicer, but according to Larry
Wall's Conservation of Cruft Principle, the end result is that the
module code to register a new worker is a lot messier; it can no longer
use a struct in the stack, but requires malloc() or similar.  I don't
see that this is a win overall.

> StartOneBackgroundWorker always scans BackgroundWorkerList from
> the head. Wouldn't it be possible to save the current position in a static variable?
> If someone tries to manage thousands of bgworkers, this becomes a busy loop. :(

Seems messy; we would have to get into the guts of slist_foreach (unroll
the macro and make the iterator static).  I prefer not to go that path,
at least not for now.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-16 Thread Alvaro Herrera
Kohei KaiGai escribió:
> 2012/10/22 Alvaro Herrera :
> > Here's an updated version of this patch, which also works in
> > an EXEC_BACKEND environment.  (I haven't tested this at all on Windows,
> > but I don't see anything that would create a portability problem there.)
> >
> I also tried to check the latest patch "briefly".
> Let me comment on several points randomly.

Thanks.

> Once a bgworker process crashes, the postmaster tries to restart it about 60
> seconds later. My preference is for this interval to be configurable via an
> initial registration parameter, including a "never restart again" option.
> Some extensions probably don't want to be restarted after an unexpected crash.

The main issue with specifying a crash-restart policy, I thought, was
that after a postmaster restart cycle there is no reasonable way to know
whether any bgworker should restart or not: had any particular bgworker
crashed?  Remember that any backend crash, and some bgworker crashes,
will cause the postmaster to reinitialize the whole system.  So I don't
see that it will be handled consistently; if we offer the option, module
developers might be led into thinking that a daemon that stops is never
to run again, but that's not going to be the case.

But I'm not really wedded to this behavior.

> Stopping is another significant event for a bgworker, not only start-up.
> This interface has no way to specify when we want a bgworker to stop.
> We could probably offer two options. Currently, bgworkers are terminated
> simultaneously with the other regular backends. In addition, I want an option
> to ensure a bgworker is terminated only after all the regular backends have
> exited. This is a critical issue for me, because I am trying to implement a
> parallel calculation server with a bgworker, so it must not be terminated
> earlier than the regular backends.

I can do that.  I considered stop time as well, but couldn't think of
any case in which I would want the worker to persist beyond backends
exit, so I ended up not adding the option.  But I think it's reasonably
easy to add.

> Regarding process restart, this interface allows specifying the timing to
> start a bgworker using a BgWorkerStart_* label. On the other hand,
> bgworker_should_start_now() checks the correspondence between pmState
> and the configured start time using an equality match. That allows a
> scenario such as: a bgworker is launched in the PM_INIT state and then
> crashes. The author expected it to be restarted; however, pmState has
> already progressed to PM_RUN, so nobody can restart this bgworker again.
> How about providing the start time as a bitmap? That would allow extensions
> to specify multiple candidate states in which to launch the bgworker.

No, I think you must be misreading the code.  If a daemon specified
BgWorkerStart_PostmasterStart then it will start whenever pmState is
PM_INIT *or any later state*.  This is why the switch has all those
"fall throughs".

> do_start_bgworker() initializes the process state in the normal manner,
> then invokes worker->bgw_main(). Once an ERROR is raised inside the main
> routine, control returns to the sigsetjmp() in do_start_bgworker(),
> and the bgworker is terminated.
> I'm not sure this is suitable for a bgworker that connects to a database,
> because an error may be raised anywhere, for example by a zero division...
> I think it would be helpful for the core to provide a utility function
> that aborts transactions in progress, cleans up resources, and so on.

Yeah, I considered this too --- basically this is the question I posted
elsewhere, "what else do we need to provide for modules"?  A sigsetjmp()
standard block or something is probably part of that.

> The patch ensures a bgworker must be registered within shared_preload_libraries.
> Why not raise an error if someone tries to register a bgworker later?

Hm, sure, we can do that I guess.

> How about moving the bgw_name field to the tail of the BackgroundWorker
> structure? It would simplify the logic in RegisterBackgroundWorker() if it
> were laid out as:
> typedef struct BackgroundWorker
> {
> int bgw_flags;
> BgWorkerStartTime bgw_start_time;
> bgworker_main_type  bgw_main;
> void   *bgw_main_arg;
> bgworker_sighdlr_type bgw_sighup;
> bgworker_sighdlr_type bgw_sigterm;
> charbgw_name[1];  <== (*)
> } BackgroundWorker;

Makes sense.

> StartOneBackgroundWorker always scans BackgroundWorkerList from
> the head. Wouldn't it be possible to save the current position in a static variable?
> If someone tries to manage thousands of bgworkers, this becomes a busy loop. :(

We could try that.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-16 Thread Kohei KaiGai
2012/10/22 Alvaro Herrera :
> Here's an updated version of this patch, which also works in
> an EXEC_BACKEND environment.  (I haven't tested this at all on Windows,
> but I don't see anything that would create a portability problem there.)
>
I also checked the latest patch, albeit briefly.
Let me comment on several points, in no particular order.

Once a bgworker process crashes, the postmaster tries to restart it about 60
seconds later. My preference is for this interval to be configurable via an
initial registration parameter, including a "never restart again" option.
Some extensions probably don't want to be restarted after an unexpected crash.

Regarding process restart, this interface allows specifying the timing to
start a bgworker using a BgWorkerStart_* label. On the other hand,
bgworker_should_start_now() checks the correspondence between pmState
and the configured start time using an equality match. That allows a
scenario such as: a bgworker is launched in the PM_INIT state and then
crashes. The author expected it to be restarted; however, pmState has
already progressed to PM_RUN, so nobody can restart this bgworker again.
How about providing the start time as a bitmap? That would allow extensions
to specify multiple candidate states in which to launch the bgworker.

Stopping is another significant event for a bgworker, not only start-up.
This interface has no way to specify when we want a bgworker to stop.
We could probably offer two options. Currently, bgworkers are terminated
simultaneously with the other regular backends. In addition, I want an option
to ensure a bgworker is terminated only after all the regular backends have
exited. This is a critical issue for me, because I am trying to implement a
parallel calculation server with a bgworker, so it must not be terminated
earlier than the regular backends.

do_start_bgworker() initializes the process state in the normal manner,
then invokes worker->bgw_main(). Once an ERROR is raised inside the main
routine, control returns to the sigsetjmp() in do_start_bgworker(), and
the bgworker is terminated.
I'm not sure this is suitable for a bgworker that connects to a database,
because an error may be raised anywhere, for example by a zero division...
I think it would be helpful for the core to provide a utility function
that aborts transactions in progress, cleans up resources, and so on.
spi_worker invokes BackgroundWorkerInitializeConnection() at the start of
its main routine. In a similar fashion, could we provide a utility function
that initializes the process as a transaction-aware background worker?
I don't think it is good form for each extension to implement its own
sigsetjmp() block and rollback logic. I'd also like to investigate the
logic necessary for transaction abort and resource cleanup here.

Some other misc comments below.

The patch ensures a bgworker must be registered within shared_preload_libraries.
Why not raise an error if someone tries to register a bgworker later?

How about moving the bgw_name field to the tail of the BackgroundWorker
structure? It would simplify the logic in RegisterBackgroundWorker() if it
were laid out as:
typedef struct BackgroundWorker
{
int bgw_flags;
BgWorkerStartTime bgw_start_time;
bgworker_main_type  bgw_main;
void   *bgw_main_arg;
bgworker_sighdlr_type bgw_sighup;
bgworker_sighdlr_type bgw_sigterm;
charbgw_name[1];  <== (*)
} BackgroundWorker;

StartOneBackgroundWorker always scans BackgroundWorkerList from the head.
Wouldn't it be possible to save the current position in a static variable?
If someone tries to manage thousands of bgworkers, this becomes a busy loop. :(

Thanks,
-- 
KaiGai Kohei 




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-16 Thread Simon Riggs
On 15 November 2012 10:10, Alvaro Herrera  wrote:

> I am unsure about the amount of pre-cooked stuff we need to provide.
> For instance, do we want some easy way to let the user code run
> transactions?

That sounds like a basic requirement. There will be a few
non-transactional bgworkers but most will be just user code, probably
PL/pgSQL.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-15 Thread Heikki Linnakangas

On 15.11.2012 17:10, Alvaro Herrera wrote:

Heikki Linnakangas escribió:

On 23.10.2012 00:29, Alvaro Herrera wrote:

Here's an updated version of this patch, which also works in
an EXEC_BACKEND environment.  (I haven't tested this at all on Windows,
but I don't see anything that would create a portability problem there.)


Looks good at first glance.


Thanks.


Fails on Windows, though:

"C:\postgresql\pgsql.sln" (default target) (1) ->
"C:\postgresql\auth_counter.vcxproj" (default target) (29) ->
(Link target) ->
   auth_counter.obj : error LNK2001: unresolved external symbol
UnBlockSig [C:\p
ostgresql\auth_counter.vcxproj]
   .\Release\auth_counter\auth_counter.dll : fatal error LNK1120: 1
unresolved externals [C:\postgresql\auth_counter.vcxproj]


Wow.  If that's the only problem it has on Windows, I am extremely
pleased.

Were you able to test the provided test modules?


I tested the auth_counter module, seemed to work. It counted all 
connections as "successful", though, even when I tried to log in with an 
invalid username/database. Didn't try with an invalid password. And I 
didn't try worker_spi.



I am unsure about the amount of pre-cooked stuff we need to provide.
For instance, do we want some easy way to let the user code run
transactions?


Would be nice, of course. I guess it depends on how much work it would be 
to provide that. But we can leave that for later, once the base patch is in.


- Heikki




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-15 Thread Alvaro Herrera
Heikki Linnakangas escribió:
> On 23.10.2012 00:29, Alvaro Herrera wrote:
> >Here's an updated version of this patch, which also works in
> >an EXEC_BACKEND environment.  (I haven't tested this at all on Windows,
> >but I don't see anything that would create a portability problem there.)
> 
> Looks good at first glance.

Thanks.

> Fails on Windows, though:
> 
> "C:\postgresql\pgsql.sln" (default target) (1) ->
> "C:\postgresql\auth_counter.vcxproj" (default target) (29) ->
> (Link target) ->
>   auth_counter.obj : error LNK2001: unresolved external symbol
> UnBlockSig [C:\p
> ostgresql\auth_counter.vcxproj]
>   .\Release\auth_counter\auth_counter.dll : fatal error LNK1120: 1
> unresolved externals [C:\postgresql\auth_counter.vcxproj]

Wow.  If that's the only problem it has on Windows, I am extremely
pleased.

Were you able to test the provided test modules?  Only now I realise
that they aren't very friendly because there's a hardcoded database name
in there ("alvherre", not the wisest choice I guess), but they should at
least be able to run and not turn into a fork bomb due to being unable
to connect, for instance.

> Marking UnBlockSig with PGDLLIMPORT fixes that. But I wonder if it's
> a good idea to leave unblocking signals the responsibility of the
> user code in the first place? That seems like the kind of low-level
> stuff that you want to hide from extension writers.

Sounds sensible.

I am unsure about the amount of pre-cooked stuff we need to provide.
For instance, do we want some easy way to let the user code run
transactions?

> Oh, and this needs docs.

Hmm, yes it does.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-11-15 Thread Heikki Linnakangas

On 23.10.2012 00:29, Alvaro Herrera wrote:

Here's an updated version of this patch, which also works in
an EXEC_BACKEND environment.  (I haven't tested this at all on Windows,
but I don't see anything that would create a portability problem there.)


Looks good at first glance. Fails on Windows, though:

"C:\postgresql\pgsql.sln" (default target) (1) ->
"C:\postgresql\auth_counter.vcxproj" (default target) (29) ->
(Link target) ->
  auth_counter.obj : error LNK2001: unresolved external symbol 
UnBlockSig [C:\p

ostgresql\auth_counter.vcxproj]
  .\Release\auth_counter\auth_counter.dll : fatal error LNK1120: 1 
unresolved externals [C:\postgresql\auth_counter.vcxproj]



"C:\postgresql\pgsql.sln" (default target) (1) ->
"C:\postgresql\worker_spi.vcxproj" (default target) (77) ->
  worker_spi.obj : error LNK2001: unresolved external symbol UnBlockSig 
[C:\pos

tgresql\worker_spi.vcxproj]
  .\Release\worker_spi\worker_spi.dll : fatal error LNK1120: 1 
unresolved externals [C:\postgresql\worker_spi.vcxproj]


Marking UnBlockSig with PGDLLIMPORT fixes that. But I wonder if it's a 
good idea to leave unblocking signals the responsibility of the user 
code in the first place? That seems like the kind of low-level stuff 
that you want to hide from extension writers.


Oh, and this needs docs.

- Heikki




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-27 Thread Alvaro Herrera
Excerpts from Kohei KaiGai's message of jue sep 27 01:06:41 -0300 2012:
> Hi Alvaro,
> 
> Let me volunteer for reviewing, of course, but now pgsql_fdw is in my queue...

Sure, thanks -- keep in mind I entered this patch in the next
commitfest, so please do invest more effort in the ones in the
commitfest now in progress.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-26 Thread Kohei KaiGai
Hi Alvaro,

Let me volunteer for reviewing, of course, but now pgsql_fdw is in my queue...

If some other folks can also volunteer it soon, it is welcome.

2012/9/26 Alvaro Herrera :
> Excerpts from Alvaro Herrera's message of mié sep 26 13:04:34 -0300 2012:
>> Excerpts from Kohei KaiGai's message of mié abr 25 06:40:23 -0300 2012:
>>
>> > I tried to implement a patch according to the idea. It allows extensions
>> > to register an entry point of the self-managed daemon processes,
>> > then postmaster start and stop them according to the normal manner.
>>
>> Here's my attempt at this.  This is loosely based on your code, as well
>> as parts of what Simon sent me privately.
>
> Actually please consider this version instead, in which the support to
> connect to a database and run transactions actually works.  I have also
> added a new sample module (worker_spi) which talks to the server using
> the SPI interface.
>
> --
> Álvaro Herrera    http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services



-- 
KaiGai Kohei 




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-23 Thread Amit Kapila
> On Monday, September 24, 2012 12:24 AM Alvaro Herrera wrote:
> Excerpts from Amit kapila's message of sáb sep 22 01:14:40 -0300 2012:
> > On Friday, September 21, 2012 6:50 PM Alvaro Herrera wrote:
> > Excerpts from Amit Kapila's message of vie sep 21 02:26:49 -0300 2012:
> > > On Thursday, September 20, 2012 7:13 PM Alvaro Herrera wrote:
> >
> 
> You could also have worker groups commanded by one process: one queen
> bee, one or more worker bees.  The queen determines what to do, sets
> tasklist info in shmem, signals worker bees.  While the tasklist is
> empty, workers would sleep.
> 
> As you can see there are many things that can be done with this.

  Yes, this really is a good feature which can be used for many
different functionalities.


With Regards,
Amit Kapila.





Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-23 Thread Alvaro Herrera
Excerpts from Amit kapila's message of sáb sep 22 01:14:40 -0300 2012:
> On Friday, September 21, 2012 6:50 PM Alvaro Herrera wrote:
> Excerpts from Amit Kapila's message of vie sep 21 02:26:49 -0300 2012:
> > On Thursday, September 20, 2012 7:13 PM Alvaro Herrera wrote:
> 
> > > > Well, there is a difficulty here which is that the number of processes
> > >> connected to databases must be configured during postmaster start
> > >> (because it determines the size of certain shared memory structs).  So
> > >> you cannot just spawn more tasks if all max_worker_tasks are busy.
> > >> (This is a problem only for those workers that want to be connected as
> > >> backends.  Those that want libpq connections do not need this and are
> > >> easier to handle.)
> >
> 
> >> If not the above, then where is the need for the dynamic worker
> >> tasks mentioned by Simon?
> 
> > Well, I think there are many uses for dynamic workers, or short-lived
> > workers (start, do one thing, stop and not be restarted).
> 
> > In my design, a worker is always restarted if it stops; otherwise there
> > is no principled way to know whether it should be running or not (after
> > a crash, should we restart a registered worker?  We don't know whether
> > it stopped before the crash.)  So it seems to me that at least for this
> > first shot we should consider workers as processes that are going to be
> > always running as long as postmaster is alive.  On a crash, if they have
> > a backend connection, they are stopped and then restarted.
> 
> a. Is there a chance that it could have left shared memory inconsistent
> after a crash, e.g. by holding a lock on some structure and crashing
> before releasing it?  If so, do we need to reinitialize the shared
> memory as well when the worker restarts?

Any worker that requires access to shared memory will have to be stopped
and restarted on a crash (of any other postmaster child process).
Conversely, if a worker requires shmem access, it will have to cause the
whole system to be stopped/restarted if it crashes in some ugly way.
Same as any current process that's connected to shared memory, I think.

So, to answer your question, yes.  We need to take the safe route and
consider that a crashed process might have corrupted shmem.  (But if it
dies cleanly, then there is no need for this.)

> b. Will these worker tasks be able to take on new jobs, or will they
> only do the jobs they were started with?

Not sure I understand this question.  If a worker connects to a
database, it will stay connected to that database until it dies;
changing DBs is not allowed.  If you want a worker that connects to
database A, does stuff there, and then connects to database B, it could
connect to A, do its deed, then set up database=B in shared memory and
stop, which will cause postmaster to restart it; next time it starts, it
reads shmem and knows to connect to the other DB.

My code has the ability to connect to no particular database -- which is
what the autovac launcher does (this lets it read shared catalogs).  So
you could do useful things like have the first invocation of your worker
connect that way, read pg_database to determine which DB to connect to
next, and then terminate.

You could also have worker groups commanded by one process: one queen
bee, one or more worker bees.  The queen determines what to do, sets
tasklist info in shmem, signals worker bees.  While the tasklist is
empty, workers would sleep.

As you can see there are many things that can be done with this.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-21 Thread Amit kapila
On Friday, September 21, 2012 6:50 PM Alvaro Herrera wrote:
Excerpts from Amit Kapila's message of vie sep 21 02:26:49 -0300 2012:
> On Thursday, September 20, 2012 7:13 PM Alvaro Herrera wrote:

> > > Well, there is a difficulty here which is that the number of processes
> >> connected to databases must be configured during postmaster start
> >> (because it determines the size of certain shared memory structs).  So
> >> you cannot just spawn more tasks if all max_worker_tasks are busy.
> >> (This is a problem only for those workers that want to be connected as
> >> backends.  Those that want libpq connections do not need this and are
> >> easier to handle.)
>

>> If not the above, then where is the need for the dynamic worker tasks
>> mentioned by Simon?

> Well, I think there are many uses for dynamic workers, or short-lived
> workers (start, do one thing, stop and not be restarted).

> In my design, a worker is always restarted if it stops; otherwise there
> is no principled way to know whether it should be running or not (after
> a crash, should we restart a registered worker?  We don't know whether
> it stopped before the crash.)  So it seems to me that at least for this
> first shot we should consider workers as processes that are going to be
> always running as long as postmaster is alive.  On a crash, if they have
> a backend connection, they are stopped and then restarted.

a. Is there a chance that it could have left shared memory inconsistent
after a crash, e.g. by holding a lock on some structure and crashing
before releasing it?  If so, do we need to reinitialize the shared
memory as well when the worker restarts?

b. Will these worker tasks be able to take on new jobs, or will they
only do the jobs they were started with?


With Regards,
Amit Kapila. 







Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-21 Thread Alvaro Herrera
Excerpts from Amit Kapila's message of vie sep 21 02:26:49 -0300 2012:
> On Thursday, September 20, 2012 7:13 PM Alvaro Herrera wrote:

> > Well, there is a difficulty here which is that the number of processes
> > connected to databases must be configured during postmaster start
> > (because it determines the size of certain shared memory structs).  So
> > you cannot just spawn more tasks if all max_worker_tasks are busy.
> > (This is a problem only for those workers that want to be connected as
> > backends.  Those that want libpq connections do not need this and are
> > easier to handle.)
> 
> Are you talking about shared memory structs that need to be allocated for
> each worker task?
> I am not sure if they can be shared across multiple slaves or will be
> required for each slave.
> However, even if that is not possible, other mechanisms can be used to get
> the work done by existing slaves.

I mean stuff like PGPROC entries and such.  Currently, they are
allocated based on max_autovacuum_workers + max_connections +
max_prepared_transactions IIRC.  So by following identical reasoning we
would just have to add a hypothetical new max_bgworkers to the mix;
however as I said above, we don't really need that because we can count
the number of registered workers at postmaster start time and use that
to size PGPROC.

Shared memory used by each worker (or by a group of workers) that's not
part of core structs should be allocated by the worker itself via
RequestAddinShmemSpace().

> If not the above, then where is the need for the dynamic worker tasks
> mentioned by Simon?

Well, I think there are many uses for dynamic workers, or short-lived
workers (start, do one thing, stop and not be restarted).

In my design, a worker is always restarted if it stops; otherwise there
is no principled way to know whether it should be running or not (after
a crash, should we restart a registered worker?  We don't know whether
it stopped before the crash.)  So it seems to me that at least for this
first shot we should consider workers as processes that are going to be
always running as long as postmaster is alive.  On a crash, if they have
a backend connection, they are stopped and then restarted.

> > One thing I am not going to look into is how this new capability can be
> > used for parallel query.  I feel we have enough use cases without it
> > that we can develop a fairly powerful feature.  After that is done and
> > proven (and committed) we can look into how we can use this to implement
> > these short-lived workers for stuff such as parallel query.
> 
>   Agreed; I meant to say the same thing.

Great.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-20 Thread Amit Kapila
On Thursday, September 20, 2012 7:35 PM Kohei KaiGai wrote:
2012/9/20 Amit Kapila :
> On Thursday, September 20, 2012 1:44 AM Simon Riggs wrote:
> On 12 September 2012 04:30, Amit Kapila  wrote:
>> On Tuesday, September 11, 2012 9:09 PM Alvaro Herrera wrote:
>> Excerpts from Boszormenyi Zoltan's message of vie jun 29 09:11:23 -0400
> 2012:
>>
>> We have some use cases for this patch, when can you post
>> a new version? I would test and review it.
>>
> What use cases do you have in mind?
>>
   Wouldn't it be helpful for some features like parallel query in
future?
>
>>> Trying to solve that is what delayed this patch, so the scope of this
>>> needs to be "permanent daemons" rather than dynamically spawned worker
>>> tasks.
>
>>   Why can't worker tasks be also permanent, which can be controlled
through
>>   configuration. What I mean to say is that if user has need for parallel
>> operations
>>   he can configure max_worker_tasks and those many worker tasks will get
>> created.
>>   Otherwise without having such parameter, we might not be sure whether
such
>> deamons
>>   will be of use to database users who don't need any background ops.
>
>>   The dynamism will come in to scene when we need to allocate such
daemons
>> for particular ops(query), because
>>   might be operation need certain number of worker tasks, but no such
task
>> is available, at that time it need
>>   to be decided whether to spawn a new task or change the parallelism in
>> operation such that it can be executed with
>>   available number of worker tasks.
>


> I'm also not sure why "permanent daemons" is more difficult than
dynamically
> spawned daemons, 

I think Alvaro and Simon also felt "permanent daemons" is not difficult and
is the right way to go, 
that’s why the feature is getting developed on those lines.

With Regards,
Amit Kapila.





Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-20 Thread Amit Kapila
On Thursday, September 20, 2012 7:13 PM Alvaro Herrera wrote:
Excerpts from Amit Kapila's message of jue sep 20 02:10:23 -0300 2012:


>>   Why can't worker tasks also be permanent, controlled through
>> configuration?  What I mean to say is that if a user needs parallel
>> operations, he can configure max_worker_tasks and that many worker
>> tasks will get created.  Without such a parameter, we might not be
>> sure whether such daemons will be of use to database users who don't
>> need any background ops.
> 
>>   The dynamism will come into the scene when we need to allocate such
>> daemons for particular ops (queries): an operation might need a
>> certain number of worker tasks, but no such task is available, and at
>> that time it needs to be decided whether to spawn a new task or change
>> the parallelism of the operation so that it can be executed with the
>> available number of worker tasks.

> Well, there is a difficulty here which is that the number of processes
> connected to databases must be configured during postmaster start
> (because it determines the size of certain shared memory structs).  So
> you cannot just spawn more tasks if all max_worker_tasks are busy.
> (This is a problem only for those workers that want to be connected as
> backends.  Those that want libpq connections do not need this and are
> easier to handle.)

Are you talking about shared memory structs that need to be allocated for
each worker task?
I am not sure if they can be shared across multiple slaves or will be
required for each slave.
However, even if that is not possible, other mechanisms can be used to get
the work done by existing slaves.

If not the above, then where is the need for the dynamic worker tasks
mentioned by Simon?

> The design we're currently discussing actually does not require a new
> GUC parameter at all.  This is why: since the workers must be registered
> before postmaster start anyway (in the _PG_init function of a module
> that's listed in shared_preload_libraries) then we have to run a
>registering function during postmaster start.  So postmaster can simply
> count how many it needs and size those structs from there.  Workers that
> do not need a backend-like connection don't have a shmem sizing
> requirement so are not important for this.  Configuration is thus
> simplified.

> BTW I am working on this patch and I think I have a workable design in
> place; I just couldn't get the code done before the start of this
> commitfest.  (I am missing handling the EXEC_BACKEND case though, but I
> will not even look into that until the basic Unix case is working).

> One thing I am not going to look into is how this new capability can be
> used for parallel query.  I feel we have enough use cases without it
> that we can develop a fairly powerful feature.  After that is done and
> proven (and committed) we can look into how we can use this to implement
> these short-lived workers for stuff such as parallel query.

  Agreed; I meant to say the same thing.

With Regards,
Amit Kapila.





Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-20 Thread Kohei KaiGai
2012/9/20 Amit Kapila :
> On Thursday, September 20, 2012 1:44 AM Simon Riggs wrote:
> On 12 September 2012 04:30, Amit Kapila  wrote:
>> On Tuesday, September 11, 2012 9:09 PM Alvaro Herrera wrote:
>> Excerpts from Boszormenyi Zoltan's message of vie jun 29 09:11:23 -0400 2012:
>>
> We have some use cases for this patch, when can you post
> a new version? I would test and review it.
>>
 What use cases do you have in mind?
>>
>>>   Wouldn't it be helpful for some features like parallel query in future?
>
>> Trying to solve that is what delayed this patch, so the scope of this
>> needs to be "permanent daemons" rather than dynamically spawned worker
>> tasks.
>
>   Why can't worker tasks also be permanent, controlled through
> configuration?  What I mean to say is that if a user needs parallel
> operations, he can configure max_worker_tasks and that many worker
> tasks will get created.  Without such a parameter, we might not be
> sure whether such daemons will be of use to database users who don't
> need any background ops.
>
>   The dynamism will come into the scene when we need to allocate such
> daemons for particular ops (queries): an operation might need a
> certain number of worker tasks, but no such task is available, and at
> that time it needs to be decided whether to spawn a new task or change
> the parallelism of the operation so that it can be executed with the
> available number of worker tasks.
>
>   Although I understood and agree that such "permanent daemons" will
> be useful for use cases other than parallel operations, my thinking is
> that having "permanent daemons" can also be useful for parallel ops.
> So even though it is currently being developed for certain use cases,
> the overall idea can be enhanced to cover parallel ops as well.
>
I'm also not sure why "permanent daemons" is more difficult than dynamically
spawned daemons, because I guess all the jobs needed for permanent
daemons are quite similar to what we're now doing to implement bgwriter
and others in postmaster.c, except that it is implemented with loadable modules.

Thanks,
-- 
KaiGai Kohei 




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-20 Thread Alvaro Herrera
Excerpts from Amit Kapila's message of jue sep 20 02:10:23 -0300 2012:


>   Why can't worker tasks also be permanent, controlled through
> configuration?  What I mean to say is that if a user needs parallel
> operations, he can configure max_worker_tasks and that many worker
> tasks will get created.  Without such a parameter, we might not be
> sure whether such daemons will be of use to database users who don't
> need any background ops.
> 
>   The dynamism will come into the scene when we need to allocate such
> daemons for particular ops (queries): an operation might need a
> certain number of worker tasks, but no such task is available, and at
> that time it needs to be decided whether to spawn a new task or change
> the parallelism of the operation so that it can be executed with the
> available number of worker tasks.

Well, there is a difficulty here which is that the number of processes
connected to databases must be configured during postmaster start
(because it determines the size of certain shared memory structs).  So
you cannot just spawn more tasks if all max_worker_tasks are busy.
(This is a problem only for those workers that want to be connected as
backends.  Those that want libpq connections do not need this and are
easier to handle.)

The design we're currently discussing actually does not require a new
GUC parameter at all.  This is why: since the workers must be registered
before postmaster start anyway (in the _PG_init function of a module
that's listed in shared_preload_libraries) then we have to run a
registering function during postmaster start.  So postmaster can simply
count how many it needs and size those structs from there.  Workers that
do not need a backend-like connection don't have a shmem sizing
requirement so are not important for this.  Configuration is thus
simplified.

BTW I am working on this patch and I think I have a workable design in
place; I just couldn't get the code done before the start of this
commitfest.  (I am missing handling the EXEC_BACKEND case though, but I
will not even look into that until the basic Unix case is working).

One thing I am not going to look into is how this new capability can be
used for parallel query.  I feel we have enough use cases without it
that we can develop a fairly powerful feature.  After that is done and
proven (and committed) we can look into how we can use this to implement
these short-lived workers for stuff such as parallel query.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-19 Thread Amit Kapila
On Thursday, September 20, 2012 1:44 AM Simon Riggs wrote:
On 12 September 2012 04:30, Amit Kapila  wrote:
> On Tuesday, September 11, 2012 9:09 PM Alvaro Herrera wrote:
> Excerpts from Boszormenyi Zoltan's message of vie jun 29 09:11:23 -0400 2012:
>
 We have some use cases for this patch, when can you post
 a new version? I would test and review it.
>
>>> What use cases do you have in mind?
>
>>   Wouldn't it be helpful for some features like parallel query in future?

> Trying to solve that is what delayed this patch, so the scope of this
> needs to be "permanent daemons" rather than dynamically spawned worker
> tasks.
  
  Why can't worker tasks also be permanent, controlled through
configuration?  What I mean to say is that if a user needs parallel
operations, he can configure max_worker_tasks and that many worker tasks
will get created.  Without such a parameter, we might not be sure whether
such daemons will be of use to database users who don't need any
background ops.

  The dynamism will come into the scene when we need to allocate such
daemons for particular ops (queries): an operation might need a certain
number of worker tasks, but no such task is available, and at that time
it needs to be decided whether to spawn a new task or change the
parallelism of the operation so that it can be executed with the
available number of worker tasks.

  Although I understood and agree that such "permanent daemons" will be
useful for use cases other than parallel operations, my thinking is that
having "permanent daemons" can also be useful for parallel ops.  So even
though it is currently being developed for certain use cases, the overall
idea can be enhanced to cover parallel ops as well.

With Regards,
Amit Kapila.





Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-19 Thread Simon Riggs
On 12 September 2012 04:30, Amit Kapila  wrote:
> On Tuesday, September 11, 2012 9:09 PM Alvaro Herrera wrote:
> Excerpts from Boszormenyi Zoltan's message of vie jun 29 09:11:23 -0400 2012:
>
>>> We have some use cases for this patch, when can you post
>>> a new version? I would test and review it.
>
>> What use cases do you have in mind?
>
>   Wouldn't it be helpful for some features like parallel query in future?

Trying to solve that is what delayed this patch, so the scope of this
needs to be "permanent daemons" rather than dynamically spawned worker
tasks.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-11 Thread Alvaro Herrera
Excerpts from Amit Kapila's message of mié sep 12 00:30:40 -0300 2012:
> On Tuesday, September 11, 2012 9:09 PM Alvaro Herrera wrote:
> Excerpts from Boszormenyi Zoltan's message of vie jun 29 09:11:23 -0400 2012:
> 
> >> We have some use cases for this patch, when can you post
> >> a new version? I would test and review it.
> 
> > What use cases do you have in mind?
> 
>   Wouldn't it be helpful for some features like parallel query in future?

Maybe, maybe not -- but I don't think it's a wise idea to include too
much complexity just to support such a thing.  I would vote to leave
that out for now and just concentrate on getting external stuff working.
There are enough use cases that it's already looking nontrivial.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-11 Thread Amit Kapila
On Tuesday, September 11, 2012 9:09 PM Alvaro Herrera wrote:
Excerpts from Boszormenyi Zoltan's message of vie jun 29 09:11:23 -0400 2012:

>> We have some use cases for this patch, when can you post
>> a new version? I would test and review it.

> What use cases do you have in mind?

  Wouldn't it be helpful for some features like parallel query in future?

With Regards,
Amit Kapila.





Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-11 Thread Boszormenyi Zoltan

On 2012-09-11 17:58, Alvaro Herrera wrote:

Excerpts from Kohei KaiGai's message of mar sep 11 12:46:34 -0300 2012:

2012/9/11 Alvaro Herrera :

Excerpts from Boszormenyi Zoltan's message of vie jun 29 09:11:23 -0400 2012:


We have some use cases for this patch, when can you post
a new version? I would test and review it.

What use cases do you have in mind?


My motivation for this feature is to implement a background calculation
server to handle accesses to a GPU device, to avoid the limitation on the
number of processes that can use the GPU device simultaneously.

Hmm, okay, so basically a worker would need a couple of LWLocks, a
shared memory area, and not much else?  Not a database connection.


Probably, other folks have their use cases.
For example, Zoltan introduced his use case in the upthread as follows:

- an SQL-driven scheduler, similar to pgAgent, it's generic enough,
   we might port it to this scheme and publish it

Hm, this would benefit from a direct backend connection to get the
schedule data (SPI interface I guess).


Indeed. And the advantage is that the scheduler's lifetime is exactly
the server's lifetime so there is no need to try reconnecting as soon
as the server goes away and wait until it comes back.


- a huge volume importer daemon, it was written for a very specific
   purpose and for a single client, we cannot publish it.

This one AFAIR requires more than one connection, so a direct data
connection is no good -- hence link libpq like walreceiver.


Yes.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/





Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-11 Thread Alvaro Herrera
Excerpts from Kohei KaiGai's message of mar sep 11 13:25:18 -0300 2012:
> 2012/9/11 Alvaro Herrera :

> >> > - an SQL-driven scheduler, similar to pgAgent, it's generic enough,
> >> >   we might port it to this scheme and publish it
> >
> > Hm, this would benefit from a direct backend connection to get the
> > schedule data (SPI interface I guess).
> >
> I also think the SPI interface will be the first candidate for daemons
> that need database access.  Probably, lower-layer interfaces (such as
> heap_open and heap_beginscan) are also available wherever the SPI
> interface can be used.

Well, as soon as you have a database connection on which you can run
SPI, you need a lot of stuff to ensure your transaction is aborted in
case of trouble and so on.  At that point you can do direct access as
well.

I think it would be a good design to provide different cleanup routes
for the different use cases: for those that need database connections we
need to go through AbortOutOfAnyTransaction() or something similar; for
others we can probably get away with much less than that.  Not 100% sure
at this point.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-11 Thread Kohei KaiGai
2012/9/11 Alvaro Herrera :
> Excerpts from Kohei KaiGai's message of mar sep 11 12:46:34 -0300 2012:
>> 2012/9/11 Alvaro Herrera :
>> > Excerpts from Boszormenyi Zoltan's message of vie jun 29 09:11:23 -0400 
>> > 2012:
>> >
>> >> We have some use cases for this patch, when can you post
>> >> a new version? I would test and review it.
>> >
>> > What use cases do you have in mind?
>> >
>> I'm motivated with this feature to implement background calculation server
>> to handle accesses to GPU device; to avoid limitation of number of processes
>> that can use GPU device simultaneously.
>
> Hmm, okay, so basically a worker would need a couple of LWLocks, a
> shared memory area, and not much else?  Not a database connection.
>
Right. It needs a shared memory area to communicate with each backend,
and a locking mechanism, but my use case does not require database
access right now.

>> Probably, other folks have their use cases.
>> For example, Zoltan introduced his use case in the upthread as follows:
>> > - an SQL-driven scheduler, similar to pgAgent, it's generic enough,
>> >   we might port it to this scheme and publish it
>
> Hm, this would benefit from a direct backend connection to get the
> schedule data (SPI interface I guess).
>
I also think the SPI interface will be the first candidate for daemons
that need database access. Lower-layer interfaces (such as heap_open
and heap_beginscan) should also be usable wherever the SPI interface
can be used.

Thanks,
-- 
KaiGai Kohei 




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-11 Thread Alvaro Herrera
Excerpts from Kohei KaiGai's message of mar sep 11 12:46:34 -0300 2012:
> 2012/9/11 Alvaro Herrera :
> > Excerpts from Boszormenyi Zoltan's message of vie jun 29 09:11:23 -0400 
> > 2012:
> >
> >> We have some use cases for this patch, when can you post
> >> a new version? I would test and review it.
> >
> > What use cases do you have in mind?
> >
> I'm motivated with this feature to implement background calculation server
> to handle accesses to GPU device; to avoid limitation of number of processes
> that can use GPU device simultaneously.

Hmm, okay, so basically a worker would need a couple of LWLocks, a
shared memory area, and not much else?  Not a database connection.

> Probably, other folks have their use cases.
> For example, Zoltan introduced his use case in the upthread as follows:
> > - an SQL-driven scheduler, similar to pgAgent, it's generic enough,
> >   we might port it to this scheme and publish it

Hm, this would benefit from a direct backend connection to get the
schedule data (SPI interface I guess).

> > - a huge volume importer daemon, it was written for a very specific
> >   purpose and for a single client, we cannot publish it.

This one, AFAIR, requires more than one connection, so a direct data
connection is no good -- hence linking libpq the way walreceiver does.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-11 Thread Kohei KaiGai
2012/9/11 Alvaro Herrera :
> Excerpts from Boszormenyi Zoltan's message of vie jun 29 09:11:23 -0400 2012:
>
>> We have some use cases for this patch, when can you post
>> a new version? I would test and review it.
>
> What use cases do you have in mind?
>
My motivation for this feature is to implement a background calculation
server that handles access to the GPU device, working around the limit
on the number of processes that can use the GPU device simultaneously.

Other folks probably have their own use cases.
For example, Zoltan introduced his use case upthread as follows:
> - an SQL-driven scheduler, similar to pgAgent, it's generic enough,
>   we might port it to this scheme and publish it
> - a huge volume importer daemon, it was written for a very specific
>   purpose and for a single client, we cannot publish it.

Thanks,
-- 
KaiGai Kohei 




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-11 Thread Alvaro Herrera
Excerpts from Boszormenyi Zoltan's message of vie jun 29 09:11:23 -0400 2012:

> We have some use cases for this patch, when can you post
> a new version? I would test and review it.

What use cases do you have in mind?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services





Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-08-31 Thread Kohei KaiGai
2012/6/21 Simon Riggs :
> On 21 June 2012 19:13, Jaime Casanova  wrote:
>> On Sun, Jun 10, 2012 at 4:15 AM, Kohei KaiGai  wrote:
>>> 2012/6/8 Simon Riggs :
>>>
 I have a prototype that has some of these characteristics, so I see
 our work as complementary.

 At present, I don't think this patch would be committable in CF1, but
 I'd like to make faster progress with it than that. Do you want to
 work on this more, or would you like me to merge our prototypes into a
 more likely candidate?

>>> I'm not favor in duplicate similar efforts. If available, could you merge
>>> some ideas in my patch into your prototypes?
>>>
>>
>> so, we are waiting for a new patch? is it coming from Simon or Kohei?
>
> There is an updated patch coming from me. I thought I would focus on
> review of other work first.
>
Simon, what is the current status of this patch?

Is there anything I can do to help with the integration for the upcoming CF?

Thanks,
-- 
KaiGai Kohei 




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-06-29 Thread Jaime Casanova
On Fri, Jun 29, 2012 at 9:44 AM, Kohei KaiGai  wrote:
>
> The auth_counter is just an proof-of-concept patch, so, it is helpful if you
> could provide another use case that can make sense.
>

what about pgbouncer?

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación



Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-06-29 Thread Boszormenyi Zoltan

On 2012-06-29 16:44, Kohei KaiGai wrote:

2012/6/29 Boszormenyi Zoltan :

On 2012-04-25 11:40, Kohei KaiGai wrote:


2012/3/10 Simon Riggs :

On Fri, Mar 9, 2012 at 6:51 PM, Andrew Dunstan 
wrote:


On 03/09/2012 01:40 PM, Robert Haas wrote:

On Fri, Mar 9, 2012 at 12:02 PM, David E.
Wheeler
  wrote:

On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote:

100% agree  (having re-read the thread and Alvaro's idea having sunk
in).  Being able to set up daemon processes side by side with the
postmaster would fit the bill nicely.  It's pretty interesting to
think of all the places you could go with it.

pgAgent could use it *right now*. I keep forgetting to restart it
after
restarting PostgreSQL and finding after a day or so that no jobs have
run.

That can and should be fixed by teaching pgAgent that failing to
connect to the server, or getting disconnected, is not a fatal error,
but a reason to sleep and retry.


Yeah. It's still not entirely clear to me what a postmaster-controlled
daemon is going to be able to do that an external daemon can't.

Start and stop at the same time as postmaster, without any pain.

It's a considerable convenience to be able to design this aspect once
and then have all things linked to the postmaster follow that. It
means people will be able to write code that runs on all OS easily,
without everybody having similar but slightly different code about
starting up, reading parameters, following security rules etc.. Tight
integration, with good usability.


I tried to implement a patch according to the idea. It allows extensions
to register an entry point of the self-managed daemon processes,
then postmaster start and stop them according to the normal manner.

[kaigai@iwashi patch]$ ps ax | grep postgres
27784 pts/0    S      0:00 /usr/local/pgsql/bin/postgres
27786 ?        Ss     0:00 postgres: writer process
27787 ?        Ss     0:00 postgres: checkpointer process
27788 ?        Ss     0:00 postgres: wal writer process
27789 ?        Ss     0:00 postgres: autovacuum launcher process
27790 ?        Ss     0:00 postgres: stats collector process
27791 ?        Ss     0:00 postgres: auth_counter              <== (*)

The auth_counter being included in this patch is just an example of
this functionality. It does not have significant meanings. It just logs
number of authentication success and fails every intervals.

I'm motivated to define an extra daemon that attach shared memory
segment of PostgreSQL as a computing server to avoid limitation of
number of GPU code that we can load concurrently.

Thanks,


I have tested this original version. The patch has a single trivial reject,
after fixing it, it compiled nicely.

After adding shared_preload_libraries='$libdir/auth_counter', the extra
daemon start and stops nicely with pg_ctl start/stop. The auth_counter.c
code is a fine minimalistic example on writing one's own daemon.


Thanks for your testing.

According to Simon's comment, I'm waiting for his integration of this patch
with another implementation by him.

The auth_counter is just an proof-of-concept patch, so, it is helpful if you
could provide another use case that can make sense.


Well, we have two use cases that are more complex:

- an SQL-driven scheduler, similar to pgAgent; it's generic enough
  that we might port it to this scheme and publish it
- a huge-volume importer daemon; it was written for a very specific
  purpose and for a single client, so we cannot publish it.

Both need database connections, and the second needs more than one,
so they need to link to the client-side libpq; what was done for
walreceiver can be done here as well.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-06-29 Thread Kohei KaiGai
2012/6/29 Boszormenyi Zoltan :
> On 2012-04-25 11:40, Kohei KaiGai wrote:
>
>> 2012/3/10 Simon Riggs :
>>>
>>> On Fri, Mar 9, 2012 at 6:51 PM, Andrew Dunstan 
>>> wrote:


 On 03/09/2012 01:40 PM, Robert Haas wrote:
>
> On Fri, Mar 9, 2012 at 12:02 PM, David E.
> Wheeler
>  wrote:
>>
>> On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote:
>>>
>>> 100% agree  (having re-read the thread and Alvaro's idea having sunk
>>> in).  Being able to set up daemon processes side by side with the
>>> postmaster would fit the bill nicely.  It's pretty interesting to
>>> think of all the places you could go with it.
>>
>> pgAgent could use it *right now*. I keep forgetting to restart it
>> after
>> restarting PostgreSQL and finding after a day or so that no jobs have
>> run.
>
> That can and should be fixed by teaching pgAgent that failing to
> connect to the server, or getting disconnected, is not a fatal error,
> but a reason to sleep and retry.


 Yeah. It's still not entirely clear to me what a postmaster-controlled
 daemon is going to be able to do that an external daemon can't.
>>>
>>> Start and stop at the same time as postmaster, without any pain.
>>>
>>> It's a considerable convenience to be able to design this aspect once
>>> and then have all things linked to the postmaster follow that. It
>>> means people will be able to write code that runs on all OS easily,
>>> without everybody having similar but slightly different code about
>>> starting up, reading parameters, following security rules etc.. Tight
>>> integration, with good usability.
>>>
>> I tried to implement a patch according to the idea. It allows extensions
>> to register an entry point of the self-managed daemon processes,
>> then postmaster start and stop them according to the normal manner.
>>
>> [kaigai@iwashi patch]$ ps ax | grep postgres
>> 27784 pts/0    S      0:00 /usr/local/pgsql/bin/postgres
>> 27786 ?        Ss     0:00 postgres: writer process
>> 27787 ?        Ss     0:00 postgres: checkpointer process
>> 27788 ?        Ss     0:00 postgres: wal writer process
>> 27789 ?        Ss     0:00 postgres: autovacuum launcher process
>> 27790 ?        Ss     0:00 postgres: stats collector process
>> 27791 ?        Ss     0:00 postgres: auth_counter              <== (*)
>>
>> The auth_counter being included in this patch is just an example of
>> this functionality. It does not have significant meanings. It just logs
>> number of authentication success and fails every intervals.
>>
>> I'm motivated to define an extra daemon that attach shared memory
>> segment of PostgreSQL as a computing server to avoid limitation of
>> number of GPU code that we can load concurrently.
>>
>> Thanks,
>
>
> I have tested this original version. The patch has a single trivial reject,
> after fixing it, it compiled nicely.
>
> After adding shared_preload_libraries='$libdir/auth_counter', the extra
> daemon start and stops nicely with pg_ctl start/stop. The auth_counter.c
> code is a fine minimalistic example on writing one's own daemon.
>
Thanks for your testing.

Per Simon's comment, I'm waiting for him to integrate this patch
with his own implementation.

The auth_counter is just a proof-of-concept patch, so it would be
helpful if you could provide another use case that makes sense.

Best regards,
-- 
KaiGai Kohei 



Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-06-29 Thread Boszormenyi Zoltan

On 2012-04-25 11:40, Kohei KaiGai wrote:

2012/3/10 Simon Riggs :

On Fri, Mar 9, 2012 at 6:51 PM, Andrew Dunstan  wrote:


On 03/09/2012 01:40 PM, Robert Haas wrote:

On Fri, Mar 9, 2012 at 12:02 PM, David E. Wheeler
  wrote:

On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote:

100% agree  (having re-read the thread and Alvaro's idea having sunk
in).  Being able to set up daemon processes side by side with the
postmaster would fit the bill nicely.  It's pretty interesting to
think of all the places you could go with it.

pgAgent could use it *right now*. I keep forgetting to restart it after
restarting PostgreSQL and finding after a day or so that no jobs have run.

That can and should be fixed by teaching pgAgent that failing to
connect to the server, or getting disconnected, is not a fatal error,
but a reason to sleep and retry.


Yeah. It's still not entirely clear to me what a postmaster-controlled
daemon is going to be able to do that an external daemon can't.

Start and stop at the same time as postmaster, without any pain.

It's a considerable convenience to be able to design this aspect once
and then have all things linked to the postmaster follow that. It
means people will be able to write code that runs on all OS easily,
without everybody having similar but slightly different code about
starting up, reading parameters, following security rules etc.. Tight
integration, with good usability.


I tried to implement a patch according to the idea. It allows extensions
to register an entry point of the self-managed daemon processes,
then postmaster start and stop them according to the normal manner.

[kaigai@iwashi patch]$ ps ax | grep postgres
27784 pts/0    S      0:00 /usr/local/pgsql/bin/postgres
27786 ?        Ss     0:00 postgres: writer process
27787 ?        Ss     0:00 postgres: checkpointer process
27788 ?        Ss     0:00 postgres: wal writer process
27789 ?        Ss     0:00 postgres: autovacuum launcher process
27790 ?        Ss     0:00 postgres: stats collector process
27791 ?        Ss     0:00 postgres: auth_counter              <== (*)

The auth_counter being included in this patch is just an example of
this functionality. It does not have significant meanings. It just logs
number of authentication success and fails every intervals.

I'm motivated to define an extra daemon that attach shared memory
segment of PostgreSQL as a computing server to avoid limitation of
number of GPU code that we can load concurrently.

Thanks,


I have tested this original version. The patch has a single trivial
reject; after fixing it, the patch compiled cleanly.

After adding shared_preload_libraries='$libdir/auth_counter', the extra
daemon starts and stops nicely with pg_ctl start/stop. The auth_counter.c
code is a fine minimalistic example of writing one's own daemon.

Thanks,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-06-29 Thread Boszormenyi Zoltan

On 2012-06-21 23:53, Simon Riggs wrote:

On 21 June 2012 19:13, Jaime Casanova  wrote:

On Sun, Jun 10, 2012 at 4:15 AM, Kohei KaiGai  wrote:

2012/6/8 Simon Riggs :


I have a prototype that has some of these characteristics, so I see
our work as complementary.

At present, I don't think this patch would be committable in CF1, but
I'd like to make faster progress with it than that. Do you want to
work on this more, or would you like me to merge our prototypes into a
more likely candidate?


I'm not favor in duplicate similar efforts. If available, could you merge
some ideas in my patch into your prototypes?


so, we are waiting for a new patch? is it coming from Simon or Kohei?

There is an updated patch coming from me. I thought I would focus on
review of other work first.


We have some use cases for this patch; when can you post
a new version? I would test and review it.

Thanks in advance,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-06-21 Thread Simon Riggs
On 21 June 2012 19:13, Jaime Casanova  wrote:
> On Sun, Jun 10, 2012 at 4:15 AM, Kohei KaiGai  wrote:
>> 2012/6/8 Simon Riggs :
>>
>>> I have a prototype that has some of these characteristics, so I see
>>> our work as complementary.
>>>
>>> At present, I don't think this patch would be committable in CF1, but
>>> I'd like to make faster progress with it than that. Do you want to
>>> work on this more, or would you like me to merge our prototypes into a
>>> more likely candidate?
>>>
>> I'm not favor in duplicate similar efforts. If available, could you merge
>> some ideas in my patch into your prototypes?
>>
>
> so, we are waiting for a new patch? is it coming from Simon or Kohei?

There is an updated patch coming from me. I thought I would focus on
review of other work first.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-06-21 Thread Jaime Casanova
On Sun, Jun 10, 2012 at 4:15 AM, Kohei KaiGai  wrote:
> 2012/6/8 Simon Riggs :
>
>> I have a prototype that has some of these characteristics, so I see
>> our work as complementary.
>>
>> At present, I don't think this patch would be committable in CF1, but
>> I'd like to make faster progress with it than that. Do you want to
>> work on this more, or would you like me to merge our prototypes into a
>> more likely candidate?
>>
> I'm not favor in duplicate similar efforts. If available, could you merge
> some ideas in my patch into your prototypes?
>

So, are we waiting for a new patch? Is it coming from Simon or Kohei?

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación



Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-06-10 Thread Kohei KaiGai
2012/6/8 Simon Riggs :
> On 25 April 2012 10:40, Kohei KaiGai  wrote:
>
>> I tried to implement a patch according to the idea. It allows extensions
>> to register an entry point of the self-managed daemon processes,
>> then postmaster start and stop them according to the normal manner.
>
> The patch needs much work yet, but has many good ideas.
>
> There doesn't seem to be a place where we pass the parameter to say
> which one of the multiple daemons a particular process should become.
> It would be helpful for testing to make the example module call 2
> daemons each with slightly different characteristics or parameters, so
> we can test the full function of the patch.
>
This patch intended daemons to be registered multiple times under
different names, such as "auth-counter-1" or "auth-counter-2".
But I agree with the suggestion: taking a parameter to identify each
daemon makes the interface better than the original one.

> I think its essential that we allow these processes to execute SQL, so
> we must correctly initialise them as backends and set up signalling.
> Which also means we need a parameter to limit the number of such
> processes.
>
It should be controllable with a flag to RegisterExtraDaemon().
It would help reduce code duplication when extra daemons execute SQL,
but some other use cases may not need SQL execution at all.

> Also, I prefer to call these bgworker processes, which is more similar
> to auto vacuum worker and bgwriter naming. That also gives a clue as
> to how to set up signalling etc..
>
> I don't think we should allow these processes to override sighup and
> sigterm. Signal handling should be pretty standard, just as it is with
> normal backends.
>
Hmm. CHECK_FOR_INTERRUPTS() might be sufficient to handle signals
in the standard way.

> I have a prototype that has some of these characteristics, so I see
> our work as complementary.
>
> At present, I don't think this patch would be committable in CF1, but
> I'd like to make faster progress with it than that. Do you want to
> work on this more, or would you like me to merge our prototypes into a
> more likely candidate?
>
I'm not in favor of duplicating similar efforts. If possible, could you
merge some ideas from my patch into your prototype?

Thanks,
-- 
KaiGai Kohei 



Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-06-08 Thread Simon Riggs
On 25 April 2012 10:40, Kohei KaiGai  wrote:

> I tried to implement a patch according to the idea. It allows extensions
> to register an entry point of the self-managed daemon processes,
> then postmaster start and stop them according to the normal manner.

The patch needs much work yet, but has many good ideas.

There doesn't seem to be a place where we pass the parameter to say
which one of the multiple daemons a particular process should become.
It would be helpful for testing to make the example module call 2
daemons each with slightly different characteristics or parameters, so
we can test the full function of the patch.

I think it's essential that we allow these processes to execute SQL, so
we must correctly initialise them as backends and set up signalling.
Which also means we need a parameter to limit the number of such
processes.

Also, I prefer to call these bgworker processes, which is more similar
to auto vacuum worker and bgwriter naming. That also gives a clue as
to how to set up signalling etc..

I don't think we should allow these processes to override sighup and
sigterm. Signal handling should be pretty standard, just as it is with
normal backends.

I have a prototype that has some of these characteristics, so I see
our work as complementary.

At present, I don't think this patch would be committable in CF1, but
I'd like to make faster progress with it than that. Do you want to
work on this more, or would you like me to merge our prototypes into a
more likely candidate?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-04-25 Thread Simon Riggs
On Wed, Apr 25, 2012 at 10:40 AM, Kohei KaiGai  wrote:

> I tried to implement a patch according to the idea. It allows extensions
> to register an entry point of the self-managed daemon processes,
> then postmaster start and stop them according to the normal manner.

I've got a provisional version of this as well, which I was expecting
to submit for 9.3 CF1.

Best thing is probably to catch up at PGCon on this, so we can merge
the proposals and code.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



[v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-04-25 Thread Kohei KaiGai
2012/3/10 Simon Riggs :
> On Fri, Mar 9, 2012 at 6:51 PM, Andrew Dunstan  wrote:
>>
>>
>> On 03/09/2012 01:40 PM, Robert Haas wrote:
>>>
>>> On Fri, Mar 9, 2012 at 12:02 PM, David E. Wheeler
>>>  wrote:

 On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote:
>
> 100% agree  (having re-read the thread and Alvaro's idea having sunk
> in).  Being able to set up daemon processes side by side with the
> postmaster would fit the bill nicely.  It's pretty interesting to
> think of all the places you could go with it.

 pgAgent could use it *right now*. I keep forgetting to restart it after
 restarting PostgreSQL and finding after a day or so that no jobs have run.
>>>
>>> That can and should be fixed by teaching pgAgent that failing to
>>> connect to the server, or getting disconnected, is not a fatal error,
>>> but a reason to sleep and retry.
>>
>>
>> Yeah. It's still not entirely clear to me what a postmaster-controlled
>> daemon is going to be able to do that an external daemon can't.
>
> Start and stop at the same time as postmaster, without any pain.
>
> It's a considerable convenience to be able to design this aspect once
> and then have all things linked to the postmaster follow that. It
> means people will be able to write code that runs on all OS easily,
> without everybody having similar but slightly different code about
> starting up, reading parameters, following security rules etc.. Tight
> integration, with good usability.
>
I tried to implement a patch according to the idea. It allows extensions
to register an entry point for self-managed daemon processes; the
postmaster then starts and stops them in the normal manner.

[kaigai@iwashi patch]$ ps ax | grep postgres
27784 pts/0    S      0:00 /usr/local/pgsql/bin/postgres
27786 ?        Ss     0:00 postgres: writer process
27787 ?        Ss     0:00 postgres: checkpointer process
27788 ?        Ss     0:00 postgres: wal writer process
27789 ?        Ss     0:00 postgres: autovacuum launcher process
27790 ?        Ss     0:00 postgres: stats collector process
27791 ?        Ss     0:00 postgres: auth_counter              <== (*)

The auth_counter included in this patch is just an example of this
functionality; it has no significant purpose of its own. It simply logs
the number of authentication successes and failures at regular intervals.
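From the thread, the registration interface looks roughly like the following sketch. The header name and the RegisterExtraDaemon() signature are hypothetical, reconstructed from the discussion (only the function name appears later in the thread); consult the actual patch for the real interface.

```c
/* HYPOTHETICAL sketch only -- names and signatures inferred from the
 * mailing-list discussion, not copied from the patch. */
#include "postmaster/extra_daemon.h"    /* hypothetical header */

static void
auth_counter_main(void *arg)
{
    for (;;)
    {
        /* tally authentication successes/failures, log them,
         * sleep for the configured interval ... */
    }
}

/* _PG_init() is the standard extension entry point; the module would
 * register its daemon here so the postmaster manages its lifecycle. */
void
_PG_init(void)
{
    RegisterExtraDaemon("auth_counter", auth_counter_main, NULL);
}
```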

My motivation is to define an extra daemon that attaches to PostgreSQL's
shared memory segment as a computing server, working around the limit on
how much GPU code we can load concurrently.

Thanks,
-- 
KaiGai Kohei 


pgsql-v9.3-extra-daemon.v1.patch
Description: Binary data



[HACKERS] elegant and effective way for running jobs inside a database

2012-03-12 Thread Artur Litwinowicz
Dear Developers,
   I am looking for an elegant and effective way to run jobs inside a
database or cluster -- so far I have not found such a solution.
If you say "use cron" or "pgAgent", I know those solutions, but they
are not effective or elegant. Compiling pgAgent is a pain (especially
the wxWidgets dependency on a system with no X), and it can only run
jobs with a minimum period of 60s -- what about someone who needs to
run them faster, e.g. with a 5s period? Of course I can do that using
cron, but it is not an effective or elegant solution. Why can
PostgreSQL not have a solution as elegant as the Oracle database's?
I have been working with Oracle databases for many years, but I like
the PostgreSQL project much more, except for this one thing I cannot
understand: the lack of jobs inside the database...

Best regards,
Artur





Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-12 Thread Simon Riggs
On Sat, Mar 10, 2012 at 2:59 PM, Andrew Dunstan  wrote:

> The devil is in the details, though, pace Mies van der Rohe.
>
> In particular, it's the "tight integration" piece I'm worried about.
>
> What is the postmaster supposed to do if the daemon start fails? What if it
> gets a flood of failures? What access will the daemon have to Postgres
> internals? What OS privileges will it have, since this would have to run as
> the OS postgres user? In general I think we don't want arbitrary processes
> running as the OS postgres user.

So why are the answers to those questions different for a daemon than
for a C function executed from an external client? What additional
exposure is there?

> I accept that cron might not be the best tool for the jobs, since a) its
> finest granularity is 1 minute and b) it would need a new connection for
> each job. But a well written external daemon that runs as a different user
> and is responsible for making its own connection to the database and
> re-establishing it if necessary, seems to me at least as clean a design for
> a job scheduler as one that is stopped and started by the postmaster.

As of this thread, you can see that many people don't agree. Bear in
mind that nobody is trying to prevent you from writing a program in
that way if you believe that. That route will remain available.

It's a key aspect of modular software we're talking about. People want
to have programs that are intimately connected to the database, so
that nobody needs to change the operational instructions when they
start or stop the database.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-11 Thread Dimitri Fontaine
Tom Lane  writes:
> I don't want to have a server-side ticker at all, especially not one
> that exists only for a client that might or might not be there.  We've
> been doing what we can to reduce PG's idle-power consumption, which is
> an important consideration for large-data-center applications.  Adding a
> new source of periodic wakeups is exactly the wrong direction to be
> going.

I would guess that would be an opt-in solution, as some of our other
subprocesses are, much like autovacuum.

> There is no need for a ticker to drive a job system.  It should be able
> to respond to interrupts (if a NOTIFY comes in) and otherwise sleep
> until the precalculated time that it next needs to launch a job.

I think the ticker was proposed as a minimal component that would allow
the job system to be developed as an extension.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-10 Thread Andrew Dunstan



On 03/10/2012 07:11 AM, Simon Riggs wrote:

On Fri, Mar 9, 2012 at 6:51 PM, Andrew Dunstan  wrote:


On 03/09/2012 01:40 PM, Robert Haas wrote:

On Fri, Mar 9, 2012 at 12:02 PM, David E. Wheeler
  wrote:

On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote:

100% agree  (having re-read the thread and Alvaro's idea having sunk
in).  Being able to set up daemon processes side by side with the
postmaster would fit the bill nicely.  It's pretty interesting to
think of all the places you could go with it.

pgAgent could use it *right now*. I keep forgetting to restart it after
restarting PostgreSQL and finding after a day or so that no jobs have run.

That can and should be fixed by teaching pgAgent that failing to
connect to the server, or getting disconnected, is not a fatal error,
but a reason to sleep and retry.


Yeah. It's still not entirely clear to me what a postmaster-controlled
daemon is going to be able to do that an external daemon can't.

Start and stop at the same time as postmaster, without any pain.

It's a considerable convenience to be able to design this aspect once
and then have all things linked to the postmaster follow that. It
means people will be able to write code that runs on all OSes easily,
without everybody having similar but slightly different code for
starting up, reading parameters, following security rules, etc. Tight
integration, with good usability.



The devil is in the details, though, pace Mies van der Rohe.

In particular, it's the "tight integration" piece I'm worried about.

What is the postmaster supposed to do if the daemon start fails? What if 
it gets a flood of failures? What access will the daemon have to 
Postgres internals? What OS privileges will it have, since this would 
have to run as the OS postgres user? In general I think we don't want 
arbitrary processes running as the OS postgres user.


I accept that cron might not be the best tool for the jobs, since a) its 
finest granularity is 1 minute and b) it would need a new connection for 
each job. But a well written external daemon that runs as a different 
user and is responsible for making its own connection to the database 
and re-establishing it if necessary, seems to me at least as clean a 
design for a job scheduler as one that is stopped and started by the 
postmaster.


cheers

andrew



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-10 Thread Artur Litwinowicz
On 2012-03-09 16:55, Merlin Moncure wrote:
> On Fri, Mar 9, 2012 at 9:36 AM, Kohei KaiGai 
> wrote:
>> 2012/3/6 Alvaro Herrera :
>>> It seems to me that the only thing that needs core support is
>>> the ability to start up the daemon when postmaster is ready to
>>> accept queries, and shut the daemon down when postmaster kills
>>> backends (either because one crashed, or because it's shutting
>>> down).
>>> 
>> So, although my motivation is not something like Cron in core, it
>> seems to me Alvaro's idea is quite desirable and reasonable, to
>> be discussed in v9.3.
> 
> 100% agree  (having re-read the thread and Alvaro's idea having
> sunk in).  Being able to set up daemon processes side by side with
> the postmaster would fit the bill nicely.  It's pretty interesting
> to think of all the places you could go with it.
> 
> merlin

Good to hear that (I hope that even though English is not my native
language I have understood the posts in this thread properly). I am
convinced that all of you will be proud of the new solution - like a
"heartbeat" for PostgreSQL. It may be too poetic, but considering cron
or pgAgent instead of a real job manager is like considering a
defibrillator instead of a real heart. Currently, especially in web
applications, the question is not where to store data but how data can
flow, and how *fast*.

"It's pretty interesting to think of all the places you could go with
it." - in fact it is :)

Best regards,
Artur



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-10 Thread Simon Riggs
On Fri, Mar 9, 2012 at 6:51 PM, Andrew Dunstan  wrote:
>
>
> On 03/09/2012 01:40 PM, Robert Haas wrote:
>>
>> On Fri, Mar 9, 2012 at 12:02 PM, David E. Wheeler
>>  wrote:
>>>
>>> On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote:

 100% agree  (having re-read the thread and Alvaro's idea having sunk
 in).  Being able to set up daemon processes side by side with the
 postmaster would fit the bill nicely.  It's pretty interesting to
 think of all the places you could go with it.
>>>
>>> pgAgent could use it *right now*. I keep forgetting to restart it after
>>> restarting PostgreSQL and finding after a day or so that no jobs have run.
>>
>> That can and should be fixed by teaching pgAgent that failing to
>> connect to the server, or getting disconnected, is not a fatal error,
>> but a reason to sleep and retry.
>
>
> Yeah. It's still not entirely clear to me what a postmaster-controlled
> daemon is going to be able to do that an external daemon can't.

Start and stop at the same time as postmaster, without any pain.

It's a considerable convenience to be able to design this aspect once
and then have all things linked to the postmaster follow that. It
means people will be able to write code that runs on all OSes easily,
without everybody having similar but slightly different code for
starting up, reading parameters, following security rules, etc. Tight
integration, with good usability.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-09 Thread Andrew Dunstan



On 03/09/2012 01:40 PM, Robert Haas wrote:

On Fri, Mar 9, 2012 at 12:02 PM, David E. Wheeler  wrote:

On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote:

100% agree  (having re-read the thread and Alvaro's idea having sunk
in).  Being able to set up daemon processes side by side with the
postmaster would fit the bill nicely.  It's pretty interesting to
think of all the places you could go with it.

pgAgent could use it *right now*. I keep forgetting to restart it after 
restarting PostgreSQL and finding after a day or so that no jobs have run.

That can and should be fixed by teaching pgAgent that failing to
connect to the server, or getting disconnected, is not a fatal error,
but a reason to sleep and retry.


Yeah. It's still not entirely clear to me what a postmaster-controlled 
daemon is going to be able to do that an external daemon can't.


cheers

andrew




Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-09 Thread Robert Haas
On Fri, Mar 9, 2012 at 12:02 PM, David E. Wheeler  wrote:
> On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote:
>> 100% agree  (having re-read the thread and Alvaro's idea having sunk
>> in).  Being able to set up daemon processes side by side with the
>> postmaster would fit the bill nicely.  It's pretty interesting to
>> think of all the places you could go with it.
>
> pgAgent could use it *right now*. I keep forgetting to restart it after 
> restarting PostgreSQL and finding after a day or so that no jobs have run.

That can and should be fixed by teaching pgAgent that failing to
connect to the server, or getting disconnected, is not a fatal error,
but a reason to sleep and retry.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-09 Thread David E. Wheeler
On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote:

> 100% agree  (having re-read the thread and Alvaro's idea having sunk
> in).  Being able to set up daemon processes side by side with the
> postmaster would fit the bill nicely.  It's pretty interesting to
> think of all the places you could go with it.

pgAgent could use it *right now*. I keep forgetting to restart it after 
restarting PostgreSQL and finding after a day or so that no jobs have run.

Best,

David




Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-09 Thread Merlin Moncure
On Fri, Mar 9, 2012 at 9:36 AM, Kohei KaiGai  wrote:
> 2012/3/6 Alvaro Herrera :
>> It seems to me that the only thing that needs core support is the
>> ability to start up the daemon when postmaster is ready to accept
>> queries, and shut the daemon down when postmaster kills backends (either
>> because one crashed, or because it's shutting down).
>>
> So, although my motivation is not something like Cron in core,
> it seems to me Alvaro's idea is quite desirable and reasonable,
> to be discussed in v9.3.

100% agree  (having re-read the thread and Alvaro's idea having sunk
in).  Being able to set up daemon processes side by side with the
postmaster would fit the bill nicely.  It's pretty interesting to
think of all the places you could go with it.

merlin



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-09 Thread Kohei KaiGai
2012/3/6 Alvaro Herrera :
> It seems to me that the only thing that needs core support is the
> ability to start up the daemon when postmaster is ready to accept
> queries, and shut the daemon down when postmaster kills backends (either
> because one crashed, or because it's shutting down).
>
+10

Even though it is different from the original requirement, I would also
like to see a feature to run daemon processes, managed by an extension,
according to the start/stop of the postmaster.

I'm trying to implement an extension that uses GPU devices to help with
the calculation of complex qualifiers. CUDA and OpenCL have a limitation
that does not allow more than a particular number of processes to open
a device concurrently.
So I launch calculation threads that handle all the communication
with the GPU devices on behalf of the postmaster process; however, it
is not a graceful design, of course.
Each backend communicates with the calculation thread via a
shared-memory segment, so it must be a child process of the postmaster.

So, although my motivation is not something like cron in core,
it seems to me Alvaro's idea is quite desirable and reasonable,
to be discussed in v9.3.

Thanks,
-- 
KaiGai Kohei 



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-07 Thread Tom Lane
Merlin Moncure  writes:
> sure, I get that, especially in regards to procedures.  a server
> ticker though is a pretty small thing and it's fair to ask if maybe
> that should be exposed instead of (or perhaps in addition to) a job
> scheduling system.

I don't want to have a server-side ticker at all, especially not one
that exists only for a client that might or might not be there.  We've
been doing what we can to reduce PG's idle-power consumption, which is
an important consideration for large-data-center applications.  Adding a
new source of periodic wakeups is exactly the wrong direction to be
going.

There is no need for a ticker to drive a job system.  It should be able
to respond to interrupts (if a NOTIFY comes in) and otherwise sleep
until the precalculated time that it next needs to launch a job.

regards, tom lane



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-07 Thread Merlin Moncure
On Wed, Mar 7, 2012 at 2:14 PM, Simon Riggs  wrote:
> The stored procedure route sounds attractive but it's a long way off
> and doesn't address all of the stated needs people have voiced. I'm
> not against doing both, I just want to do the quickest and easiest.

sure, I get that, especially in regards to procedures.  a server
ticker though is a pretty small thing and it's fair to ask if maybe
that should be exposed instead of (or perhaps in addition to) a job
scheduling system.

a userland scheduling system has some advantages -- for example it
could be pulled in as an extension.  it would have a very different
feel though since it would be participatory scheduling.  i guess it
really depends on who's writing it and what the objective is (if
anyone is willing to rewrite cron into the postmaster, by all
means...)

merlin



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-07 Thread Simon Riggs
On Wed, Mar 7, 2012 at 7:55 PM, Merlin Moncure  wrote:
> On Wed, Mar 7, 2012 at 2:15 AM, Simon Riggs  wrote:
>> We talked about this at last year's Dev meeting. And we got
>> sidetracked into "what we really want is stored procedures". Maybe we
>> want that, but its a completely separate thing. Please lets not get
>> distracted from a very simple thing because of the existence of other
>> requirements.
>
> The reason why stored procedures were brought up is because they are
> one way to implement an ad hoc scheduler without rewriting cron.
> Another (better) way to do that would be to have postgres expose a
> heartbeat ticker that you could layer a scheduler on top of.  These
> are minimalist approaches with the intent of providing scaffolding
> upon which robust external solutions can be built.  Not having them
> forces dependency on the operating system (cron) or an external daemon
> like pgqd.  PGQ does exactly this (over the daemon) so that the bulk
> of the algorithm can be kept in SQL which is IMNSHO extremely nice.
>
> With a built in heartbeat you can expose a 100% SQL api that user
> applications can call without having to maintain a separate process to
> drive everything (although you can certainly do that if you wish).
> This is exactly what PGQ (which I consider to be an absolute marvel)
> does.  So if you want to start small, do that -- it can be used to do
> a number of interesting things that aren't really possible at the
> moment.
>
> OTOH, if you want to implement a fully fledged job scheduler
> inside of the postmaster, then do that...it's a great solution to the
> problem.  But it's a little unfair to dismiss those who are saying:
> "If I had stored procedures, this could get done" and conclude that
> scheduling through the postmaster is the only way forward.

It's not the only way, I agree. But we do need a way forwards
otherwise nothing gets done.

The stored procedure route sounds attractive but it's a long way off
and doesn't address all of the stated needs people have voiced. I'm
not against doing both, I just want to do the quickest and easiest.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-07 Thread Merlin Moncure
On Wed, Mar 7, 2012 at 2:15 AM, Simon Riggs  wrote:
> We talked about this at last year's Dev meeting. And we got
> sidetracked into "what we really want is stored procedures". Maybe we
> want that, but its a completely separate thing. Please lets not get
> distracted from a very simple thing because of the existence of other
> requirements.

The reason why stored procedures were brought up is because they are
one way to implement an ad hoc scheduler without rewriting cron.
Another (better) way to do that would be to have postgres expose a
heartbeat ticker that you could layer a scheduler on top of.  These
are minimalist approaches with the intent of providing scaffolding
upon which robust external solutions can be built.  Not having them
forces dependency on the operating system (cron) or an external daemon
like pgqd.  PGQ does exactly this (over the daemon) so that the bulk
of the algorithm can be kept in SQL which is IMNSHO extremely nice.

With a built in heartbeat you can expose a 100% SQL api that user
applications can call without having to maintain a separate process to
drive everything (although you can certainly do that if you wish).
This is exactly what PGQ (which I consider to be an absolute marvel)
does.  So if you want to start small, do that -- it can be used to do
a number of interesting things that aren't really possible at the
moment.

OTOH, if you want to implement a fully fledged job scheduler
inside of the postmaster, then do that...it's a great solution to the
problem.  But it's a little unfair to dismiss those who are saying:
"If I had stored procedures, this could get done" and conclude that
scheduling through the postmaster is the only way forward.

merlin



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-07 Thread Alvaro Herrera

Excerpts from Simon Riggs's message of Wed Mar 07 05:15:03 -0300 2012:

> We talked about this at last year's Dev meeting. And we got
> sidetracked into "what we really want is stored procedures". Maybe we
> want that, but its a completely separate thing. Please lets not get
> distracted from a very simple thing because of the existence of other
> requirements.

Completely agreed.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-07 Thread Pavel Stehule
2012/3/7 Simon Riggs :
> On Tue, Mar 6, 2012 at 3:21 PM, Tom Lane  wrote:
>
>> But having said that, it's not apparent to me why such a thing would
>> need to live "inside the database" at all.  It's very easy to visualize
>> a task scheduler that runs as a client and requires nothing new from the
>> core code.  Approaching the problem that way would let the scheduler
>> be an independent project that stands or falls on its own merits.
>
> On Tue, Mar 6, 2012 at 4:36 PM, Alvaro Herrera
>  wrote:
>
>> What such an external scheduler would need from core is support for
>> starting up and shutting down along postmaster (as well as restarts at
>> appropriate times).  Postmaster already has the ability to start and
>> shut down many processes depending on several different policies; I
>> think it's mostly a matter of exporting that functionality in a sane way.
>
> Tom's question is exactly on the money, and so is Alvaro's answer.
>
> Many, many people have requested code that "runs in core", but the key
> point is that all they actually want are the core features required to
> build one. The actual projects actively want to live outside of core.
> The "run in core" bit is actually just what Alvaro says, the ability
> to interact gracefully for startup and shutdown.
>
> What I think we need is an API like the LWLock add-in requests, so we
> can have a library that requests to be assigned a daemon to run in,
> looking very much like the autovacuum launcher with the guts removed.
> It would then be a matter for the code authors whether it was a client
> program that interacts with the server, or a full-blown daemon like
> autovacuum.
>

It is true - the first step should be small - and maintenance, job
assignment and the rest can be implemented as an extension. There is no
need for an SQL API (other than functions).

Regards

Pavel


> We talked about this at last year's Dev meeting. And we got
> sidetracked into "what we really want is stored procedures". Maybe we
> want that, but its a completely separate thing. Please lets not get
> distracted from a very simple thing because of the existence of other
> requirements.
>
> --
>  Simon Riggs   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-07 Thread Simon Riggs
On Tue, Mar 6, 2012 at 3:21 PM, Tom Lane  wrote:

> But having said that, it's not apparent to me why such a thing would
> need to live "inside the database" at all.  It's very easy to visualize
> a task scheduler that runs as a client and requires nothing new from the
> core code.  Approaching the problem that way would let the scheduler
> be an independent project that stands or falls on its own merits.

On Tue, Mar 6, 2012 at 4:36 PM, Alvaro Herrera
 wrote:

> What such an external scheduler would need from core is support for
> starting up and shutting down along postmaster (as well as restarts at
> appropriate times).  Postmaster already has the ability to start and
> shut down many processes depending on several different policies; I
> think it's mostly a matter of exporting that functionality in a sane way.

Tom's question is exactly on the money, and so is Alvaro's answer.

Many, many people have requested code that "runs in core", but the key
point is that all they actually want are the core features required to
build one. The actual projects actively want to live outside of core.
The "run in core" bit is actually just what Alvaro says, the ability
to interact gracefully for startup and shutdown.

What I think we need is an API like the LWLock add-in requests, so we
can have a library that requests to be assigned a daemon to run in,
looking very much like the autovacuum launcher with the guts removed.
It would then be a matter for the code authors whether it was a client
program that interacts with the server, or a full-blown daemon like
autovacuum.

We talked about this at last year's Dev meeting. And we got
sidetracked into "what we really want is stored procedures". Maybe we
want that, but its a completely separate thing. Please lets not get
distracted from a very simple thing because of the existence of other
requirements.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Daniel Farina
On Tue, Mar 6, 2012 at 3:31 PM, Andrew Dunstan  wrote:
> We don't need to slavishly reproduce every piece of cron. In any case, on my
> Linux machine at least, batch is part of the "at" package, not the "cron"
> package. If you want anything at all done, then I'd suggest starting with a
> simple scheduler. Just about the quickest way to get something rejected in
> Postgres is to start with something overly complex and baroque.

I sort of agree with this, I think.  However, I don't see the need to
have Postgres get involved with the scheduling and triggering of jobs at
all.  Rather, it just doesn't have support for what I'd think of as a
"job", period, regardless of how it gets triggered.

The crux of the issue for me is that sometimes it's pretty annoying to
have to maintain a socket connection just to get some things to run
for a while: I can't tell the database "execute stored procedure (not
UDF) 'job' in a new backend, I'm going to disconnect now".

Nearly relatedly, I've heard from at least two people in immediate
memory that would like database sessions to be reified somehow from
their socket, so that they could resume work in a session that had a
connection blip.

At the same time, it would really suck to have an "idle in
transaction" because a client died and didn't bother to reconnect and
clean up...a caveat.

Nevertheless, I think session support (think "GNU screen" or "tmux")
is both useful and painful to accomplish without backend support (for
example, the BackendKey might be useful).  And stored procedures are a
familiar quantity at large...

Thoughts?

-- 
fdr



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Andrew Dunstan



On 03/06/2012 06:12 PM, Christopher Browne wrote:

On Tue, Mar 6, 2012 at 5:01 PM, Alvaro Herrera
  wrote:

Why do we need a ticker?  Just fetch the time of the task closest in the
future, and sleep till that time or a notify arrives (meaning schedule
change).

Keep in mind that cron functionality also includes "batch", which
means that the process needs to have the ability to be woken up by the
need to handle some pressing engagement that comes in suddenly.

For some events to be initiated by a NOTIFY received by a LISTENing
batch processor would be pretty slick...


We don't need to slavishly reproduce every piece of cron. In any case,
on my Linux machine at least, batch is part of the "at" package, not the
"cron" package. If you want anything at all done, then I'd suggest
starting with a simple scheduler. Just about the quickest way to get
something rejected in Postgres is to start with something overly complex
and baroque.


cheers

andrew



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Christopher Browne
On Tue, Mar 6, 2012 at 5:01 PM, Alvaro Herrera
 wrote:
> Why do we need a ticker?  Just fetch the time of the task closest in the
> future, and sleep till that time or a notify arrives (meaning schedule
> change).

Keep in mind that cron functionality also includes "batch", which
means that the process needs to have the ability to be woken up by the
need to handle some pressing engagement that comes in suddenly.

For some events to be initiated by a NOTIFY received by a LISTENing
batch processor would be pretty slick...
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Tom Lane
Alvaro Herrera  writes:
> I was thinking that the connection would be kept open but no query would
> be running.  Does this preclude reception of notifies?  I mean, you
> don't sleep via "SELECT pg_sleep()" but rather a select/poll in the
> daemon.

No.  If you're not inside a transaction, notifies will be sent
immediately.  They'd be pretty useless if they didn't work that way ---
the whole point is for clients not to have to busy-wait.

regards, tom lane



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Alvaro Herrera

Excerpts from Merlin Moncure's message of Tue Mar 06 19:07:51 -0300 2012:
> 
> On Tue, Mar 6, 2012 at 4:01 PM, Alvaro Herrera
>  wrote:
> > Why do we need a ticker?  Just fetch the time of the task closest in the
> > future, and sleep till that time or a notify arrives (meaning schedule
> > change).
> 
> Because that can't be done in userland (at least, not without stored
> procedures) since you'd have to keep an open running transaction while
> sleeping.

I was thinking that the connection would be kept open but no query would
be running.  Does this preclude reception of notifies?  I mean, you
don't sleep via "SELECT pg_sleep()" but rather a select/poll in the
daemon.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Merlin Moncure
On Tue, Mar 6, 2012 at 4:01 PM, Alvaro Herrera
 wrote:
> Why do we need a ticker?  Just fetch the time of the task closest in the
> future, and sleep till that time or a notify arrives (meaning schedule
> change).

Because that can't be done in userland (at least, not without stored
procedures) since you'd have to keep an open running transaction while
sleeping.

merlin



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Alvaro Herrera

Excerpts from Dimitri Fontaine's message of mar mar 06 18:44:18 -0300 2012:
> Josh Berkus  writes:
> > Activity and discretion beyond that could be defined in PL code,
> > including run/don't run conditions, activities, and dependancies.  The
> > only thing Postgres doesn't currently have is a clock which fires
> > events.  Anything we try to implement which is more complex than the
> > above is going to not work for someone.  And the pg_agent could be
> > adapted easily to use the Postgres clock instead of cron.
> 
> Oh, you mean like a ticker?  If only we knew about a project that did
> implement a ticker, in C, using the PostgreSQL licence, and who's using
> it in large scale production.  While at it, if such a ticker could be
> used to implement job queues…
> 
>   https://github.com/markokr/skytools/tree/master/sql/ticker

Why do we need a ticker?  Just fetch the time of the task closest in the
future, and sleep till that time or a notify arrives (meaning schedule
change).
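That select/poll loop is easy to sketch. Below is an illustrative Python stand-in, not daemon code: `wait_for_work` and the socketpair are hypothetical placeholders for the scheduler's main loop and for the libpq connection socket that a NOTIFY wakeup would arrive on.

```python
import select
import socket
import time

def wait_for_work(notify_sock, next_task_at):
    """Sleep until the next scheduled task is due, or until a byte
    arrives on notify_sock (standing in for a NOTIFY wakeup)."""
    timeout = max(0.0, next_task_at - time.time())
    readable, _, _ = select.select([notify_sock], [], [], timeout)
    return "notify" if readable else "timer"

# Demo: a socketpair simulates the connection socket.
a, b = socket.socketpair()
b.send(b"x")                                 # a "schedule changed" notification
print(wait_for_work(a, time.time() + 60))    # -> notify (wakes immediately)
a.recv(1)
print(wait_for_work(a, time.time() + 0.05))  # -> timer (deadline passes first)
```

Either way the daemon wakes up, refetches the nearest task time, and goes back to sleep; no busy-waiting and no ticker needed.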

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Merlin Moncure
On Tue, Mar 6, 2012 at 3:44 PM, Dimitri Fontaine  wrote:
> Josh Berkus  writes:
>> Activity and discretion beyond that could be defined in PL code,
>> including run/don't run conditions, activities, and dependancies.  The
>> only thing Postgres doesn't currently have is a clock which fires
>> events.  Anything we try to implement which is more complex than the
>> above is going to not work for someone.  And the pg_agent could be
>> adapted easily to use the Postgres clock instead of cron.
>
> Oh, you mean like a ticker?  If only we knew about a project that did
> implement a ticker, in C, using the PostgreSQL licence, and who's using
> it in large scale production.  While at it, if such a ticker could be
> used to implement job queues…
>
>  https://github.com/markokr/skytools/tree/master/sql/ticker

right -- exactly.  it would be pretty neat if the database exposed
this or a similar feature somehow -- perhaps by having the ticker send
a notify?  then a scheduler could sit on top of it without any
dependencies on the host operating system.

merlin



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Dimitri Fontaine
Josh Berkus  writes:
> Activity and discretion beyond that could be defined in PL code,
> including run/don't run conditions, activities, and dependancies.  The
> only thing Postgres doesn't currently have is a clock which fires
> events.  Anything we try to implement which is more complex than the
> above is going to not work for someone.  And the pg_agent could be
> adapted easily to use the Postgres clock instead of cron.

Oh, you mean like a ticker?  If only we knew about a project that did
implement a ticker, in C, using the PostgreSQL licence, and who's using
it in large scale production.  While at it, if such a ticker could be
used to implement job queues…

  https://github.com/markokr/skytools/tree/master/sql/ticker

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Josh Berkus

>> It seems to me that the only thing that needs core support is the
>> ability to start up the daemon when postmaster is ready to accept
>> queries, and shut the daemon down when postmaster kills backends (either
>> because one crashed, or because it's shutting down).

I think this could be addressed simply by the ability to call actions at
a predefined interval, e.g.:

CREATE RECURRING JOB {job_name}
FOR EACH {interval}
[ STARTING {timestamptz} ]
[ ENDING {timestamptz} ]
EXECUTE PROCEDURE {procedure name}

CREATE RECURRING JOB {job_name}
FOR EACH {interval}
[ STARTING {timestamptz} ]
[ ENDING {timestamptz} ]
EXECUTE STATEMENT 'some statement'

(obviously, we'd want to adjust the above to use existing reserved
words, but you get the idea)
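For what it's worth, the FOR EACH / STARTING / ENDING semantics sketched above reduce to simple interval arithmetic. A hypothetical Python illustration (the names are mine, not proposed syntax):

```python
from datetime import datetime, timedelta

def fire_times(interval, starting, ending):
    """Enumerate the times a FOR EACH {interval} job would fire
    between STARTING and ENDING (inclusive of STARTING)."""
    t = starting
    while t <= ending:
        yield t
        t += interval

# A job firing every 6 hours over one day:
runs = list(fire_times(timedelta(hours=6),
                       datetime(2012, 3, 6, 0, 0),
                       datetime(2012, 3, 6, 23, 59)))
print([r.strftime("%H:%M") for r in runs])  # ['00:00', '06:00', '12:00', '18:00']
```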

Activity and discretion beyond that could be defined in PL code,
including run/don't run conditions, activities, and dependencies.  The
only thing Postgres doesn't currently have is a clock which fires
events.  Anything we try to implement which is more complex than the
above is going to not work for someone.  And the pg_agent could be
adapted easily to use the Postgres clock instead of cron.

Oh, and the ability to run VACUUM inside a larger statement in some way.
But that's a different TODO.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Jaime Casanova
On Tue, Mar 6, 2012 at 1:14 PM, Alvaro Herrera
 wrote:
>
> It seems to me that the only thing that needs core support is the
> ability to start up the daemon when postmaster is ready to accept
> queries, and shut the daemon down when postmaster kills backends (either
> because one crashed, or because it's shutting down).
>

+1

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Christopher Browne
On Tue, Mar 6, 2012 at 12:47 PM, Robert Haas  wrote:
> And also some interface.  It'd be useful to have background jobs that
> executed either immediately or at a certain time or after a certain
> delay, as well as repeating jobs that execute at a certain interval or
> on a certain schedule.  Figuring out what all that should look like
> is, well, part of the work that someone has to do.

Certainly.  It would seem to make sense to have a database schema
indicating this kind of metadata.

It needs to be sophisticated enough to cover *enough* unusual cases.

A schema duplicating crontab might look something like:
create table cron (
  id serial primary key,
  minutes integer[],
  hours integer[],
  dayofmonth integer[],
  month integer[],
  dayofweek integer[],
  command text
);

That's probably a bit too minimalist, and that only properly supports
one user's crontab.
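Matching a timestamp against such a row is cheap. Here is one hedged Python sketch of the check, treating an empty array as cron's `*` (a convention I'm assuming, not one the schema above specifies):

```python
from datetime import datetime

def cron_due(row, now):
    """True if `now` matches a crontab-style row where each field is a
    list of permitted values and an empty list means 'any' (like *)."""
    checks = [
        (row["minutes"],    now.minute),
        (row["hours"],      now.hour),
        (row["dayofmonth"], now.day),
        (row["month"],      now.month),
        (row["dayofweek"],  now.isoweekday() % 7),  # 0 = Sunday, as in cron
    ]
    return all(not allowed or value in allowed for allowed, value in checks)

# "At 02:00 and 02:30 every day":
job = {"minutes": [0, 30], "hours": [2], "dayofmonth": [], "month": [],
       "dayofweek": [], "command": "VACUUM"}
print(cron_due(job, datetime(2012, 3, 6, 2, 30)))  # True
print(cron_due(job, datetime(2012, 3, 6, 3, 30)))  # False
```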

The schema needs to include things like:
a) When to perform actions.  Several bases for this, including
time-based, event-based.
b) What actions to perform (including context as to database user,
search_path, desired UNIX $PWD, perhaps more than that)
c) Sequencing information, including what jobs should NOT be run concurrently.
d) Logging.  If a job succeeds, that should be noted.  If it fails,
that should be noted.  Want to know start + end times.
e) What to do on failure.  "Everything blows up" is not a good answer :-).
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Merlin Moncure
On Tue, Mar 6, 2012 at 9:37 AM, Robert Haas  wrote:
>> But having said that, it's not apparent to me why such a thing would
>> need to live "inside the database" at all.  It's very easy to visualize
>> a task scheduler that runs as a client and requires nothing new from the
>> core code.  Approaching the problem that way would let the scheduler
>> be an independent project that stands or falls on its own merits.
>
> I was trying to make a general comment about PostgreSQL development,
> without diving too far into the merits or demerits of this particular
> feature.  I suspect you'd agree with me that, in general, a lot of
> valuable things don't get done because there aren't enough people or
> enough hours in the day, and we can always use more contributors.
>
> But since you brought it up, I think there is a lot of value to having
> a scheduler that's integrated with the database.  There are many
> things that the database does which could also be done outside the
> database, but people want them in the database because it's easier
> that way.  If you have a web application that talks to the database,
> and which sometimes needs to schedule tasks to run at a future time,
> it is much nicer to do that by inserting a row into an SQL table
> somewhere, or executing some bit of DDL, than it is to do it by making
> your web application know how to connect to a PostgreSQL database and
> also how to rewrite crontab (in a concurrency-safe manner, no less).

The counter argument to this is that there's nothing keeping you from
layering your own scheduling system on top of cron.  Cron provides the
heartbeat -- everything else you build out with tables implementing a
work queue or whatever else comes to mind.
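To make that concrete: with cron providing the heartbeat, each tick only needs one atomic claim of the due rows. A hypothetical sketch, using Python's bundled sqlite3 in place of PostgreSQL (the table and column names are mine, not an existing schema):

```python
import sqlite3

# A cron job firing every minute provides the heartbeat; on each tick it
# claims and runs whatever is due.  sqlite3 stands in for PostgreSQL here.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE work_queue (
    id INTEGER PRIMARY KEY,
    run_at REAL NOT NULL,
    state TEXT NOT NULL DEFAULT 'pending',
    command TEXT NOT NULL)""")
db.execute("INSERT INTO work_queue (run_at, command) VALUES (0, 'refresh matview')")

def tick(now):
    """One heartbeat: atomically claim every due pending job."""
    cur = db.execute(
        "UPDATE work_queue SET state = 'running' "
        "WHERE state = 'pending' AND run_at <= ?", (now,))
    db.commit()
    return cur.rowcount

print(tick(now=1))  # 1 (one job claimed)
print(tick(now=1))  # 0 (nothing left to claim)
```

Because the claim is a single UPDATE, concurrent tick processes cannot run the same job twice.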

The counter-counter argument is that cron has a couple of annoying
limitations -- sub-minute scheduling is not possible, lousy Windows
support, etc.  It's pretty appealing that you would be able to back up
your database and get all your scheduling configuration back up with
it.  Dealing with cron is a headache for database administrators.

Personally I find the C-unixy way of solving this problem inside
postgres not worth chasing -- that really does belong outside and you
really are rewriting cron.  A (mostly) sql driven scheduler would be
pretty neat though.

I agree with Chris B upthread: I find that what people really need
here is stored procedures, or some way of being able to embed code in
the database that can manage its own transactions.  That way your
server-side entry point, dostuff(), called every minute, doesn't have to exit
to avoid accumulating locks for everything it needs to do or be broken
up into multiple independent entry points in scripts outside the
database.  Without SP though, you can still do it via 100% sql/plpgsql
using listen/notify and dblink for the AT workaround, and at least one
dedicated task runner.  By 'it' I mean a server side scheduling system
relying on a heartbeat from out of the database code.

merlin



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Alvaro Herrera

Excerpts from Pavel Stehule's message of mar mar 06 14:57:30 -0300 2012:
> 2012/3/6 Robert Haas :
> > On Tue, Mar 6, 2012 at 12:37 PM, Christopher Browne  
> > wrote:
> >> On Tue, Mar 6, 2012 at 12:20 PM, Artur Litwinowicz  wrote:
> >>> Algorithm for first loop:
> >>> check jobs exists and is time to run it
> >>>   run job as other sql statements (some validity check may be done)
> >>>   get next job
> >>> no jobs - delay
> >>
> >> There are crucial things missing here, namely the need to establish at
> >> least one database connection in order to be able to check for the
> >> existence of jobs, as well as to establish additional connections as
> >> contexts in which to run jobs.
> >>
> >> That implies the need for some New Stuff that isn't quite the same as
> >> what we have within server processes today.
> >>
> >> There is nothing horrible about this; just that there's some extra
> >> mechanism that needs to come into existence in order to do this.
> >
> > And also some interface.  It'd be useful to have background jobs that
> > executed either immediately or at a certain time or after a certain
> > delay, as well as repeating jobs that execute at a certain interval or
> > on a certain schedule.  Figuring out what all that should look like
> > is, well, part of the work that someone has to do.
> 
> +1

It seems to me that we could simply have some sort of external daemon
program running the schedule, i.e. starting up other programs or running
queries; that daemon would connect to the database somehow to fetch
tasks to run.  Separately a client program could be provided to program
tasks using a graphical interface, web, or whatever (more than one, if
we want to get fancy); this would also connect to the database and store
tasks to run by the daemon.  The client doesn't have to talk to the
daemon directly (we can simply have a trigger on the schedule table so
that the daemon receives a notify whenever the client changes stuff).

It seems to me that the only thing that needs core support is the
ability to start up the daemon when postmaster is ready to accept
queries, and shut the daemon down when postmaster kills backends (either
because one crashed, or because it's shutting down).

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Pavel Stehule
2012/3/6 Robert Haas :
> On Tue, Mar 6, 2012 at 12:37 PM, Christopher Browne  
> wrote:
>> On Tue, Mar 6, 2012 at 12:20 PM, Artur Litwinowicz  wrote:
>>> Algorithm for first loop:
>>> check jobs exists and is time to run it
>>>   run job as other sql statements (some validity check may be done)
>>>   get next job
>>> no jobs - delay
>>
>> There are crucial things missing here, namely the need to establish at
>> least one database connection in order to be able to check for the
>> existence of jobs, as well as to establish additional connections as
>> contexts in which to run jobs.
>>
>> That implies the need for some New Stuff that isn't quite the same as
>> what we have within server processes today.
>>
>> There is nothing horrible about this; just that there's some extra
>> mechanism that needs to come into existence in order to do this.
>
> And also some interface.  It'd be useful to have background jobs that
> executed either immediately or at a certain time or after a certain
> delay, as well as repeating jobs that execute at a certain interval or
> on a certain schedule.  Figuring out what all that should look like
> is, well, part of the work that someone has to do.

+1

Regards

Pavel

>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company


Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Robert Haas
On Tue, Mar 6, 2012 at 12:37 PM, Christopher Browne  wrote:
> On Tue, Mar 6, 2012 at 12:20 PM, Artur Litwinowicz  wrote:
>> Algorithm for first loop:
>> check jobs exists and is time to run it
>>   run job as other sql statements (some validity check may be done)
>>   get next job
>> no jobs - delay
>
> There are crucial things missing here, namely the need to establish at
> least one database connection in order to be able to check for the
> existence of jobs, as well as to establish additional connections as
> contexts in which to run jobs.
>
> That implies the need for some New Stuff that isn't quite the same as
> what we have within server processes today.
>
> There is nothing horrible about this; just that there's some extra
> mechanism that needs to come into existence in order to do this.

And also some interface.  It'd be useful to have background jobs that
executed either immediately or at a certain time or after a certain
delay, as well as repeating jobs that execute at a certain interval or
on a certain schedule.  Figuring out what all that should look like
is, well, part of the work that someone has to do.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Christopher Browne
On Tue, Mar 6, 2012 at 12:20 PM, Artur Litwinowicz  wrote:
> Algorithm for first loop:
> check jobs exists and is time to run it
>   run job as other sql statements (some validity check may be done)
>   get next job
> no jobs - delay

There are crucial things missing here, namely the need to establish at
least one database connection in order to be able to check for the
existence of jobs, as well as to establish additional connections as
contexts in which to run jobs.

That implies the need for some New Stuff that isn't quite the same as
what we have within server processes today.

There is nothing horrible about this; just that there's some extra
mechanism that needs to come into existence in order to do this.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Artur Litwinowicz
With all due respect to everyone in this community... I do not have
rich enough experience with C or C++ to say that I can do this alone,
and I do not know the internals of PostgreSQL at all.  But I have
quite long experience with other languages.
I imagine that if you have a piece of code which can run a function
like "SELECT function(123);", you can reuse it (with some
modifications) to run the jobs saved in the job manager's tables in
the same manner.  All we need is two "crazy" (a simplification) loops,
one for running jobs and one for control and logging purposes, each
cycling with a period of 5s or faster.

Algorithm for the first loop:
check whether a job exists and it is time to run it
   run the job like any other SQL statement (some validity check may be done)
   get the next job
no jobs - delay

second loop:
find a started job
   check whether it is still working
   if it errored: log it, calculate the next start time
      (which, if so configured, may be later than the first time) and clean up
   if it is working fine: log the duration
   if it just finished: log it, calculate the next run and clean up
   find the next job
no jobs - delay
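The first loop, with the second loop's logging folded in, might look roughly like this. A toy Python sketch with in-memory jobs, not a proposal for the actual implementation:

```python
import time

def run_due_jobs(jobs, log, now):
    """One pass of the first loop: run each job whose time has come,
    recording start/finish times and any error for the logging loop."""
    for job in jobs:
        if job["next_run"] > now:
            continue
        entry = {"name": job["name"], "started": now}
        try:
            job["fn"]()                      # "run job as other sql statements"
            entry["status"] = "ok"
        except Exception as e:               # a failed job is logged, not fatal
            entry["status"] = "error: %s" % e
        entry["finished"] = time.time()
        job["next_run"] = now + job["interval"]  # calculate the next run
        log.append(entry)

log = []
jobs = [{"name": "good", "fn": lambda: None,  "interval": 5, "next_run": 0},
        {"name": "bad",  "fn": lambda: 1 / 0, "interval": 5, "next_run": 0}]
run_due_jobs(jobs, log, now=1)
print([(e["name"], e["status"]) for e in log])
# [('good', 'ok'), ('bad', 'error: division by zero')]
```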

And it would be state of the art if the job could (but did not have
to) return the next run time value for the logging loop to save.
And that is about all I wanted to say - please do not take this the
wrong way (I do not mean to lecture anyone ;) - I just wanted to
explain what I meant.

Best regards,
Artur





Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Alvaro Herrera

Excerpts from Tom Lane's message of mar mar 06 12:47:46 -0300 2012:
> Robert Haas  writes:

> > But since you brought it up, I think there is a lot of value to having
> > a scheduler that's integrated with the database.  There are many
> > things that the database does which could also be done outside the
> > database, but people want them in the database because it's easier
> > that way.  If you have a web application that talks to the database,
> > and which sometimes needs to schedule tasks to run at a future time,
> > it is much nicer to do that by inserting a row into an SQL table
> > somewhere, or executing some bit of DDL, than it is to do it by making
> > your web application know how to connect to a PostgreSQL database and
> > also how to rewrite crontab (in a concurrency-safe manner, no less).
> 
> Sure, and I would expect that a client-side scheduler would work just
> the same way: you make requests to it through database actions such
> as inserting a row in a task table.

What such an external scheduler would need from core is support for
starting up and shutting down along postmaster (as well as restarts at
appropriate times).  Postmaster already has the ability to start and
shut down many processes depending on several different policies; I
think it's mostly a matter of exporting that functionality in a sane way.
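The start/stop/restart policy being asked for here can be sketched as a tiny supervisor. This Python stand-in only illustrates the restart-interval behavior (compare the per-worker restart time discussed for the bgworker patch); the real mechanism would of course live in the postmaster, in C:

```python
import subprocess
import sys
import time

def supervise(argv, restart_after, max_restarts):
    """Keep a daemon running alongside the server: when it exits, wait
    restart_after seconds and start it again, up to max_restarts times."""
    starts = 0
    while starts <= max_restarts:
        proc = subprocess.Popen(argv)   # launch the daemon
        starts += 1
        proc.wait()                     # it crashed or shut down
        if starts <= max_restarts:
            time.sleep(restart_after)   # honor the restart interval
    return starts

# A child that exits immediately stands in for a crashing scheduler daemon.
n = supervise([sys.executable, "-c", "pass"], restart_after=0.01, max_restarts=2)
print(n)  # 3 starts: the original plus two restarts
```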

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Tom Lane
Robert Haas  writes:
> On Tue, Mar 6, 2012 at 10:21 AM, Tom Lane  wrote:
>> But having said that, it's not apparent to me why such a thing would
>> need to live "inside the database" at all.  It's very easy to visualize
>> a task scheduler that runs as a client and requires nothing new from the
>> core code.  Approaching the problem that way would let the scheduler
>> be an independent project that stands or falls on its own merits.

> But since you brought it up, I think there is a lot of value to having
> a scheduler that's integrated with the database.  There are many
> things that the database does which could also be done outside the
> database, but people want them in the database because it's easier
> that way.  If you have a web application that talks to the database,
> and which sometimes needs to schedule tasks to run at a future time,
> it is much nicer to do that by inserting a row into an SQL table
> somewhere, or executing some bit of DDL, than it is to do it by making
> your web application know how to connect to a PostgreSQL database and
> also how to rewrite crontab (in a concurrency-safe manner, no less).

Sure, and I would expect that a client-side scheduler would work just
the same way: you make requests to it through database actions such
as inserting a row in a task table.

> Now, the extent to which such a schedule requires core support is
> certainly arguable.  Maybe it doesn't, and can be an entirely
> stand-alone project.  pgAgent aims to do something like this, but it
> has a number of deficiencies, including a tendency to quit
> unexpectedly and a very klunky interface.

Well, if they didn't get it right the first time, that suggests that
it's a harder problem than people would like to think.  All the more
reason to do it as an external project, at least to start with.
I would much rather entertain a proposal to integrate a design that's
been proven by an external implementation, than a proposal to implement
a design that's never been tested at all (which we'll nonetheless have
to support for eternity, even if it turns out to suck).

regards, tom lane



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Robert Haas
On Tue, Mar 6, 2012 at 10:21 AM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Mon, Mar 5, 2012 at 5:03 PM, Artur Litwinowicz  wrote:
>>> Regarding a functional area I can help... but I can not understand why
>>> this idea is so unappreciated?
>
>> I think it's a bit unfair to say that this idea is unappreciated.
>
> Well, there is the question of why we should re-invent the cron wheel.
>
>> There are LOTS of good features that we don't have yet simply because
>> nobody's had time to implement them.
>
> Implementation work is only part of it.  Any large feature will create
> an ongoing, distributed maintenance overhead.  It seems entirely
> possible to me that we'd not accept such a feature even if someone
> dropped a working implementation on us.
>
> But having said that, it's not apparent to me why such a thing would
> need to live "inside the database" at all.  It's very easy to visualize
> a task scheduler that runs as a client and requires nothing new from the
> core code.  Approaching the problem that way would let the scheduler
> be an independent project that stands or falls on its own merits.

I was trying to make a general comment about PostgreSQL development,
without diving too far into the merits or demerits of this particular
feature.  I suspect you'd agree with me that, in general, a lot of
valuable things don't get done because there aren't enough people or
enough hours in the day, and we can always use more contributors.

But since you brought it up, I think there is a lot of value to having
a scheduler that's integrated with the database.  There are many
things that the database does which could also be done outside the
database, but people want them in the database because it's easier
that way.  If you have a web application that talks to the database,
and which sometimes needs to schedule tasks to run at a future time,
it is much nicer to do that by inserting a row into an SQL table
somewhere, or executing some bit of DDL, than it is to do it by making
your web application know how to connect to a PostgreSQL database and
also how to rewrite crontab (in a concurrency-safe manner, no less).

Now, the extent to which such a scheduler requires core support is
certainly arguable.  Maybe it doesn't, and can be an entirely
stand-alone project.  pgAgent aims to do something like this, but it
has a number of deficiencies, including a tendency to quit
unexpectedly and a very clunky interface.  Those are things that could
presumably be fixed, or done differently in a new implementation, and
maybe that's all anyone needs.  Or maybe it's not.  But at any rate I
think the idea of a better job scheduler is a good one, and if anyone
is interested in working on that, I think we should encourage them to
do so, regardless of what happens vis-a-vis core.  This is a very
common need, and the current solutions are clearly more awkward than
our users would like.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Pavel Stehule
2012/3/6 Tom Lane :
> Robert Haas  writes:
>> On Mon, Mar 5, 2012 at 5:03 PM, Artur Litwinowicz  wrote:
>>> Regarding a functional area I can help... but I can not understand why
>>> this idea is so unappreciated?
>
>> I think it's a bit unfair to say that this idea is unappreciated.
>
> Well, there is the question of why we should re-invent the cron wheel.
>
>> There are LOTS of good features that we don't have yet simply because
>> nobody's had time to implement them.
>
> Implementation work is only part of it.  Any large feature will create
> an ongoing, distributed maintenance overhead.  It seems entirely
> possible to me that we'd not accept such a feature even if someone
> dropped a working implementation on us.
>
> But having said that, it's not apparent to me why such a thing would
> need to live "inside the database" at all.  It's very easy to visualize
> a task scheduler that runs as a client and requires nothing new from the
> core code.  Approaching the problem that way would let the scheduler
> be an independent project that stands or falls on its own merits.

There are a few arguments for a scheduler in core:

* platform independence
* richer workflow and logging possibilities, or at minimum better
integration with stored procedures
* when an application has a lot of business logic in stored
procedures, an external scheduler is a bit of a foreign element -
harder to maintain, harder to configure
* if somebody wanted to implement something like materialized views,
they would have to use an external scheduler for a very simple task -
just executing a stored procedure every 5 minutes

So I think there are reasons why we could have a scheduler in core -
simple or richer, it can help.  cron and similar tools work, but
maintaining an external scheduler is more expensive than using some
simple scheduler in core.

Regards

Pavel

>
>                        regards, tom lane


Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Tom Lane
Robert Haas  writes:
> On Mon, Mar 5, 2012 at 5:03 PM, Artur Litwinowicz  wrote:
>> Regarding a functional area I can help... but I can not understand why
>> this idea is so unappreciated?

> I think it's a bit unfair to say that this idea is unappreciated.

Well, there is the question of why we should re-invent the cron wheel.

> There are LOTS of good features that we don't have yet simply because
> nobody's had time to implement them.

Implementation work is only part of it.  Any large feature will create
an ongoing, distributed maintenance overhead.  It seems entirely
possible to me that we'd not accept such a feature even if someone
dropped a working implementation on us.

But having said that, it's not apparent to me why such a thing would
need to live "inside the database" at all.  It's very easy to visualize
a task scheduler that runs as a client and requires nothing new from the
core code.  Approaching the problem that way would let the scheduler
be an independent project that stands or falls on its own merits.

regards, tom lane



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-06 Thread Robert Haas
On Mon, Mar 5, 2012 at 5:03 PM, Artur Litwinowicz  wrote:
> Regarding a functional area I can help... but I can not understand why
> this idea is so unappreciated?

I think it's a bit unfair to say that this idea is unappreciated.
There are LOTS of good features that we don't have yet simply because
nobody's had time to implement them.  There are many things I'd really
like to have that I have spent no time at all on as yet, just because
there are other things that I (or my employer) would like to have even
more.  The good news is that this is an open-source project and there
is always room at the table for more people who would like to
contribute (or fund others so that they can contribute).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread David Johnston
> >
> > Keep in mind that it's not about coding in C but mostly about figuring
> > out what a sane design ought to look like.
> >
> 

While I can straddle the fence pretty easily, my first reaction is that we are 
talking about "application" functionality that falls outside what belongs in 
"core" PostgreSQL.  I'd rather see pgAgent improved to serve as a basic 
implementation while, for more complex use-cases, letting the 
community/marketplace provide solutions.

Even with simple use-cases you end up with a separate, continually running 
process anyway.  The main benefit of linking with core would be the ability to 
start that process after the server starts and shut it down before the server 
shuts down.  That communication channel is worth considering outside this 
specific application and, if done, could be used to talk with whatever 
designated "pgAgent"-like application the user chooses.  Other applications 
could be communicated with in the same way.  Basically, some form of API where 
in the postgresql.conf file you specify which IP addresses and ports you wish 
to synchronize with and which executable to launch just prior to communicating 
on said port.  If the startup routine succeeds, Postgres will, within reason, 
attempt to communicate with these external processes and wait for them to 
finish before shutting down.  If an external application closes, it should 
proactively notify Postgres that it is doing so; AND if you start a program 
manually, it can look for and talk with a running Postgres instance.
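
[Editor's note] The startup/shutdown handshake described above can be pictured
as a small registry.  Everything here is hypothetical - the names and the
start/stop callback API are invented for the sketch - but it shows the ordering
contract: the server starts each registered helper once it is up, and stops
them in reverse order before it goes down.

```python
class HelperRegistry:
    """Sketch of a server-side registry of external helper processes.

    Helpers register start and stop callbacks; the server would call
    start_all() once it is accepting connections and stop_all() before
    shutting down, stopping helpers in reverse start order.
    """

    def __init__(self):
        self._helpers = []   # (name, start, stop), in registration order
        self._running = []   # (name, stop), in start order

    def register(self, name, start, stop):
        self._helpers.append((name, start, stop))

    def start_all(self):
        for name, start, stop in self._helpers:
            start()
            self._running.append((name, stop))
        return [name for name, _ in self._running]

    def stop_all(self):
        # stop in reverse order so dependents go down first
        stopped = []
        while self._running:
            name, stop = self._running.pop()
            stop()
            stopped.append(name)
        return stopped
```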

David J.


 




Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread Daniel Farina
On Mon, Mar 5, 2012 at 12:17 PM, Pavel Stehule  wrote:
> Hello
>
> 2012/3/5 Alvaro Herrera :
>>
>> Excerpts from Artur Litwinowicz's message of lun mar 05 16:18:56 -0300 2012:
>>> Dear Developers,
>>>    I am looking for elegant and effective way for running jobs inside a
>>> database or cluster - for now I can not find that solution.
>>
>> Yeah, it'd be good to have something.  Many people say it's not
>> necessary, and probably some hackers would oppose it; but mainly I think
>> we just haven't agreed (or even discussed) what the design of such a
>> scheduler would look like.  For example, do we want it to be able to
>> just connect and run queries and stuff, or do we want something more
>> elaborate able to start programs such as running pg_dump?  What if the
>> program crashes -- should it cause the server to restart?  And so on.
>> It's not a trivial problem.
>>
>
> I agree - it is not simple:
>
> * workflow support
> * dependency support
>
> A general ACID scheduler would be nice (in pg) but it is not really
> simple.  There was a proposal about reusing the autovacuum daemon as
> a scheduler.

I've been thinking about making autovacuum a special case of a general
*non*-transactional job-running system, because large physical changes
to a database (rewriting 300GB of data, or whatever) that are
prohibitive inside a single transaction are -- to understate things --
incredibly painful to deal with.  Painful enough that people will risk
taking their site down with a large UPDATE or ALTER TABLE, hoping that
they can survive the duration (and then, when they cancel it and are
left with huge volumes of dead tuples, things get a lot uglier).

The closest approximation a client program can make is "well, I guess
I'll paginate through the database and rewrite small chunks".  Instead,
it may make more sense to have the database spoon-feed the
transformation work a little at a time, a la autovacuum.
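
[Editor's note] Whoever drives it, the paginate-and-rewrite approach reduces
to walking the key space in bounded chunks, one short transaction per chunk.
A minimal sketch - the function name and the half-open-range convention are
my own, for illustration only:

```python
def key_ranges(min_key, max_key, chunk):
    """Yield half-open [lo, hi) key ranges covering [min_key, max_key].

    Each range is intended to bound one short transaction, e.g.
    UPDATE t SET ... WHERE id >= lo AND id < hi, so dead tuples from
    the rewrite can be vacuumed away between chunks instead of piling
    up inside one giant transaction.
    """
    lo = min_key
    while lo <= max_key:
        hi = lo + chunk
        yield (lo, min(hi, max_key + 1))
        lo = hi
```

Each (lo, hi) pair would be committed on its own, with a breather for vacuum
between chunks.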

--
fdr



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread Christopher Browne
On Mon, Mar 5, 2012 at 4:44 PM, Alvaro Herrera
 wrote:
>
> Excerpts from Artur Litwinowicz's message of lun mar 05 18:32:44 -0300 2012:
>
>> Ouch... "in the next 2-4 years" - it broke my heart like a bullet - you
>> should not have written that... ;)
>> I feel that I need to set aside SQL, Python, PHP and so on and pick up
>> my old university book on the C programming language ;)
>> I hope my words are like drops of water for this idea, and that in the
>> future some people will be happy to use a professional job manager :)
>
> Keep in mind that it's not about coding in C but mostly about figuring
> out what a sane design ought to look like.

Just so.

And it seems to me that the Right Thing here is to go down the road to
having the fabled Stored Procedure Language, which is *not* pl/pgsql,
in that it needs to run *outside* any transactional context.  It needs
to be able to start transactions, not to run inside them.

Given a language which can do some setup of transactions and then run
them, this could be readily used for a number of useful purposes, of
which a job scheduler would be just a single example.
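
[Editor's note] The control flow of such an outside-transaction runner is
easy to sketch.  Everything below is hypothetical - FakeConn, TxnRunner, and
the begin/commit/rollback API are stand-ins, not the proposed language - but
it shows the key property: the runner opens one transaction per step rather
than living inside one.

```python
class FakeConn:
    """Stand-in for a database connection: it just records the calls,
    so the control flow can be exercised without a real server."""

    def __init__(self):
        self.log = []

    def begin(self):
        self.log.append("begin")

    def commit(self):
        self.log.append("commit")

    def rollback(self):
        self.log.append("rollback")


class TxnRunner:
    """Runs each step of a script in its own transaction.

    The runner itself sits outside any transaction; a failed step is
    rolled back and recorded without aborting the rest of the script.
    """

    def __init__(self, conn):
        self.conn = conn

    def run(self, steps):
        results = []
        for step in steps:
            self.conn.begin()
            try:
                step()
            except Exception:
                self.conn.rollback()
                results.append("rolled back")
            else:
                self.conn.commit()
                results.append("committed")
        return results
```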

It would enable turning some backend processes from hand-coded C into
possibly more dynamically-flexible scripted structures.

I'd expect this to be useful for having more customizable/dynamic
policies for the autovacuum process, for instance.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread Artur Litwinowicz
On 2012-03-05 23:09, Jaime Casanova wrote:
> On Mon, Mar 5, 2012 at 5:03 PM, Artur Litwinowicz  wrote:
>>
>> I understand it... (I meant if you wanna something... do it for your
>> self - it is the fastest way).
> 
> other way is to fund the work so someone can use his/her time to do it
> 
>> Regarding a functional area I can help... but I can not understand why
>> this idea is so unappreciated?
> 
> It's not unappreciated; it's just a problem that already *has* a
> solution.  If it were something you currently couldn't do at all,
> there would be more people after it.
> 
>> It would be such a powerful feature - I work with systems built for
>> government (Oracle) - jobs are the core gears of the data flow between
>> many systems and other government bureaus.
>>
> 
> me too, and we solve it with cron
> 

And can you modulate the jobs' frequency, and stop and start them from
inside the database automatically, driven only by algorithms and
internal events, with no administrator handwork... with cron?  I cannot
believe it... I do not mean just a simple "run a stored procedure"... I
use cron as well, but in my work I like elegant, cohesive solutions -
many "lego" blocks are not always the best and simplest solution...





Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread Jaime Casanova
On Mon, Mar 5, 2012 at 5:03 PM, Artur Litwinowicz  wrote:
>
> I understand it... (I meant if you wanna something... do it for your
> self - it is the fastest way).

other way is to fund the work so someone can use his/her time to do it

> Regarding a functional area I can help... but I can not understand why
> this idea is so unappreciated?

It's not unappreciated; it's just a problem that already *has* a
solution.  If it were something you currently couldn't do at all, there
would be more people after it.

> It would be such a powerful feature - I work with systems built for
> government (Oracle) - jobs are the core gears of the data flow between
> many systems and other government bureaus.
>

me too, and we solve it with cron

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread Artur Litwinowicz
On 2012-03-05 22:44, Alvaro Herrera wrote:
> 
> Excerpts from Artur Litwinowicz's message of lun mar 05 18:32:44 -0300 2012:
> 
>> Ouch... "in the next 2-4 years" - it broke my heart like a bullet - you
>> should not have written that... ;)
>> I feel that I need to set aside SQL, Python, PHP and so on and pick up
>> my old university book on the C programming language ;)
>> I hope my words are like drops of water for this idea, and that in the
>> future some people will be happy to use a professional job manager :)
> 
> Keep in mind that it's not about coding in C but mostly about figuring
> out what a sane design ought to look like.
> 

I understand... (I meant: if you want something, do it yourself - that
is the fastest way).
Regarding the functional area, I can help... but I cannot understand
why this idea is so unappreciated.
It would be such a powerful feature - I work with systems built for
government (Oracle) - jobs are the core gears of the data flow between
many systems and other government bureaus.

Best regards,
Artur





Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread Alvaro Herrera

Excerpts from Artur Litwinowicz's message of lun mar 05 18:32:44 -0300 2012:

> Ouch... "in the next 2-4 years" - it broke my heart like a bullet - you
> should not have written that... ;)
> I feel that I need to set aside SQL, Python, PHP and so on and pick up
> my old university book on the C programming language ;)
> I hope my words are like drops of water for this idea, and that in the
> future some people will be happy to use a professional job manager :)

Keep in mind that it's not about coding in C but mostly about figuring
out what a sane design ought to look like.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread Artur Litwinowicz
On 2012-03-05 22:09, Pavel Stehule wrote:
> 2012/3/5 Artur Litwinowicz :
>> On 2012-03-05 20:56, Alvaro Herrera wrote:
>>>
>>> Excerpts from Artur Litwinowicz's message of lun mar 05 16:18:56 -0300 2012:
>>>> Dear Developers,
>>>>    I am looking for elegant and effective way for running jobs inside a
>>>> database or cluster - for now I can not find that solution.
>>>
>>> Yeah, it'd be good to have something.  Many people say it's not
>>> necessary, and probably some hackers would oppose it; but mainly I think
>>> we just haven't agreed (or even discussed) what the design of such a
>>> scheduler would look like.  For example, do we want it to be able to
>>> just connect and run queries and stuff, or do we want something more
>>> elaborate able to start programs such as running pg_dump?  What if the
>>> program crashes -- should it cause the server to restart?  And so on.
>>> It's not a trivial problem.
>>>
>>
>> Yes, it is not a trivial problem... Tools like "pgAgent" are fine when
>> someone is starting to play with PostgreSQL - but this great environment
>> (the only serious alternative to something like Oracle or DB2) needs
>> something professional and production-ready.  It cannot happen that we
>> upgrade the database or the OS and then cannot compile "pgAgent" because
>> of "strange" dependencies... and suddenly a whole sophisticated solution
>> - say, a web application with a complicated data flow - has a problem.
>> For example, I use stored functions written in Lua that write data to
>> and read data from a Redis server at intervals of less than one minute.
>> Without the "heartbeat" of a precise job manager it cannot work as
>> professionally as it could.  Everyone can use cron or something like it
>> - yes, it works - but PostgreSQL has so many features that something
>> like a job manager is indispensable in my mind.
> 
> For a long time the strategy for PostgreSQL was a minimal core plus
> extensible modules, without duplicating existing system services.
> That strategy is still valid, but some services have moved into core -
> replication is an example.
>
> Some proposals for a custom scheduler exist
> http://archives.postgresql.org/pgsql-hackers/2010-02/msg01701.php and
> it is part of the ToDo - so this feature should land in core (in the
> next 2-4 years).
>
> Why is this not in core yet?  Nobody has written it :).
> 
> Regards
> 
> Pavel Stehule
> 
>>
>> Best regards,
>> Artur
>>
>>
>>
>>
> 

Ouch... "in the next 2-4 years" - it broke my heart like a bullet - you
should not have written that... ;)
I feel that I need to set aside SQL, Python, PHP and so on and pick up
my old university book on the C programming language ;)
I hope my words are like drops of water for this idea, and that in the
future some people will be happy to use a professional job manager :)

Best regards,
Artur





Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread Pavel Stehule
2012/3/5 Artur Litwinowicz :
> On 2012-03-05 20:56, Alvaro Herrera wrote:
>>
>> Excerpts from Artur Litwinowicz's message of lun mar 05 16:18:56 -0300 2012:
>>> Dear Developers,
>>>    I am looking for elegant and effective way for running jobs inside a
>>> database or cluster - for now I can not find that solution.
>>
>> Yeah, it'd be good to have something.  Many people say it's not
>> necessary, and probably some hackers would oppose it; but mainly I think
>> we just haven't agreed (or even discussed) what the design of such a
>> scheduler would look like.  For example, do we want it to be able to
>> just connect and run queries and stuff, or do we want something more
>> elaborate able to start programs such as running pg_dump?  What if the
>> program crashes -- should it cause the server to restart?  And so on.
>> It's not a trivial problem.
>>
>
> Yes, it is not a trivial problem... Tools like "pgAgent" are fine when
> someone is starting to play with PostgreSQL - but this great environment
> (the only serious alternative to something like Oracle or DB2) needs
> something professional and production-ready.  It cannot happen that we
> upgrade the database or the OS and then cannot compile "pgAgent" because
> of "strange" dependencies... and suddenly a whole sophisticated solution
> - say, a web application with a complicated data flow - has a problem.
> For example, I use stored functions written in Lua that write data to
> and read data from a Redis server at intervals of less than one minute.
> Without the "heartbeat" of a precise job manager it cannot work as
> professionally as it could.  Everyone can use cron or something like it
> - yes, it works - but PostgreSQL has so many features that something
> like a job manager is indispensable in my mind.

For a long time the strategy for PostgreSQL was a minimal core plus
extensible modules, without duplicating existing system services.  That
strategy is still valid, but some services have moved into core -
replication is an example.

Some proposals for a custom scheduler exist
http://archives.postgresql.org/pgsql-hackers/2010-02/msg01701.php and
it is part of the ToDo - so this feature should land in core (in the
next 2-4 years).

Why is this not in core yet?  Nobody has written it :).

Regards

Pavel Stehule

>
> Best regards,
> Artur
>
>
>
>



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread Artur Litwinowicz
On 2012-03-05 20:56, Alvaro Herrera wrote:
> 
> Excerpts from Artur Litwinowicz's message of lun mar 05 16:18:56 -0300 2012:
>> Dear Developers,
>>    I am looking for elegant and effective way for running jobs inside a
>> database or cluster - for now I can not find that solution.
> 
> Yeah, it'd be good to have something.  Many people say it's not
> necessary, and probably some hackers would oppose it; but mainly I think
> we just haven't agreed (or even discussed) what the design of such a
> scheduler would look like.  For example, do we want it to be able to
> just connect and run queries and stuff, or do we want something more
> elaborate able to start programs such as running pg_dump?  What if the
> program crashes -- should it cause the server to restart?  And so on.
> It's not a trivial problem.
> 

Yes, it is not a trivial problem... Tools like "pgAgent" are fine when
someone is starting to play with PostgreSQL - but this great environment
(the only serious alternative to something like Oracle or DB2) needs
something professional and production-ready.  It cannot happen that we
upgrade the database or the OS and then cannot compile "pgAgent" because
of "strange" dependencies... and suddenly a whole sophisticated solution
- say, a web application with a complicated data flow - has a problem.
For example, I use stored functions written in Lua that write data to
and read data from a Redis server at intervals of less than one minute.
Without the "heartbeat" of a precise job manager it cannot work as
professionally as it could.  Everyone can use cron or something like it
- yes, it works - but PostgreSQL has so many features that something
like a job manager is indispensable in my mind.

Best regards,
Artur






Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread Pavel Stehule
Hello

2012/3/5 Alvaro Herrera :
>
> Excerpts from Artur Litwinowicz's message of lun mar 05 16:18:56 -0300 2012:
>> Dear Developers,
>>    I am looking for elegant and effective way for running jobs inside a
>> database or cluster - for now I can not find that solution.
>
> Yeah, it'd be good to have something.  Many people say it's not
> necessary, and probably some hackers would oppose it; but mainly I think
> we just haven't agreed (or even discussed) what the design of such a
> scheduler would look like.  For example, do we want it to be able to
> just connect and run queries and stuff, or do we want something more
> elaborate able to start programs such as running pg_dump?  What if the
> program crashes -- should it cause the server to restart?  And so on.
> It's not a trivial problem.
>

I agree - it is not simple:

* workflow support
* dependency support

A general ACID scheduler would be nice (in pg) but it is not really
simple.  There was a proposal about reusing the autovacuum daemon as a
scheduler.

Pavel

> --
> Álvaro Herrera 
> The PostgreSQL Company - Command Prompt, Inc.
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers



Re: [HACKERS] elegant and effective way for running jobs inside a database

2012-03-05 Thread Alvaro Herrera

Excerpts from Artur Litwinowicz's message of lun mar 05 16:18:56 -0300 2012:
> Dear Developers,
>    I am looking for elegant and effective way for running jobs inside a
> database or cluster - for now I can not find that solution.

Yeah, it'd be good to have something.  Many people say it's not
necessary, and probably some hackers would oppose it; but mainly I think
we just haven't agreed (or even discussed) what the design of such a
scheduler would look like.  For example, do we want it to be able to
just connect and run queries and stuff, or do we want something more
elaborate able to start programs such as running pg_dump?  What if the
program crashes -- should it cause the server to restart?  And so on.
It's not a trivial problem.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


