Re: PostgreSQL backend: a waste of time?

2002-12-03 Thread Christian Schulte
Nicola Ranaldo wrote:


But another main reason is that I hold in the same database user passwords
and other accounting information, IMP prefs/addressbook, and all my
sendmail maps (Yes! also virtusertable!).

Hello,

that sounds interesting to me! Did you patch sendmail to read its maps
out of a database? How did you do that? I mean: if documentation exists
on how to do this, I would like to know where I can find it!

--Christian--





Re: PostgreSQL backend: a waste of time?

2002-12-03 Thread Nicola Ranaldo
> That being said, I really think that using an RDBMS for the simple
> key/value pairings that cyrus needs is really unnecessary and reeks of "I
> want to use a buzzword" more than being a real solution.
>
> -Rob

Oh! finally a negative response :)
However, this solution is *real* for me: I solved all my problems! It is
stable and fast, more than bdb, and I don't know if I can trust skiplist
on an AlphaServer (for now).
But another main reason is that I hold in the same database user passwords
and other accounting information, IMP prefs/addressbook, and all my
sendmail maps (Yes! also virtusertable!). And all these fields are
trigger-protected in the RDBMS solution. This gives more flexibility and
integration in my information service.

I think I am not alone in this!

Best Regards

Nicola Ranaldo





Re: PostgreSQL backend: a waste of time?

2002-12-03 Thread Nicola Ranaldo
>I really don't know... This buffer is in the daemon?

This would be in the client.

>Don't you have to receive responses from the SQL DB? Or are these commands
>only writes (UPDATE, INSERT)? If these are only writes it seems a
>good idea, but if you need to SELECT (inside the transaction) too there
>is the problem of different connections getting different transactions.

We can exec the SELECT(s) immediately and buffer the other commands. I think
this would be safe: cyrus transactions are small and closed in a very local
context. For example, a mailbox creation requires an insert and some
filesystem operations; if these fail, abort the transaction. (This is also
an example showing that backends have to support transactions; in this case
an autocommit backend would leave a dangling mailbox!)
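The BEGIN/buffer/COMMIT scheme being discussed can be sketched in C. This is a hypothetical illustration, not code from the actual patch: `txn_begin`/`txn_add`/`txn_commit`/`txn_abort` are invented names, SELECTs would bypass the buffer entirely, and commit here simply hands the queued batch back for the caller to send to the mailbox daemon.

```c
#include <stdlib.h>
#include <string.h>

/* A queued-write transaction: SELECTs run immediately on the shared
 * connection, while INSERT/UPDATE/DELETE text accumulates here and is
 * only handed over on COMMIT. */
struct txn {
    char *buf;    /* queued write commands, ';'-terminated */
    size_t len;
};

/* BEGIN: allocate a fresh, empty command buffer. */
struct txn *txn_begin(void) {
    return calloc(1, sizeof(struct txn));
}

/* SQL COMMAND (write): append it to the buffer; nothing is sent yet. */
int txn_add(struct txn *t, const char *sql) {
    size_t n = strlen(sql);
    char *p = realloc(t->buf, t->len + n + 2);
    if (p == NULL) return -1;
    t->buf = p;
    memcpy(t->buf + t->len, sql, n);
    t->buf[t->len + n] = ';';
    t->buf[t->len + n + 1] = '\0';
    t->len += n + 1;
    return 0;
}

/* COMMIT: hand the whole batch back for sending to the mailbox daemon;
 * the caller frees it after sending. */
char *txn_commit(struct txn *t) {
    char *batch = t->buf;
    free(t);
    return batch;
}

/* ABORT: just drop the buffer; nothing was ever sent. */
void txn_abort(struct txn *t) {
    free(t->buf);
    free(t);
}
```

Because nothing reaches the daemon before commit, an abort is free, which is exactly the property an autocommit backend lacks.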

>It depends on your user base:
>If your system is a backend for a webmail, for instance, your "users"
>(the php or perl script) will always connect, fetch something,
>disconnect. In this situation you'll never see lots of simultaneous
>connections.

This is my case!

>If you have 50.000 users on a campus setup using IMAP you'll get 5000
>concurrent connections easily.

I do not know if PostgreSQL can scale up to these numbers! We need someone
experienced in this.

Another solution may be to open a connection only when you need it and close
it asap.


Regards

Nicola Ranaldo




RE: PostgreSQL backend: a waste of time?

2002-12-02 Thread Brasseur Valéry
> -Original Message-
> From: Nuno Silva [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, November 30, 2002 2:50 AM
> To: Nicola Ranaldo
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: PostgreSQL backend: a waste of time?
> 
> 
> Hello!
> 
> Nicola Ranaldo wrote:
> > I cannot spread the SQL commands of a single transaction over multiple
> > pgsql connections, and a connection cannot handle parallel transactions.
> > So if I have 1000 imapd processes starting a transaction, the mailbox
> > daemon has to open 1000 pgsql connections.
> 
> Reading from the DB should be trivial, right?
> 
> I'm not 100% sure, but I suppose that one can virtualize the
> connections. What I mean is: imapd (or pop3d or lmtpd...) wants to write
> something -> ask the daemon and the daemon will choose a free connection
> and commit those changes. This is the "one operation simple case".
> Some of the DBs that cyrus maintains appear to be this simple (the
> mailboxes file).
> 
> Other cyrus DBs seem to require transactions (seen and delivery DBs)..
> This makes it harder to manage with a single daemon connecting to a
> RDBMS with only a few connections.
> CMU people or Ken: comments? :)
> 
> > One solution could be:
> > BEGIN -> allocate a new buffer to store SQL commands
> > SQL COMMANDS -> add commands to the buffer
> > COMMIT -> send all the buffered commands to the mailbox daemon and
> > clean up the buffer
> > ABORT -> clean up the buffer
> > Do you think this is a good solution?
> 
> I really don't know... This buffer is in the daemon?
> Don't you have to receive responses from the SQL DB? Or are these
> commands only writes (UPDATE, INSERT)? If these are only writes it seems
> a good idea, but if you need to SELECT (inside the transaction) too there
> is the problem of different connections getting different transactions.
> 
> Brasseur Valéry posted a mysql-backend patch for cyrus recently.
> 
> Brasseur, do you use a mysql connection for each cyrus process too? And
> do you use transactions? Last time I checked mysql didn't support these.
The patch uses a connection per process, but I am considering modifying
mbdaemon (see Nuno's mail from 27/11) to use the MySQL back-end. In this
case I will have only a few (one for now) connections to MySQL; the
mbdaemon will handle all the processes' connections.

As for the transaction part: I use MySQL 4.0, which supports transactions
now. But I don't use this for handling transactions in the code!!!

> 
> If transactions aren't required then it's easy to have 1 connection
> shared amongst 100 processes, right? :)
> 
> > However I think a pgsql connection for every master child would not be
> > a problem; on my production server (7500 very active users, cyrus.log
> > is 20MB/day) the average number of imapd is 15, pop3d is 30, lmtpd is 5
> > (under mail-bombing the lmtpd count was 45). However, it is an
> > AlphaServer ES45 with 4 1GHz CPUs and 700GB of RAID disk and is quite
> > fast. What's your experience with a huge number of users or a slow
> > server?
> > 
> 
> It depends on your user base:
> If your system is a backend for a webmail, for instance, your "users" 
> (the php or perl script) will always connect, fetch something, 
> disconnect. In this situation you'll never see lots of simultaneous 
> connections.
> 
> If you have 50.000 users on a campus setup using IMAP you'll get 5000 
> concurrent connections easily.
> 
> The same way if you have a company with 500 desktops all of them 
> checking the email with IMAP you can easily get 1000 (the double) 
> concurrent connections.
> 
> As internet people say, YMMV :)
> 
> Regards,
> Nuno Silva
> 
> 
> > Bye
> > 
> > Nicola Ranaldo
> > 
> > 
> >> IIRC someone implemented such a daemon and patched cyrus to use it.
> >> This daemon's backend was a text-file but the "protocol" is there.
> >>
> >> a drawing :)
> >>
> >> imapd1   imapd2   imapd3 ... imap1000
> >>    |        |        |          |
> >>    --------------------------------
> >>                  |
> >>               daemon
> >>                  |
> >>    --------------------------------
> >>    |    |    |    |    |    |    |
> >>    1    2    3    4    5    6    7
> >>
> >> 1 to 7 would be postgresql connections. This number may vary... maybe
> >> 1 connection per 100 imapds? Or a user-defined number. YMMV...
> > 
> > 
> > 
> > 
> > 
> 




Re: PostgreSQL backend: a waste of time?

2002-11-30 Thread David Chait
Rob,
    This may be unfeasible for the immediate future; however, it is not such
a bad idea for the long term, especially if it could be made to work with
enterprise-quality commercial databases. All of the replication problems
that have been going back and forth, and the file-system incompatibilities,
go away. In addition it would add a lot of fault tolerance by allowing for
multiple front-end machines as well as multiple database replicas.

-David

_

David Chait
Sys Admin  - Facilities Operations
333 Bonair Siding Road #107
Stanford CA, 94305
[EMAIL PROTECTED]

- Original Message -
From: "Rob Siemborski" <[EMAIL PROTECTED]>
To: "Nuno Silva" <[EMAIL PROTECTED]>
Cc: "Nicola Ranaldo" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Friday, November 29, 2002 9:55 PM
Subject: Re: PostgreSQL backend: a waste of time?


> On Sat, 30 Nov 2002, Nuno Silva wrote:
>
> > I'm not 100% sure, but I suppose that one can virtualize the
> > connections. What I mean is: imapd (or pop3d or lmtpd...) wants to write
> > something -> ask the daemon and the daemon will choose a free connection
> > and commit those changes. This is the "one operation simple case".
> > Some of the DBs that cyrus maintains appear to be this simple (the
> > mailboxes file).
>
> Well, actually the mailboxes list requires transactions as well.
>
> > Other cyrus' DBs seem to require transactions (seen and delivery DB's)..
> > This makes it harder to manage with a single daemon connecting to a
> > RDBMS with only a few connections.
> > CMU people or Ken: comments? :)
>
> All cyrusdb backends need to support transactions.
>
> That being said, I really think that using an RDBMS for the simple
> key/value pairings that cyrus needs is really unnecessary and reeks of "I
> want to use a buzzword" more than being a real solution.
>
> -Rob
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Rob Siemborski * Andrew Systems Group * Cyert Hall 207 * 412-268-7456
> Research Systems Programmer * /usr/contributed Gatekeeper
>
>
>




Re: PostgreSQL backend: a waste of time?

2002-11-30 Thread Rob Siemborski
On Sat, 30 Nov 2002, Nuno Silva wrote:

> In most scenarios I agree 100% :) However if you have a busy server and
> you want a remote server to take care of retrieving, inserting, sorting,
> etc, an RDBMS is the answer. This will add a new server to your setup, so
> it's a Bad Thing(c) too :))

It will also add network latency to the cost of updates.

> But, with this kind of "virtual DB", cyrus spool could easily be
> installed over NFS or anything else. AFAICT, without the DB files, cyrus
> is NFS friendly.

No, it can't, since the mailbox header and index files still need to be
locked and these are not cyrusdb files.

If you really want to change cyrus storage to entirely use a RDBMS
backend, you're going to have to almost completely rewrite the server (I
think you might be able to keep some of the parsing and authentication
code ;).

-Rob

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Rob Siemborski * Andrew Systems Group * Cyert Hall 207 * 412-268-7456
Research Systems Programmer * /usr/contributed Gatekeeper





Re: PostgreSQL backend: a waste of time?

2002-11-30 Thread Nuno Silva
Hello!

Rob Siemborski wrote:



All cyrusdb backends need to support transactions.

That being said, I really think that using an RDBMS for the simple
key/value pairings that cyrus needs is really unnecessary and reeks of "I
want to use a buzzword" more than being a real solution.



In most scenarios I agree 100% :) However if you have a busy server and 
you want a remote server to take care of retrieving, inserting, sorting, 
etc, an RDBMS is the answer. This will add a new server to your setup, so 
it's a Bad Thing(c) too :))


But, with this kind of "virtual DB", cyrus spool could easily be 
installed over NFS or anything else. AFAICT, without the DB files, cyrus 
is NFS friendly.

Imagine:

1 netapp/emc/whatever NFS with 5TB;
1 RDBMS per 20 frontends;
20 frontends (or 40 or 100 or 1000) without local storage

(you must add auth and MTA services of course...)

Sorry for keeping this academic discussion alive, but I feel that this 
kind of setup will be a must in a couple of years (everybody wants to 
separate roles -- storage, DB, frontends, etc).

Regards,
Nuno Silva




Re: PostgreSQL backend: a waste of time?

2002-11-29 Thread Rob Siemborski
On Sat, 30 Nov 2002, Nuno Silva wrote:

> I'm not 100% sure, but I suppose that one can virtualize the
> connections. What I mean is: imapd (or pop3d or lmtpd...) wants to write
> something -> ask the daemon and the daemon will choose a free connection
> and commit those changes. This is the "one operation simple case".
> Some of the DBs that cyrus maintains appear to be this simple (the
> mailboxes file).

Well, actually the mailboxes list requires transactions as well.

> Other cyrus' DBs seem to require transactions (seen and delivery DB's)..
> This makes it harder to manage with a single daemon connecting to a
> RDBMS with only a few connections.
> CMU people or Ken: comments? :)

All cyrusdb backends need to support transactions.

That being said, I really think that using an RDBMS for the simple
key/value pairings that cyrus needs is really unnecessary and reeks of "I
want to use a buzzword" more than being a real solution.

-Rob

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Rob Siemborski * Andrew Systems Group * Cyert Hall 207 * 412-268-7456
Research Systems Programmer * /usr/contributed Gatekeeper





Re: PostgreSQL backend: a waste of time?

2002-11-29 Thread Nuno Silva
Hello!

Nicola Ranaldo wrote:

I cannot spread the SQL commands of a single transaction over multiple pgsql
connections, and a connection cannot handle parallel transactions.
So if I have 1000 imapd processes starting a transaction, the mailbox daemon
has to open 1000 pgsql connections.


Reading from the DB should be trivial, right?

I'm not 100% sure, but I suppose that one can virtualize the 
connections. What I mean is: imapd (or pop3d or lmtpd...) wants to write 
something -> ask the daemon and the daemon will choose a free connection 
and commit those changes. This is the "one operation simple case".
Some of the DBs that cyrus maintains appear to be this simple (the 
mailboxes file).

Other cyrus' DBs seem to require transactions (seen and delivery DB's).. 
This makes it harder to manage with a single daemon connecting to a 
RDBMS with only a few connections.
CMU people or Ken: comments? :)

One solution could be:
BEGIN -> allocate a new buffer to store SQL commands
SQL COMMANDS -> add commands to the buffer
COMMIT -> send all the buffered commands to the mailbox daemon and clean up
the buffer
ABORT -> clean up the buffer
Do you think this is a good solution?


I really don't know... This buffer is in the daemon?
Don't you have to receive responses from the SQL DB? Or are these commands 
only writes (UPDATE, INSERT)? If these are only writes it seems a 
good idea, but if you need to SELECT (inside the transaction) too there 
is the problem of different connections getting different transactions.

Brasseur Valéry posted a mysql-backend patch for cyrus recently.

Brasseur, do you use a mysql connection for each cyrus process too? And 
do you use transactions? Last time I checked mysql didn't support these.

If transactions aren't required then it's easy to have 1 connection 
shared amongst 100 processes, right? :)

However I think a pgsql connection for every master child would not be a
problem; on my production server (7500 very active users, cyrus.log is
20MB/day) the average number of imapd is 15, pop3d is 30, lmtpd is 5 (under
mail-bombing the lmtpd count was 45). However, it is an AlphaServer ES45
with 4 1GHz CPUs and 700GB of RAID disk and is quite fast. What's your
experience with a huge number of users or a slow server?



It depends on your user base:
If your system is a backend for a webmail, for instance, your "users" 
(the php or perl script) will always connect, fetch something, 
disconnect. In this situation you'll never see lots of simultaneous 
connections.

If you have 50.000 users on a campus setup using IMAP you'll get 5000 
concurrent connections easily.

The same way if you have a company with 500 desktops all of them 
checking the email with IMAP you can easily get 1000 (the double) 
concurrent connections.

As internet people say, YMMV :)

Regards,
Nuno Silva


Bye

Nicola Ranaldo



IIRC someone implemented such a daemon and patched cyrus to use it. This
daemon's backend was a text-file but the "protocol" is there.

a drawing :)

imapd1   imapd2   imapd3 ... imap1000
   |        |        |          |
   --------------------------------
                 |
              daemon
                 |
   --------------------------------
   |    |    |    |    |    |    |
   1    2    3    4    5    6    7

1 to 7 would be postgresql connections. This number may vary... maybe 1
connection per 100 imapds? Or a user-defined number. YMMV...












Re: PostgreSQL backend: a waste of time?

2002-11-29 Thread Rob Siemborski
On Fri, 29 Nov 2002, Nicola Ranaldo wrote:

> However I think a pgsql connection for every master child would not be a
> problem; on my production server (7500 very active users, cyrus.log is
> 20MB/day) the average number of imapd is 15, pop3d is 30, lmtpd is 5
> (under mail-bombing the lmtpd count was 45). However, it is an AlphaServer
> ES45 with 4 1GHz CPUs and 700GB of RAID disk and is quite fast. What's
> your experience with a huge number of users or a slow server?

I'm not convinced that 45 concurrent connections constitutes "7500 very
active users"; I'd expect to see something more like 3000
peak concurrent connections.

-Rob

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Rob Siemborski * Andrew Systems Group * Cyert Hall 207 * 412-268-7456
Research Systems Programmer * /usr/contributed Gatekeeper





Re: PostgreSQL backend: a waste of time?

2002-11-29 Thread Nicola Ranaldo
I cannot spread the SQL commands of a single transaction over multiple pgsql
connections, and a connection cannot handle parallel transactions.
So if I have 1000 imapd processes starting a transaction, the mailbox daemon
has to open 1000 pgsql connections.
One solution could be:
BEGIN -> allocate a new buffer to store SQL commands
SQL COMMANDS -> add commands to the buffer
COMMIT -> send all the buffered commands to the mailbox daemon and clean up
the buffer
ABORT -> clean up the buffer
Do you think this is a good solution?
However I think a pgsql connection for every master child would not be a
problem; on my production server (7500 very active users, cyrus.log is
20MB/day) the average number of imapd is 15, pop3d is 30, lmtpd is 5 (under
mail-bombing the lmtpd count was 45). However, it is an AlphaServer ES45
with 4 1GHz CPUs and 700GB of RAID disk and is quite fast. What's your
experience with a huge number of users or a slow server?

Bye

Nicola Ranaldo

> IIRC someone implemented such a daemon and patched cyrus to use it. This
> daemon's backend was a text-file but the "protocol" is there.
>
> a drawing :)
>
> imapd1   imapd2   imapd3 ... imap1000
>    |        |        |          |
>    --------------------------------
>                  |
>               daemon
>                  |
>    --------------------------------
>    |    |    |    |    |    |    |
>    1    2    3    4    5    6    7
>
> 1 to 7 would be postgresql connections. This number may vary... maybe 1
> connection per 100 imapds? Or a user-defined number. YMMV...






RE: PostgreSQL backend: a waste of time?

2002-11-27 Thread Brasseur Valéry
And I have done a port to imapd 2.1.9, using UDP instead of a Unix socket,
for those who are interested!!!

> -Original Message-
> From: Nuno Silva [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, November 27, 2002 5:01 AM
> To: [EMAIL PROTECTED]
> Subject: Re: PostgreSQL backend: a waste of time?
> 
> 
> 
> 
> Nuno Silva wrote:
> > 
> > Just searched a bit and found a reference to the mailbox-daemon here:
> >
> > http://asg.web.cmu.edu/archive/message.php?mailbox=archive.info-cyrus&msg=8712
> >
> 
> Just found the URL:
> http://opensource.prim.hu/mbdaemon/
> 
> Regards,
> Nuno Silva
> 
> 



Re: PostgreSQL backend: a waste of time?

2002-11-26 Thread Nuno Silva


Nuno Silva wrote:


Just searched a bit and found a reference to the mailbox-daemon here:

http://asg.web.cmu.edu/archive/message.php?mailbox=archive.info-cyrus&msg=8712 


Just found the URL:
http://opensource.prim.hu/mbdaemon/

Regards,
Nuno Silva





Re: PostgreSQL backend: a waste of time?

2002-11-26 Thread Nuno Silva
Hi!

Nicola Ranaldo wrote:

I use PostgreSQL because it's very stable on Tru64 (!), and it has a long,
consolidated history of transactions and referential integrity.
These make MySQL immature for my purpose! However, at first look it seems
porting C code from PostgreSQL to MySQL is very easy :)


I use postgresql for the same reasons, on alpha and x86. And I can't see 
any appreciable performance difference between mysql and postgresql in 
*my* setups :)

Anyway, returning to the topic:
A database backend would be very, very nice because you have more 
freedom (choices) and because, in some setups, it could be much faster 
(the DB server can be on another machine).

Of course the files backend is still the obvious choice for some setups 
and the way some people like it the most (it's, to some extent, the unix 
way of doing this :).

So, I don't think it's a waste of time :)

You talk about 1 connection per imapd/pop3d process... This may not be 
very good because you can have 1000 imapd processes in big setups. IMHO 
the right thing to do is create a daemon that has 5 DB connections and 
have the imapd's talk to this daemon via sockets or something.

IIRC someone implemented such a daemon and patched cyrus to use it. This 
daemon's backend was a text-file but the "protocol" is there.

a drawing :)

imapd1   imapd2   imapd3 ... imap1000
   |        |        |          |
   --------------------------------
                 |
              daemon
                 |
   --------------------------------
   |    |    |    |    |    |    |
   1    2    3    4    5    6    7

1 to 7 would be postgresql connections. This number may vary... maybe 1
connection per 100 imapds? Or a user-defined number. YMMV...
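The daemon in the drawing is essentially a small connection pool: each of the numbered slots holds one PostgreSQL connection, and imapd requests are multiplexed over whichever slot is free. A minimal sketch of the slot bookkeeping (all names invented for illustration; a real pool would hold PGconn pointers and queue requests when every slot is busy):

```c
#include <stddef.h>

/* Seven slots, as in the drawing; 'conn' is an opaque handle that would
 * be a PGconn* in a real implementation. */
#define POOL_SIZE 7

struct pool {
    void *conn[POOL_SIZE];
    int busy[POOL_SIZE];
};

/* Claim a free connection slot; returns its index, or -1 if all slots
 * are in use (the daemon would then queue the imapd's request). */
int pool_acquire(struct pool *p) {
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!p->busy[i]) {
            p->busy[i] = 1;
            return i;
        }
    }
    return -1;
}

/* Return a slot to the pool once the operation has committed. */
void pool_release(struct pool *p, int i) {
    if (i >= 0 && i < POOL_SIZE) p->busy[i] = 0;
}
```

With this shape, 1000 imapds never hold connections themselves, so the "1 connection per 100 imapds" ratio becomes a tuning knob rather than a hard limit.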

Just searched a bit and found a reference to the mailbox-daemon here:

http://asg.web.cmu.edu/archive/message.php?mailbox=archive.info-cyrus&msg=8712

As Noll Janos points out, even a cache can be implemented, because data 
always goes through this daemon.

I'm sorry for the long email.

Regards,
Nuno Silva





Re: PostgreSQL backend: a waste of time?

2002-11-26 Thread Mika Iisakkila
Nicola Ranaldo wrote:


In imapd.c I read:

/*
 * run once when process is forked;
 * MUST NOT exit directly; must return with non-zero error code
 */
int service_init(int argc, char **argv, char **envp)

but mboxlist_init() returns void! It calls fatal() directly and so exit()s.

Master does not check the exit code of its children and so does not reset
the number of ready workers, so if another connection arrives no process
will serve it!
So if PostgreSQL goes down I have to restart cyrus!

I know how to patch this in master.c, but in case of a fatal error it will
bring up a huge number of sequential forks until PostgreSQL is available
again. So other changes will be necessary, but I think that's not up to me.



I believe this might be the problem that sometimes causes a given service to
disappear and not come up until master is restarted. An almost sure way to
trigger this is to pound Cyrus until the OS runs out of file descriptors.

As to your question in the subject, certainly not. All of the available
backends seem to have their shortcomings, so it's nice to have options
when something blows up in a given environment...

--mika




Re: PostgreSQL backend: a waste of time?

2002-11-26 Thread Nicola Ranaldo
I have used cyrus since version 2, on Linux Slackware and Tru64 from 4.0f
to 5.1a. There are no big problems on the Intel architecture, but every
upgrade on alpha was a war! For example, since version 2.1.5, to compile it
you have to manually define HAVE_GETADDRINFO, and in mpool.c you would
apply:

(Thanks to Davide Bottalico, my great coworker!)
128c128
< #define ROUNDUP(num) (((num) + 15) & 0xFFF0)
---
> #define ROUNDUP(num) (((num) + 15) & 0xFFF0)
164c164
< p->ptr = (void *)ROUNDUP((unsigned long)p->ptr + size);
---
> p->ptr = (void *)ROUNDUP((unsigned int)p->ptr + size);

64bits!!!
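For context on why that one-character cast change matters: on an LP64 platform such as Alpha, `unsigned int` is 32 bits while pointers are 64, so rounding an address through a 32-bit type silently truncates the high half. A minimal truncation-safe illustration using C99 `uintptr_t` (an assumption for clarity; this is not Cyrus's actual ROUNDUP macro):

```c
#include <stdint.h>

/* Round a value (e.g. a pointer cast to an integer) up to the next
 * 16-byte boundary.  Going through uintptr_t preserves all address
 * bits on both 32- and 64-bit platforms; a 32-bit intermediate type
 * would discard the top half of a 64-bit pointer. */
#define ROUNDUP16(x) (((uintptr_t)(x) + 15) & ~(uintptr_t)15)
```

The same effect is achieved with `unsigned long` on an LP64 ABI like Tru64's, which is what the mpool.c patch relies on.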

Due to continuous imapd freezes in our last upgrade (2.1.9, bdb 4.0.14) I
decided to move to a definitively stable backend (as said in a previous
message, PostgreSQL is very stable on Tru64). This is not as easy as it
seems and could create two main problems; if you will be patient I'll show
them:

1) A single connection to PostgreSQL allows you to exec as many transactions
as you want, but they cannot be nested or interleaved, and they must be
executed in sequential order (CONNECT, BEGIN, SELECT, INSERT etc., COMMIT,
BEGIN ... COMMIT ... DISCONNECT). To have independent transactions against
the same DB, the same process would have to open as many connections as the
number of transactions. This could waste resources!
But it seems cyrus transactions are isolated in small atomic contexts and do
not involve different DBs, so I can have just a single connection to
PostgreSQL for every master child, serving the process for all of its life;
all dbfiles become different tables of the single PostgreSQL db!
This *now* works fine; my code logs all these problems, and in about
50 megabytes of cyrus.log I had no problems.
If the cyrus code changes to exec parallel transactions, the current code
will have to be rewritten (if that turns out to be safe and useful).
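The "every dbfile becomes a table" mapping can be illustrated with a tiny helper that formats the statement a cyrusdb store operation would run on the per-process connection. This is purely hypothetical: the table and column names are invented here, and the thread does not show the patch's real schema.

```c
#include <stdio.h>
#include <string.h>

/* Each cyrusdb file (mailboxes, seen, deliver, ...) maps to one
 * key/value table; a store becomes one statement executed inside the
 * connection's current transaction. */
int sql_store(char *out, size_t outlen,
              const char *table, const char *key, const char *value)
{
    /* NB: a real backend must escape key/value (e.g. with libpq's
     * PQescapeString); skipped here to keep the sketch short. */
    return snprintf(out, outlen,
                    "UPDATE %s SET value = '%s' WHERE key = '%s'",
                    table, value, key);
}
```

Since every statement runs on the child's single sequential connection, the BEGIN ... COMMIT bracketing described above comes for free.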

2) I cannot shut down a BerkeleyDB, so a fatal error in mboxlist_init is
really fatal! But I can shut down a PostgreSQL process for maintenance
purposes, or it may be unavailable because it has reached its maximum number
of concurrent connections. In both cases master's children exit with a fatal
error, and when PostgreSQL becomes available again there will be no listener
on the sockets!

In imapd.c I read:

/*
 * run once when process is forked;
 * MUST NOT exit directly; must return with non-zero error code
 */
int service_init(int argc, char **argv, char **envp)

but mboxlist_init() returns void! It calls fatal() directly and so exit()s.

Master does not check the exit code of its children and so does not reset
the number of ready workers, so if another connection arrives no process
will serve it!
So if PostgreSQL goes down I have to restart cyrus!

I know how to patch this in master.c, but in case of a fatal error it will
bring up a huge number of sequential forks until PostgreSQL is available
again. So other changes will be necessary, but I think that's not up to me.
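The fix being hinted at can be sketched as making the backend init report failure instead of calling fatal() (which exit()s the forked child behind master's back), so that service_init() can honour the "must return with non-zero error code" contract quoted from imapd.c. Everything below is a hypothetical illustration: `pg_up` and `mboxlist_init_checked()` are invented names, not Cyrus code.

```c
#include <sysexits.h>   /* EX_UNAVAILABLE */

/* Stand-in for "PostgreSQL is reachable"; a real init would attempt
 * the connection here. */
int pg_up = 1;

/* Was: void mboxlist_init() calling fatal() + exit().  Returning an
 * error code lets the caller decide what to do. */
int mboxlist_init_checked(void) {
    return pg_up ? 0 : -1;
}

int service_init(int argc, char **argv, char **envp) {
    (void)argc; (void)argv; (void)envp;
    if (mboxlist_init_checked() != 0)
        return EX_UNAVAILABLE;  /* non-zero, so master can see the failure */
    return 0;
}
```

As noted above, master would still need to throttle respawns, or the failing init would be re-forked in a tight loop until PostgreSQL comes back.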

Nicola Ranaldo


> I have the same point of view. Could you please share with me your
> experiences with cyrus, and more detailed information about your
> project?
>
> Nicola Ranaldo wrote:
>
> > Due to our historical problems using BerkeleyDB4 over Tru64 Unix I'm
> > coding a PostgreSQL backend.




Re: PostgreSQL backend: a waste of time?

2002-11-26 Thread Nicola Ranaldo
I use PostgreSQL because it's very stable on Tru64 (!), and it has a long,
consolidated history of transactions and referential integrity.
These make MySQL immature for my purpose! However, at first look it seems
porting C code from PostgreSQL to MySQL is very easy :)

Nicola Ranaldo


> Hy!
>
>  I think that's a very good idea, but we found that MySQL is much faster
> than Postgres, when there are no complex queries (this is the case here),
> so it might be a better idea to use MySQL.
>
>
> On 25-Nov-2002 Nicola Ranaldo wrote:
> > Due to our historical problems using BerkeleyDB4 over Tru64 Unix I'm
> > coding a PostgreSQL backend.





RE: PostgreSQL backend: a waste of time?

2002-11-26 Thread Brasseur Valéry
For those who are interested, here is a patch for MySQL as a backend.

Warning: it's a development version for now.


> -Original Message-
> From: Noll Janos [mailto:[EMAIL PROTECTED]]
> Sent: Monday, November 25, 2002 4:36 PM
> To: [EMAIL PROTECTED]
> Subject: RE: PostgreSQL backend: a waste of time?
> 
> 
> Hy!
> 
>  I think that's a very good idea, but we found that MySQL is 
> much faster
> than Postgres, when there are no complex queries (this is the 
> case here),
> so it might be a better idea to use MySQL.
> 
> 
> On 25-Nov-2002 Nicola Ranaldo wrote:
> > Due to our historical problems using BerkeleyDB4 over Tru64 Unix I'm
> > coding a PostgreSQL backend.
> > It's in alpha stage and seems to work fine on our production server
> > with about 7500 mailboxes.
> > Cyrus.log is DBERROR free, and users do not report any problem.
> > Access to mboxlist is about 5 times faster than with BerkeleyDB4.
> > If the cyrus community is interested in this little piece of code (I
> > don't know, due to the presence of the skiplist backend) we can start
> > a thread now!
> 
> 
> | Noll Janos <[EMAIL PROTECTED]> | http://www.johnzero.hu |
> | "Expect the unexpected!"|   ICQ# 4547866   |  Be free! |
> 




cyrus-imapd-2.1.9-mysql.patch
Description: Binary data


Re: PostgreSQL backend: a waste of time?

2002-11-25 Thread Rob Siemborski
On 25 Nov 2002, Erik Enge wrote:

> I wouldn't rate it as easy unless it was a config option at run-time,
> not an option to configure at compile-time.

This wouldn't be tremendously hard to do (probably just a matter of
replacing some macros with some code that is a bit more clever), but
there'd be a (small) performance penalty, and there isn't really any
interest in doing it, since converting databases shouldn't be a common
operation anyway.

-Rob

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Rob Siemborski * Andrew Systems Group * Cyert Hall 207 * 412-268-7456
Research Systems Programmer * /usr/contributed Gatekeeper





Re: PostgreSQL backend: a waste of time?

2002-11-25 Thread Erik Enge
Ken Murchison <[EMAIL PROTECTED]> writes:

> All of the db stuff _is_ isolated in the cyrusdb layer.  That's why it
> is so easy to switch between flat/skiplist/berkeley.

I wouldn't rate it as easy unless it was a config option at run-time,
not an option to configure at compile-time.

Erik.



Re: PostgreSQL backend: a waste of time?

2002-11-25 Thread Jure Pecar
On Mon, 25 Nov 2002 10:04:03 -0600
[EMAIL PROTECTED] wrote:

> Seems kinda ironic in a way---doesn't MySQL use BerkeleyDB?  I guess
> it's all in the indexing/caching

Yes, it is one of the choices for table type. The others are MyISAM and
InnoDB; the latter supports transactions and is as fast as Oracle, if not
a bit faster.

--

Jure Pecar



Re: PostgreSQL backend: a waste of time?

2002-11-25 Thread Ken Murchison


> Alessandro Oliveira wrote:
> 
> I think that all the code that is dependent on a particular database
> should be completely isolated, making it simpler to port to new

All of the db stuff _is_ isolated in the cyrusdb layer.  That's why it
is so easy to switch between flat/skiplist/berkeley.



> databases, for instance: Nicola likes postgres, you like mysql, and I
> love oracle (besides it is very expensive), somebody else would like
> an interbase or a sapdb backend.
> 
> Noll Janos wrote:
> 
> > Hy!
> >
> >  I think that's a very good idea, but we found that MySQL is much
> > faster
> > than Postgres, when there are no complex queries (this is the case
> > here),
> > so it might be a better idea to use MySQL.
> >
> >
> > On 25-Nov-2002 Nicola Ranaldo wrote:
> >
> >
> >> Due to our historical problems using BerkeleyDB4 over Tru64 Unix I'm
> >> coding a PostgreSQL backend.
> >> It's in alpha stage and seems to work fine on our production server
> >> with about 7500 mailboxes.
> >> Cyrus.log is DBERROR free, and users do not report any problem.
> >> Access to mboxlist is about 5 times faster than with BerkeleyDB4.
> >> If the cyrus community is interested in this little piece of code (I
> >> don't know, due to the presence of the skiplist backend) we can start
> >> a thread now!
> >>
> >>
> >
> > | Noll Janos <[EMAIL PROTECTED]> | http://www.johnzero.hu |
> > | "Expect the unexpected!"|   ICQ# 4547866   |  Be free! |
> >
> >
> >
> 
> --
> Best Regards,
> 
> Alessandro Oliveira
> Nuno Ferreira Cargas Internacionais Ltda.
> Phone: +55-11-3241-2000
> Fax  : +55-11-3242-9891
> ---
> 
> It's trivial to make fun of Microsoft products, but it takes a real
> man to make them work, and a god to make them do anything useful.

-- 
Kenneth Murchison Oceana Matrix Ltd.
Software Engineer 21 Princeton Place
716-662-8973 x26  Orchard Park, NY 14127
--PGP Public Key--http://www.oceana.com/~ken/ksm.pgp



Re: PostgreSQL backend: a waste of time?

2002-11-25 Thread Phil Brutsche
Noll Janos wrote:

Hy!

 I think that's a very good idea, but we found that MySQL is much faster
than Postgres, when there are no complex queries (this is the case here),
so it might be a better idea to use MySQL.


Or better yet, support both.

Some people already use a SQL database (Postgres, in my case) and don't want 
to waste time trying to learn & set up another.

A weird, maybe even stupid idea: if Nicola is going to "port" the patch to 
another database, port it to unixODBC.

--

Phil Brutsche
[EMAIL PROTECTED]



Re: PostgreSQL backend: a waste of time?

2002-11-25 Thread Alessandro Oliveira




I think that all the code that depends on a particular database should
be completely isolated, making it simpler to port to new databases. For
instance: Nicola likes Postgres, you like MySQL, and I love Oracle
(although it is very expensive); somebody else would like an InterBase
or a SAP DB backend.

Noll Janos wrote:

  Hi!

 I think that's a very good idea, but we found that MySQL is much faster
than Postgres when there are no complex queries (which is the case here),
so it might be a better idea to use MySQL.


On 25-Nov-2002 Nicola Ranaldo wrote:

Due to our historical problems using BerkeleyDB4 on Tru64 Unix I'm
coding a PostgreSQL backend.
It's in the alpha stage and seems to work fine on one of our production
servers with about 7500 mailboxes.
Cyrus.log is DBERROR free, and users do not report any problems.
Access to the mboxlist is about 5 times faster than with BerkeleyDB4.
If the Cyrus community is interested in this little piece of code (I
don't know, given the presence of the skiplist backend) we can start a
thread now!

| Noll Janos <[EMAIL PROTECTED]> | http://www.johnzero.hu |
| "Expect the unexpected!"|   ICQ# 4547866   |  Be free! |

  


-- 
Best Regards,

Alessandro Oliveira
Nuno Ferreira Cargas Internacionais Ltda.
Phone: +55-11-3241-2000
Fax  : +55-11-3242-9891
---

It's trivial to make fun of Microsoft products, but it takes a real 
man to make them work, and a god to make them do anything useful.




RE: PostgreSQL backend: a waste of time?

2002-11-25 Thread +archive . info-cyrus
--On Monday, November 25, 2002 4:36 PM +0100 Noll Janos 
<[EMAIL PROTECTED]> wrote:

|  I think that's a very good idea, but we found that MySQL is much faster
| than Postgres, when there are no complex queries (this is the case here),
| so it might be a better idea to use MySQL.

Seems kinda ironic in a way --- doesn't MySQL use BerkeleyDB?  I guess it's
all in the indexing/caching.

Amos




RE: PostgreSQL backend: a waste of time?

2002-11-25 Thread Noll Janos
Hi!

 I think that's a very good idea, but we found that MySQL is much faster
than Postgres when there are no complex queries (which is the case here),
so it might be a better idea to use MySQL.


On 25-Nov-2002 Nicola Ranaldo wrote:
> Due to our historical problems using BerkeleyDB4 on Tru64 Unix I'm
> coding a PostgreSQL backend.
> It's in the alpha stage and seems to work fine on one of our production
> servers with about 7500 mailboxes.
> Cyrus.log is DBERROR free, and users do not report any problems.
> Access to the mboxlist is about 5 times faster than with BerkeleyDB4.
> If the Cyrus community is interested in this little piece of code (I
> don't know, given the presence of the skiplist backend) we can start a
> thread now!


| Noll Janos <[EMAIL PROTECTED]> | http://www.johnzero.hu |
| "Expect the unexpected!"|   ICQ# 4547866   |  Be free! |




Re: PostgreSQL backend: a waste of time?

2002-11-25 Thread Alessandro Oliveira
I have the same point of view. Could you please share your experiences
with Cyrus, and more detailed information about your project?

Nicola Ranaldo wrote:

Due to our historical problems using BerkeleyDB4 on Tru64 Unix I'm coding
a PostgreSQL backend.
It's in the alpha stage and seems to work fine on one of our production
servers with about 7500 mailboxes.
Cyrus.log is DBERROR free, and users do not report any problems.
Access to the mboxlist is about 5 times faster than with BerkeleyDB4.
If the Cyrus community is interested in this little piece of code (I don't
know, given the presence of the skiplist backend) we can start a thread now!

Best Regards
Nicola Ranaldo
System Manager
C.D.S. Federico II University
Naples - Italy

 

--
Best Regards,

Alessandro Oliveira
Nuno Ferreira Cargas Internacionais Ltda.
Phone: +55-11-3241-2000
Fax  : +55-11-3242-9891
---

It's trivial to make fun of Microsoft products, but it takes a real 
man to make them work, and a god to make them do anything useful.




PostgreSQL backend: a waste of time?

2002-11-25 Thread Nicola Ranaldo
Due to our historical problems using BerkeleyDB4 on Tru64 Unix I'm coding
a PostgreSQL backend.
It's in the alpha stage and seems to work fine on one of our production
servers with about 7500 mailboxes.
Cyrus.log is DBERROR free, and users do not report any problems.
Access to the mboxlist is about 5 times faster than with BerkeleyDB4.
If the Cyrus community is interested in this little piece of code (I don't
know, given the presence of the skiplist backend) we can start a thread now!

Best Regards
Nicola Ranaldo
System Manager
C.D.S. Federico II University
Naples - Italy
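For readers curious what such a backend might look like at the SQL level:
Cyrus treats the mboxlist as opaque key/value pairs, so a single two-column
table with a primary-key index is enough. The sketch below is purely
illustrative; the table and column names are my assumptions, not Nicola's
actual schema.

```sql
-- Hypothetical mboxlist table for a PostgreSQL-backed cyrusdb.
-- Two columns suffice; the primary key provides the lookup index.
CREATE TABLE mboxlist (
    key   TEXT PRIMARY KEY,
    value TEXT NOT NULL
);

-- fetch(key)  ->  SELECT value FROM mboxlist WHERE key = '...';
-- store(key)  ->  UPDATE, then INSERT if no row was updated
--                 (PostgreSQL 7.x of that era has no native "upsert");
-- delete(key) ->  DELETE FROM mboxlist WHERE key = '...';
```

Wrapping each store in a transaction keeps concurrent imapd processes from
racing on the update-then-insert sequence.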