On Saturday, 29 October 2005 06:33, Linda Walsh wrote:
> Assuming it is some sort of Berkeley DB format, what is a good
> cut-over size as a "rule-of-thumb"... or is there one? What should I
> expect in speeds for "sa-learn" or spamc? I.e., is there a
> rough guideline for when it becomes more effective to use SQL
> vs. the Berkeley DB? Or rephrased: when is it worth the effort to
> convert to SQL and ensure all the SQL software is set up and running?
I don't know whether this really is a performance question; I believe it's more of a "do I need it" question. For example, if you use a system-wide bayes db, you probably won't need SQL. I do this for now. But if some users want or need their own bayes, or their own settings, it becomes easier to use SQL for all of that: once 5 users or so need their own special config, SQL quickly becomes easier to manage. That's why I'm thinking of switching to SQL. Does anybody know whether MySQL or PostgreSQL is better suited for the job? I prefer PostgreSQL, but MySQL is often better supported...

Regards,
zmi
-- 
// Michael Monnerie, Ing.BSc --- it-management Michael Monnerie
// http://zmi.at Tel: 0660/4156531 Linux 2.6.11
// PGP Key: "lynx -source http://zmi.at/zmi2.asc | gpg --import"
// Fingerprint: EB93 ED8A 1DCD BB6C F952 F7F4 3911 B933 7054 5879
// Keyserver: www.keyserver.net Key-ID: 0x70545879
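For reference, moving Bayes and per-user preferences into SQL is done in SpamAssassin's local.cf. A minimal sketch follows; the DSN, database name, and credentials ("spamassassin", "sa_user", "sa_pass") are placeholders, and the exact option names should be double-checked against the sql/README and bayes_sql/README shipped with your SpamAssassin version:

```
# Store Bayes data in SQL instead of per-user Berkeley DB files
bayes_store_module       Mail::SpamAssassin::BayesStore::SQL
bayes_sql_dsn            DBI:mysql:spamassassin:localhost   # placeholder DSN
bayes_sql_username       sa_user                            # placeholder
bayes_sql_password       sa_pass                            # placeholder

# Pull per-user preferences (scores, welcome/block lists, ...) from SQL
# as well; spamd must then be started with the -q (--sql-config) option
user_scores_dsn          DBI:mysql:spamassassin:localhost
user_scores_sql_username sa_user
user_scores_sql_password sa_pass
```

For PostgreSQL the DSN would use the Pg DBD driver (e.g. DBI:Pg:dbname=spamassassin;host=localhost) instead of the mysql one.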