Re: Postfix Configuration Problem. (Solved was pebkac) noText

2004-03-05 Thread Christian Schlettig


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]


MySQL - PostgreSQL - DB2 - What?

2004-03-05 Thread Tomàs Núñez Lirola
Hi
We're planning a new website where we will use a DB with 500.000 to 1.000.000 
records. We are now deciding which database server we will use. We've read 
that MySQL has big problems with 150.000 records or more. We've also read 
that PostgreSQL is very slow with that many records.
But we don't have any experience, so we must rely on other people's experience.

I'm sure there are some stories about DB servers, like MySQL being the fastest 
ever, or MySQL's functionality being the most ridiculous ever (it can't do certain 
subselects, triggers...).

What do you think of those stories? Which DB server would you use?


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: MySQL - PostgreSQL - DB2 - What?

2004-03-05 Thread Jan-Benedict Glaw
On Fri, 2004-03-05 12:14:51 +0100, Tomàs Núñez Lirola [EMAIL PROTECTED]
wrote in message [EMAIL PROTECTED]:
> I'm sure there are some stories about DB servers, like MySQL being the fastest
> ever, or MySQL functionality being the most ridiculous ever (can't do certain
> subselects, triggers...).

MySQL seems to perform quite well on small amounts of data and limited
selects. This is why it's used so much for web development.

Personally, I'd use PostgreSQL. It's stable and can handle large tables.
And it's faster (most of the time) when you have more complex queries...

MfG, JBG

-- 
   Jan-Benedict Glaw   [EMAIL PROTECTED]. +49-172-7608481
   Eine Freie Meinung in  einem Freien Kopf| Gegen Zensur | Gegen Krieg
fuer einen Freien Staat voll Freier Bürger | im Internet! |   im Irak!
   ret = do_actions((curr | FREE_SPEECH) & ~(NEW_COPYRIGHT_LAW | DRM | TCPA));


signature.asc
Description: Digital signature


Re: MySQL - PostgreSQL - DB2 - What?

2004-03-05 Thread Russell Coker
On Fri, 5 Mar 2004 22:14, Tomàs Núñez Lirola [EMAIL PROTECTED] wrote:
> We're planning a new website where we will use a DB with 500.000 to
> 1.000.000 records. We are now deciding which database server we will use.
> We've read that MySQL has big problems from 150.000 records and more. Also
> we've read that PostgreSQL is very slow on such records.
> But we don't have any experience, so we must rely on other people
> experience.

How big are these records?  Records are usually no more than 1K in size, so 
the entire database should fit into cache (e.g. 1.000.000 records at 1K each is 
only about 1GB).  I've run databases much bigger than that on hardware that was 
OK by 1999 standards (but sucks badly by today's standards) and it was fine.

Of course it really depends on what exactly you are doing: how many indexes, 
how many programs may be writing at the same time, whether you need 
transactions, etc.  But given RAM prices, I suggest first making sure that 
your RAM is about the same size as the database if at all possible.  If you 
can do that, then apart from 5-10 minutes at startup, IO performance is totally 
dependent on writes.  Then get a battery-backed write-back disk cache for the 
best write performance (maybe use data journalling and put an external 
journal on a device from http://www.umem.com ).
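
For what it's worth, here is a rough sketch of that external-journal idea with
ext3 (this is only an illustration, not Russell's actual setup; the device names
and the MySQL data directory are assumptions, so adjust for your own hardware):

# /dev/sdb1: the battery-backed/NVRAM device, /dev/sda5: the data filesystem
# (both filesystems should use the same block size)
mke2fs -O journal_dev /dev/sdb1            # turn the fast device into an external journal
mke2fs -j -J device=/dev/sdb1 /dev/sda5    # create the data filesystem, pointing at that journal
mount -o data=journal /dev/sda5 /var/lib/mysql   # full data journalling: writes hit the fast journal first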

Getting the performance you want is probably easy if you have the right budget 
and are able to be a little creative with the way you install things (e.g. the 
uMem device).  The REAL issue will probably be redundancy.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



RE: MySQL - PostgreSQL - DB2 - What?

2004-03-05 Thread Boehn, Gunnar von
Hi,

I work as webmaster for SONY.
We use several DB systems (Oracle, PostgreSQL, MySQL).

We are very happy and satisfied with MySQL!


> We're planning a new website where we will use a DB with
> 500.000 to 1.000.000 records. We are now deciding which
> database server we will use. We've read that MySQL
> has big problems from 150.000 records and more.

Not true. I administer several bigger websites/projects running on MySQL.
I have MySQL databases with over 10.000.000 records which run fast
and are perfectly able to cope with a high number of reads and writes.

There is one thing you need to be aware of with MySQL databases:
with MySQL you can choose which table format (handler) you want to store
your data in, and these different table formats have different abilities
(a small example of picking the type follows the list below).

e.g.
MyISAM
 - no transactions
 - very fast
 - excellent read performance
 - very good write performance
 - but bad performance for simultaneous reads and writes!

InnoDB
 - transactions
 - fast, but a little slower than MyISAM
 - can handle a high volume of simultaneous reads and writes.


> Also we've read that PostgreSQL is very slow on such records.

Sounds like old/wrong info to me.
PostgreSQL is powerful and actually quite fast. 




I have experience with Oracle, Postgres and MySQL.
I can get Oracle basically for free (company agreement).
I would still choose MySQL 9 out of 10 times for my projects.



I hope that my comments will help you.


Kind regards
Gunnar


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



TOP 50 of freedom

2004-03-05 Thread Patrick Dujoution
PS, RPR, UMP, there you have the result: France in 44th position!

See the attached document: article published in the AGEFI magazine.

inline: AGEFI.gif

RE: MySQL - PostgreSQL - DB2 - What?

2004-03-05 Thread Ben Yau


> Hi,
>
> I work as webmaster for SONY.
> We use several db system (oracle, postgressql, mysql)
>
> We are very happy and satisfied with mySQL!
>
>
> > We're planning a new website where we will use a DB with
> > 500.000 to 1.000.000 records. We are now deciding which
> > database server we will use. We've read that MySQL
> > has big problems from 150.000 records and more.
>
> Not true. I administer several bigger websites/projects running on mySQL.
> I have mySQL databases with over 10.000.000 records which runs fast
> and are perfectly able to scope with high number of read and writes.


I too have had similar concerns.  I always thought of PostgreSQL as the
stable database and MySQL as the, well, development database or
non-mission-critical database, especially since MySQL didn't handle
transactions (which is why it was faster).  Check out some of the articles
on phpbuilder.com re: speed tests and the like of Postgres vs MySQL.
The author ran lots of tests and found that MySQL did not outperform
Postgres on some of them.

That being said, during my research I found that lots of companies are
using MySQL on large production DBs without problems.  I used it on a
recent project of mine and had no problems with it.  I think I inserted
10,000,000 records into a table and had minor problems with the index
rebuilding (slow...) while doing lots of inserts/deletes during my testing
phase.  It didn't seem as quick at this as Postgres was.  But if you're not
going to be doing huge amounts of inserts/deletes at one time, it seemed
fine.
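
(For what it's worth, the usual MyISAM workaround for that slow index maintenance
during a bulk load is to disable the keys first and rebuild them once at the end.
A hypothetical sketch; the table and file names are invented:)

mysql -u webuser -p mydb <<'EOF'
-- stop per-row updates of the non-unique indexes during the load (MyISAM only)
ALTER TABLE big_table DISABLE KEYS;
LOAD DATA LOCAL INFILE '/tmp/big_dump.txt' INTO TABLE big_table;
-- rebuild the non-unique indexes in one pass
ALTER TABLE big_table ENABLE KEYS;
EOF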

(The laptop I was testing on, by the way, was a PIII with 128MB RAM running Red Hat
9.)
Ben




-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



[no subject]

2004-03-05 Thread adolfo serrano






Re: Re: Sendmail or Qmail ? ..

2004-03-05 Thread Lucius Junevicus



I saw your post on setting up qmail over drbd. I would love to see how you
did it. I'd like to create a how-to on setting up a hybrid cluster
(open-mosix and drbd) for qmail.

I'd love to know how you set up your cluster.

What do your drbd.conf, ha.cf and haresources files look like?

Which services do you have heartbeat controlling? (qmail, spamassassin, ...?)

I know you're probably very busy, but any help would be greatly appreciated.

Lucius


Re: Re: Sendmail or Qmail ? ..

2004-03-05 Thread Alex Borges
On Fri, 2004-03-05 at 12:56, Lucius Junevicus wrote:
> I saw your post on setting up qmail over drbd.  I would love to see
> how you did it.
> I'd like to create a how-to on setting up a hybrid cluster (open-mosix
> and drbd) for qmail.

OpenMosix? Isn't that an auto-balancing cluster? Interesting: how does
it help an SMTP farm as opposed to simple load balancing?

  
> I'd love to know how you setup your cluster.
>
> What do your drbd.conf, ha.cf, haresources files look like?
>
> Which services do you have heartbeat control? (qmail, spamassassin, ?)
>
> I know your probably very busy, but any help would be greatly
> appreciated.

This is pretty straightforward.  Like most MTAs, qmail has configurable
queue directories and can deliver to maildirs anywhere as well (I use
vpopmail for delivery).

All you need is to set up your drbd partition as described in drbd's
documentation (engineer your disks, etc.).

Our nodes look like this:

Primary
DELL 6250 P4 Xeon 2.4GHz, dual processor, 1GB RAM
210GB RAID 5 SCSI storage

Secondary
DELL 6250 P4 Xeon 2.4GHz, single processor, 1GB RAM
210GB RAID 5 SCSI storage

Make a big partition and set up some symlinks to make the important directories
reside in this partition (I named it data and it's mounted on /data); a small
shell sketch follows the list:

/var/qmail -> /data/var/qmail
/home/vpopmail -> /data/home/vpopmail
/webhostingpeople -> /data/webhostingpeople
/var/lib/mysql -> /data/var/lib/mysql
/etc/passwd -> /data/etc/passwd
/etc/group -> /data/etc/group

...etc.
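
A minimal sketch of that symlink setup (this assumes /data is already mounted
and the affected services are stopped; trim or extend the directory list to taste):

# move each directory onto the replicated partition, then point a symlink back at it
for dir in /var/qmail /home/vpopmail /var/lib/mysql; do
    mkdir -p /data$(dirname $dir)
    mv $dir /data$dir
    ln -s /data$dir $dir
done
# same idea for single files such as /etc/passwd and /etc/group
mkdir -p /data/etc
mv /etc/passwd /data/etc/passwd && ln -s /data/etc/passwd /etc/passwd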

Here is the trick:

In the primary server:
Install (or modify) everything so that the important services boot up without a
problem from files in this partition (already using the symlinks and all).

Make SURE you profile every possible path of use that may be related to
file access, file creation, directory creation, etc.

In the secondary server:
Make a data partition.
Make sure that data partition is absolutely exactly the same size as
the primary's.

In the primary:
In runlevel 1 (single user; make sure all services are OFF) do:

tar cf - --exclude-from=excludedfiles / | ssh -l root secondary 'tar xf - -C /'

In the file excludedfiles you should put /dev/, /var/log, /var,
etc.: anything that doesn't make sense putting on the failback node
(/proc, /sys).

This will snapshot the primary onto the secondary. Reboot the secondary;
all services should be up and working just as on the primary. If that is
the case, you're ready to roll.

Do whatever drbd magic you have to on the /data partition and you're
home free.
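
(Since Lucius asked about haresources: with heartbeat v1, a single line usually
ties the resource group together. The following is only a hypothetical example;
the node name, drbd resource name, device path (/dev/drbd0 on drbd 0.7, /dev/nb0
on older releases) and service script are assumptions, not Alex's actual config:)

# /etc/ha.d/haresources -- one resource group, taken over as a unit on failover
primary-node  drbddisk::r0  Filesystem::/dev/drbd0::/data::ext3  qmail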


  
> Lucius


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: MySQL - PostgreSQL - DB2 - What?

2004-03-05 Thread Rod Rodolico
I have worked with Oracle, MS-SQL, Sybase, PostgreSQL and MySQL. For smaller 
applications like
you are describing, MySQL wins hands down.

I currently have one table that has over 5 million rows (the distance between any two zip (postal)
codes that are <= 50 miles apart in the US), and it works in real time with no problem. The most
complex query in the app is done with a union of four queries, each reading from four tables
(one of which is the zip codes), with probably a half dozen where clauses each and, as I
said, the response is in real time. That is on a 1.7GHz machine, 512M DRAM, 10,000rpm SCSI in
a hardware RAID 5, though in tests it worked just fine on a 500MHz machine with 128M and a
single SCSI HDD.
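
For what it's worth, the union-of-four-queries shape Rod mentions might look
something like this (all table and column names are invented, not his schema;
MySQL only gained UNION in 4.0):

mysql -u appuser -p appdb <<'EOF'
-- the union-of-four-queries pattern: each branch reads a different table, same column list
(SELECT 'store'   AS kind, name, zip FROM stores   WHERE state = 'TX' AND active = 1)
UNION
(SELECT 'dealer'  AS kind, name, zip FROM dealers  WHERE state = 'TX' AND active = 1)
UNION
(SELECT 'outlet'  AS kind, name, zip FROM outlets  WHERE state = 'TX' AND active = 1)
UNION
(SELECT 'partner' AS kind, name, zip FROM partners WHERE state = 'TX' AND active = 1)
ORDER BY name;
EOF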

The only time I've been frustrated with it was when I created the zip code distance table by doing
a cross join of a zipcode/lat/long table with itself, then using floating point math to calculate
the distance. That took about 2.5 hours on a 500MHz computer with 128M RAM, which is
reasonable as far as I'm concerned.
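
For illustration, that cross join might look roughly like this (hypothetical table
and column names; 3959 is the Earth's radius in miles, and the distance comes from
the spherical law of cosines):

mysql -u appuser -p geo <<'EOF'
-- every distinct pair of zip codes once (a.zip < b.zip), with great-circle distance in miles
CREATE TABLE zip_distance
SELECT a.zip AS zip_from, b.zip AS zip_to,
       3959 * ACOS(  SIN(RADIANS(a.lat)) * SIN(RADIANS(b.lat))
                   + COS(RADIANS(a.lat)) * COS(RADIANS(b.lat))
                     * COS(RADIANS(a.lon - b.lon)) ) AS miles
FROM zipcodes a, zipcodes b
WHERE a.zip < b.zip;
-- Rod additionally kept only the pairs that are <= 50 miles apart
EOF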

PostgreSQL is fast also, though I haven't tested it with that load. I would assume it would be
comparable. However, I hate managing PostgreSQL. I don't know why, but I always seem to have
problems when there is an upgrade, and it drives me up a tree. (MySQL has very non-standard
permissions, by the way.)

I would say that for under 10M rows in any table, they are both good. That may be
conservative. There is also the fact that MySQL has been updated quite a bit, and 
getting the
newer (4.???) version is well worth it. I chose PostgreSQL at one time because I love
subselects, but now MySQL does them also, plus transactions and (if you are of an 
entirely
different mindset from me) will support foreign keys Real Soon Now. And unions. 
Finally,
unions!

Basically, if you want full SQL-92 right now, I think you still have to go PostgreSQL. 
But, if
you don't need the whole thing, administration of MySQL makes it the winner for me.

Rod


> Hi
> We're planning a new website where we will use a DB with 500.000 to 1.000.000
> records. We are now deciding which database server we will use. We've read
> that MySQL has big problems from 150.000 records and more. Also we've read
> that PostgreSQL is very slow on such records.
> But we don't have any experience, so we must rely on other people experience.
>
> I'm sure there are some stories about DB servers, like MySQL being the fastest
> ever, or MySQL functionality being the most ridiculous ever (can't do certain
> subselects, triggers...).
>
> What do you think of that stories? Which DB server would you use?
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]




-- 
Latest survey shows that 3 out of 4 people make up 75% of the world's population.


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]


