Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Antimon
I know this is not even a real test, but I thought it might be a client
library issue, not the server itself. Well, it is not: I tried it on .NET
with Npgsql, which is a .NET client library (not using libpq). The results
are the same. Connect time does not have much effect, by the way.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Merlin Moncure
On 28 Mar 2006 17:01:45 -0800, Antimon <[EMAIL PROTECTED]> wrote:
> Nearly 3 times slower, even without any table queries. But I could not
> reproduce this by writing stored procs on both which select 0-10000 in
> a loop into a variable; the results were almost the same.
> (I tried the pg_ and mysqli_ functions too; the results were not very different.)

> Is it that MySQL's client libraries perform better? Or are PostgreSQL
> stored procs 3x faster? I cannot understand: since there is not even an I/O
> operation or any query planning involved, what is the cause of this?

Yes, it is the case that raw select performance for pgsql drastically
improves with prepared statements, by as much as 2-3x.  In my experience
postgresql is just a bit faster with prepared statements and
substantially slower without.  There is a little bit of pain in
setting them up properly in a pooling web environment, but it is worth
it if you want top performance.

Does anybody know if php uses the parameterized flavors of the C API?
That's another huge optimization over PQexec.
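For illustration, preparing once and re-executing is what makes the difference here: the server parses and plans the statement a single time, and each loop iteration is only a bind-and-execute round trip. A minimal PDO sketch, assuming the same hypothetical local "test" database and throwaway credentials used in the benchmark elsewhere in this thread:

```php
<?php
// Placeholder connection details -- adjust to your own setup.
$pdo = new PDO('pgsql:host=localhost;dbname=test', 'testacc', 'pw');

// Prepared once: parse/plan happens a single time on the server.
$stmt = $pdo->prepare('SELECT 1 + :n');

$mtime = microtime(true);
for ($i = 0; $i < 10000; $i++) {
    // Each iteration is only a bind-and-execute, not a full parse.
    $stmt->execute(array(':n' => $i));
}
echo microtime(true) - $mtime;
?>
```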

I agree with Jim that it's probably fast enough anyway.

Merlin



Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 02:45, Antimon wrote:
> I know this is not even a real test, but I thought it might be a client
> library issue, not the server itself. Well, it is not: I tried it on .NET
> with Npgsql, which is a .NET client library (not using libpq). The results
> are the same. Connect time does not have much effect, by the way.

Just so you understand, there was a period of time when MySQL AB put out
benchmark after benchmark like this to "prove" that MySQL was faster
than PostgreSQL.  Of course, they never compared anything moderately
complex, since they knew they'd lose.

Is it a fair comparison of, say, a mainframe and a workstation to compare
their boot-up times?  Not really, unless your particular application is
gonna be rebooting the mainframe a lot.  It's a small, narrow channel to
compare on, and most of the time it doesn't mean a lot.

That's all.  No personal attack meant against you; sorry if it came
across that way.  It's just that many pgsql folks still smart from that era,
and it seems to come back to haunt us every so often.



Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Antimon
Hi,
I'm sorry if it seemed like I was trying to compare pgsql
with mysql. I was trying things out, did this, and saw that huge difference
(it is not bad on its own, but compared to mysql), and thought there
might be some library issue causing slow reads from the server. I don't
need any RDBMS to be faster at selecting a constant integer. My point
was the library, not server performance.




Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 12:19, Antimon wrote:
> Hi,
> I'm sorry if it seemed like I was trying to compare pgsql
> with mysql. I was trying things out, did this, and saw that huge difference
> (it is not bad on its own, but compared to mysql), and thought there
> might be some library issue causing slow reads from the server. I don't
> need any RDBMS to be faster at selecting a constant integer. My point
> was the library, not server performance.

No need to apologize, honest.  This discussion brings up some valid
points, even if the benchmark is not necessarily a valid method for
choosing a database.

PostgreSQL is generally heavier than MySQL.  A large part of that
is that PostgreSQL is generally busy making sure things are done right
first and fast second.  In MySQL, it's the other way around.

I wonder how your test would do if you did something a little more
complex (like the pick-1-row-in-a-million scenario) and ran it in parallel
for, say, 5, 10, 20, or 50 clients at a time.  That would likely give you
some idea of how well connectivity and small select statements scale on
each system.

Generally, pgsql folks consider the single-user scenario not to be that
important, and ask themselves "so what happens when 50 people do this at
the same time?"  Again, MySQL tends to be the opposite.
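Incidentally, this kind of parallel small-select test is roughly what pgbench (in contrib) automates. A sketch, assuming a hypothetical table t with an indexed integer id column populated 1..10,000,000, and using pgbench's \setrandom script syntax:

```shell
# random.sql -- one indexed single-row lookup per transaction:
#   \setrandom n 1 10000000
#   SELECT * FROM t WHERE id = :n;

# Run the same select-only workload at 5, 10, 20, and 50 clients,
# 1000 transactions per client, against the "test" database.
for c in 5 10 20 50; do
    pgbench -n -c $c -t 1000 -f random.sql test
done
```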



Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Antimon
Just noticed:
on Windows, these are the results produced.
But on Linux, PostgreSQL performs great, so PostgreSQL has worse Windows
integration than MySQL. Well, since MySQL has supported win32 for a long
time, it makes sense.
I did some multi-client tests and PostgreSQL could not even catch MySQL,
so I decided to test it on my Linux box. It seems that the
performance issue is specific to the Windows version.




Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Joshua D. Drake

Antimon wrote:

Just noticed:
on Windows, these are the results produced.
But on Linux, PostgreSQL performs great, so PostgreSQL has worse Windows
integration than MySQL. Well, since MySQL has supported win32 for a long
time, it makes sense.
I did some multi-client tests and PostgreSQL could not even catch MySQL,
so I decided to test it on my Linux box. It seems that the
performance issue is specific to the Windows version.


Part of the problem may be the connection penalty on Windows. Since
PostgreSQL is a process-based database (MySQL is threaded, I believe),
it is expensive (resource-wise) to fire up a bunch of connections.

You may be able to get similar performance if you use pconnect
or connection pooling with PHP and PostgreSQL.
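A minimal pconnect sketch (the credentials are placeholders): pg_pconnect keeps the backend open and reuses it for later requests served by the same PHP/Apache process, so the process-startup cost is paid once rather than per request.

```php
<?php
// Persistent connection: if this PHP process already opened an
// identical connection, it is reused instead of forking a new backend.
$conn = pg_pconnect('host=localhost dbname=test user=testacc password=pw');

$result = pg_query($conn, 'SELECT 1');
$row = pg_fetch_row($result);
echo $row[0];
?>
```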


Sincerely,

Joshua D. Drake









--

=== The PostgreSQL Company: Command Prompt, Inc. ===
  Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
  Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/





Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Antimon
Just tried it: yes, pconnect boosts multi-user performance.
But it causes this:
http://img526.imageshack.us/img526/6302/pgsql7th.jpg
:)

I need to modify the max persistent connections setting.




Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 15:22, Antimon wrote:
> Just tried it: yes, pconnect boosts multi-user performance.
> But it causes this:
> http://img526.imageshack.us/img526/6302/pgsql7th.jpg
> :)
>
> I need to modify the max persistent connections setting.

I don't thin that doesn't do what you think it does.  max persistent
setting in php.ini sets the max number of persistent connections PER PHP
/ web server process.

What web server is that with, btw?

With a web server like lighttpd, you have a pool of php child
processes pre-spawned and ready to go, and you limit that pool to
something reasonable.

With apache and other web servers, each web server child process gets
its own php process hanging off of it, and if you've got, say, a default
apache install, that allows up to 150 child processes.

If you're using apache, you're often better off limiting the number of
children to something smaller, or cranking up postgresql allowed
connections to keep up.  150 is usually a bit much.  20 to 50 is
normally plenty for most applications that can benefit from pconnects.

But you're probably better off with something like lighttpd.
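In configuration terms, the pairing described above looks roughly like this (the numbers are purely illustrative; the point is that max_connections must cover MaxClients when every Apache child may hold a persistent connection):

```
# httpd.conf (Apache prefork)
MaxClients       40

# postgresql.conf
max_connections = 50    # some headroom over MaxClients
```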



Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 15:58, Scott Marlowe wrote:
> On Wed, 2006-03-29 at 15:22, Antimon wrote:
> > Just tried it: yes, pconnect boosts multi-user performance.
> > But it causes this:
> > http://img526.imageshack.us/img526/6302/pgsql7th.jpg
> > :)
> >
> > I need to modify the max persistent connections setting.
>
> I don't thin that doesn't do what you think it does.  max persistent
> setting in php.ini sets the max number of persistent connections PER PHP
> / web server process.

That should be "I don't think that does what you think it does."  Man,
my editor is so fired...



Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Antimon
Yes, I tried it and realized it is per Apache child process.
Then I looked for another pooling solution and found the pgpool project.
There are no Windows binaries; it might run on cygwin.
After all, I think PostgreSQL is not meant to run on Windows in production
for ~2 more major releases or so. It performs great on Linux,
that's for sure, but I cannot make the platform decision every time, so
I'm going to stick with MySQL on Windows and PostgreSQL on Linux when I can.




Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 16:48, Antimon wrote:
> Yes, I tried it and realized it is per Apache child process.
> Then I looked for another pooling solution and found the pgpool project.
> There are no Windows binaries; it might run on cygwin.
> After all, I think PostgreSQL is not meant to run on Windows in production
> for ~2 more major releases or so. It performs great on Linux,
> that's for sure, but I cannot make the platform decision every time, so
> I'm going to stick with MySQL on Windows and PostgreSQL on Linux when I can.

There's a windows version of lighttpd available.  That might well do the
trick.

I'd still run ANY database on linux / unix before windows, just because I
don't trust windows to work right when things go wrong.  But there's
certainly nothing stopping you from running a linux or BSD based
postgresql server behind a windows lighttpd.

If the db and web server HAVE to both be on the same server, and you
have to have windows, then you might want to look at Firebird, which
apparently has quite good windows performance and, unlike MySQL, has
good SQL compliance and behaviour.



[GENERAL] PostgreSQL client api

2006-03-28 Thread Antimon
Hi,
I was testing MySQL and PgSQL performance on my home box (amd athlon
64 3000, 1gig ddr ram, sata I hdd, win xp (x86)); select and insert
times seemed identical with InnoDB.

But when I try to query both using php, there's a huge difference even
for a funny query like "select 1".

Here's the code:

<?php
$mtime = microtime(true);

$pdo = new PDO('pgsql:host=localhost;dbname=test', 'testacc', 'pw');
for ($i = 0; $i < 10000; $i++)
{
    $result = $pdo->query("Select " . $i);
}

echo microtime(true) - $mtime;
echo "<br>";

$mtime = microtime(true);

$pdo = new PDO('mysql:host=localhost;dbname=test', 'testacc', 'pw');
for ($i = 0; $i < 10000; $i++)
{
    $result = $pdo->query("Select " . $i);
}

echo microtime(true) - $mtime;
echo "<br>";
?>

output is:
2.7696590423584
0.89393591880798

Nearly 3 times slower, even without any table queries. But I could not
reproduce this by writing stored procs on both which select 0-10000 in
a loop into a variable; the results were almost the same.
(I tried the pg_ and mysqli_ functions too; the results were not very different.)

Is it that MySQL's client libraries perform better? Or are PostgreSQL
stored procs 3x faster? I cannot understand: since there is not even an I/O
operation or any query planning involved, what is the cause of this?

Thanks.




Re: [GENERAL] PostgreSQL client api

2006-03-28 Thread Jim C. Nasby
On Tue, Mar 28, 2006 at 05:01:45PM -0800, Antimon wrote:
> $pdo = new PDO('pgsql:host=localhost;dbname=test', 'testacc', 'pw');
> for ($i = 0; $i < 10000; $i++)
> {
>   $result = $pdo->query("Select " . $i);
> }
> output is:
> 2.7696590423584

OK, so that tells me that on this plain-vanilla hardware, you can
'only' do about 3600 queries per second from PHP.

Who cares?

If you're actually trying to run that kind of volume on that kind of
hardware, you need to reconsider what you're doing.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] PostgreSQL client api

2006-03-28 Thread paul rivers

Try factoring the connect time out of the test.  My experience is that the
connect is more expensive for Postgres than MySQL.  With that out of the
way, I'd wager the times will be closer.
Regards,
Paul

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Antimon
Sent: Tuesday, March 28, 2006 5:02 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] PostgreSQL client api

Hi,
I was testing MySQL and PgSQL performance on my home box (amd athlon
64 3000, 1gig ddr ram, sata I hdd, win xp (x86)); select and insert
times seemed identical with InnoDB.

But when I try to query both using php, there's a huge difference even
for a funny query like "select 1".

Here's the code:

<?php
$mtime = microtime(true);

$pdo = new PDO('pgsql:host=localhost;dbname=test', 'testacc', 'pw');
for ($i = 0; $i < 10000; $i++)
{
    $result = $pdo->query("Select " . $i);
}

echo microtime(true) - $mtime;
echo "<br>";

$mtime = microtime(true);

$pdo = new PDO('mysql:host=localhost;dbname=test', 'testacc', 'pw');
for ($i = 0; $i < 10000; $i++)
{
    $result = $pdo->query("Select " . $i);
}

echo microtime(true) - $mtime;
echo "<br>";
?>

output is:
2.7696590423584
0.89393591880798

Nearly 3 times slower, even without any table queries. But I could not
reproduce this by writing stored procs on both which select 0-10000 in
a loop into a variable; the results were almost the same.
(I tried the pg_ and mysqli_ functions too; the results were not very different.)

Is it that MySQL's client libraries perform better? Or are PostgreSQL
stored procs 3x faster? I cannot understand: since there is not even an I/O
operation or any query planning involved, what is the cause of this?

Thanks.




Re: [GENERAL] PostgreSQL client api

2006-03-28 Thread Scott Marlowe
On Tue, 2006-03-28 at 19:01, Antimon wrote:
> Hi,
> I was testing MySQL and PgSQL performance on my home box (amd athlon
> 64 3000, 1gig ddr ram, sata I hdd, win xp (x86)); select and insert
> times seemed identical with InnoDB.
>
> But when I try to query both using php, there's a huge difference even
> for a funny query like "select 1".
>
> Here's the code:
>
> <?php
> $mtime = microtime(true);
>
> $pdo = new PDO('pgsql:host=localhost;dbname=test', 'testacc', 'pw');
> for ($i = 0; $i < 10000; $i++)
> {
>   $result = $pdo->query("Select " . $i);
> }
>
> echo microtime(true) - $mtime;
> echo "<br>";
>
> $mtime = microtime(true);
>
> $pdo = new PDO('mysql:host=localhost;dbname=test', 'testacc', 'pw');
> for ($i = 0; $i < 10000; $i++)
> {
>   $result = $pdo->query("Select " . $i);
> }
>
> echo microtime(true) - $mtime;
> echo "<br>";
> ?>
>
> output is:
> 2.7696590423584
> 0.89393591880798
>
> Nearly 3 times slower, even without any table queries. But I could not
> reproduce this by writing stored procs on both which select 0-10000 in
> a loop into a variable; the results were almost the same.
> (I tried the pg_ and mysqli_ functions too; the results were not very different.)
>
> Is it that MySQL's client libraries perform better? Or are PostgreSQL
> stored procs 3x faster? I cannot understand: since there is not even an I/O
> operation or any query planning involved, what is the cause of this?

MySQL's client libs, connection setup, and parser are known to be quite
fast.  PostgreSQL's aren't exactly slow, but they have more to do.  As
you start running actual queries, you'll see the advantage start to
disappear, and by the time you're running queries full of subselects and
unions, you'll notice it's the tortoise and the hare all over again.

As someone else mentioned, factor out the connection time.  But really,
this test is fairly bogus.  Unless you're writing an app that has a lot
of "select 1" in it, that is.

At least test something fairly realistic.  Maybe something like
having 10,000,000 rows, picking a number at random from 1 to 10,000,000,
and using it to select one row at a time.  Better yet, make a union view with
a subselect on a function, create a functional index for the underlying
tables, and see if your outer where clause gets pushed down inside the
query by the planner.

Run updates in the background at the same time.

Pull the power plug whilst doing it, see which database comes back up.

Just benchmark what you're really gonna use the database for, cause if
you pick one or the other from this test, you might as well flip a coin.
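The pick-a-random-row test above might look something like this in PDO (a sketch only: big_table, its indexed integer id column, and the row count are made up for illustration):

```php
<?php
// Hypothetical table: big_table(id int primary key, ...) with
// 10,000,000 rows and an index on id.
$pdo = new PDO('pgsql:host=localhost;dbname=test', 'testacc', 'pw');
$stmt = $pdo->prepare('SELECT * FROM big_table WHERE id = ?');

$mtime = microtime(true);
for ($i = 0; $i < 1000; $i++) {
    // One indexed single-row lookup with a random key each iteration.
    $stmt->execute(array(mt_rand(1, 10000000)));
    $stmt->fetch();
}
echo microtime(true) - $mtime;
?>
```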
