Re: [PERFORM] Final decision

2005-04-28 Thread Dave Page
 

 -Original Message-
 From: Josh Berkus [mailto:[EMAIL PROTECTED] 
 Sent: 28 April 2005 04:09
 To: Dave Page
 Cc: Joshua D. Drake; Joel Fradkin; PostgreSQL Perform
 Subject: Re: [PERFORM] Final decision
 
 Dave, folks,
 
  Err, yes. But that's not quite the same as core telling us the current
  driver is being replaced.
 
 Sorry, I spoke off the cuff. I also was unaware that work on the current
 driver had resumed. We Core people are not omniscient, believe it or don't.

I was under the impression that you and Bruce negotiated the developer
time! Certainly you and I chatted about it on IRC once... Ah, well.
Never mind.

 Mind you, having 2 different teams working on two different ODBC drivers
 is a problem for another list ...

Absolutely.

Regards, Dave.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[PERFORM] Final decision

2005-04-27 Thread Joel Fradkin


I spent a great deal of time over the past week looking
seriously at Postgres and MySQL.

Objectively I am not seeing that much of an improvement in
speed with MySQL, and we have a huge investment in Postgres.

So I am planning on sticking with Postgres for our
production database (going live this weekend).



Many people have offered a great deal of help and I
appreciate all that time and energy.

I did not find any resolutions to my issues with
Commandprompt.com (we only worked together 2.5 hours).



Most of my application is running at about the same speed as
MSSQL Server (unfortunately that box is twice the speed, but as many have
pointed out it could be an issue with the 4-proc Dell). I spent considerable
time with Dell and could see my drives are delivering 40 MB per sec.



Things I still have to improve are my settings in the config;
I currently have merge joins and seq scans disabled.

I am going to have to use flattened history files for
reporting (I saw a huge difference here: the view for the audit cube took 10
minutes in EXPLAIN ANALYZE and the flattened file took under one second).



I understand both of these practices are undesirable, but I
am at a point where I have to get it live and these are items I could not
resolve.

I may try some more time with Commandprompt.com, or seek other
professional help.



In stress testing I found Postgres was holding up very well
(but my IIS servers could not handle much of a load to really push the server).

I have a few desktops acting as IIS servers at the moment,
and if I pushed past 50 concurrent users it pretty much blew the server up.

On inserts that number was more like 7 concurrent users, and
updates was also around 7 users.



I believe that was totally IIS, not Postgres, but I am
curious whether using the Postgres ODBC driver will put more stress on the
IIS side than MSSQL did.

I did have a question: for any folks using two servers, one
for reporting and one for data entry, which system should be the beefier?

I have a 2-proc machine I will be using, and I can either put
Sears off by themselves on this machine or split up functionality and have one
server for reporting and one for inserts and updates; so I am not sure which
machine would be best for which spot (reminder: the more robust is a 4-proc
with 8 GB and the 2-proc has 4 GB, both Dells).



Thank you for any ideas in this arena.



Joel Fradkin

Re: [PERFORM] Final decision

2005-04-27 Thread mmiranda





  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of
  Joel Fradkin
  Sent: Wednesday, April 27, 2005 9:02 AM
  To: PostgreSQL Perform
  Subject: [PERFORM] Final decision

  I spent a great deal of time over the past week looking seriously at
  Postgres and MySQL. [...]

  You didn't tell us what OS you are using, Windows?
  If you want good performance you must install Unix on that machine.
  
  ---
  
  


Re: [PERFORM] Final decision

2005-04-27 Thread Joel Fradkin

Sorry, I am using Red Hat AS4 and Postgres 8.0.2.

Joel

 You didn't tell us what OS you are using, Windows?

 If you want good performance you must install Unix on that machine.

---

Re: [PERFORM] Final decision

2005-04-27 Thread Rod Taylor

 
 I did have a question: for any folks using two servers, one for
 reporting and one for data entry, which system should be the beefier?

Yeah. We started putting up slaves for reporting purposes and
application-specific areas, using Slony to replicate partial data sets to
various locations -- some for reporting.

If your reports have a long runtime and don't require transactional
safety for writes (daily summaries are written, or results aren't recorded in
the DB at all), this is probably something to consider.

I understand that pgAdmin makes Slony fairly painless to set up, but it
can be time-consuming to get going, and Slony can add new complications
depending on the data size and what you're doing with it -- but they're
working hard to reduce the impact of those complications.

 I have a 2-proc machine I will be using, and I can either put Sears off
 by themselves on this machine or split up functionality and have one
 for reporting and one for inserts and updates; so I am not sure which
 machine would be best for which spot (reminder: the more robust is a
 4-proc with 8 GB and the 2-proc has 4 GB, both Dells).

 Thank you for any ideas in this arena.

 Joel Fradkin



Re: [PERFORM] Final decision

2005-04-27 Thread Josh Berkus
Joel,

 So I am planning on sticking with Postgres for our production database
 (going live this weekend).

Glad to have you.

 I did not find any resolutions to my issues with Commandprompt.com (we only
 worked together 2.5 hours).

BTW, your performance troubleshooting will continue to be hampered if you
can't share actual queries and data structure. I strongly suggest that you
make a confidentiality contract with a support provider so that you can give
them detailed (rather than general) problem reports.

 Most of my application is running at about the same speed as MSSQL Server
 (unfortunately that box is twice the speed, but as many have pointed out it
 could be an issue with the 4-proc Dell). I spent considerable time with
 Dell and could see my drives are delivering 40 MB per sec.

FWIW, on a v40z I get 180 MB/s. So the disk array on your Dell is less than
ideal ... basically, what you have is a more expensive box, not a faster
one :-(

 Things I still have to improve are my settings in the config; I currently
 have merge joins and seq scans disabled.

Yeah, I'm also finding that our estimator underestimates the real cost of
merge joins on some systems. Basically we need a sort-cost variable,
because I've found up to a 2x difference in sort cost depending on
architecture.

 I am going to have to use flattened history files for reporting (I saw a
 huge difference here: the view for the audit cube took 10 minutes in
 EXPLAIN ANALYZE and the flattened file took under one second).
 I understand both of these practices are undesirable, but I am at a point
 where I have to get it live and these are items I could not resolve.

Flattening data for reporting is completely reasonable; I do it all the time.

 I believe that was totally IIS not postgres, but I am curious as to if
 using postgres odbc will put more stress on the IIS side then MSSQL did.

Actually, I think the problem may be ODBC. Our ODBC driver is not the best
and is currently being re-built from scratch. Is using npgsql, a much
higher-performance driver (for .NET), out of the question? According to one
company, npgsql performs better than the drivers supplied by Microsoft.

 I did have a question if any folks are using two servers one for reporting
 and one for data entry what system should be the beefier?

Depends on the relative number of users. This is often a good approach,
because the requirements for DW reporting and OLTP are completely different.
Basically:
OLTP: many slower processors, a disk array set up for fast writes, moderate
shared memory, low work_mem.
DW: few fast processors, a disk array set up for fast reads, high shared
memory and work_mem.
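
To make that concrete, here is a rough sketch of how the two profiles might
look in an 8.0-era postgresql.conf. The parameter names are real, but every
value below is an illustrative guess for the boxes described in this thread,
not a tuned recommendation:

```ini
# OLTP box (many writers, short transactions) -- illustrative values only
shared_buffers = 10000          # moderate shared memory (~80 MB at 8 KB pages)
work_mem = 4096                 # low per-sort memory (KB); many concurrent sorts
checkpoint_segments = 16        # absorb write bursts between checkpoints

# DW / reporting box (few big readers) -- illustrative values only
shared_buffers = 40000          # high shared memory for repeated scans
work_mem = 262144               # large per-sort memory (KB) for big aggregates
effective_cache_size = 100000   # tell the planner the OS cache is large (pages)
random_page_cost = 3            # nudge the planner toward index scans
```

The point is the asymmetry, not the numbers: the writer box spends its memory
on many small concurrent operations, the reader box on a few huge sorts and
scans.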

If reporting is at least 1/4 of your workload, I'd suggest spinning that off
to the 2nd machine before putting one client on that machine. That way you
can also use the 2nd machine as a failover back-up.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] Final decision

2005-04-27 Thread Dave Page
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 Josh Berkus
 Sent: 27 April 2005 17:14
 To: Joel Fradkin
 Cc: PostgreSQL Perform
 Subject: Re: [PERFORM] Final decision
 
 Actually, I think the problem may be ODBC. Our ODBC driver is not the
 best and is currently being re-built from scratch.

It is? No-one told the developers...

Regards, Dave

[and yes, I know Joshua said Command Prompt are rewriting /their/
driver]



Re: [PERFORM] Final decision

2005-04-27 Thread Josh Berkus
Dave,

  Actually, I think the problem may be ODBC. Our ODBC driver is not the
  best and is currently being re-built from scratch.

 It is? No-one told the developers...

 Regards, Dave

 [and yes, I know Joshua said Command Prompt are rewriting /their/
 driver]

OK. Well, let's put it this way: the v3 and v3.5 drivers will not be based
on the current driver, unless you suddenly have a bunch of free time.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] Final decision

2005-04-27 Thread John A Meinel
Joel Fradkin wrote:
I spent a great deal of time over the past week looking seriously at
Postgres and MySQL.
Objectively I am not seeing that much of an improvement in speed with
MySQL, and we have a huge investment in Postgres.
So I am planning on sticking with Postgres for our production database
(going live this weekend).
Glad to hear it. Good luck.

...
Things I still have to improve are my settings in the config; I have it
set to no merge joins and no seq scans.
Just realize, you probably *don't* want to set that in postgresql.conf.
You just want to issue a SET enable_seqscan TO off before issuing one
of the queries that is mis-planned.
Because there are lots of times when a merge join or seq scan is actually
faster than the alternatives. And since I don't think you tested every
query you are going to run, you probably want to let the planner handle
the ones it gets right. (Usually it does quite a good job.)
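As a sketch, the per-query approach looks like this (the table and WHERE
clause are hypothetical stand-ins; SET LOCAL, available since 7.3, scopes the
change to the current transaction):

```sql
-- Disable seq scans only for the one query the planner gets wrong,
-- and only for this transaction, not server-wide.
BEGIN;
SET LOCAL enable_seqscan TO off;
SELECT * FROM audit_view WHERE clientnum = 'ACME';  -- hypothetical query
COMMIT;
-- After COMMIT the setting reverts, so well-planned queries are unaffected.
```

A plain SET (without LOCAL) on the connection works too, but then you must
remember to SET it back before reusing the connection for other queries.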
Also, I second the notion of getting a confidentiality contract. There
have been several times where someone had a pathological case, and by
sending the data to someone (Tom Lane), they were able to track down and
fix the problem.
I am going to have to use flattened history files for reporting (I saw a
huge difference here: the view for the audit cube took 10 minutes in
EXPLAIN ANALYZE and the flattened file took under one second).

I understand both of these practices are not desirable, but I am at a
place where I have to get it live and these are items I could not resolve.
Nothing wrong with a properly updated flattened table. You just need to
be careful to keep it consistent with the rest of the data (update
triggers, lazy materialization, etc.).
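A minimal sketch of the update-trigger approach; every table and column name
here is invented for illustration, and a real audit cube would flatten more
than one column:

```sql
-- Hypothetical flattened reporting table kept in sync by a trigger.
CREATE TABLE audit_flat (
    audit_id   integer PRIMARY KEY,
    clientnum  text,
    total      numeric
);

CREATE OR REPLACE FUNCTION sync_audit_flat() RETURNS trigger AS '
BEGIN
    -- Delete-then-insert keeps the row current on both INSERT and UPDATE.
    DELETE FROM audit_flat WHERE audit_id = NEW.audit_id;
    INSERT INTO audit_flat (audit_id, clientnum, total)
    VALUES (NEW.audit_id, NEW.clientnum, NEW.amount);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER audit_flat_sync
    AFTER INSERT OR UPDATE ON audit          -- hypothetical source table
    FOR EACH ROW EXECUTE PROCEDURE sync_audit_flat();
```

The lazy-materialization alternative is simply to rebuild audit_flat on a
schedule (e.g. nightly TRUNCATE plus INSERT ... SELECT from the view), trading
freshness for zero write overhead on the busy tables.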
I may try some more time with Commandprompt.com, or seek other
professional help.

In stress testing I found Postgres was holding up very well (but my IIS
servers could not handle much of a load to really push the server).
I have a few desktops acting as IIS servers at the moment and if I
pushed past 50 concurrent users it pretty much blew the server up.
On inserts that number was like 7 concurrent users and updates was also
like 7 users.

I believe that was totally IIS, not Postgres, but I am curious whether
using the Postgres ODBC driver will put more stress on the IIS side
than MSSQL did.
What do you mean by blew up? I assume you have IIS on a different
machine than the database. Are you saying that the database slowed down
dramatically, or that the machine crashed, or just that the web
interface became unresponsive?
I did have a question: for any folks using two servers, one for
reporting and one for data entry, which system should be the beefier?
I have a 2-proc machine I will be using and I can either put Sears off by
themselves on this machine or split up functionality and have one for
reporting and one for inserts and updates; so I am not sure which machine
would be best for which spot (reminder: the more robust is a 4-proc with
8 GB and the 2-proc has 4 GB, both Dells).
It probably depends on what queries are being done, and what kind of
response times you need. Usually the update machine needs the stronger
hardware, so that it can keep up with the writing.
But if you can wait longer to update data than to query data,
obviously the opposite is true. It all depends on load, and that is
pretty much application-defined.

Thank you for any ideas in this arena.

Joel Fradkin
John
=:-




Re: [PERFORM] Final decision

2005-04-27 Thread Joel Fradkin
BTW, your performance troubleshooting will continue to be hampered if you
can't share actual queries and data structure. I strongly suggest that you
make a confidentiality contract with a support provider so that you can
give them detailed (rather than general) problem reports.

I am glad to hear your perspective; maybe my rollout is not as off base as I
thought.

FYI, it is not that I cannot share specifics (I have posted a few table
structures and views here and on pgsql); I just cannot back up the entire
database and ship it off to a consultant.

What I had suggested to Commandprompt was to use remote connectivity for
him to have access to our server directly. That way I can learn by
watching what types of tests he does, and it allows him to do tests with our
data set.

Once I am in production that will not be something I want tests done on, so
it may have to wait until we get a development box with a similar deployment
(at the moment development is on an XP machine and production will be on
Linux; the 4-proc is Linux and will be our production box).

Thank you for letting me know what I can hope to see in the way of disk
throughput on the next hardware procurement; I may email you off-list to get
the specific brands etc. with which you found that kind of throughput.






Re: [PERFORM] Final decision

2005-04-27 Thread Joel Fradkin
Just realize, you probably *don't* want to set that in postgresql.conf.
You just want to issue a SET enable_seqscan TO off before issuing one
of the queries that is mis-planned.

I believe all the tested queries (90-some-odd views) saw an improvement.
I will however take the time to verify this and take your suggestion, as I
can certainly put the appropriate settings in each as opposed to using the
config option. Thanks for the good advice (I believe Josh from
Commandprompt.com also suggested this approach, and I, lazy self that I am,
somehow blurred the concept).


Also, I second the notion of getting a confidentiality contract. There
have been several times where someone had a pathological case, and by
sending the data to someone (Tom Lane), they were able to track down and
fix the problem.

Excellent point. Our data is confidential, but I should write something to
allow me to ship the concept without the confidential data, so in the future
I can just send a backup and not have it break our agreements, but allow
minds greater than my own to see and feel my issues.
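One possible sketch of that "ship the concept without the data" idea: work on
a scratch copy of each sensitive table and overwrite the identifying columns
before dumping. The table and column names below are hypothetical:

```sql
-- Work on a scratch copy, never the production table.
CREATE TABLE clients_scrubbed AS SELECT * FROM clients;

-- Replace identifying text with deterministic but meaningless tokens,
-- so row counts, data volumes, and query plans stay realistic.
UPDATE clients_scrubbed
   SET name    = 'client-' || id,
       address = md5(address);   -- md5() is built in since 7.4
```

A backup of the scrubbed copies preserves the shape of the problem (schema,
cardinalities, skew) without exposing the underlying records.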


What do you mean by blew up?
IIS testing was being done with an old 2300 and an Optiplex; both machines
reached 100% CPU utilization, and the test suite (ASP code written in house
by one of our programmers) was not returning memory correctly, so it ran out
of memory and died. Prior to the death I did see CPU utilization on the
4-proc Linux box running Postgres fluctuate, at times hitting 100%, but the
server seemed very stable. I did fix the memory usage of the suite and was
able to see 50 concurrent users with fairly high RPS, especially on select
testing; the insert and update tests seemed to fall apart (many 404 errors,
etc.).


I assume you have IIS on a different
machine than the database. Are you saying that the database slowed down
dramatically, or that the machine crashed, or just that the web
interface became unresponsive?

Just the web interface.

It probably depends on what queries are being done, and what kind of
times you need. Usually the update machine needs the stronger hardware,
so that it can do the writing.

But it depends if you can wait longer to update data than to query data,
obviously the opposite is true. It all depends on load, and that is
pretty much application defined.

I am guessing our app is like 75% data entry and 25% reporting, but the
reporting is taking the toll SQL-wise.

This was from my insert test with 15 users.
Test type: Dynamic 
 Simultaneous browser connections: 15 
 Warm up time (secs): 0 
 Test duration: 00:00:03:13 
 Test iterations: 200 
 Detailed test results generated: Yes
Response Codes 

 Response Code: 403 - The server understood the request, but is refusing to
fulfill it. 
  Count: 15 
  Percent (%): 0.29 
 
 
 Response Code: 302 - The requested resource resides temporarily under a
different URI (Uniform Resource Identifier). 
  Count: 200 
  Percent (%): 3.85 
 
 
 Response Code: 200 - The request completed successfully. 
  Count: 4,980 
  Percent (%): 95.86 
 
My select test with 25 users had this
Properties 

 Test type: Dynamic 
 Simultaneous browser connections: 25 
 Warm up time (secs): 0 
 Test duration: 00:00:06:05 
 Test iterations: 200 
 Detailed test results generated: Yes 
  
Summary 

 Total number of requests: 187 
 Total number of connections: 200 
  
 Average requests per second: 0.51 
 Average time to first byte (msecs): 30,707.42 
 Average time to last byte (msecs): 30,707.42 
 Average time to last byte per iteration (msecs): 28,711.44 
  
 Number of unique requests made in test: 1 
 Number of unique response codes: 1 
  
Errors Counts 

 HTTP: 0 
 DNS: 0 
 Socket: 26 
  
Additional Network Statistics 

 Average bandwidth (bytes/sec): 392.08 
  
 Number of bytes sent (bytes): 64,328 
 Number of bytes received (bytes): 78,780 
  
 Average rate of sent bytes (bytes/sec): 176.24 
 Average rate of received bytes (bytes/sec): 215.84 
  
 Number of connection errors: 0 
 Number of send errors: 13 
 Number of receive errors: 13 
 Number of timeout errors: 0 
  
Response Codes 

 Response Code: 200 - The request completed successfully. 
  Count: 187 
  Percent (%): 100.00 
 



Joel




Re: [PERFORM] Final decision

2005-04-27 Thread Steve Poe
Joshua,
This article was from July 2002; is there an update to this information?
When will a new ODBC driver be available for testing? Is there a release
of the ODBC driver with better performance than 7.0.3.0200 for a 7.4.x
database?

Steve Poe
We have mentioned it on the list.
http://www.linuxdevcenter.com/pub/a/linux/2002/07/16/drake.html
Regards, Dave
[and yes, I know Joshua said Command Prompt are rewriting /their/
driver]

:) No we are rewriting a complete OSS driver.
Sincerely,
Joshua D. Drake
Command Prompt, Inc.



Re: [PERFORM] Final decision

2005-04-27 Thread John A Meinel
Joel Fradkin wrote:
...
I am guessing our app is like 75% data entry and 25% reporting, but the
reporting is taking the toll SQL wise.
This was from my insert test with 15 users.
Test type: Dynamic
 Simultaneous browser connections: 15
 Warm up time (secs): 0
 Test duration: 00:00:03:13
 Test iterations: 200
 Detailed test results generated: Yes
Response Codes
 Response Code: 403 - The server understood the request, but is refusing to
fulfill it.
  Count: 15
  Percent (%): 0.29
 Response Code: 302 - The requested resource resides temporarily under a
different URI (Uniform Resource Identifier).
  Count: 200
  Percent (%): 3.85
 Response Code: 200 - The request completed successfully.
  Count: 4,980
  Percent (%): 95.86
My select test with 25 users had this
Properties
 Test type: Dynamic
 Simultaneous browser connections: 25
 Warm up time (secs): 0
 Test duration: 00:00:06:05
 Test iterations: 200
 Detailed test results generated: Yes
Summary
 Total number of requests: 187
 Total number of connections: 200
 Average requests per second: 0.51
 Average time to first byte (msecs): 30,707.42
 Average time to last byte (msecs): 30,707.42
 Average time to last byte per iteration (msecs): 28,711.44
 Number of unique requests made in test: 1
 Number of unique response codes: 1
Well, having a bandwidth of 392 bytes/sec seems *really* low. I mean, that
is very old modem territory (roughly 3,100 bits/sec).
I'm wondering if you are doing a lot of aggregating in the web server,
and whether you couldn't move some of that into the database by using
plpgsql functions.
That would take some of the load off of your IIS servers, and possibly
improve your overall bandwidth.
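For instance, instead of pulling thousands of detail rows into ASP and
summing them there, a small plpgsql function can return just the finished
number. All names here are hypothetical:

```sql
-- Returns one already-aggregated value instead of raw detail rows,
-- cutting both network traffic and web-server CPU.
CREATE OR REPLACE FUNCTION audit_summary(text) RETURNS numeric AS '
DECLARE
    result numeric;
BEGIN
    SELECT INTO result sum(amount)
      FROM audit                  -- hypothetical detail table
     WHERE clientnum = $1;
    RETURN result;
END;
' LANGUAGE plpgsql;
```

From the web tier this becomes one round trip and one small row back to IIS:
SELECT audit_summary('ACME');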
But I do agree, it looks like the select side is where you are hurting.
If I understand the numbers correctly, you can do 5k inserts in 3min,
but are struggling to do 200 selects in 6min.
John
=:-





Re: [PERFORM] Final decision

2005-04-27 Thread Dave Page
 

 -Original Message-
 From: Joshua D. Drake [mailto:[EMAIL PROTECTED] 
 Sent: 27 April 2005 17:46
 To: Dave Page
 Cc: Josh Berkus; Joel Fradkin; PostgreSQL Perform
 Subject: Re: [PERFORM] Final decision
 
  It is? No-one told the developers...
 
 We have mentioned it on the list.

Err, yes. But that's not quite the same as core telling us the current
driver is being replaced.

Regards, Dave.



Re: [PERFORM] Final decision

2005-04-27 Thread Josh Berkus
Dave, folks,

 Err, yes. But that's not quite the same as core telling us the current
 driver is being replaced.

Sorry, I spoke off the cuff. I also was unaware that work on the current
driver had resumed. We Core people are not omniscient, believe it or don't.

Mind you, having 2 different teams working on two different ODBC drivers is a 
problem for another list ...

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



ODBC driver overpopulation (was Re: [PERFORM] Final decision)

2005-04-27 Thread Alvaro Herrera
On Wed, Apr 27, 2005 at 08:09:27PM -0700, Josh Berkus wrote:

 Mind you, having 2 different teams working on two different ODBC drivers is a 
 problem for another list ...

Only two?  I thought another commercial entity was also working on their
own ODBC driver, so there may be three of them.

-- 
Alvaro Herrera ([EMAIL PROTECTED])
Always assume the user will do much worse than the stupidest thing
you can imagine. (Julien PUYDT)



Re: ODBC driver overpopulation (was Re: [PERFORM] Final decision)

2005-04-27 Thread Joshua D. Drake
Mind you, having 2 different teams working on two different ODBC drivers is a 
problem for another list ...

Only two?  I thought another commercial entity was also working on their
own ODBC driver, so there may be three of them.
Well, I only know of one company actually working on ODBC actively, and
that is Command Prompt. If there are others, I would like to hear about
it, because I would rather work with someone than against them.

Sincerely,
Joshua D. Drake



--
Your PostgreSQL solutions provider, Command Prompt, Inc.
24x7 support - 1.800.492.2240, programming, and consulting
Home of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit
http://www.commandprompt.com / http://www.postgresql.org


Re: [PERFORM] Final decision

2005-04-27 Thread Joshua D. Drake
Dave Page wrote:
 


-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED] 
Sent: 27 April 2005 17:46
To: Dave Page
Cc: Josh Berkus; Joel Fradkin; PostgreSQL Perform
Subject: Re: [PERFORM] Final decision


It is? No-one told the developers...
We have mentioned it on the list.

Err, yes. But that's not quite the same as core telling us the current
driver is being replaced.
Well, I don't think anyone knew that the current driver was still being
maintained.

Sincerely,
Joshua D. Drake

Regards, Dave.

--
Your PostgreSQL solutions provider, Command Prompt, Inc.
24x7 support - 1.800.492.2240, programming, and consulting
Home of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit
http://www.commandprompt.com / http://www.postgresql.org


Re: ODBC driver overpopulation (was Re: [PERFORM] Final decision)

2005-04-27 Thread Bruce Momjian
Joshua D. Drake wrote:
 Mind you, having 2 different teams working on two different ODBC drivers
 is a problem for another list ...

  Only two?  I thought another commercial entity was also working on their
  own ODBC driver, so there may be three of them.
 
 Well I only know of one company actually working on ODBC actively and 
 that is Command Prompt, If there are others I would like to hear about 
 it because I would rather work with someone than against them.

Well, you should talk to Pervasive, because they have a team working on
improving the existing driver. I am sure they would want to work
together too.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [PERFORM] Final decision

2005-04-27 Thread Bruce Momjian
Joshua D. Drake wrote:
 Dave Page wrote:
   
  
  
 -Original Message-
 From: Joshua D. Drake [mailto:[EMAIL PROTECTED] 
 Sent: 27 April 2005 17:46
 To: Dave Page
 Cc: Josh Berkus; Joel Fradkin; PostgreSQL Perform
 Subject: Re: [PERFORM] Final decision
 
 
 It is? No-one told the developers...
 
 We have mentioned it on the list.
  
  
  Err, yes. But that's not quite the same as core telling us the current
  driver is being replaced.
 
 Well I don't think anyone knew that the current driver is still being 
 maintained?

We have been looking for someone to take over ODBC, and Pervasive agreed
to do it, but there wasn't a big announcement about it. I have
discussed this with them.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
