Re: [GENERAL] more anti-postgresql FUD

2006-10-14 Thread Thomas Kellerer

[EMAIL PROTECTED] wrote on 11.10.2006 16:54:

Do a simple test to see my point:

1. create table test (id int4, aaa int4, primary key (id));
2. insert into test values (0,1);
3. Execute "update test set aaa=1 where id=0;" in an endless loop

I just did the test on PostgreSQL 7.4.12 and MySQL 5.0.22 (MyISAM,
sorry, had no InnoDB configured). Ubuntu 6.06, AMD64, 2GB, default
database settings.

MySQL performs very well, approximately 15000-2 updates per second
with no degradation of performance.



Just a follow up: if you base your choice of DBMS on this test, you have to 
choose HSQLDB. I just ran this test on my WinXP AMD64 box, and it performed 
consistently at ~4 updates per second.


Thomas



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Resetting Serial Column Sequence Number

2006-10-14 Thread Shane Ambler

Andreas Kretschmer wrote:

Andreas Kretschmer <[EMAIL PROTECTED]> wrote:


Adam <[EMAIL PROTECTED]> wrote:


I just emptied my table and I want all my new inserts to start with a
'location_id' of '1'. The table is named "locations" with a SERIAL column
"location_id"


If you want to start at 1 why not
SELECT setval('locations_location_id_seq', 1);


I tried the SQL below to reset the sequence but it's not working.  What am I
doing wrong?

SELECT setval('locations_location_id_seq',
    (SELECT max(location_id) + 1 FROM locations));

The locations table is empty? Yes: select max(location_id) from an
empty table returns NULL, not 0, and NULL + 1 -> NULL. You can't set a
sequence to NULL; that makes no sense.


Btw, to avoid this, you can use coalesce():
SELECT setval('locations_location_id_seq',
    (SELECT coalesce(max(location_id), 0) + 1 FROM locations));

coalesce returns the first non-null value, either the result from max()
or the second parameter, 0.
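Worth noting: setval() also takes an optional third boolean argument, is_called. With the two-argument form above, the next nextval() returns the stored value plus one. To make an emptied table start again at exactly 1, you can pass false (a sketch of the same reset):

```sql
-- with is_called = false, the next nextval() returns 1 itself, not 2
SELECT setval('locations_location_id_seq', 1, false);
```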


Andreas




Re: [GENERAL] time type strange behaviour

2006-10-14 Thread Michael Fuhr
On Tue, Oct 10, 2006 at 05:16:14PM +0100, garry saddington wrote:
> On Tue, 2006-10-10 at 11:58 -0400, Tom Lane wrote:
> > garry saddington <[EMAIL PROTECTED]> writes:
> > > This definition does not insert time zone. If I use without time zone
> > > then the time zone is inserted.
> > > If I input a time like: 01:05 AM then on select I get something like:
> > > 1970/01/01 01:05:00:00.
> > 
> > Better look again --- if you get that output, the column is most
> > certainly not a time column --- it must be timestamp.  Perhaps you
> > got confused about which table is which?
> > 
> No, there is no confusion, I have dropped it, re-made it and tested it
> again, same result. I know it sounds odd but this is what happens!

You said you were using psycopg and Zope, which implies that you're
using Python.  What versions of those things are you using?  Might
the unexpected results be coming from one of those components?  If
you connect to the database with psql and issue a query from there,
what do you get?

-- 
Michael Fuhr



Re: [GENERAL] encoding problem

2006-10-14 Thread jef peeraer

jef peeraer wrote:

I never thought I would be blocked by an encoding problem :-(
My database is in LATIN1; I have entries like this in a table called 
gemeenten:

    Column     |   Type   | Modifiers
---------------+----------+-----------
 id            | integer  | serial
 gemeente      | text     | not null
 postcode      | smallint | not null
 provincies_id | integer  |

This data is copied from a dump of that table:

9780	Quévy	7040	7
9781	Quévy-le-Grand	7040	7
9782	Quévy-le-Petit	7040	7

So, the accents are there. But in my web page, which is set to 
ISO-8859-1, I don't get the accents.

The web pages are built with XUL, where I set the charset to ISO-8859-1,
but communication with the server is through XMLHttpRequest.
Do I have to specify the charset in the communication between 
server and client as well? Or where else could it go wrong?


jef peeraer




Thanks for the help. The problem was that I use the json_encode 
function from PHP. This function requires the string to be encoded 
to be in UTF-8. So basically, my database stays in LATIN1 and I set 
the client encoding to UTF-8, as well as all web pages.



jef peeraer



Re: [GENERAL] problem with using O_DIRECT

2006-10-14 Thread Andrew Sullivan
On Thu, Oct 12, 2006 at 12:13:02AM -0400, Ye Qin wrote:
> psql: could not connect to server: Connection refused
>   Is the server running locally and accepting
>   connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
> 
> Any advice?

Presumably, the server doesn't start?  What does your logfile say? 
(Not every method works on every platform, and you might have run
into an incompatible combination.)

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
Unfortunately reformatting the Internet is a little more painful 
than reformatting your hard drive when it gets out of whack.
--Scott Morris



Re: [GENERAL] List of supported 64bit OS

2006-10-14 Thread Andrew Sullivan
On Thu, Oct 12, 2006 at 01:46:53PM +0300, Stanislaw Tristan wrote:
> Thanks for the answer!
> We'll order a 2 x Opteron 2xxx series (Dual Core) and the memory will be 
> 16-32 GB. This server is only for the DB; no other services such as hosting, 
> mail and so on.

You didn't discuss your disc configuration or interface.  That's
going to be an important component.  Think about it carefully.

> I'm not a system integrator but the project manager, and I'm interested in:
> - whether there is a free OS that ideally supports the hardware above in 
> conjunction with PostgreSQL 8.x and will use the dual cores and big memory

I would look at FreeBSD, myself.  But look _very carefully_ at the
hardware it actually supports before buying anything.  People get
burned all the time by buying hardware that is not well supported,
and then having to go through contortions to make the operating
system work well.  Choose the OS first, then buy the hardware to
suit, not the other way 'round.  (You do this, effectively, when you
buy Sun or IBM RS/6000 gear, don't forget.)

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
Users never remark, "Wow, this software may be buggy and hard 
to use, but at least there is a lot of code underneath."
--Damien Katz



Re: [GENERAL] more anti-postgresql FUD

2006-10-14 Thread Chris Mair

> > it would be cool if you could at least:
> >
> >  - bundle your updates into transactions of, say, 1000 updates at a time
> >i.e. wrap a BEGIN; END; around a 1000 of them
> >  - run postgresql with fsync off, since you're using MyISAM
> >  - run PostgreSQL at least 8, since you're running MySQL 5
> >
> > I'd bet MySQL would still be faster on such an artificial, single user
> > test, but not *that much* faster.
> 
> I'm quite sure the results will be very close to what I get before even
> if I do all of the above. My post was not about MySQL vs PostgreSQL. It
> was about very fast performance degradation of PostgreSQL in case of
> large number of updates provided vacuum is not used.
> 
> > If you don't want to install 8.0, could you maybe at least do the first
> > two items (shouldn't be a lot of work)...?
> >
> > Which client are you using? Just mysql/psql or some API?
> 
> C API


Ok,
I finally did some tests on this, using the above 3 suggestions.
I ran 8.1.4 on Linux with fsync=off and did 300k updates bundled
into 100 updates per transaction (100 turned out to be a sweeter
spot than 1000). 
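In SQL terms, the bundling described here is simply this (a sketch; the repeated statements are elided, and the periodic VACUUM applies only to the vacuumed runs):

```sql
-- one batch: 100 updates share a single COMMIT
BEGIN;
UPDATE test SET aaa = 1 WHERE id = 0;
UPDATE test SET aaa = 1 WHERE id = 0;
-- ... 98 more identical UPDATEs ...
COMMIT;

-- every 10k-50k updates, reclaim the dead row versions MVCC leaves behind
VACUUM test;
```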

The box was comparable to yours, I think: 1xOpteron 2.2GHz, 2xSATA
RAID0 (yes, I know...), 1GB RAM

Details and results are here:
http://www.1006.org/misc/20061014_pgupdates_bench/

The interesting part is the graph that shows updates / sec real time
vs. running total of updates:
http://www.1006.org/misc/20061014_pgupdates_bench/results.png

Let's start with the red crosses: that's without vacuum, and yes,
you're right: PG's performance degrades.

But it doesn't degrade quite as badly as you mentioned
(you mentioned 1600 u/s for the first 10k, then 200 u/s for
the first 100k). At 100k I'm still at 2700 u/s, down
from ~8000 u/s. Only after ~140k updates does my line drop
more quickly. I obviously bump into some limit given by my
setup there. The thing is totally CPU-bound, by the way.

Ok.
So, this is a very bizarre load for PostgreSQL, especially
without any vacuum.

Let's add some vacuum: every 50k (green x) or even every 10k
(blue *) updates - which is a very reasonable thing to do for
this type of load.

With vacuum, I get stable performance all the way up to
300k updates. Rates are 4700 u/s or even 8500 u/s.

Note the curves show no drops when vacuum is active.

Out of curiosity I did a run having autovacuum visit the db
every 30 seconds (purple squares): even without any special
effort to find good vacuum spots, I can get a good 3300
updates/sec all the way up to 300k updates!

I'd dare to say that if you just ran ZABBIX on 8.1 with
autovacuum on with a shortish interval (30 sec?) you'd
get rid of your performance problems. Time to update
the documentation after all? ;)

Bye,
Chris.



-- 

Chris Mair
http://www.1006.org




Re: [GENERAL] more anti-postgresql FUD

2006-10-14 Thread Joshua D. Drake

> I'd dare to say that if you just ran ZABBIX on 8.1 with
> autovacuum on with a shortish interval (30 sec?) you'd
> get rid of your performance problems. Time to update
> the documentation after all? ;)

I would be curious to see what would happen if you added the vacuum_*
parameters to the equation (such as the delay).

Joshua D. Drake


> 
> Bye,
> Chris.
> 
> 
> 


-- 

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




[GENERAL] BEGIN WORK READ ONLY;

2006-10-14 Thread Joshua D. Drake
Hello,

Command Prompt has been teaching a lot of classes lately, and one of the
questions that I received recently was:

What is the use case for a READ ONLY transaction?

I haven't been able to come up with a good answer. Anyone got a use case
for this feature? I know the community didn't implement it for giggles.

Sincerely,

Joshua D. Drake


-- 

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




Re: [GENERAL] BEGIN WORK READ ONLY;

2006-10-14 Thread Tom Lane
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> What is the use case for a READ ONLY transaction?

> I haven't been able to come up with a good answer. Anyone got a use case
> for this feature? I know the community didn't implement it for giggles.

No, we implemented it because it's required by the SQL spec.

I'm not too sure about use-cases either.  It certainly seems pretty
useless from a protection standpoint.  It might be that some other
DBMSes like to know about READ ONLY so they can optimize transaction
processing, but Postgres doesn't care.  (We do the equivalent optimization
by noting at COMMIT time whether you actually made any DB changes,
which we can determine basically for free by seeing if the xact emitted
any WAL records ...)

regards, tom lane



Re: [GENERAL] BEGIN WORK READ ONLY;

2006-10-14 Thread Joshua D. Drake
Tom Lane wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>> What is the use case for a READ ONLY transaction?
> 
>> I haven't been able to come up with a good answer. Anyone got a use case
>> for this feature? I know the community didn't implement it for giggles.
> 
> No, we implemented it because it's required by the SQL spec.
> 
> I'm not too sure about use-cases either.  It certainly seems pretty
> useless from a protection standpoint.  It might be that some other
> DBMSes like to know about READ ONLY so they can optimize transaction
> processing, but Postgres doesn't care.  (We do the equivalent optimization
> by noting at COMMIT time whether you actually made any DB changes,
> which we can determine basically for free by seeing if the xact emitted
> any WAL records ...)

Thank you, that's what I needed.

Sincerely,

Joshua D. Drake




-- 

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




Re: [GENERAL] BEGIN WORK READ ONLY;

2006-10-14 Thread David Fetter
On Sat, Oct 14, 2006 at 11:35:12AM -0700, Joshua D. Drake wrote:
> Hello,
> 
> Command Prompt has been teaching a lot of classes lately, and one of the
> questions that I received recently was:
> 
> What is the use case for a READ ONLY transaction?

It would be handy for things like pgpool and Continuent, which could
reliably distinguish up front between a transaction that can write
and one that can safely be sliced up and dispatched to read-only
databases.

Cheers,
D
> I haven't been able to come up with a good answer. Anyone got a use case
> for this feature? I know the community didn't implement it for giggles.
> 
> Sincerely,
> 
> Joshua D. Drake
> 
> 
> -- 
> 
>=== The PostgreSQL Company: Command Prompt, Inc. ===
> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
>Providing the most comprehensive  PostgreSQL solutions since 1997
>  http://www.commandprompt.com/
> 
> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
> 
> 

-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!



Re: [GENERAL] BEGIN WORK READ ONLY;

2006-10-14 Thread Michael Fuhr
On Sat, Oct 14, 2006 at 11:35:12AM -0700, Joshua D. Drake wrote:
> What is the use case for a READ ONLY transaction?

I use read-only transactions as a safety net for interactive sessions
when I want to avoid modifying anything accidentally.  Here's an
example:

CREATE ROLE foo LOGIN PASSWORD 'password';
CREATE ROLE foo_ro LOGIN PASSWORD 'password';
ALTER ROLE foo_ro SET default_transaction_read_only TO on;
GRANT foo TO foo_ro;

The foo_ro role now has the same privileges as foo but it can't
modify anything because its transactions are read-only by default.
Using GRANT/REVOKE would be more secure (foo_ro could set
default_transaction_read_only to off and then do anything that foo
could do) but you'd have to remember to set the correct privileges
on every object the read-only role might need to examine; this would
be easy to automate but you'd still have to remember to do it.  When
the intent is to prevent "oops" mistakes rather than to provide
real security, using read-only transactions can be convenient.

-- 
Michael Fuhr



Re: [GENERAL] BEGIN WORK READ ONLY;

2006-10-14 Thread Tom Lane
David Fetter <[EMAIL PROTECTED]> writes:
> On Sat, Oct 14, 2006 at 11:35:12AM -0700, Joshua D. Drake wrote:
>> What is the use case for a READ ONLY transaction?

> It would be handy for things like pgpool and Continuent, which could
> reliably distinguish up front the difference between a transaction
> that can write and one that can safely be sliced up and dispatched to
> read-only databases.

I don't think that works for PG's interpretation of READ ONLY, though.
IIRC we let a "read only" transaction create and modify temp tables.

regards, tom lane



Re: [GENERAL] BEGIN WORK READ ONLY;

2006-10-14 Thread Michael Fuhr
On Sat, Oct 14, 2006 at 03:42:48PM -0400, Tom Lane wrote:
> David Fetter <[EMAIL PROTECTED]> writes:
> > On Sat, Oct 14, 2006 at 11:35:12AM -0700, Joshua D. Drake wrote:
> >> What is the use case for a READ ONLY transaction?
> 
> > It would be handy for things like pgpool and Continuent, which could
> > reliably distinguish up front the difference between a transaction
> > that can write and one that can safely be sliced up and dispatched to
> > read-only databases.
> 
> I don't think that works for PG's interpretation of READ ONLY, though.
> IIRC we let a "read only" transaction create and modify temp tables.

Am I missing something then?

test=> BEGIN READ ONLY;
BEGIN
test=> CREATE TEMPORARY TABLE foo (x integer);
ERROR:  transaction is read-only

-- 
Michael Fuhr



Re: [GENERAL] BEGIN WORK READ ONLY;

2006-10-14 Thread A. Kretschmer
On Sat, 14.10.2006 at 13:35:21 -0600, Michael Fuhr wrote:
> On Sat, Oct 14, 2006 at 11:35:12AM -0700, Joshua D. Drake wrote:
> > What is the use case for a READ ONLY transaction?
> 
> I use read-only transactions as a safety net for interactive sessions
> when I want to avoid modifying anything accidentally.  Here's an
> example:
> 
> CREATE ROLE foo LOGIN PASSWORD 'password';
> CREATE ROLE foo_ro LOGIN PASSWORD 'password';
> ALTER ROLE foo_ro SET default_transaction_read_only TO on;
> GRANT foo TO foo_ro;

Great.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] BEGIN WORK READ ONLY;

2006-10-14 Thread Ron Johnson

On 10/14/06 14:35, Michael Fuhr wrote:
> On Sat, Oct 14, 2006 at 11:35:12AM -0700, Joshua D. Drake wrote:
>> What is the use case for a READ ONLY transaction?
> 
> I use read-only transactions as a safety net for interactive sessions
> when I want to avoid modifying anything accidentally.  Here's an
> example:
> 
> CREATE ROLE foo LOGIN PASSWORD 'password';
> CREATE ROLE foo_ro LOGIN PASSWORD 'password';
> ALTER ROLE foo_ro SET default_transaction_read_only TO on;
> GRANT foo TO foo_ro;
> 
> The foo_ro role now has the same privileges as foo but it can't
> modify anything because its transactions are read-only by default.

Another benefit (with ISOLATION LEVEL SERIALIZABLE) is that you are
guaranteed to have unchanging source data, no matter how many ways
you aggregate, join and WHERE it.

As Tom notes, other RDBMSs do pre-query optimizations.  SET TRANS
READ ONLY tells the engine that these statements won't have to take
out concurrent write locks, and can thus take a different, faster
code path.
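Spelled out, the combination described here is a single statement (a sketch; the table name is illustrative):

```sql
BEGIN WORK ISOLATION LEVEL SERIALIZABLE READ ONLY;
-- both queries see the same snapshot, and neither can modify anything
SELECT count(*) FROM orders;
SELECT sum(amount) FROM orders;
COMMIT;
```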

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.



Re: [GENERAL] more anti-postgresql FUD

2006-10-14 Thread Merlin Moncure

On 10/14/06, Chris Mair <[EMAIL PROTECTED]> wrote:

Ok,
I did some tests at last on this using the above 3 suggestions.
I ran on 8.1.4 on Linux with fsync=off and did 300k updates bundled
into 100 updates / 1 transaction (100 turned out to be a sweeter
spot than 1000).


My PostgreSQL 'magic number' is 150.  150 is the point at which I stop
getting meaningful improvements in two important cases: simple
update/insert transactions like yours, and also where the performance
improvement on fetching multiple rows levels off.  In other words,
selecting 100k rows in 150-record chunks is marginally slower than
selecting the whole thing at once (and single-record selects are of
course much slower).  In code, the number 150 is called 'merlin's
constant' :)

However, it's a pretty safe bet ZABBIX is not doing updates grouped in
transactions like that.  On the other hand, the updates are not so
localized either.


Details and results are here:
http://www.1006.org/misc/20061014_pgupdates_bench/


wow, great chart!

8500 updates/sec is really spectacular.  It proves that MVCC bloat on
small tables is controllable.  On large tables, the bloat is usually
not as much of a concern and can actually be a good thing.  You also
proved, in my opinion conclusively, that running vacuum in high-update
environments is a good thing.


With vacuum, I get a stable performance all the way up to
300k updates. Rates are 4700 u/s or even 8500 u/s.


It looks like with careful tuning 10k could be cracked.  Also, while
MVCC creates certain scenarios that have to be worked around, you
also get its advantages.  Updates lock only the record being written
to, and only against other writers.  MySQL's MyISAM does full-table
locking... which is going to perform better on a 2P server with 100
users?  4P and 1000 users?

merlin



Re: [GENERAL] Postgresql 6.13

2006-10-14 Thread Reece Hart

On Fri, 2006-10-13 at 14:54 +0100, Richard Huxton wrote:

Greg's mostly right, but the Vic-20 port is available as a type-in 
program listing in issue 3 of "Your Computer". 


Don't bother. I already typed it in and have it on cassette tape here somewhere. I'll send it via parcel post with two 13-cent stamps as soon as I finish watching War Games on my Betamax.

Is Carter still president?

-Reece

-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0

Re: [GENERAL] Postgresql 6.13

2006-10-14 Thread Ron Johnson

On 10/14/06 18:04, Reece Hart wrote:
> On Fri, 2006-10-13 at 14:54 +0100, Richard Huxton wrote:
>> Greg's mostly right, but the Vic-20 port is available as a type-in
>> program listing in issue 3 of "Your Computer".
> 
> Don't bother. I already typed it in and have it on cassette tape here
> somewhere. I'll send it via parcel post with two 13 cent stamps as soon
> as I finish watching War Games on my betamax.
> 
> Is Carter still president?

No, Reagan is.  WG was released in 1983 and the Vic-20 in 1981.

- --
Ron Johnson, Jr.
Jefferson LA  USA




Re: [GENERAL] A query planner that learns

2006-10-14 Thread Guy Rouillier
Jim C. Nasby wrote:
 
> Just recording the query plan and actual vs estimated rowcounts would
> be a good start, though. And useful to DBA's, provided you had some
> means to query against it.  

If the DBMS stores the incoming query (or some parsed version of it) and
identifies frequency of usage over time, it can then spend spare cycles
more deeply analyzing frequently used queries.  Many DB servers have
usage patterns just like end-user workstations (not all, I realize):
they are busy for predictable periods of the day and fairly idle at
other times.  To provide acceptable response times, DBMSs don't have the
luxury of analyzing numerous query paths when queries are received.  But
they could do this offline so that the next time the same query arrives,
they can use a better-optimized plan.

Storing the query itself is probably a better idea than storing the
plan, since the plan may change over time.

-- 
Guy Rouillier




[GENERAL] a math function with error handling

2006-10-14 Thread Arthur Molina

Hi there,

  I am new to this group and I've been working a lot on this function, 
but it hasn't worked well enough until now. Here is the thing: I use some 
SQL strings that calculate one column with another and I get the results. 
For example, I have two tables that have many float, text or numeric columns.


Like Tab01 with V01 as varchar(10), V02 as numeric, V03 as float; and 
Tab02 with F01 as varchar(20), F02 as float, F03 as float, F04 as 
numeric. In the varchar columns, sometimes there will be a number and 
sometimes not.


There will be times when I need to do a sum or division or some 
other math, just like this:

SELECT Tab01.V01 + Tab02.F03 FROM Tab01, Tab02
SELECT Tab01.V02 / Tab02.F01 FROM Tab01, Tab02

Well, here is the thing: I get errors doing this because the columns 
can contain anything. So I need a function that tests this, like:

evalthis(Tab01.V02 / Tab02.F01);

If I send something like 25 / '5' I get the result, 5,
but if I send 25 / '0' I want null (division_by_zero),
and when I send 25 / 'textcrap' I want null too.

Is there any chance to get something like this?
Sorry for my bad English.

regards,
Arthur








Re: [GENERAL] a math function with error handling

2006-10-14 Thread Chris Mair

> If I send something like 25 / '5' I get the result, 5
> but if I send 25 / '0' I get null (division_by_zero)
> or when I send 25 / 'textcrap' I get null too

You might want to look at pl/pgsql exception handlers:
http://www.postgresql.org/docs/8.1/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

Trapping division by zero is given as an example there...
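For instance, a function along the lines the original poster asked for could be sketched like this (safe_div is a made-up name, and this assumes PL/pgSQL is installed in the database):

```sql
CREATE OR REPLACE FUNCTION safe_div(a numeric, b text) RETURNS numeric AS $$
BEGIN
    -- cast the text operand and divide; trap the two failure modes
    RETURN a / b::numeric;
EXCEPTION
    WHEN division_by_zero THEN
        RETURN NULL;               -- e.g. 25 / '0'
    WHEN invalid_text_representation THEN
        RETURN NULL;               -- e.g. 25 / 'textcrap'
END;
$$ LANGUAGE plpgsql;
```

With that in place, safe_div(25, '5') should return 5 while the other two cases return NULL, matching the behaviour asked for above.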


Bye, Chris.


-- 

Chris Mair
http://www.1006.org




Re: [GENERAL] A query planner that learns

2006-10-14 Thread Ron Johnson

On 10/13/06 10:47, John D. Burger wrote:
> Erik Jones wrote:
> 
[snip]
> But with both approaches, the planner is just using the static
> statistics gathered by ANALYZE to estimate the cost of each candidate
> plan, and these statistics are based on sampling your data - they may be
> wrong, or at least misleading.  (In particular, the statistic for total
> number of unique values is frequently =way= off, per a recent thread
> here.  I have been reading about this, idly thinking about how to
> improve the estimate.)

What about an ANALYZE FULL, reading every record in the
table and every node in every index, storing in pg_statistic (or
some new, similar table) such items as the AVG and STD of the number
of records in each page, b-tree depth, keys per node, and records
per key and per segment?  Maybe even "average distance between pages
in the tablespace".

This would let the optimizer know things like "the value of the
field which is the first segment of an index (and which is the only part
of the index in the WHERE clause) describes 75% of the rows in the
table, and the records are all packed in tightly in the pages, and
the pages are close together", so the optimizer could decide "a
table scan would be much more efficient".

In some ways, this would be similar in functionality to the existing
histogram created by ANALYZE, but would provide a slightly different
picture.

- --
Ron Johnson, Jr.
Jefferson LA  USA

