Re: [SQL] psql encoding problem

2005-03-02 Thread Ragnar Hafstað
On Mon, 2005-02-28 at 20:48 +, T E Schmitz wrote:

> INSERT INTO item (name,retail_price) VALUES ('Cheese Soufflé',7.95);
> 
> (I presume you see the accented character in *Soufflé*)
> 
> psql comes back with "invalid byte sequence for encoding "UNICODE": 0xe9"
> If I do this via DbVisualizer, the record is inserted fine.

might be the client encoding.

test=# CREATE DATABASE unitest with ENCODING='UNICODE';
CREATE DATABASE
test=# \connect unitest
You are now connected to database "unitest".
unitest=# create table a (n text);
CREATE TABLE
unitest=# insert into a values ('Cheese Soufflé is cool');
ERROR:  invalid byte sequence for encoding "UNICODE": 0xe92069
unitest=# set client_encoding='LATIN1';
SET
unitest=# insert into a values ('Cheese Soufflé is cool');
INSERT 7533697 1
unitest=# select * from a;
           n
------------------------
 Cheese Soufflé is cool
(1 row)

unitest=#
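you can also double-check what both ends think they are speaking:

unitest=# show server_encoding;
 server_encoding
-----------------
 UNICODE
(1 row)

unitest=# show client_encoding;
 client_encoding
-----------------
 LATIN1
(1 row)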


gnari





Re: [SQL] Postgres performance

2005-03-02 Thread Michael Fuhr
On Tue, Mar 01, 2005 at 02:52:31AM -0800, mauro wrote:

> select_range_key2 89224
> select_range_prefix   89054
> update_of_primary_key_many_keys   20495

These look suspect, especially the first two, and they account for
over 78% of the total.  Do you know what the table definitions and
queries look like?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/



Re: [SQL] Postgres performance

2005-03-02 Thread Richard Huxton
mauro wrote:
>> Not always, AFAICT. The four most common reasons why PG tests slower
>> than Mysql are:
>> 1. You haven't configured or have misconfigured PostgreSQL.
>> 2. You are testing a MySQL-tuned application (lots of small, simple
>> queries, no views, no subselects etc)
>> 3. You are only testing one connection (try 10,20,50 simultaneous users
>> and see who wins then).
>> 4. You are not testing the transaction-safe storage systems in MySQL
>>
>> See if you can answer some of the questions above and I'm sure we'll be
>> able to get your database server running smoothly.
>
> Hi,
>   I've used the benchmark
> http://genecensus.org/yeast/cluster/database/mysql/sql-bench/ (ok,
> it's without bench on views, sub-select, transaction,..)

Other points about this benchmark:
1. It's a good 5 years old. Nothing wrong in that, but the fact that it
hasn't been updated in that time doesn't bode well. If nothing else, it
is designed to test PostgreSQL version 6.x.
2. As you say, it doesn't actually use any of the features of a modern
database.
3. Although vacuum is mentioned, it's not obvious to me that it's being
run. Also, I don't see any analyze run on the populated tables.
4. It wasn't immediately obvious to me how the tests were dealing with
varying amounts of data being cached on different runs.
5. I couldn't see how many simultaneous connections were being tested.
6. In fact, I couldn't find a clear rationale about what these tests
were supposed to simulate - what sort of environment.

> The database files are in stripe (RAID 0) on two SATA hd (transfer
> rate 50Mb), the filesystem is reiserfs (3.6 format - with checks off),
> no optimisation on I/O scheduler,

Largely irrelevant for these particular tests.

> DBMS are in default configuration (so I don't benefit anybody).

If you're running with the default configuration, you'll want to compare
the two on a PII-200 with 32MB of RAM - that's roughly what the default
settings in PG's config assume. PG isn't designed to be run with the
default configuration settings; it's designed to *start* almost anywhere.

> Total time:
> Pgsql: 7h 20'
> MySQL: 14' (!!)
> This is the configuration where Postgres 8.0 and MySQL are running:
> [snipped long list of hardware details/run results]
> What do you think about this?

I think you didn't read my last message. I'll quote the relevant points
again:

- begin quote -
Not always, AFAICT. The four most common reasons why PG tests slower
than Mysql are:
1. You haven't configured or have misconfigured PostgreSQL.
2. You are testing a MySQL-tuned application (lots of small, simple
queries, no views, no subselects etc)
3. You are only testing one connection (try 10,20,50 simultaneous users
and see who wins then).
4. You are not testing the transaction-safe storage systems in MySQL
- end quote -

How many of these points apply to the benchmark you used? (Hint - it
looks like all 4 to me.)

Of course, if, on your production systems, you:
1. Don't intend to configure your database system
2. Don't want views/triggers/subselects/partial indexes/functional
indexes/...etc
3. Only have one simultaneous user
4. Don't use transactions and don't mind an inconsistent database.

In that case, these test results are relevant, and the right choice is
clearly MySQL.

If you want to come up with some genuinely useful test figures, you'll
want to:
1. Spend a reasonable amount of time learning how to set up and configure
each system.
2. Understand your users' requirements, and design the tests accordingly.
3. Actually use the database to do what it is designed for.
4. Make sure you aren't using SQL structures that favour one database
system over another (or have one schema for each database being tested).
5. Account for other factors in your tests - how much time is spent in
Java/PHP etc. vs. time in the database?
(For testing multiple simultaneous connections - point 3 above - see the
pgbench sketch at the end of this message.)

Best of luck Mauro - realistic testing is not a simple process, and you've
got a lot of work ahead of you. Don't forget there's the performance
list that can help with specific problems too.
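PS: for testing concurrent connections, contrib/pgbench (shipped in the
PostgreSQL source distribution) is a quick way to get a feel for it. A
minimal sketch - database name invented, and flags from memory, so check
the README in contrib/pgbench:

  createdb bench
  pgbench -i -s 10 bench       # populate the test tables at scale factor 10
  pgbench -c 1 -t 1000 bench   # baseline: a single client
  pgbench -c 50 -t 200 bench   # 50 simultaneous clients; compare the tps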

--
  Richard Huxton
  Archonet Ltd


Re: [SQL] Postgres performance

2005-03-02 Thread Richard Huxton
Mauro Bertoli wrote:
> Hi, thanks a lot! You are right, but I did read your
> message ;)
> Yes:
> 1- I misconfigured PostgreSQL (I thought it came already
> configured in the released version - Fedora Core 3 64bit).
> 2- The bench is, clearly, after your explanations, a
> MySQL-tuned application test.
> 3- I think the bench tests only one connection; in a fast
> reading I didn't see any threading in the bench code to
> simulate user requests.
> 4- I didn't test transaction-safe storage (transactions
> aren't used explicitly in my application).

Well, do you care whether your data is consistent or not? If not, you
don't need transactions.

> I understand it isn't simple.. I use the dbms in a data
> analysis environment and most of the time is spent in
> queries (php is 0.1%) with many sub-selects, and maybe
> there are, at the same time, from 1 to 1000 users
> inserting/updating data. I test the dbms with my data
> analysis framework simulating a super-extensive
> request.

You'll find inserts/updates with lots of users is where PostgreSQL works
well compared to other systems.

> Do you know where I can find a tutorial on configuring
> Postgres' hardware-dependent internal values?

There's some useful stuff here:
 http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
and also here:
 http://www.powerpostgresql.com/PerfList
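To give a flavour of what those pages cover, the settings they spend most
time on live in postgresql.conf. The numbers below are invented for a
dedicated box with around 1GB of RAM - starting points to read up on, not
recommendations (and note shared_buffers needs a server restart):

  # postgresql.conf (8.0) - illustrative values only
  shared_buffers = 10000           # 8kB pages, so roughly 80MB
  effective_cache_size = 65536     # 8kB pages the OS is expected to cache
  work_mem = 8192                  # kB per sort/hash operation
  maintenance_work_mem = 65536     # kB for VACUUM, CREATE INDEX etc.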
--
  Richard Huxton
  Archonet Ltd


[SQL] Multiples schemas

2005-03-02 Thread lucas
Hi,
Is there a way to construct a multi-level schema in my database?
 Something like:
  mysystem.finances.money.tables
  mysystem.finances.money.functions
  mysystem.finances.credits.tables
  mysystem.finances.credits.functions
  mysystem.amount.products..
  

Or can I use another database like:
 createdb DB1
 createdb DB2
 psql DB1
 select * from DB2.schema.table

Or do I need to construct the tables in the same database and the same schema, like:
 mysystemdb.amount.products
 mysystemdb.amount.vendors
 mysystemdb.amount.clients
 mysystemdb.finances.money
 

Could I create a schema inside another schema??? Or is there only one level
for schema structs?

Thanks for all



Re: [SQL] Postgres performance

2005-03-02 Thread Richard Huxton
Mauro Bertoli wrote:
> Hi Richard, thank you for your appreciated answers!!!
> - start quote -
>  Well, do you care whether your data is consistent or
>  not? If not, you don't need transactions.
> - end quote -
> I don't require transactions because the queries aren't
> complex and update a single tuple (in SELECTs
> transactions are useless).

You're still using transactions. One per UPDATE/INSERT granted, but
you're using them anyway, even if you don't issue BEGIN...COMMIT.
Otherwise you wouldn't know your update was written to disk.

> - start quote -
>  You'll find inserts/updates with lots of users is
>  where PostgreSQL works well compared to other systems.
> - end quote -
> Uhhmm.. this is interesting...
>
> - tutorial links -
> Thx, now I'll read it and test a hardware-tuned
> configuration... I read that it is not very simple... :O
> Another question:
> - why doesn't postgres come already configured
> (hardware tuning)? isn't it possible to configure it during
> installation?

Configured for what? PG can't tell how many disks you have, or how
you've set them up. It also can't tell whether this machine is a
dedicated DB server, or sharing space with a webserver. Or part of a
virtual OS installation whose hardware is shared by 100 other virtual
OSes.
Occasionally, people do propose an auto-tuning utility at setup, but you
really need at least a dozen different options to do it properly. Oh,
and then you'll need to do it for 30 versions of Unix on a variety of
hardware, and Windows too.

> - why does postgres use a new process for every connection?
> (mySQL, if I'm not wrong, uses threads... I think it's
> faster)

Using a separate process for each means a problem in one process only
affects that process. Threads aren't necessarily much faster (except on
Windows), and in any case that only affects connection time.

> - why is connection time slower (compared to mySQL)?

See above, but it's still not bad. If connection time is a limiting
factor for you, then you have a very strange or very large workload. You
might want to explore pgpool for connection pooling if you have a large
website, though, to avoid having a lot of idle connections.

> - why does postgres require analyze? (mySQL, if I'm not
> wrong, doesn't require it)

PG's planner is statistics-based. That is, it can tell that if you have
a list of English surnames then "Smith" occurs more often than "zgwasq".
In some cases reading the whole table might be quicker than going to the
index many times.
ANALYZE scans (a percentage of) the whole table to see if these
statistics have changed. This is different from a VACUUM, which recovers
space where rows have been deleted or updated.
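For example (table name invented):

  ANALYZE mytable;          -- refresh planner statistics for one table
  VACUUM ANALYZE mytable;   -- also reclaim space from dead rows

You'd typically schedule these, or look at contrib's pg_autovacuum daemon.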

--
  Richard Huxton
  Archonet Ltd


Re: [SQL] psql encoding problem

2005-03-02 Thread Joel Fradkin
I had the same issue using ODBC, but with .NET you can use encoding =
unicode, so it depends on what you are using to make the connection.
Since I am using ODBC with my ASP I had to switch from Unicode to SQL_ASCII
for my database.
In effect that tells the database you don't know about the encoding, and
makes some of the routines like upper() not work properly on extended chars.

Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
Powered by Wazagua
Providing you with the latest Web-based technology & advanced tools.
© 2004 WAZAGUA, Inc. All rights reserved.
 This email message is for the use of the intended recipient(s) and may
contain confidential and privileged information.  Any unauthorized review,
use, disclosure or distribution is prohibited.  If you are not the intended
recipient, please contact the sender by reply email and delete and destroy
all copies of the original message, including attachments.
 

 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of T E Schmitz
Sent: Monday, February 28, 2005 3:48 PM
To: pgsql-sql@postgresql.org
Subject: [SQL] psql encoding problem

Hello,

I am trying to insert the following record:

INSERT INTO item (name,retail_price) VALUES ('Cheese Soufflé',7.95);

(I presume you see the accented character in *Soufflé*)

psql comes back with "invalid byte sequence for encoding "UNICODE": 0xe9"
If I do this via DbVisualizer, the record is inserted fine.

Is there any way around this problem?

-- 


Regards/Gruß,

Tarlika Elisabeth Schmitz





Re: [SQL] Multiples schemas

2005-03-02 Thread Peter Eisentraut
On Wednesday, 2 March 2005 12:30, [EMAIL PROTECTED] wrote:
> Could I create a schema inside another schema??? Or is there only one
> level for schema structs?

No and yes.
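That is: schemas cannot be nested, so there is only one level below the
database. But you can fake a hierarchy with a naming convention plus
search_path - a sketch, with invented names:

CREATE SCHEMA finances_money;
CREATE SCHEMA finances_credits;

CREATE TABLE finances_money.accounts (id serial PRIMARY KEY, balance numeric);

-- unqualified names now resolve in the "nested" schema first
SET search_path TO finances_money, public;
SELECT * FROM accounts;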

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [SQL] Postgres performance

2005-03-02 Thread Scott Marlowe
On Tue, 2005-03-01 at 04:52, mauro wrote:
> > Not always, AFAICT. The four most common reasons why PG tests slower 
> > than Mysql are:
> > 1. You haven't configured or have misconfigured PostgreSQL.
> > 2. You are testing a MySQL-tuned application (lots of small, simple 
> > queries, no views, no subselects etc)
> > 3. You are only testing one connection (try 10,20,50 simultaneous users 
> > and see who wins then).
> > 4. You are not testing the transaction-safe storage systems in MySQL
> > 
> > See if you can answer some of the questions above and I'm sure we'll be 
> > able to get your database server running smoothly.
> Hi, 
>   I've used the benchmark
> http://genecensus.org/yeast/cluster/database/mysql/sql-bench/ (ok,
> it's without bench on views, sub-select, transaction,..)
> The database files are in stripe (RAID 0) on two SATA hd (transfer
> rate 50Mb), the filesystem is reiserfs (3.6 format - with checks off),
> no optimisation on I/O scheduler, DBMS are in default configuration (so
> I don't benefit anybody). Total time:
> Pgsql: 7h 20'
> MySQL: 14' (!!)

Why is a dragster faster than a freight train?  Because it only has to
run for 5 or 6 seconds and you expect the engine to implode on every
fourth run.  The freight train, on the other hand, has to run day after
day and deliver its cargo without damage.

The reason MySQL can be so fast is that it's not really a database in
the classical sense.  It does floating point maths on exact numeric
types.  It does almost no error checking, and if you lose power during
updates all your data could quite easily be gone.

While it's a fine storage system for certain content management tasks,
it's not reliable enough for things like accounting or where the answers
have to be right.

The reason PostgreSQL is slower is because it (and by extension the team
behind it) cares about your data. 

Here's a list of the things MySQL will gladly do wrong:

http://sql-info.de/mysql/gotchas.html

I wouldn't trust such a database for any kind of mission-critical system
that handles important data, and anyone who does is gambling against the
house odds.



Re: [SQL] table constraints

2005-03-02 Thread Casey T. Deccio
On Tue, 2005-03-01 at 09:56 -0700, Greg Patnude wrote:
> foreign keys and primary keys have to be defined as unique at the
> table /
> column level if you want to implement a check constraint -- your
> contrived
> example doesn't stand up all that well -- If you want to use
> constraints -- 
> then your database schema should conform to traditional RDBMS theory
> and
> data normalization by having  primary and foreign keys instead of just
> trying to create arbitrary contraints on a non-normalized schema and
> implement constraints as a user-defined function...
> 

You are correct.  I did not take the time to write in these constraints
in the contrived example because I was rapidly trying to put together
something that would simply illustrate the problem.  They were/are in
place in my actual schema.  Thanks,

Casey




[SQL] definative way to place secs from epoc into timestamp column

2005-03-02 Thread Bret Hughes
I give up.  I have STFW and STFM and still do not feel like I have a
good way to update/insert into a timestamp w/o TZ column with an integer
representing seconds from epoch.  I am adding functionality to a php app
that does a fair amount of work with time and am currently using
abstime($timestamp). $timestamp is a php timestamp.  I found this on the
web somewhere, since I find no reference to a function abstime. There are
significant hits relating to abstime in the docs, but they all seem to
refer to an internal data type for low-res time data.

Help.  I would really like to do this in the most efficient way possible
but would like it be not likely to break in future releases.

Bret





Re: [SQL] psql encoding problem

2005-03-02 Thread T E Schmitz
Hello Ragnar,
Ragnar Hafstað wrote:
> On Mon, 2005-02-28 at 20:48 +, T E Schmitz wrote:
>> INSERT INTO item (name,retail_price) VALUES ('Cheese Soufflé',7.95);
>> psql comes back with "invalid byte sequence for encoding "UNICODE": 0xe9"
>
> might be the client encoding.
>
> set client_encoding='LATIN1';
>
> gnari

sorted my problem - many thanks!

--
Regards/Gruß,
Tarlika


Re: [SQL] definative way to place secs from epoc into timestamp column

2005-03-02 Thread Tom Lane
Bret Hughes <[EMAIL PROTECTED]> writes:
> I give up.  I have STFW and STFM and still do not feel like I have a
> good way to update/insert into a timestamp w/o TZ column with an integer
> representing seconds from epoch.

The docs say:

  Here is how you can convert an epoch value back to a time stamp: 

 SELECT TIMESTAMP WITH TIME ZONE 'epoch' + 982384720 * INTERVAL '1 second';

If you want a timestamp w/o time zone then the right thing depends on
what you think the reference epoch is.  If you do

 SELECT TIMESTAMP WITHOUT TIME ZONE 'epoch' + 982384720 * INTERVAL '1 second';

then what you will get is the correct equivalent of the Unix timestamp
in GMT time.  If you do the first calculation and then cast to timestamp
w/o time zone then what you will get is a correct equivalent in your
TimeZone setting.  For instance

regression=# show timezone;
 TimeZone
----------
 EST5EDT
(1 row)

regression=# SELECT TIMESTAMP WITH TIME ZONE 'epoch' + 982384720 * INTERVAL '1 second';
        ?column?
------------------------
 2001-02-16 23:38:40-05
(1 row)

regression=# SELECT TIMESTAMP WITHOUT TIME ZONE 'epoch' + 982384720 * INTERVAL '1 second';
      ?column?
---------------------
 2001-02-17 04:38:40
(1 row)

regression=# SELECT (TIMESTAMP WITH TIME ZONE 'epoch' + 982384720 * INTERVAL '1 second')::timestamp without time zone;
      timestamp
---------------------
 2001-02-16 23:38:40
(1 row)

regards, tom lane



Re: [SQL] definative way to place secs from epoc into timestamp

2005-03-02 Thread Bret Hughes
On Wed, 2005-03-02 at 13:52, Tom Lane wrote:
> Bret Hughes <[EMAIL PROTECTED]> writes:
> > I give up.  I have STFW and STFM and still do not feel like I have a
> > good way to update/insert into a timestamp w/o TZ column with an integer
> > representing seconds from epoch.
> 
> The docs say:
> 
>   Here is how you can convert an epoch value back to a time stamp: 
> 
>  SELECT TIMESTAMP WITH TIME ZONE 'epoch' + 982384720 * INTERVAL '1 
> second';
> 
> If you want a timestamp w/o time zone then the right thing depends on
> what you think the reference epoch is.  If you do
> 
>  SELECT TIMESTAMP WITHOUT TIME ZONE 'epoch' + 982384720 * INTERVAL '1 
> second';
> 
> then what you will get is the correct equivalent of the Unix timestamp
> in GMT time.  If you do the first calculation and then cast to timestamp
> w/o time zone then what you will get is a correct equivalent in your
> TimeZone setting.  For instance




Thanks for the feedback, Tom. I saw that, but I could not believe that I
have to jump through all those hoops on an insert or update:

update mytable set lasttime = (SELECT TIMESTAMP WITH TIME ZONE 'epoch' +
982384720 * INTERVAL '1 second');

Is this what you are saying I need to do?

Also, what is happening with abstime(982384720)? This works as expected
(by me).  Is this a bad idea?  I can't believe that all the complicated
string manipulation stuff is there, but I have to run a subselect to
insert a numeric value that I suspect is close to how it is stored
anyway.  Of course that last part is a WAG.


Bret





Re: [SQL] definative way to place secs from epoc into timestamp

2005-03-02 Thread Tom Lane
Bret Hughes <[EMAIL PROTECTED]> writes:
> Thanks for the feed back tom  I say that but I could not believe that I
> have to jump through all those hoops on an insert or update

> update mytable set (lasttime =(SELECT TIMESTAMP WITH TIME ZONE 'epoch' +
> 982384720 * INTERVAL '1 second') )

> is this what you are saying I need to do?

You can make a function that embodies whichever semantics you want.

> also, what is happening with abstime(982384720)?  this works as expected
> (by me ).  Is this a bad idea?

It won't be there forever.

regards, tom lane



Re: [SQL] Postgres performance

2005-03-02 Thread PFC

> The reason PostgreSQL is slower is because it (and by extension the team
> behind it) cares about your data.

	Sure, postgres is (a bit, but not much) slower for a simple query like
SELECT * FROM one table WHERE id=some number, and postgres is a lot slower
for UPDATEs (although I heard that it's faster than MySQL InnoDB)... but
try a query with a join on a few tables, even a simple one, and postgres
will outperform mysql, sometimes by 2x, sometimes 1000 times. I had a case
with a join between 4 tables, two of them having 50k records; I was only
pulling 6 records... mysql spent half a second and postgres 0.5 ms... hell!



Re: [SQL] Postgres performance

2005-03-02 Thread Scott Marlowe
On Wed, 2005-03-02 at 15:45, PFC wrote:
> > The reason PostgreSQL is slower is because it (and by extension the team
> > behind it) cares about your data.
> 
>   Sure, postgres is (a bit but not much) slower for a simple query like  
> SELECT * FROM one table WHERE id=some number, and postgres is a lot slower  
> for UPDATES (although I heard that it's faster than MySQL InnoDB)... but  
> try a query with a join on few tables, even a simple one, and postgres  
> will outperform mysql, sometimes by 2x, sometimes 1000 times. I had a case  
> with a join between 4 tables, two of them having 50k records ; I was only  
> pulling 6 records... mysql spent half a second and postgres 0.5 ms... hell  
> !

Or better yet, a query like this:

select a.lt ,
b.perspective as YYY_pers,
b.averageresponsetime as YYY_aver,
b.lowestresponsetime as YYY_lowe,
b.highestresponsetime as YYY_high,
b.totalcount as YYY_tota,
c.perspective as XXX_pers,
c.averageresponsetime as XXX_aver,
c.lowestresponsetime as XXX_lowe,
c.highestresponsetime as XXX_high,
c.totalcount as XXX_tota,
d.perspective as BBB_pers,
d.averageresponsetime as BBB_aver,
d.lowestresponsetime as BBB_lowe,
d.highestresponsetime as BBB_high,
d.totalcount as BBB_tota,
e.perspective as AAA_pers,
e.averageresponsetime as AAA_aver,
e.lowestresponsetime as AAA_lowe,
e.highestresponsetime as AAA_high,
e.totalcount as AAA_tota,
f.perspective as CCC_pers,
f.averageresponsetime as CCC_aver,
f.lowestresponsetime as CCC_lowe,
f.highestresponsetime as CCC_high,
f.totalcount as CCC_tota,
g.perspective as ZZZ_pers,
g.averageresponsetime as ZZZ_aver,
g.lowestresponsetime as ZZZ_lowe,
g.highestresponsetime as ZZZ_high,
g.totalcount as ZZZ_tota 
from (
select distinct date_trunc('minutes', lastflushtime) as lt from 
businessrequestsummary
where lastflushtime between '2005-01-01 00:00:00' and '2005-03-31 00:00:00'
) as a 
left join
(   select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='YYY'
)as b on (a.lt=b.lt) 
left join
(   select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='XXX'
)as c on (a.lt=c.lt) 
left join
(   select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='BBB'
)as d on (a.lt=d.lt) 
left join
(   select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='AAA'
)as e on (a.lt=e.lt) 
left join
(   select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='CCC'
)as f on (a.lt=f.lt) 
left join
(   select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='ZZZ'
)as g on (a.lt=g.lt) 

Basically, the more complex the query gets, the worse MySQL generally does,
since its query planner is a fairly simple rules-based one.



Re: [SQL] Postgres performance

2005-03-02 Thread Andrew Sullivan
This sort of discussion should really go onto -performance, but I'm
at pains to stomp out a common misperception.

On Wed, Mar 02, 2005 at 10:45:38PM +0100, PFC wrote:
> 
>   Sure, postgres is (a bit but not much) slower for a simple
>   query like SELECT * FROM one table WHERE id=some number, and

This is true _only if_ nobody else is writing at the same time you
are.  That is, for single-user or read-only databases, MySQL appears
to have a really significant advantage when using the standard MyISAM
table type.  The problem with that table type is that it requires the
_whole table_ be locked during write operations.  

In any case, for any sort of real database work, nobody sane would
use anything except the InnoDB table type.  That's a more reasonable
fruit-comparison than MySQL using MyISAM.  In the latter case, you
may as well compare PostgreSQL to flat file writing. 

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
A certain description of men are for getting out of debt, yet are
against all taxes for raising money to pay it off.
--Alexander Hamilton



Re: [SQL] Postgres performance

2005-03-02 Thread Ian Barwick
On Wed, 02 Mar 2005 09:00:14 -0600, Scott Marlowe
<[EMAIL PROTECTED]> wrote:
(...)
> The reason PostgreSQL is slower is because it (and by extension the team
> behind it) cares about your data.
> 
> Here's a list of the things MySQL will gladly do wrong:
> 
> http://sql-info.de/mysql/gotchas.html

Leaving MySQL or other databases out of the equation for the moment:
the above site is a purely dynamic website (i.e. no static files, not
even images) driven by a PostgreSQL backend. There are several issues
with the underlying application (a DIY hack job ;-) which mean it
isn't as fast as it could be. However, although I haven't been able to
run comparisons with other RDBMSs, I find it hard to imagine where
significant speed gains could be made at the database end, especially
if stored procedures are not available (any raw speed increase could
well be eaten up by the need to implement several critical functions
in the application).

Recently I added a function (for another site on the same server,
running from the same database) to generate a blog-style calendar for
a given month to show on which days an article was written. Despite
involving a three-table join with a longish list of join conditions it
proved to be jaw-droppingly fast (a few milliseconds, fast enough not
to have to cache the result anywhere, which is what I was originally
expecting to have to do) and as an added bonus returns the weekday
expressed as an integer, so all the application has to do is a little
formatting to produce the end result.
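Roughly along these lines, though the real schema differs (all names
invented):

SELECT extract(day FROM a.posted_on)::int AS day_of_month,
       extract(dow FROM a.posted_on)::int AS weekday
  FROM articles a
  JOIN article_topics at ON at.article_id = a.id
  JOIN topics t          ON t.id = at.topic_id
 WHERE t.site = 'blog'
   AND a.posted_on >= date '2005-02-01'
   AND a.posted_on <  date '2005-03-01';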

I've also run a PostgreSQL-based multi-thousand page site (with a
simpler structure) without any complaints speedwise; and when one of
the disks died very nastily during an intensive write operation
(software raid on dodgy hardware) I was even able to rsync the
database files direct from the surviving disk over to a backup server
and restart PostgreSQL there straight off, without any evident
problems. (Disclaimer: it was an emergency, and the data was
non-critical; nevertheless I never found any evidence of corruption).

Ian Barwick



[SQL] Building a database from a flat file

2005-03-02 Thread Casey T. Deccio
A database I am currently using is built and updated periodically from a
flat csv file (The situation is rather unfortunate, but that's all I
have right now).  The schema I use is more complex than the flat file,
so I follow a process to populate the tables with the data from the
file.  First I slurp the whole file into one temporary table, whose
columns correspond to the columns in the file.  Then I DELETE all the
existing rows from the tables in the schema and perform a series of
queries on that table to INSERT and UPDATE rows in the tables that are
in the schema.  Then I DELETE the data from the temporary table.  I do
it this way, rather than trying to synchronize it, because of the
inconsistencies and redundancies in the flat file.
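In outline, one rebuild cycle looks like this (table and column names
simplified - the real schema involves many more queries):

BEGIN;
CREATE TEMP TABLE staging (name text, category text, price text);
COPY staging FROM '/path/to/export.csv' WITH CSV;  -- COPY ... CSV is new in 8.0

DELETE FROM items;        -- children first, then parents
DELETE FROM categories;

INSERT INTO categories (name)
    SELECT DISTINCT category FROM staging;
INSERT INTO items (category_id, name, price)
    SELECT c.id, s.name, s.price::numeric
      FROM staging s JOIN categories c ON c.name = s.category;
COMMIT;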

There is more than one problem with this, but the largest is that I
would like to perform this whole database rebuild within one
transaction, so other processes that need to access the database can do
so without noticing the disturbance.  However, performing this set of
events (besides populating the temporary table) within a single
transaction takes a long time--over an hour in some cases.

What are some suggestions for improving performance when replacing one
set of data in a schema with another?

Casey




Re: [SQL] definative way to place secs from epoc into timestamp

2005-03-02 Thread Bret Hughes
On Wed, 2005-03-02 at 14:26, Tom Lane wrote:
> Bret Hughes <[EMAIL PROTECTED]> writes:
> > Thanks for the feed back tom  I say that but I could not believe that I
> > have to jump through all those hoops on an insert or update
> 
> > update mytable set (lasttime =(SELECT TIMESTAMP WITH TIME ZONE 'epoch' +
> > 982384720 * INTERVAL '1 second') )
> 
> > is this what you are saying I need to do?
> 
> You can make a function that embodies whichever semantics you want.
> 
> > also, what is happening with abstime(982384720)?  this works as expected
> > (by me ).  Is this a bad idea?
> 
> It won't be there forever.
> 
Thanks again for the help Tom.  My solution for those intrepid archive
searchers that follow (also my first two pgsql functions :) ):

cat ts2int.sql
drop function int2ts(integer);
drop function ts2int(timestamp without time zone);

create function int2ts(integer) returns timestamp as '
SELECT (TIMESTAMP WITH TIME ZONE \'epoch\' + $1 * INTERVAL \'1 second\')::timestamp without time zone;
' language sql;

create function ts2int(timestamp without time zone) returns int as '
select extract(\'epoch\' from $1)::integer;
' language sql;

comment on function int2ts(integer) is
'convert a unix timestamp based integer to a timestamp without time zone';

comment on function ts2int(timestamp without time zone) is
'convert a timestamp without time zone to a unix timestamp based integer';

Thanks again for your patience as I try to get my head around how pg
handles this stuff.  I'm getting close, but still seem to have a block
on picturing the internals.

Bret





Re: [SQL] definative way to place secs from epoc into timestamp

2005-03-02 Thread Tom Lane
Bret Hughes <[EMAIL PROTECTED]> writes:
> create function int2ts(integer) returns timestamp  as '
> SELECT ( TIMESTAMP WITH TIME ZONE \'epoch\' + $1 * INTERVAL \'1
> second\')::timestamp without time zone;
> ' language sql;

> create function ts2int(timestamp without time zone) returns int as '
> select extract( \'epoch\' from $1)::integer;
> ' language sql;

Looks good as far as it goes.  Two thoughts:

* both functions should probably be marked STRICT STABLE
(not IMMUTABLE, because they depend on the local timezone setting).

* have you considered allowing the numeric values to be float8 instead
of integer?  There doesn't seem any reason to disallow sub-second
precision.
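For instance, with both changes applied (keeping the names from upthread;
untested):

create function int2ts(float8) returns timestamp as '
SELECT (TIMESTAMP WITH TIME ZONE \'epoch\' + $1 * INTERVAL \'1 second\')::timestamp without time zone;
' language sql strict stable;

create function ts2int(timestamp without time zone) returns float8 as '
select extract(\'epoch\' from $1);
' language sql strict stable;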

regards, tom lane



Re: [SQL] definative way to place secs from epoc into timestamp

2005-03-02 Thread Michael Glaesemann
On Mar 3, 2005, at 14:42, Bret Hughes wrote:
> also my first two pgsql functions :)
> cat ts2int.sql
FWIW, there's a patch in the queue for 8.1 that adds a to_timestamp 
function that converts between Unix epoch and timestamp with time zone.

http://momjian.postgresql.org/cgi-bin/pgpatches2
Doesn't help you now, but it'll be there in the future.
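If it goes in as proposed, usage should be as simple as this (hypothetical
until 8.1 actually ships):

SELECT to_timestamp(982384720);  -- returns timestamp with time zone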
Michael Glaesemann
grzm myrealbox com


[SQL] how to monitor the archiving process?

2005-03-02 Thread Smit
Hello,

I am using PostgreSQL 8.0.1 and want to archive the WAL files.
I had set the 'archive_command' variable in the
postgresql.conf file as

archive_command = 'cp -i %p /mnt/server/archivedir/%f'

but it's not working.
Can anybody tell me if there is anything else which
should be done that I missed?
And also, how do I monitor the archiving process?

Thanks in advance,
Smita




Re: [SQL] definative way to place secs from epoc into timestamp

2005-03-02 Thread Bret Hughes
On Thu, 2005-03-03 at 00:25, Tom Lane wrote:
> Bret Hughes <[EMAIL PROTECTED]> writes:
> > create function int2ts(integer) returns timestamp  as '
> > SELECT ( TIMESTAMP WITH TIME ZONE \'epoch\' + $1 * INTERVAL \'1
> > second\')::timestamp without time zone;
> > ' language sql;
> 
> > create function ts2int(timestamp without time zone) returns int as '
> > select extract( \'epoch\' from $1)::integer;
> > ' language sql;
> 
> Looks good as far as it goes.  Two thoughts:
> 
> * both functions should probably be marked STRICT STABLE
> (not IMMUTABLE, because they depend on the local timezone setting).
> 
> * have you considered allowing the numeric values to be float8 instead
> of integer?  There doesn't seem any reason to disallow sub-second
> precision.
> 

Well, no, I had not considered it, but I am now :)

I need to read some more on the STRICT STABLE stuff; thanks for the
tips.

Bret




Re: [SQL] definative way to place secs from epoc into timestamp

2005-03-02 Thread Bret Hughes
On Thu, 2005-03-03 at 00:41, Michael Glaesemann wrote:
> 
> On Mar 3, 2005, at 14:42, Bret Hughes wrote:
> 
> > also my first two pgsql functions :)
> >
> > cat ts2int.sql
> 
> FWIW, there's a patch in the queue for 8.1 that adds a to_timestamp 
> function that converts between Unix epoch and timestamp with time zone.
> 
> http://momjian.postgresql.org/cgi-bin/pgpatches2
> 
> Doesn't help you now, but it'll be there in the future.
> 

Cool.  Nice to know I am not the only one.

An RFE would be to have to_timestamp return a timestamp without time zone
and have a to_timestamptz do the time zone thing.  That seems more
consistent and would give me the functionality I am looking for :)

Bret

