Re: [GENERAL] tableoid

2004-11-25 Thread Richard Huxton
Jamie Deppeler wrote:
Hi
I have a bit of an issue: I'm planning on using tableoid to select the 
appropriate table, but I'm not sure that you can in an SQL select statement. 
If not, is there another approach I could be using?
Can you explain what you are trying to do?
--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] why use SCHEMA? any real-world examples?

2004-11-25 Thread Markus Wollny
Hi!

It's really just a convenience thing, to organize your data in a more intuitive 
way. We're running several online magazines, each of them with a sort of 
entity database, but each with their own articles. So we've just put the 
entity data in the public schema, whereas the magazine-specific data goes 
into its own schema. That way we can use the very same queries for all 
of our magazines' applications, simply by supplying the magazine schema as a 
variable which is set at query runtime. 
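
As a sketch of this pattern (all schema and table names here are made up for illustration, not taken from Markus's setup), the per-magazine schema can be selected at runtime with search_path, so the same query text works for every magazine:

```sql
-- Assumed layout: shared entity data in public, per-magazine data
-- in schemas such as mag_gamestar (hypothetical name).
SET search_path TO mag_gamestar, public;

-- The same query now resolves articles in the magazine's own schema
-- and entities in public, without any schema qualification:
SELECT a.title, e.name
FROM articles a
JOIN entities e ON e.id = a.entity_id;
```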

Kind regards

   Markus

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 Miles Keaton
 Sent: Thursday, November 25, 2004 6:13 AM
 To: [EMAIL PROTECTED]
 Subject: [GENERAL] why use SCHEMA? any real-world examples?
 
 I just noticed PostgreSQL's schemas for my first time.
 (http://www.postgresql.org/docs/current/static/ddl-schemas.html) 
 
 I Googled around, but couldn't find any articles describing 
 WHY or WHEN to use schemas in database design.
 
 Since the manual says HOW, could anyone here who has used schemas take
 a minute to describe to a newbie like me why you did?   What benefits
 did they offer you?   Any drawbacks?
 
 Thanks for your time.
 
 - Miles
 
 



Re: [GENERAL] HELP speed up my Postgres

2004-11-25 Thread Anatoly Okishev
SQL:
update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in (select
mobile_num from LOADED_MOBILE_NUMBERS)
You can try this:
UPDATE SUBSCRIPTIONTABLE SET ACTIVEFLAG='Y'
FROM LOADED_MOBILE_NUMBERS
WHERE LOADED_MOBILE_NUMBERS.mobile_num = SUBSCRIPTIONTABLE.mobile_num;

Anatoly.



[GENERAL] Index work around?

2004-11-25 Thread Bjørn T Johansen
Does 8.0 change the way PostgreSQL uses indexes? I.e., do I still need to use ::int8 to 
make it use indexes in 8.0 as I do in 7.x?
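
For reference, the 7.x workaround in question looks like this (table and column names are illustrative):

```sql
-- 7.x: an undecorated integer literal is typed int4, so an index on
-- an int8 column is not considered without an explicit cast:
SELECT * FROM accounts WHERE id = 12345::int8;

-- 8.0 resolves cross-type comparisons against the index, so the
-- plain form should use the index as well:
SELECT * FROM accounts WHERE id = 12345;
```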

Regards,
BTJ
--
---
Bjørn T Johansen
[EMAIL PROTECTED]   
---
Someone wrote:
I understand that if you play a Windows CD backwards you hear strange Satanic 
messages
To which someone replied:
It's even worse than that; play it forwards and it installs Windows
---


Re: [GENERAL] select into temp tables withough using EXECUTE in plpgsql

2004-11-25 Thread Richard Huxton
Edmund Kleiser wrote:
So to recap: I'm creating a temp table fine.
I'm EXECUTING an insert into the temp table fine.
Then I cannot select from the table
in the form:
SELECT INTO int1 count(distinct(value)) from TEMP1;
The following creates a table, populates it and selects from it using an 
EXECUTE. Does that help?

BEGIN;
CREATE TABLE exectest (a integer, b text, PRIMARY KEY (a));
COPY exectest FROM stdin;
1   aaa
2   bbb
3   ccc
\.
CREATE FUNCTION demo_exec_fn() RETURNS boolean AS '
DECLARE
r RECORD;
BEGIN
FOR r IN EXECUTE ''SELECT * FROM exectest''
LOOP
RAISE NOTICE ''a=%, b=%'', r.a, r.b;
END LOOP;
RETURN true;
END
' LANGUAGE plpgsql;
SELECT demo_exec_fn();
COMMIT;
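
For the original SELECT INTO problem, the same EXECUTE technique applies: plpgsql of this era caches query plans inside a function, so a statement that touches a temp table which is dropped and re-created must be built dynamically. A sketch, assuming int1 and a RECORD variable r are declared in the function's DECLARE section:

```sql
-- Inside the plpgsql function body; FOR ... IN EXECUTE bypasses the
-- cached plan that would otherwise point at the old temp table:
FOR r IN EXECUTE ''SELECT count(DISTINCT value) AS c FROM TEMP1'' LOOP
    int1 := r.c;
END LOOP;
```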
--
  Richard Huxton
  Archonet Ltd


[GENERAL] VACUUM ANALYZE question - PostgreSQL performance tests

2004-11-25 Thread juleni

Hello,

   I have a question about VACUUM ANALYZE. I have been running Postgres
performance tests for selecting a large number of records from the DB.
   First I inserted 30,000 records into one table. After this
insert I executed a VACUUM ANALYZE query.
   I have a test that retrieves, page by page (20 records per page),
all data from the table. That means I'm executing 1500 selects in a loop
to retrieve each page, and I'm also recording the time duration of some
of these selects.

   THE PROBLEM IS that when I start to retrieve records, the performance
is poor. But when I manually execute the query VACUUM ANALYZE one more
time (from a DB client, during the retrieval of pages), the performance is
much better.

   Is it also necessary to call VACUUM ANALYZE to get
better performance for select queries?


   Thank you for your answer,
   with best regards,

   Julian Legeny




   Here I attach log reports for 30,000 records:
   =

  Here it is possible to see that the time duration of executing the final query is 
approximately the same for each retrieved page, and performance is not very good.
  I applied VACUUM ANALYZE while the test was running, and from page 1000 on,
  performance is better by about 2/3.


   a.) First I insert 30,000 records into the DB.

   b.) Then I retrieve all records from the DB, page by page.

 I'm executing 2 commands:

 1. A command that retrieves the number of all items that I want to
retrieve page by page:

select count(*) from BF_USER
   where BF_USER.DOMAIN_ID=19 and BF_USER.ID NOT IN (280)

- in the log file it is possible to see the time duration of this
  select - it is the time reported as Duration for executing count
  statement.

 2. The final query, retrieving the particular records specified by
 the LIMIT clause.

select BF_USER.LOGIN_NAME,
   BF_USER.EMAIL,BF_USER.ID,
   BF_USER.MODIFICATION_DATE,
   BF_USER.SUPER_USER,
   BF_USER.GUEST_ACCESS_ENABLED
from BF_USER
where BF_USER.DOMAIN_ID=19 and
  BF_USER.ID NOT IN(280)
order by BF_USER.LOGIN_NAME asc
limit 20 offset 0


   First I execute the select COUNT(*) ... query to retrieve the number
   of all items, and once I know this number I can retrieve the
   specified records (using LIMIT for this).
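
A likely answer to the question above, hedged: PostgreSQL of this era has no autovacuum running by default, so planner statistics go stale after bulk changes, and running ANALYZE (or VACUUM ANALYZE) after the load, and periodically while the table churns, keeps row estimates current. For example:

```sql
-- After bulk-loading BF_USER, refresh planner statistics:
VACUUM ANALYZE BF_USER;

-- An index matching the filter and sort may also help this workload
-- (illustrative suggestion, not from the original post):
CREATE INDEX bf_user_domain_login ON BF_USER (DOMAIN_ID, LOGIN_NAME);
```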


-

INFO: Total duration to create 30,000 data objects was 1:46.453, which is 281 
items/sec

INFO: Duration for executing count statement for page 1 (at position 1) = 171 ms
INFO: Duration for executing final query = 641 ms
INFO: Total duration = 828 ms

INFO: Duration for executing count statement for page 2 (at position 21) = 156 ms
INFO: Duration for executing final query = 641 ms
INFO: Total duration = 797 ms

INFO: Duration for executing count statement for page 3 (at position 41) = 140 ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 765 ms

INFO: Duration for executing count statement for page 4 (at position 61) = 141 ms
INFO: Duration for executing final query = 640 ms
INFO: Total duration = 797 ms

INFO: Duration for executing count statement for page 5 (at position 81) = 141 ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 766 ms

INFO: Duration for executing count statement for page 100 (at position 1981) = 141 ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 766 ms

INFO: Duration for executing count statement for page 101 (at position 2001) = 141 ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 766 ms

INFO: Duration for executing count statement for page 102 (at position 2021) = 140 ms
INFO: Duration for executing final query = 594 ms
INFO: Total duration = 750 ms

INFO: Duration for executing count statement for page 103 (at position 2041) = 140 ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 765 ms

INFO: Duration for executing count statement for page 104 (at position 2061) = 141 ms
INFO: Duration for executing final query = 609 ms
INFO: Total duration = 750 ms

INFO: Duration for executing count statement for page 105 (at position 2081) = 141 ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 766 ms

INFO: Duration for executing count statement for page 200 (at position 3981) = 125 ms
INFO: Duration for executing final query = 641 ms
INFO: Total duration = 766 ms

INFO: Duration for executing count statement for page 201 (at position 4001) = 140 ms
INFO: Duration for executing final query = 641 ms
INFO: Total duration = 781 ms

INFO: Duration for executing count statement for page 202 (at position 4021) = 141 ms
INFO: Duration for executing final query = 609 ms
INFO: Total duration = 750 ms

INFO: Duration for executing count statement for page 203 (at 

[GENERAL] Trigger before insert

2004-11-25 Thread ON.KG
Hi all,

===
CREATE FUNCTION trigger_test_func()
RETURNS trigger
AS '
 DECLARE
 cnt int4;
 
 BEGIN
   SELECT INTO cnt COUNT(*)
   FROM table_test
   WHERE ip = new.ip;

   IF cnt > 50 THEN
 -- THE INSERT HAS TO BE STOPPED HERE
   END IF;

   RETURN new;
 END;'
LANGUAGE 'plpgsql';

CREATE TRIGGER trigger_test
BEFORE INSERT
ON table_test
FOR EACH ROW
EXECUTE PROCEDURE trigger_test_func();
===

How could I stop inserting a record into the table based on some condition?

Thanx!




[GENERAL] why use SCHEMA? any real-world examples?

2004-11-25 Thread Net Virtual Mailing Lists
I am in the middle of a project to convert non-schema databases to a
schema-based system.  The main reason I am doing it is because I need to
do a join on tables between databases, which can only be done with a
contrib module that does not have all the features one might want
(such as use of indexes, etc.).

For example:

SELECT a.id, b.name FROM schema1.industry_id a, schema_shared.industries
b WHERE a.industry_id = b.industry_id;
SELECT a.id, b.name FROM schema2.industry_id a, schema_shared.industries
b WHERE a.industry_id = b.industry_id;
SELECT a.id, b.name FROM schema3.industry_id a, schema_shared.industries
b WHERE a.industry_id = b.industry_id;
SELECT a.id, b.name FROM schema4.industry_id a, schema_shared.industries
b WHERE a.industry_id = b.industry_id;
.. etc...

Obviously this avoids having to replicate schema_shared into every database
whenever it gets updated...

I am sure there are many other uses - they seem very flexible to me so
far, but that's what I'm using it for...

- Greg


I just noticed PostgreSQL's schemas for my first time.
(http://www.postgresql.org/docs/current/static/ddl-schemas.html) 

I Googled around, but couldn't find any articles describing WHY or
WHEN to use schemas in database design.

Since the manual says HOW, could anyone here who has used schemas take
a minute to describe to a newbie like me why you did?   What benefits
did they offer you?   Any drawbacks?

Thanks for your time.

- Miles





Re: [GENERAL] why use SCHEMA? any real-world examples?

2004-11-25 Thread Richard Huxton
Miles Keaton wrote:
Since the manual says HOW, could anyone here who has used schemas take
a minute to describe to a newbie like me why you did?   What benefits
did they offer you?   Any drawbacks?
Well - it's a namespace feature, so at its simplest it lets you have two 
objects with the same name. It also lets you have permission control 
over them, and provides a convenient way to group items together. For 
example, I usually have a util schema where I keep utility 
functions/views for dba use rather than general users.

For a longer example, you might have a database with two sets of users - 
sales and accounts. They both need to print reports, but not the same 
set of reports. So - you create a reports table with an access code of 
S=sales, A=accounts, *=everyone.

You wrap this with a view my_reports where you supply your user-type 
(S/A) and get a list of reports you can access. However, your 
application needs to supply the user-type and if someone can inject the 
right SQL into your connection, they can gain access to any report.

So - you create 3 schemas: core, sales, accounts. You put the reports 
table into core and two views into sales and accounts, both named 
my_reports and rewritten appropriately. You deny access to reports 
directly, and make sure your application sets its search_path to 
contain the relevant sales/accounts schema. Then SELECT * FROM 
my_reports will show only those reports your login allows.
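
The setup described above might be sketched like this (all object names are illustrative, following Richard's description):

```sql
CREATE SCHEMA core;
CREATE SCHEMA sales;
CREATE SCHEMA accounts;

CREATE TABLE core.reports (
    id     serial PRIMARY KEY,
    name   text NOT NULL,
    access char(1) NOT NULL   -- 'S' = sales, 'A' = accounts, '*' = everyone
);

-- One view per user group, both named my_reports:
CREATE VIEW sales.my_reports AS
    SELECT id, name FROM core.reports WHERE access IN ('S', '*');
CREATE VIEW accounts.my_reports AS
    SELECT id, name FROM core.reports WHERE access IN ('A', '*');

-- Deny direct access to the table; the application then sets its
-- search_path (e.g. SET search_path TO sales, public;) and runs
-- SELECT * FROM my_reports;
REVOKE ALL ON core.reports FROM PUBLIC;
```

Appropriate GRANTs on each schema and view for the relevant user groups would complete the picture.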

HTH
--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] [HACKERS] Help!

2004-11-25 Thread Richard Huxton
ElayaRaja S wrote:
Hi,
While configuring OpenCRX using PostgreSQL I am facing a problem.
The problem is while creating a db using the command ( createdb -h
localhost -E utf8 -U system crx-CRX ).
Error:
createdb: could not connect to database template1: could not connect
to server:
 Connection refused
  Is the server running on host localhost and accepting
  TCP/IP connections on port 5432?
OK - it's saying it can't connect and asks you whether the server is 
running and accepting connections on localhost:5432.
Is it?
What does service postgresql status say?
Have you checked your pg_hba.conf settings?

Details of the platform that I am using:
1) Linux 9
No such thing. Do you mean RedHat 9?
2) PostgreSQL 7.4.5
Note: I have a doubt whether the error is due to my existing PostgreSQL
7.3. While installing Linux, PostgreSQL 7.3 was installed as part of it,
but I installed PostgreSQL 7.4.5 myself. Please let me know, it's very urgent.
How did you install 7.4.5? How was 7.3 Installed? What errors did you see?
There are RPMs available for RedHat 9 (if that's what you're running) - 
look in the binary folder on the download page. They should install 
without problem, just remove 7.3 before.

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] why use SCHEMA? any real-world examples?

2004-11-25 Thread Gregory S. Williamson

As other posters have indicated, there's a convenience factor and an advantage 
to compartmentalizing data. In our case we don't care so much about user rights 
(a very useful aspect in and of itself), but more for performance issues.

We have some applications that use a lot of detailed data about properties in 
different counties. We have a central table that stores their spatial 
attributes and some data about the properties themselves. The table has several 
million rows currently -- selections based on a bounding box are very fast, but 
if we try to get a list of all properties on all streets with names like Elm% 
in a given county, the select is painfully slow as the index (county / street 
in this simplified case) lacks specificity -- any given county yields say a 
half million rows as candidates by county, with hundreds of possible street 
entries, so sequential scans are used.

Hence, I broke out some of the property data that needed to be searched by 
county, with each county in its own schema, and each schema has the same tables 
(so the schema called f10675 has a name_search table with the same name as 
the one in the f01223 schema, but its own contents).

The search tables all refer to the original data by a unique identifier that is 
common between the schema/search tables and the main store. The search in these 
schema based tables is much faster because the specificity of the index is much 
greater, yielding only dozens or hundreds of candidates out of hundreds of 
thousands of rows. 

The extra space taken by redundant data storage is more than compensated for by 
speed in retrieval.

HTH clarify possibilities,

Greg WIlliamson
DBA
GlobeXplorer LLC

-Original Message-
From:   Miles Keaton [mailto:[EMAIL PROTECTED]
Sent:   Wed 11/24/2004 9:12 PM
To: [EMAIL PROTECTED]
Cc: 
Subject:[GENERAL] why use SCHEMA? any real-world examples?
I just noticed PostgreSQL's schemas for my first time.
(http://www.postgresql.org/docs/current/static/ddl-schemas.html) 

I Googled around, but couldn't find any articles describing WHY or
WHEN to use schemas in database design.

Since the manual says HOW, could anyone here who has used schemas take
a minute to describe to a newbie like me why you did?   What benefits
did they offer you?   Any drawbacks?

Thanks for your time.

- Miles







[GENERAL] Benchmark-Comparison PostGreSQL vs. SQL Server

2004-11-25 Thread Robert Soeding

Hi, this is my first question here, and also, it's somewhat delicate. So please be 
patient.

My question is, CAN PostGreSQL perform in the SQL Server area when it comes to 
speed? In other words, are there explanations for the results I found (see below)?

Thanks,
Robert

-
Background:
1. I read people were using PostGreSQL with terabytes of data sometimes, or 
thousands of users. These are things that could easily break SQL Server. So I 
thought PostGreSQL might be similarly fast to SQL Server.

2. I did some tests:
Windows XP SP2
Several GBs of free hard disk, ~400 MB free RAM
Java 1.5 / JDBC
PostGreSQL 8.0 beta (through the Windows installer), default configuration, 
default driver
SQL Server 2000 SP3a, default configuration, jTDS driver
Tablespaces of both databases on the same partition
Write test: creating tables (slightly modified TPC-W benchmark)
Read test: simple SELECT statements on all tables, returning the first 1000 rows 
(cursor variants: read-only and non-locking, resp. updatable and locking)

Results:
Writing: SQL Server 25 times faster.
Reading: SQL Server 100 times faster.


[GENERAL] How to display structure of a table

2004-11-25 Thread Nageshwar Rao
I have created a table and would
like to display the structure to find out datatype and length.

Just like in Oracle: describe table_name;

Re: [GENERAL] How to display structure of a table

2004-11-25 Thread Chris Green
On Thu, Nov 25, 2004 at 04:08:30PM +0530, Nageshwar Rao wrote:
I have created a table and would like to display the structure to find
out datatype and length.
 
Just like in Oracle describe table_name;

in psql:-

\dt

Basically the \d commands in psql give you much of what DESCRIBE does
for Oracle and mysql.  \? to see all the \ commands.
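
If a plain SQL statement is preferred over psql's backslash commands (for example, from application code), the standard information_schema, available since 7.4, exposes the same details:

```sql
-- 'your_table' is a placeholder for the table in question:
SELECT column_name, data_type, character_maximum_length
FROM information_schema.columns
WHERE table_name = 'your_table'
ORDER BY ordinal_position;
```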

-- 
Chris Green ([EMAIL PROTECTED])

Never ascribe to malice, that which can be explained by incompetence.



Re: [GENERAL] [HACKERS] Help!

2004-11-25 Thread Richard Huxton
ElayaRaja S wrote:
Hi,
  Thanks for your reply. Sorry to contact again to this mail. I am
unable to find the link for general list. Please let me know.
That's fine - I'm cc-ing you to the general list so others can help too.
You can find all the lists at http://www.postgresql.org/lists.html - you 
can subscribe with a web-form too if you would like.

 And I
have answered your questions as follows.
How did you install 7.4.5? How was 7.3 Installed? What errors did you see?
Actually 7.3 is built in with Red Hat Linux 9. But I have installed
it under /usr/local/pgsql/.
OK - download the RPM distribution for PostgreSQL version 7.4.6. You can 
get this from the download page on the main website. Choose a local 
mirror, then check in the binary folder for Red Hat rpms. There are some 
for Red Hat 9 (in the UK mirror definitely, I just checked).

As root, backup any existing databases using pg_dump. Backup the .conf 
files in /var/lib/pgsql/data/.
Stop postgresql 7.3 with service postgresql stop.
Upgrade your postgresql installation with rpm -Uvh postgres*rpm in the
same directory as your new rpm files.
Edit your configuration files.
If all works, start postgresql 7.4 with service postgresql start
Check with service postgresql status

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] How to display structure of a table

2004-11-25 Thread Chris Green
On Thu, Nov 25, 2004 at 10:47:33AM +, Chris Green wrote:
 On Thu, Nov 25, 2004 at 04:08:30PM +0530, Nageshwar Rao wrote:
 I have created a table and would like to display the structure to find
 out datatype and length.
  
 Just like in Oracle describe table_name;
 
 in psql:-
 
 \dt
 
That's \d table_name, of course.

 Basically the \d commands in psql give you much of what DESCRIBE does
 for Oracle and mysql.  \? to see all the \ commands.
 

-- 
Chris Green ([EMAIL PROTECTED])

Never ascribe to malice, that which can be explained by incompetence.



Re: [GENERAL] Benchmark-Comparison PostGreSQL vs. SQL Server

2004-11-25 Thread Richard Huxton
Robert Soeding wrote:
Hi, this is my first question here, and also, it's somewhat delicate.
So please be patient.
My question is, CAN PostGreSQL perform in the SQL Server area when it
comes to speed? In other words, are there explanations for the
results I found (see below)?
Faster in some cases, slower in others in my experience. Oh, and 
publishing performance comparisons with another database might be in 
breach of your ms-sql server licencing.

Thanks, Robert
- Background: 1. I read people were using PostGreSQL with
TeraBytes of data sometimes, or thousands of users. These are things
that could easily break SQL Server. - So I thought PostGreSQL might
be similar fast to SQL Server.
Some people have very large installations. This obviously isn't on 
Windows, and not necessarily on x86 hardware.

2. I did some tests: Windows XP SP2 Several GIGs free harddisk, ~400
MB free RAM Java 1.5 / JDBC PostGreSQL 8.0 beta (through Windows
Installer), default configuration, default driver SQL Server 2000
SP3a, default configuration, JDTS driver Tablespaces of both
databases on the same partition Write-Test: Creating tables (slightly
modified TCP-W benchmark) Read-Test: Simple SELECT statements on all
tables, returning the first 1000 rows (cursor variants: read-only and
non-locking, resp. updatable and locking)
Results: Writing: SQL Server 25 times faster. Reading: SQL Server 100
times faster.
The figures sound wrong. The Windows port isn't likely to be as fast as 
the *nix versions (certainly not yet), but those figures don't match my 
experience with PG on Linux.

Unfortunately, although you provide a lot of information, almost none of 
it tells us what the problem is. So -
1. What configuration changes have you made?
2. How many concurrent connections was this?
3. Were you selecting 1000 rows (LIMIT 1000), selecting all the rows 
(and only fetching 1000) or actually defining an SQL cursor.
4. What was the load on the machine - CPU or DISK peaking?
5. What was the RAM usage like?

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Benchmark-Comparison PostGreSQL vs. SQL Server

2004-11-25 Thread Robert Soeding
 1. What configuration changes have you made?

None, both installations are default configured.

 2. How many concurrent connections was this?

One.

 3. Were you selecting 1000 rows (LIMIT 1000), selecting all the rows 
 (and only fetching 1000) or actually defining an SQL cursor.

I used LIMIT 1000, resp. TOP 1000 statements.

 4. What was the load on the machine - CPU or DISK peaking?
 5. What was the RAM usage like? 

CPU (< 10%) and RAM (10%) usage were very low.

- I guess (as you mentioned below) it's the NT file system.
When running PostgreSQL queries I can hear the harddisk buzzing, but not with 
SQL Server queries.

On the other hand, if an application has to fight against the file system, I 
would expect that to increase RAM and CPU usage significantly.


-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 25, 2004 12:34 PM
To: Robert Soeding
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Benchmark-Comparison PostGreSQL vs. SQL Server

Robert Soeding wrote:
 Hi, this is my first question here, and also, it's somewhat delicate.
 So please be patient.
 
 My question is, CAN PostGreSQL perform in the SQL Server area when it 
 comes to speed? In other words, are there explanations for the results 
 I found (see below)?

Faster in some cases, slower in others in my experience. Oh, and publishing 
performance comparisons with another database might be in breach of your ms-sql 
server licencing.

 Thanks, Robert
 
 - Background: 1. I read people were using PostGreSQL with 
 TeraBytes of data sometimes, or thousands of users. These are things 
 that could easily break SQL Server. - So I thought PostGreSQL might be 
 similar fast to SQL Server.

Some people have very large installations. This obviously isn't on Windows, and 
not necessarily on x86 hardware.

 2. I did some tests: Windows XP SP2 Several GIGs free harddisk, ~400 
 MB free RAM Java 1.5 / JDBC PostGreSQL 8.0 beta (through Windows 
 Installer), default configuration, default driver SQL Server 2000 
 SP3a, default configuration, JDTS driver Tablespaces of both databases 
 on the same partition Write-Test: Creating tables (slightly modified 
 TCP-W benchmark) Read-Test: Simple SELECT statements on all tables, 
 returning the first 1000 rows (cursor variants: read-only and 
 non-locking, resp. updatable and locking)
 
 Results: Writing: SQL Server 25 times faster. Reading: SQL Server 100 
 times faster.

The figures sound wrong. The Windows port isn't likely to be as fast as the 
*nix versions (certainly not yet) but those figures don't match for my 
experience with PG on Linux.

Unfortunately, although you provide a lot of information, almost none of it 
tells us what the problem is. So - 1. What configuration changes have you made?
2. How many concurrent connections was this?
3. Were you selecting 1000 rows (LIMIT 1000), selecting all the rows (and only 
fetching 1000) or actually defining an SQL cursor.
4. What was the load on the machine - CPU or DISK peaking?
5. What was the RAM usage like?

--
   Richard Huxton
   Archonet Ltd



Re: [GENERAL] Benchmark-Comparison PostGreSQL vs. SQL Server

2004-11-25 Thread Richard Huxton
Robert Soeding wrote:
1. What configuration changes have you made?
None, both installations are default configured.
You'll want to do at least some tuning on PG. Try the URL below for a
quick introduction - just the basic stuff is a good start.
  http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
2. How many concurrent connections was this?
One.
OK. That keeps things nice and simple.
3. Were you selecting 1000 rows (LIMIT 1000), selecting all the
rows (and only fetching 1000) or actually defining an SQL cursor.

I used LIMIT 1000, resp. TOP 1000 statements.
OK. The reason I asked was that if you ask for 100 rows, then PG
will find them all and return them all in a bunch. Most other servers
return the first row once it's available.
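
If incremental delivery matters, an explicit cursor gives the first rows before the whole result is materialized; a minimal sketch (table and column names assumed):

```sql
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM big_table ORDER BY id;
FETCH 1000 FROM c;   -- first batch only; fetch more as needed
CLOSE c;
COMMIT;
```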
4. What was the load on the machine - CPU or DISK peaking? 5. What
was the RAM usage like?
CPU (< 10%) and RAM (10%) usage were very low.
- I guess (as you mentioned below) it's the NT file system. When
running PostgreSQL queries I can hear the harddisk buzzing, but not
with SQL Server queries.
If you're seeing hard-disk activity that means the data isn't cached.
On the other hand, if an application has to fight against the file
system, I would suppose it to increase RAM and CPU usage
significantly.
It shouldn't be fighting the file system, but it does use it, and rely 
on it for caching (rather than bypassing your filesystem cache). For 
MS-SQL server I'm guessing you're allocating a lot of memory to SQL 
server and not much to the file-cache. For PG you'll want it the other 
way around.
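
A hedged starting point for an 8.0-era configuration reflecting that split (both settings are in 8 kB pages at this point in PG's history; the values are illustrative, not a recommendation for the OP's exact machine):

```
# postgresql.conf fragment
shared_buffers = 10000          # ~80 MB; kept modest because PG relies
                                # on the OS file cache for the rest
effective_cache_size = 30000    # ~240 MB; tells the planner how much
                                # data the OS is likely to be caching
```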

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Trigger before insert

2004-11-25 Thread Richard Huxton
ON.KG wrote:
How could I stop inserting a record into the table based on some condition?
RETURN null when using a before trigger. Or raise an exception to abort 
the whole transaction.
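
Applied to the trigger function from earlier in the thread, that looks like this (the threshold of 50 is taken from the original post):

```sql
IF cnt > 50 THEN
    RETURN NULL;   -- a BEFORE trigger returning NULL silently skips the row
    -- or, to abort the whole transaction instead:
    -- RAISE EXCEPTION ''too many rows for ip %'', new.ip;
END IF;
```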

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Trigger before insert

2004-11-25 Thread ON.KG
Hi!

 How could I stop inserting a record into the table based on some condition?

RH RETURN null when using a before trigger. Or raise an exception to abort
RH the whole transaction.

Thanks ;)
RETURN NULL works just as I need.




Re: [GENERAL] Benchmark-Comparison PostGreSQL vs. SQL Server

2004-11-25 Thread Shridhar Daithankar
On Thursday 25 Nov 2004 6:22 pm, Richard Huxton wrote:

  On the other hand, if an application has to fight against the file
  system, I would suppose it to increase RAM and CPU usage
  significantly.

 It shouldn't be fighting the file system, but it does use it, and rely
 on it for caching (rather than bypassing your filesystem cache). For
 MS-SQL server I'm guessing you're allocating a lot of memory to SQL
 server and not much to the file-cache. For PG you'll want it the other
 way around.

Do we have some sort of document about how caching on Windows works? It is 
very simple on Linux and the BSDs, but for other OSs I haven't seen many 
suggestions.

I am sure OS-specific hints would help the OP a lot.

 Shridhar





Re: [GENERAL] why use SCHEMA? any real-world examples?

2004-11-25 Thread Daniel Martini
Hi,

Citing Miles Keaton [EMAIL PROTECTED]:
 I just noticed PostgreSQL's schemas for my first time.
 (http://www.postgresql.org/docs/current/static/ddl-schemas.html) 
 
 I Googled around, but couldn't find any articles describing WHY or
 WHEN to use schemas in database design.

When your data are too similar to be split into two databases but
at the same time too different to fit into common tables, a schema
comes in handy to keep your db tidy.
 
 Since the manual says HOW, could anyone here who has used schemas take
 a minute to describe to a newbie like me why you did?

We had agricultural experiments running here at our institute. We had
both field experiments outside and pot experiments in the greenhouse.
The data collected was mostly the same for both sets of experiments
(plant nutrient content, growth parameters like shoot length...), but
the number of samples taken per measured parameter was quite different
(e.g. for the pot experiments, nutrient data was available from each
and every plant, whereas we took samples in the field only from a subset
of plants). So both sets of experiments did not fit into one clean
normalized, relational model. On the other hand, it was quite desirable
to have all the data in one db, to be able to run queries across both
datasets at the same time from one connection. Schemas provided a nice
way to keep this all clean and simple.

 What benefits
 did they offer you?

Clean, logical organization of one project's data in one db.

 Any drawbacks?

If you have tables with the same name in several schemas, only
the ones, which are in the first schema in the search path are
shown on \dt from psql. Not a major problem, but keep this in mind
when designing the db.

Regards,
Daniel

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] SPI memory overrun details

2004-11-25 Thread Katsaros Kwn/nos
Hi,

I read in the documentation of the SPI functions that an SPI_cursor is used
to avoid memory overruns in cases where a query returns many rows. I'd
like to learn more about this. Is there any place where I could find more
detailed information (e.g. max size of results, thresholds, available
memory, etc.)?

I searched the archives but found nothing.

Regards,
Ntinos Katsaros


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] Insert (ignore) with unique constraint on

2004-11-25 Thread Victor Ciurus
Hi all,

What I'm trying to do using PostgreSQL is an INSERT into
table A (2 fields) of (SELECT * FROM table B) (2 fields + unique btree
index), or a COPY table_B FROM a file. The problem is that
every time there are duplicate keys between table A and table B, the insert
stops with the typical error: 'ERROR:  duplicate key violates unique
constraint string_idx'

Q: Is there a way (like MySQL's INSERT IGNORE) of going on with the
INSERT / COPY FROM ... even if a duplicate / unique constraint
violation shows up? Some kind of 'exception/error handling' routine
in SQL or PL/pgSQL?

Any help will be highly appreciated.

Regards,
Victor
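
One workaround available in 7.x (not taken from the thread; column names f1/f2 are made up for illustration) is to filter out the colliding rows before inserting, e.g. with NOT EXISTS:

INSERT INTO table_a (f1, f2)
SELECT b.f1, b.f2
  FROM table_b b
 WHERE NOT EXISTS (SELECT 1
                     FROM table_a a
                    WHERE a.f1 = b.f1);  -- f1 assumed to be the unique key

Note this only skips rows that already exist in table_a; if table_b itself contains duplicate keys, a SELECT DISTINCT ON (f1) in the subselect would be needed as well.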

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Benchmark-Comparison PostGreSQL vs. SQL Server

2004-11-25 Thread Richard Huxton
Shridhar Daithankar wrote:
Do we have some sort of document about how caching on windows works? It is 
very simple on linux and BSDs but for other OSs, I haven't seen many 
suggestions.

I am sure OS specific hints would help OP a lot.
Microsoft's own resources are good for this sort of stuff (apologies if 
URLs wrap):

http://www.microsoft.com/technet/prodtechnol/windows2000serv/maintain/optimize/wperfch7.mspx
http://www.microsoft.com/resources/documentation/Windows/2000/server/reskit/en-us/Default.asp?url=/resources/documentation/Windows/2000/server/reskit/en-us/core/fnec_evl_ACKS.asp
The simplest setting is the simple application/file-sharing switch 
detailed in article #2 (which was introduced in NT4 iirc).

--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] table name in pl/pgsql

2004-11-25 Thread ON.KG
New question:

i have tables like
table_20041124,
table_20041125,
etc...

i'm trying to make a function (for example):
=
CREATE FUNCTION get_count(text, text)
RETURNS int2 AS '
  DECLARE
cnt int4;
  BEGIN
SELECT INTO cnt COUNT(*)
FROM table_$1   -- That doesn't work
WHERE key = $2;

RETURN cnt;
  END;'
LANGUAGE 'plpgsql';
=

call this function by:

=
SELECT get_count('20041124', 'something');
=

the string in the function - FROM table_$1 - doesn't expand;

how could I get the final correct table name here?

Thanx!


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] VACUUM ANALYZE question - PostgreSQL performance tests

2004-11-25 Thread Julian Legeny
Hello,

   I have a question about VACUUM ANALYZE. I have tried to do PostgreSQL
performance tests for selecting a large number of records from the DB.
   First I inserted 30.000 records into 1 table. After this
insert I executed a VACUUM ANALYZE query.
   I have a test that retrieves, page by page (20 records per page),
all data from a table. This means I execute 1500 selects in a loop
to retrieve each page, and I also record the time duration of some
of these selects.

   THE PROBLEM IS that when I start to retrieve records, the performance
is poor. But when I execute the VACUUM ANALYZE query manually (from a DB
client) one more time (while pages are being retrieved), the performance is
much better.

   Is it also necessary to call VACUUM ANALYZE to get
better performance for a select query?


   Thank you for your answer,
   with best regards,

   Julian Legeny




   Here I attach log reports for 30.000 records:
   =

  Here it is possible to see that the time duration of executing the final query is
  approximately the same for each retrieved page, and performance is not very good.
  I applied VACUUM ANALYZE while the test was running, and from page 1000 on,
  performance is better by about 2/3.


   a.) First I insert 30.000 records into the DB.

   b.) Then I retrieve page by page all records from the DB.

 I'm executing 2 commands:
 
 1. A command that retrieves the number of all items that I want to
retrieve page by page:

select count(*) from BF_USER
   where BF_USER.DOMAIN_ID=19 and BF_USER.ID NOT IN(280)

- in the log file it is possible to see the time duration of this
  select - it is the 'Duration for executing count statement'
  time.

 2. Final query for retrieving particular records specified within
 the LIMIT clause.

select BF_USER.LOGIN_NAME,
   BF_USER.EMAIL,BF_USER.ID,
   BF_USER.MODIFICATION_DATE,
   BF_USER.SUPER_USER,
   BF_USER.GUEST_ACCESS_ENABLED
from BF_USER
where BF_USER.DOMAIN_ID=19 and
  BF_USER.ID NOT IN(280)
order by BF_USER.LOGIN_NAME asc
limit 20 offset 0


   First I execute the select COUNT(*) ... query to retrieve the number
   of all items that I will retrieve, and once I know this number
   I can retrieve the specified records (LIMIT/OFFSET is used for this).


-

INFO: Total duration to create 3 data objects was 1:46.453 which is 281 
items/sec

INFO: Duration for executing count statement for page 1 (at position 1) = 171 ms
INFO: Duration for executing final query = 641 ms
INFO: Total duration = 828 ms

INFO: Duration for executing count statement for page 2 (at position 21) = 156 
ms
INFO: Duration for executing final query = 641 ms
INFO: Total duration = 797 ms

INFO: Duration for executing count statement for page 3 (at position 41) = 140 
ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 765 ms

INFO: Duration for executing count statement for page 4 (at position 61) = 141 
ms
INFO: Duration for executing final query = 640 ms
INFO: Total duration = 797 ms

INFO: Duration for executing count statement for page 5 (at position 81) = 141 
ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 766 ms

INFO: Duration for executing count statement for page 100 (at position 1981) = 
141 ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 766 ms

INFO: Duration for executing count statement for page 101 (at position 2001) = 
141 ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 766 ms

INFO: Duration for executing count statement for page 102 (at position 2021) = 
140 ms
INFO: Duration for executing final query = 594 ms
INFO: Total duration = 750 ms

INFO: Duration for executing count statement for page 103 (at position 2041) = 
140 ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 765 ms

INFO: Duration for executing count statement for page 104 (at position 2061) = 
141 ms
INFO: Duration for executing final query = 609 ms
INFO: Total duration = 750 ms

INFO: Duration for executing count statement for page 105 (at position 2081) = 
141 ms
INFO: Duration for executing final query = 625 ms
INFO: Total duration = 766 ms

INFO: Duration for executing count statement for page 200 (at position 3981) = 
125 ms
INFO: Duration for executing final query = 641 ms
INFO: Total duration = 766 ms

INFO: Duration for executing count statement for page 201 (at position 4001) = 
140 ms
INFO: Duration for executing final query = 641 ms
INFO: Total duration = 781 ms

INFO: Duration for executing count statement for page 202 (at position 4021) = 
141 ms
INFO: Duration for executing final query = 609 ms
INFO: Total duration = 750 ms

INFO: Duration for executing count statement for page 203 (at 

Re: [GENERAL] table name in pl/pgsql

2004-11-25 Thread Adam Witney

I think you would have to do it something like this, although whether the
SELECT INTO works in an EXECUTE context I am not sure (note, completely
untested code!)

CREATE FUNCTION get_count(text, text) RETURNS int2 AS '
DECLARE
  cnt int4;
BEGIN
  
  EXECUTE ''SELECT INTO cnt COUNT(*) FROM table_'' || $1 || '' WHERE key =
'' || $2;

  RETURN cnt;
END;'
LANGUAGE 'plpgsql';





 New question:
 
 i have tables like
 table_20041124,
 table_20041125,
 etc...
 
 i'm trying to make function (for example):
 =
 CREATE FUNCTION get_count(text, text)
 RETURNS int2 AS '
 DECLARE
   cnt int4;
 BEGIN
   SELECT INTO cnt COUNT(*)
   FROM table_$1   -- That doesn't work
   WHERE key = $2;
 
   RETURN cnt;
 END;'
 LANGUAGE 'plpgsql';
 =
 
 call this function by:
 
 =
 SELECT get_count('20041124', 'something');
 =
 
 the string in the function -  FROM table_$1
 
 how could i get a final correct table name here?
 
 Thanx!
 
 
 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?
 
  http://archives.postgresql.org


-- 
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] table name in pl/pgsql

2004-11-25 Thread Richard Huxton
Adam Witney wrote:
I think you would have to do it something like this, although whether the
SELECT INTO works in an EXECUTE context I am not sure (note, completely
untested code!)
CREATE FUNCTION get_count(text, text) RETURNS int2 AS '
DECLARE
  cnt int4;
BEGIN
  
  EXECUTE ''SELECT INTO cnt COUNT(*) FROM table_'' || $1 || '' WHERE key =
'' || $2;
That won't work either, you'll need to use FOR..IN..EXECUTE:
CREATE TABLE exectest (a integer, b text, PRIMARY KEY (a));
COPY exectest FROM stdin;
1   aaa
2   bbb
3   ccc
\.
CREATE FUNCTION demo_exec_fn() RETURNS boolean AS '
DECLARE
r RECORD;
BEGIN
FOR r IN EXECUTE ''SELECT * FROM exectest''
LOOP
RAISE NOTICE ''a=%, b=%'', r.a, r.b;
END LOOP;
RETURN true;
END
' LANGUAGE plpgsql;
SELECT demo_exec_fn();
--
  Richard Huxton
  Archonet Ltd
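
Applying the same FOR ... IN EXECUTE pattern to the original get_count might look like this (an untested sketch, not posted in the thread; quote_ident and quote_literal are used to assemble the table name and the literal safely):

CREATE FUNCTION get_count(text, text) RETURNS int4 AS '
DECLARE
    r   RECORD;
    cnt int4 := 0;
BEGIN
    FOR r IN EXECUTE ''SELECT count(*) AS cnt FROM ''
                     || quote_ident(''table_'' || $1)
                     || '' WHERE key = '' || quote_literal($2)
    LOOP
        cnt := r.cnt;
    END LOOP;
    RETURN cnt;
END;
' LANGUAGE plpgsql;

SELECT get_count('20041124', 'something');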
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] table name in pl/pgsql

2004-11-25 Thread Tino Wildenhain
Hi,

Am Donnerstag, den 25.11.2004, 19:42 +0300 schrieb ON.KG:
 New question:
 
 i have tables like
 table_20041124,
 table_20041125,
 etc...
 
 i'm trying to make function (for example):
 =
 CREATE FUNCTION get_count(text, text)
 RETURNS int2 AS '
   DECLARE
 cnt int4;
   BEGIN
 SELECT INTO cnt COUNT(*)
 FROM table_$1   -- That doesn't work
 WHERE key = $2;
 
 RETURN cnt;
   END;'
 LANGUAGE 'plpgsql';
 =
 
 call this function by:
 
 =
 SELECT get_count('20041124', 'something');
 =
 
 the string in the function -  FROM table_$1
 
 how could i get a final correct table name here?

You need to build your query in your function
as a big string and pass it to EXECUTE

(See
http://www.postgresql.org/docs/current/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
and
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-RECORDS-ITERATING
)

however, encoding data into table names does not sound so elegant
to me - why not make it a real column?
The advantage would be you'd have a real query and let postgres
compile a plan for it - which is not possible for execute.

If you are concerned about index size you can always use partial
indexes based on your table number.

Regards
Tino


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] table name in pl/pgsql

2004-11-25 Thread CoL
hi,
ON.KG wrote:
New question:
i have tables like
table_20041124,
table_20041125,
etc...
i'm trying to make function (for example):
=
CREATE FUNCTION get_count(text, text)
RETURNS int2 AS '
  DECLARE
cnt int4;
  BEGIN
SELECT INTO cnt COUNT(*)
FROM table_$1   -- That doesn't work
WHERE key = $2;
RETURN cnt;
  END;'
LANGUAGE 'plpgsql';
=
call this function by:
=
SELECT get_count('20041124', 'something');
=
the string in the function -  FROM table_$1
how could i get a final correct table name here?
You can use execute for dynamic sql.
CREATE FUNCTION get_count(text, text) RETURNS int2 AS '
declare rec record;
begin
 for rec in execute ''select COUNT(*) as num from table_'' || $1
 || '' where key = '' || quote_literal($2)
 loop
  return rec.num;
 end loop;
 return 0;
end;
' LANGUAGE 'plpgsql';

PS: anyway, you want returns int2 , but you declared int4 :)
C.
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] Happy Thanksgiving

2004-11-25 Thread Joshua D. Drake
Hello,
Command Prompt, Inc. would like to thank everyone in the
community for making PostgreSQL what it is. The best Open
Source Database, period.
Happy Thanksgiving everyone. We hope you have a safe
and happy holiday.
Sincerely,
Joshua D. Drake
President
Command Prompt, Inc.
--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] SPI memory overrun details

2004-11-25 Thread Tom Lane
Katsaros Kwn/nos [EMAIL PROTECTED] writes:
 I read in the documentation of the SPI functions that an SPI_cursor is used
 to avoid memory overruns in cases where a query returns many rows. I'd
 like to learn more about this. Is there any place where I could find more
 detailed information (e.g. max size of results, thresholds, available
 memory, etc.)?

You could look at plpgsql's use of SPI cursors for an example.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] VACUUM ANALYZE question - PostgreSQL performance tests

2004-11-25 Thread Tom Lane
Julian Legeny [EMAIL PROTECTED] writes:
PROBLEM IS, that when I start to retrieve records, the performance
 is poor. But when I execute manually (from a DB client) query VACUUM
 ANALYZE one more time (during retrieving of pages), the performance is
 much better.

I don't think this has anything to do with executing an additional
vacuum analyze.  I think you're seeing the planner switch from an
indexscan plan to a sort-based plan.  EXPLAIN ANALYZE output for
the query with different offset settings would tell you more.

In general, though, that whole approach sucks and you should get rid of
it.  The backend still has to compute all the rows you are skipping with
OFFSET; there is not some magic in there to let it jump to the right
place.  You'd be far better off to use a cursor and incrementally FETCH
from the cursor.
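
A cursor-based version of the paging Tom suggests could look like this (a sketch using the query from Julian's test; the MOVE count is illustrative):

BEGIN;
DECLARE user_page CURSOR FOR
    SELECT login_name, email, id
      FROM bf_user
     WHERE domain_id = 19 AND id NOT IN (280)
     ORDER BY login_name ASC;

FETCH 20 FROM user_page;   -- page 1
FETCH 20 FROM user_page;   -- page 2: no re-scan of the skipped rows
MOVE 1960 IN user_page;    -- or jump ahead without transferring rows
FETCH 20 FROM user_page;
COMMIT;

Each FETCH continues from where the cursor left off, so the backend never recomputes the rows an OFFSET would have skipped.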

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Using IN with subselect

2004-11-25 Thread Dave Smith
I have a query with an in subquery like

where x in (select x from y);

Now the subquery is not related to the outer query so it always returns
the same set. Is this subselect executed each time or just once? If it
is executed each time, if I create a function would that then be only
executed once? 

-- 
Dave Smith
CANdata Systems Ltd
416-493-9020


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] why use SCHEMA? any real-world examples?

2004-11-25 Thread Greg Stark

  Since the manual says HOW, could anyone here who has used schemas take
  a minute to describe to a newbie like me why you did?   What benefits
  did they offer you?   Any drawbacks?

I suspect the consumer of this feature for whom it would make the biggest
difference would be shrinkwrapped packages. For example, if you have a
database and want to add on a logging package, a billing package, and a CMS
system and you want to store the data in your existing database so you can
integrate them all closely together, then it would be nice to at least keep
the database tables in separate namespaces.

-- 
greg


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] How can I change a cast from explicit only to implicit?

2004-11-25 Thread Julian Scarfe
In 7.2.x

template1=# select point('1'::text, '2'::text);
 point
---
 (1,2)
(1 row)

but in 7.4.x

template1=# select point('1'::text, '2'::text);
ERROR:  function point(text, text) does not exist
HINT:  No function matches the given name and argument types. You may need
to add explicit type casts.

                 List of casts
 Source type |   Target type    | Function | Implicit?
-------------+------------------+----------+-----------
 ...
 text        | double precision | float8   | no

OK, so to make the cast work without explicit casts in the SQL, I need the
text to float8 (or another suitable numeric type) cast to be implicit.  But:

template1=# create cast (text as float8) with function float8(text) as
implicit;
ERROR:  cast from type text to type float8 already exists

template1=# drop cast (text as float8);
ERROR:  cannot drop cast from text to double precision because it is
required by the database system

So how can I force a built-in cast to become implicit?

Thanks

Julian Scarfe



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Using IN with subselect

2004-11-25 Thread Martijn van Oosterhout
On Thu, Nov 25, 2004 at 12:19:23PM -0500, Dave Smith wrote:
 Well here is explain. I would guess that it is executed each time ..
 function any different?

Hmm, if it's only executed once I would expect it to be an InitPlan.
Are you sure it's not correlated?

If you want to make sure, put the subquery in the FROM clause; that way it
definitely will only be run once.
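
Putting the subquery in the FROM clause, as suggested, looks roughly like this (t and y are placeholders for the real tables):

-- Equivalent to: SELECT t.* FROM t WHERE t.x IN (SELECT x FROM y)
SELECT t.*
  FROM t
  JOIN (SELECT DISTINCT x FROM y) sub ON sub.x = t.x;

The DISTINCT keeps the join from multiplying rows when y contains repeated values of x.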

snip

Hope this helps,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Using IN with subselect

2004-11-25 Thread Dave Smith
Well here is explain. I would guess that it is executed each time ..
function any different?

HashAggregate  (cost=288.32..288.32 rows=1 width=32)
   -  Hash IN Join  (cost=288.18..288.31 rows=1 width=32)
 Hash Cond: ((outer.gl_num)::text =
lpad(ltrim((inner.account_num)::text, '0'::text), 9, ' '::text))
 -  Subquery Scan journal_all  (cost=282.36..282.45 rows=2
width=64)
   -  Unique  (cost=282.36..282.43 rows=2 width=159)
 -  Sort  (cost=282.36..282.36 rows=2 width=159)
   Sort Key: objectid, owner_oid, source_code,
posting_date, control_num, reference, gl_num, gl_amt, distributed_amt,
comment, operator_id, branch_id, company_id
   -  Append  (cost=0.00..282.35 rows=2
width=159)
 -  Subquery Scan *SELECT* 1 
(cost=0.00..265.24 rows=1 width=159)
   -  Index Scan using journal_9 on
journal  (cost=0.00..265.23 rows=1 width=159)
 Index Cond: (company_id =
1000)
 Filter: ((posting_date =
'2004-01-01'::date) AND (posting_date = '2004-01-31'::date))
 -  Subquery Scan *SELECT* 2 
(cost=0.00..17.10 rows=1 width=159)
   -  Index Scan using
journal_hist_7 on journal_hist  (cost=0.00..17.09 rows=1 width=159)
 Index Cond: (company_id =
1000)
 Filter: ((posting_date =
'2004-01-01'::date) AND (posting_date = '2004-01-31'::date))
 -  Hash  (cost=5.83..5.83 rows=1 width=13)
   -  Index Scan using glmast_index3 on glmast 
(cost=0.00..5.83 rows=1 width=13)
 Index Cond: ((company_id = 1000) AND
((control_type)::text = 'F'::text))

On Thu, 2004-11-25 at 12:11, Martijn van Oosterhout wrote:
 Running EXPLAIN over the query will tell you...
 
 On Thu, Nov 25, 2004 at 11:49:06AM -0500, Dave Smith wrote:
  I have a query with an in subquery like
  
  where x in (select x from y);
  
  Now the subquery is not related to the outer query so it always returns
  the same set. Is this subselect executed each time or just once? If it
  is executed each time, if I create a function would that then be only
  executed once? 
  
  -- 
  Dave Smith
  CANdata Systems Ltd
  416-493-9020
  
  
  ---(end of broadcast)---
  TIP 6: Have you searched our list archives?
  
 http://archives.postgresql.org
-- 
Dave Smith
CANdata Systems Ltd
416-493-9020


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Using IN with subselect

2004-11-25 Thread Martijn van Oosterhout
Running EXPLAIN over the query will tell you...

On Thu, Nov 25, 2004 at 11:49:06AM -0500, Dave Smith wrote:
 I have a query with an in subquery like
 
 where x in (select x from y);
 
 Now the subquery is not related to the outer query so it always returns
 the same set. Is this subselect executed each time or just once? If it
 is executed each time, if I create a function would that then be only
 executed once? 
 
 -- 
 Dave Smith
 CANdata Systems Ltd
 416-493-9020
 
 
 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?
 
http://archives.postgresql.org

-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Using IN with subselect

2004-11-25 Thread Tom Lane
Dave Smith [EMAIL PROTECTED] writes:
 Well here is explain. I would guess that it is executed each time ..
 function any different?

 HashAggregate  (cost=288.32..288.32 rows=1 width=32)
-  Hash IN Join  (cost=288.18..288.31 rows=1 width=32)
  -  Subquery Scan journal_all  (cost=282.36..282.45 rows=2 width=64)
  -  Hash  (cost=5.83..5.83 rows=1 width=13)
-  Index Scan using glmast_index3 on glmast (cost=0.00..5.83 
 rows=1 width=13)

No ... this plan says to scan glmast once, load the selected rows into
an in-memory hash table, then scan journal_all once and probe the hash
table for matches.  It looks like a pretty decent choice of plan to me.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] How can I change a cast from explicit only to implicit?

2004-11-25 Thread Julian Scarfe
From: Tom Lane [EMAIL PROTECTED]

  So how can I force a built-in cast to become implicit?

 If you're intent on doing that, you can change its entry in pg_cast.
 But I think you'd be far better off to fix your application.  Implicit
 casts across type categories have a habit of kicking in when you least
 expected it, causing the backend to adopt surprising and unpleasant
 interpretations of straightforward-looking queries.  If you check the
 pgsql-bugs archives you will find some of the examples that prompted us
 to change this cast to non-implicit...

It's a fair point, Tom.  Having developed the code with 7.2, we are looking
for the simplest way to recreate backward compatible behaviour in 7.4.  Sure
enough, we'll flag the code for revision next time it gets an overhaul, but
a single change to the database would be preferable in the short term.   I
think the current default of these casts being explicit only is sensible.
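
For the record, the catalog change Tom refers to would be along these lines (run as superuser; system-catalog surgery, with exactly the risks he warns about):

UPDATE pg_cast
   SET castcontext = 'i'   -- 'i' = implicit, 'a' = assignment, 'e' = explicit
 WHERE castsource = 'text'::regtype
   AND casttarget = 'float8'::regtype;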

Thanks

Julian



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Using IN with subselect

2004-11-25 Thread Dave Smith
That's what I wanted it to do; I just did not understand how to read the
explain output. So is it the HashAggregate that means this is already loaded?

On Thu, 2004-11-25 at 12:57, Tom Lane wrote:
 Dave Smith [EMAIL PROTECTED] writes:
  Well here is explain. I would guess that it is executed each time ..
  function any different?
 
  HashAggregate  (cost=288.32..288.32 rows=1 width=32)
 -  Hash IN Join  (cost=288.18..288.31 rows=1 width=32)
   -  Subquery Scan journal_all  (cost=282.36..282.45 rows=2 
  width=64)
   -  Hash  (cost=5.83..5.83 rows=1 width=13)
 -  Index Scan using glmast_index3 on glmast 
  (cost=0.00..5.83 rows=1 width=13)
 
 No ... this plan says to scan glmast once, load the selected rows into
 an in-memory hash table, then scan journal_all once and probe the hash
 table for matches.  It looks like a pretty decent choice of plan to me.
 
   regards, tom lane
-- 
Dave Smith
CANdata Systems Ltd
416-493-9020


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] What is alias_list_srl() ?

2004-11-25 Thread Fred Fung



Hi,

I am running PostgreSQL 7.4.5 and I notice the following 2 sets of error
messages generated by the postmaster every time I do a query through my
frontend application program (the application executes the same block of
code every time, except the querying criteria is different depending on
my input):

ERROR:  relation "serialreg" does not exist
ERROR:  syntax error at or near "MODE" at character 10

and

ERROR:  relation "serialreg" does not exist
ERROR:  function alias_list_srl() does not exist

Nothing has crashed and these errors do not seem to affect the query
results.

What do these errors mean and how can I correct them? Do they have any
impact on the performance of my queries (I am doing a performance test
on a large volume of data)?


TIA.


Fred




Re: [GENERAL] Using IN with subselect

2004-11-25 Thread Greg Stark

Dave Smith [EMAIL PROTECTED] writes:

 That's what I wanted it to do I just did not understand how to read the
 explain. So is it HashAggregate that means this already loaded?

The HashAggregate Node is doing a GROUP BY command. (or the implicit GROUP BY
if you used an aggregate function without one.)

The Hash IN Join is a particular type of join that behaves the way Tom
describes. Each type of join behaves differently. A Nested Loop join would
execute one side once and the other many times. A Merge join executes both
sides once.

You could also run explain analyze instead of just explain and look at the
loops number that would tell you how many times the node actually was
executed.
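
For example (t and y stand in for the real tables in Dave's query):

EXPLAIN ANALYZE
SELECT * FROM t WHERE x IN (SELECT x FROM y);
-- In the output, the "loops=" figure on the node that scans y shows how
-- often it actually ran: loops=1 means once, a higher number means the
-- node was re-executed for each outer row.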

  Dave Smith [EMAIL PROTECTED] writes:
   Well here is explain. I would guess that it is executed each time ..
   function any different?
  
   HashAggregate  (cost=288.32..288.32 rows=1 width=32)
  -  Hash IN Join  (cost=288.18..288.31 rows=1 width=32)
-  Subquery Scan journal_all  (cost=282.36..282.45 rows=2 
   width=64)
-  Hash  (cost=5.83..5.83 rows=1 width=13)
  -  Index Scan using glmast_index3 on glmast 
   (cost=0.00..5.83 rows=1 width=13)
  
  No ... this plan says to scan glmast once, load the selected rows into
  an in-memory hash table, then scan journal_all once and probe the hash
  table for matches.  It looks like a pretty decent choice of plan to me.

-- 
greg


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Index work around?

2004-11-25 Thread Neil Conway
On Thu, 2004-11-25 at 10:24 +0100, Bjørn T Johansen wrote:
 do I still need to use ::int8 to 
 make it use indexes in 8.0 as I need in 7.x?

That should no longer be necessary.

-Neil



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] What is alias_list_srl() ?

2004-11-25 Thread Neil Conway
On Thu, 2004-11-25 at 15:00 -0500, Fred Fung wrote:
 I am running PostgreSQL 7.4.5 and I notice the following 2 sets of
 error messages generated by the postmaster every time I do a query
 through my frontend application program

The source of the errors is your frontend application, not PostgreSQL. 

  ERROR:  relation "serialreg" does not exist

Your application is submitting a query that references a table
(serialreg) that does not exist.

  ERROR:  syntax error at or near "MODE" at character 10

Without seeing the query that produces this, it's difficult to say what
the problem is. Try enabling statement logging and reporting the query
that causes the error.

  ERROR:  function alias_list_srl() does not exist

Again, your application is trying to invoke a user-defined function that
does not exist, so this is a problem with your application (or your
configuration), not PostgreSQL itself.

-Neil



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] RSS date still broken

2004-11-25 Thread Gaetano Mendola
Hi all,
how I already wrote, the RSS feed report as date:
1 jan 1970 for all entries.

Regards
Gaetano Mendola
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] off Topic: Looking for a good sql query

2004-11-25 Thread Jamie Deppeler
Hi,
This post is a bit off topic: I'm looking for a good SQL book. Can someone
give me a recommendation?

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] row-level deadlock problem

2004-11-25 Thread Kamil Kaczkowski
Hello.
I'm running PostgreSQL 7.4.6 on Linux 2.4.21 (Red Hat Enterprise 3).
I have problems with deadlocks caused by (at least I think so) row-level
locks, and I can't find the reason.
First I thought this had something to do with FK constraints, but removing them didn't
change anything.
Here is simplified schema of my table:

CREATE TABLE stats (
counter integer,
color varchar(6),
shape varchar(6),
size integer,
d date
);

There are non-unique btree indexes on color,shape and size. There is no
primary key.

This table is modified in plpgsql function, launched like this:
# SELECT updatestats('red');
All statement run in auto-commit mode, there is no explicit BEGIN/COMMIT 
anywhere.
Function updatestats goes like this:

CREATE FUNCTION updatestats(text) RETURNS integer AS '
DECLARE
color_var ALIAS FOR $1;
BEGIN
UPDATE stats SET counter=counter+1 WHERE color=color_var AND shape IS 
NULL AND d=current_date;
IF NOT FOUND THEN
INSERT INTO stats (color,counter,d) 
VALUES(color_var,1,current_date);
END IF;
RETURN 1;
END;
' LANGUAGE plpgsql;

Everything is OK until the updatestats function is called frequently, ~3 times
per second.
Then I get following error:
postgres[2247]: [89-1] ERROR:  deadlock detected
postgres[2247]: [89-2] DETAIL:  Process 2247 waits for ShareLock on transaction 
148407635; blocked by process 2248.
postgres[2247]: [89-3] Process 2248 waits for ShareLock on transaction 
148407641; blocked by process 2247.
postgres[2247]: [89-4] CONTEXT:  PL/pgSQL function updatestats line 4 at SQL 
statement
Last query for both childs is the same:
UPDATE stats SET counter=counter+1 WHERE color=$1 AND shape IS NULL AND 
d=current_date
called from: SELECT updatestats('red');
It always locks at first UPDATE statement.

I don't understand where the deadlock possibility is in such a simple function. I
know that waiting for a share lock on a transaction means waiting for a row-level
lock acquired by that transaction. There's no explicit locking, no SELECT FOR
UPDATE statements, and all foreign-key constraints have been dropped.
The stats table is also modified by other functions, but I get deadlocks only for
statements calling updatestats, always two calls with the same 'color'
argument.
Am I missing something obvious? I have no idea what causes these deadlocks
or how to avoid them.
The number of deadlock events per day is so large that it looks like one
happens every time two updatestats calls run concurrently.
All suggestions are welcome; thanks in advance.
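For reference, one plausible culprit (an assumption based on the description, not confirmed from the log alone): since there is no unique constraint, duplicate rows can exist for the same (color, d) with shape NULL. The UPDATE then touches several rows, and two concurrent transactions may lock those rows in opposite orders. A minimal sketch of ruling that out (the index name is illustrative):

```sql
-- Partial unique index guaranteeing at most one row per (color, day)
-- with NULL shape; each UPDATE then locks at most one row, so two
-- concurrent updatestats('red') calls queue instead of deadlocking.
CREATE UNIQUE INDEX stats_color_day_uniq
    ON stats (color, d)
    WHERE shape IS NULL;
```

The INSERT branch of updatestats would then need to handle a possible unique-violation error when two sessions race to insert the first row for a key.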
--
Kamil Kaczkowski
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] pg_dump and languages

2004-11-25 Thread Adrian Klaver
I used the pg_dump from Postgres 8.0beta5 to dump the data from a version 
7.4.0 database. Both databases are located on SuSE Linux machines. The 
pg_restore to the 8.0 version went very well with the exception of the 
languages. I have plpgsql and plsh installed in the 7.4.0 database. pg_dump 
handled the dependencies for plpgsql alright but not for plsh. Searching the 
archives showed a previous problem with languages installed to pg_catalog 
which is where plsh is installed. However, I thought this had been solved. Is 
this correct? The other thought that occurred to me is that plsh is an 
untrusted language. I am dumping and restoring as user postgres, so my 
understanding is that this should overcome any permission issues. Again, am I 
correct in thinking this?
-- 
Adrian Klaver   
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] off Topic: Looking for a good sql query

2004-11-25 Thread Jim Seymour
Jamie Deppeler [EMAIL PROTECTED] wrote:
 
> Hi,
>
> This post is a bit off topic. I'm looking for a good SQL book; can someone
> give me a recommendation?

I was pleased with...

The Practical SQL Handbook
Third Edition
Judith S. Bowman, Sandra L. Emerson, & Marcy Darnovsky
Addison-Wesley Developers Press
A Division of Addison Wesley Longman, Inc.
ISBN: 0-201-44787-8 (softcover, incl. CD-ROM)

There's a 4th Edition out now.  Here's Bookpool's listing:

http://www.bookpool.com/.x/mrbet2p3z1/ss?qs=The+Practical+SQL+Handbook&x=0&y=0

I'm toying with the idea of selling, donating or trading-in my 3rd
edition and picking-up the 4th.

Jim

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] off Topic: Looking for a good sql query

2004-11-25 Thread felix
On Fri, Nov 26, 2004 at 01:19:59PM +1100, Jamie Deppeler wrote:

> This post is a bit off topic. I'm looking for a good SQL book; can someone
> give me a recommendation?

I am a middling SQL guy; I have used functions, triggers, outer joins,
etc., but not often. But I've been doing that for years :-)  My
personal favorite is SQL: Visual QuickStart Guide, Chris Fehily,
Peachpit Press, 2002, ISBN 0-201-11803-0.  It has lots of examples, it
remains steadily boring and consistent throughout, it goes from really
basic stuff to pretty fancy stuff, and most precious of all, it covers
five databases and tells you exactly how every single example differs
from db to db --

MS Access
MS SQL Server
MySQL
Oracle
PostgreSQL

If I only had one SQL book, this would be it.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] rules

2004-11-25 Thread Jamie Deppeler
This may be a bit of a stupid question, but it is the first time I have
played around with rules, and I am trying to convert an SQL statement into
a rule.

Sample SQL query:
INSERT INTO schema.table2
 (
field1,
field2,
field3,
field4,
field5,
field6,
field7,
field8,
field9,
)
VALUES
(
SELECT
table1.name,
table1.notes,
table1.hrs,
table1.days,
table3.value,
table31.value,
table4.ratename,
table4.maxhrs,
table4.appliesafter,
table4.rate,
table5.value,
table5.name
FROM
(inner joins)
Where
primary = number
)
which I would like to make into a rule, if possible.
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] PostgreSQL slow after VACUUM

2004-11-25 Thread Nikola Milutinovic
Hi all.
I have the following:
- Mandrake Linux 9.1
- PostgreSQL 7.3.2 MDK5
There is one DB and one DB user. The DB is cleared and reloaded with data
of the same volume each month (a monthly report). The volume is not small,
and loading usually takes 3 hours. Loading is done with SQL files that
use transactions, 10,000 SQL statements per transaction.

A couple of days ago, disk became full, since we were not doing VACUUM 
on the DB at all. So, I deleted all records from the 3 tables the DB has 
and performed VACUUM FULL ANALYZE. This reclaimed the space.

My problem is that the load is now taking (by my estimate) 20 times longer!
Is there anything I could do to find out what's going on? There is nothing
in the logs that I can see.
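A hedged guess at the usual cause of this symptom (not confirmed for this particular setup): on 7.x, VACUUM FULL compacts the table but does not shrink its indexes, so a mass delete can leave them badly bloated, and statistics gathered while the tables were empty can also mislead the planner during the reload. The conventional remedy is roughly:

```sql
-- Rebuild the indexes bloated by the mass delete, then refresh the
-- planner statistics once the data has been reloaded ("mytable" is a
-- placeholder for each of the three tables).
REINDEX TABLE mytable;
ANALYZE mytable;
```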

Nix.
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] rules

2004-11-25 Thread Daniel Martini
Hi,

Citing Jamie Deppeler [EMAIL PROTECTED]:
 sample sql querty
 
 INSERT INTO schema.table2
   (
  field1,
  field2,
  field3,
  field4,
  field5,
  field6,
  field7,
  field8,
  field9,
 )
 VALUES
 (
  SELECT
  table1.name,
  table1.notes,
  table1.hrs,
  table1.days,
  table3.value,
  table31.value,
  table4.ratename,
  table4.maxhrs,
  table4.appliesafter,
  table4.rate,
  table5.value,
  table5.name
  FROM
   (inner joins)
  Where
   primary = number
 )
 
 which i would like to make into a rule if possible

First of all, you should fix your query. Problems I see:
- You're selecting more values (12) than the fields you specified (9).
- On PostgreSQL you don't have to use the
  insert into foo (...) values (select ...)
  construct; you can instead write:
  insert into foo (...) select ...
For the rule part:
- Do you want it to happen on UPDATE, on DELETE, or on INSERT? You should
  be clear about that.
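For illustration, a minimal sketch of such a rule using the insert ... select form; every table, column, and rule name here is hypothetical, not taken from the original query:

```sql
-- Illustrative only: whenever a row is inserted into table1, also
-- insert a joined projection of it into table2.
CREATE RULE copy_to_table2 AS
    ON INSERT TO myschema.table1
    DO ALSO
        INSERT INTO myschema.table2 (field1, field2, field3)
        SELECT NEW.name, NEW.notes, t3.value
        FROM table3 t3
        WHERE t3.table1_id = NEW.id;
```

Note that inside the rule the SELECT can reference NEW directly, so a filter like the original "primary = number" becomes a condition against NEW.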

You might want to read Bruce Momjian's book PostgreSQL: Introduction
and Concepts (available in book stores or on the web; I leave it
up to you to google it or to order it). It's a really fine book for
beginners and helped me a lot in learning how to write rules and do
other things with PostgreSQL.

Regards,
Daniel

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])