Re: [GENERAL] how to speed up query

2007-06-13 Thread Martijn van Oosterhout
On Mon, Jun 11, 2007 at 03:01:08PM +0300, Andrus wrote:
   delete from firma1.rid where dokumnr not in (select dokumnr from
 firma1.dok)
 
 Yes, it is nonsensical. However, this command should run fast even if it is 
 nonsensical.

For future reference, I believe the problem is the NOT IN. It has this
feature where if any of the rows it searches has a NULL, the predicate
evaluates to NULL (i.e. not true) for *all* rows. So the whole table has
to be scanned to check that there aren't any NULLs before a single row
can be returned. This is why it can't be converted to a join.

Now, you may argue that in your case this doesn't apply, which may be
true, but it's always been a difficult construct to optimise... (and
somewhat surprising for people who didn't realise the
null effect). The most efficient way to write this is with an OUTER
JOIN.
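The null effect described above is easy to demonstrate in psql (a minimal sketch; no tables needed):

```sql
-- A NULL anywhere in the subquery makes "x NOT IN (...)" evaluate to
-- NULL (not true) for every x, so the WHERE clause rejects all rows.
SELECT 1 WHERE 1 NOT IN (SELECT 2);         -- returns one row
SELECT 1 WHERE 1 NOT IN (SELECT NULL::int); -- returns no rows
```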

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




[GENERAL] Use of PROFILE in Makefiles

2007-06-13 Thread Mayuresh Nirhali

Hello,

I could not build pg8.2.4 on Solaris x86 because the variable PROFILE 
was set in my environment to a desktop profile script.

The Postgres makefiles (Makefile.global) append the value of PROFILE, if 
it is set, to CFLAGS.

I was wondering if there is any particular reason for having PROFILE 
appended to CFLAGS.
If this setting is platform-specific, would it help to move it into the 
platform-specific makefiles?



Thanks
Mayuresh


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Plperl create contstraint trigger

2007-06-13 Thread Chris Coleman
Hi,

I have written a trigger in plperl (I could not use PL/pgSQL due to the
nature of the trigger).  I know CREATE CONSTRAINT TRIGGER is not
intended for general use, but it is pretty important that this trigger
run on commit of the transaction, as it needs to use data in other
tables that can only be considered final at commit.

When I run the trigger then this error appears in the logs:

postgres FATAL:  BeginInternalSubTransaction: unexpected state END

It is caused by the first call to the spi_exec_query($sql) function.
Would I be correct in assuming that plperl language triggers are not
capable of running from a CREATE CONSTRAINT TRIGGER command?

When the trigger is set up as a normal per row trigger then it executes
without any problems.
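For reference, the construct in question looks like this (a hedged sketch with illustrative table and function names; as noted above, CREATE CONSTRAINT TRIGGER is not intended for general use):

```sql
-- Illustrative: a deferred constraint trigger whose function runs at commit.
CREATE CONSTRAINT TRIGGER check_order_totals
    AFTER INSERT OR UPDATE ON orders
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW
    EXECUTE PROCEDURE check_order_totals_trigger();
```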

Many thanks
Chris


Chris Coleman
Programmer 
Information Systems
Room PKL1 Phone 369




[GENERAL] sql server to postgreSQL

2007-06-13 Thread Ashish Karalkar
Hello All,

I have a long list of SQL Server queries that need to be remoulded into 
Postgres format.

Can anyone please suggest a tool (other than SwisSQL - SQLOne Console 3.0) 
that will convert SQL Server queries into PostgreSQL queries?



Thanks in advance

With regards
Ashish Karalkar

Re: [GENERAL] PostGreSQL for a small Desktop Application

2007-06-13 Thread Magnus Hagander
On Mon, Jun 11, 2007 at 10:44:38AM -0700, Gabriele wrote:
 I'm going to develop a medium sized business desktop client server
 application which will be deployed mostly on small sized networks and
 later eventually, hopefully, on medium sized networks.
 It will probably be developed using C#.
 
 I do need a solid DBMS which can work with the .Net framework. I do know
 PostgreSQL is a good DBMS in general (it sports most of the advanced
 DBMS features, transactions and stored procedures included) but I
 wonder if it is suited for my application.
 
 The knowledge base of my users is very low and servers will be standard
 class desktop computers, most probably running Windows XP (and Vista
 later on, I suspect).
 The service should be lightweight enough to run on such a server,
 and I need silent installation and configuration because I can't
 expect my users to be able to configure a DBMS.

Silent installation is available and works fine. There will be a bit of
overhead, especially in disk usage, compared to an embedded database. But
it's much better than most commercial ones, like MSDE.
See http://pginstaller.projects.postgresql.org/silent.html.


 Additionally I need a passable-to-good data provider to interface
 PostgreSQL with .Net, ideally providing better performance than
 ODBC (I don't know if one exists, and I hope it is free).

npgsql (http://pgfoundry.org/projects/npgsql) works very well for .net.


 Anyway, performance shouldn't be a big issue; I expect a low concurrency
 level (fewer than 10 users) and a low-to-medium volume of rows and
 queries. If more users and more data are needed for an especially big
 customer, I can simply suggest a bigger, dedicated server. (Different
 problems will arise for the aggregated data which will feed the web
 application, but for that we will have a real server.)
 
 Is PostgreSQL suited for such use? If not, which alternatives are there?
 When using PostgreSQL in such a way, are there any suggestions to be
 followed? Links to sources which I may find interesting (how to make a
 silent install, basic hardware requirements, and so on)?

Your other option would be to use an embedded database like SQLite. It has
a much smaller footprint, but is of course also a lot less capable. But it
could be worthwhile to check it out.

As for suggestions, well, don't forget to run autovacuum and to schedule
your backups properly. You can certainly not expect your users to do that
:) Also, plan and test in advance a method for upgrading the installations
- you'll always want to be at the latest release in a branch as soon as
possible after the release.
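A sketch of the autovacuum advice above for 8.2 (setting names from the stock postgresql.conf; on 8.2 the stats collector entries are prerequisites for autovacuum to see row-level activity):

```
# postgresql.conf (8.2) — minimal settings to run autovacuum
stats_start_collector = on
stats_row_level = on
autovacuum = on
```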

//Magnus



[GENERAL] setof or array as input parameter to postgresql 8.2 functions

2007-06-13 Thread Jyoti Seth
Hi,

 

I have to pass a set of values and arrays to PostgreSQL 8.2 functions, but I
have not found any help on how to do it. Please let me know if anyone has an
idea.

 

Thanks,

Jyoti 



Re: [GENERAL] Apparent Wraparound?

2007-06-13 Thread g . hintermayer
On Jun 8, 3:23 pm, [EMAIL PROTECTED] (Alvaro Herrera) wrote:
 Gunther Mayer wrote:
  Hi there,

  I just found the following message in my logs:

  Jun  8 10:38:38 caligula postgres[56868]: [1-1] : LOG:  could not
  truncate directory pg_subtrans: apparent wraparound

  Should I be worried or can I just ignore this one? My database is still
  small (a pg_dumpall bzipped is still around 500KB) so I doubt that I'm
  affected by any transaction id wraparound problems. I also vacuum
  analyze once a day and have pg_autovacuum turned on.

 What version are you running?  This seems to match the description of a
 bug fixed for 8.2 and 8.1.5:


I noticed the same message in my logfiles (once on each of two
servers). I'm running 8.1.8, and the server's been running flawlessly
for about 2 months.

Gerhard




Re: [GENERAL] [SQL] setof or array as input parameter to postgresql 8.2 functions

2007-06-13 Thread Pavel Stehule

Hello

maybe:

create function foo(varchar[][]) returns void as $$ begin end$$
language plpgsql;

postgres=# select foo(array[array[1,2], array[2,2]]::varchar[][]);
 foo 
-----
 
(1 row)

Regards
Pavel Stehule

2007/6/13, Jyoti Seth [EMAIL PROTECTED]:





Hi,



I have to pass a set of values and arrays to PostgreSQL 8.2 functions, but I
have not found any help on how to do it. Please let me know if anyone has an
idea.



Thanks,

Jyoti




[GENERAL] pg_xlog - files are guaranteed to be sequentialy named?

2007-06-13 Thread Johannes Konert

Hi pgsql-list-members,
I am currently writing a small script that deletes outdated xlog files from 
my backup location.
Because I do not want to rely on the creation date, I found it useful to use 
the result of

ls | sort -g -r

Thus the newest WAL xlog file is at the top and I can delete the files that 
are no longer needed at the bottom of the list.


My question: Is it guaranteed in ALL cases that the naming of the 
WAL files in $PGDATA/pg_xlog always produces a higher number for a 
newer file?

What happens if the 24 hex digits reach their upper bound?

Thank you for your replies on this aspect of PostgreSQL's inner workings.
Regards Johannes



Re: [GENERAL] setof or array as input parameter to postgresql 8.2 functions

2007-06-13 Thread Albe Laurenz
 I have to pass a set of values and arrays to PostgreSQL 8.2 
 functions, but I have not found any help on how to do it. Please 
 let me know if anyone has an idea.

Something like this?

CREATE OR REPLACE FUNCTION sample(avalue integer, anarray text[])
   RETURNS void LANGUAGE plpgsql STRICT AS
$$DECLARE
   i integer;
BEGIN
   -- I can access the value
   i := avalue;
   -- I can access the array
   i := anarray[2];
END;$$;

Called like

SELECT sample(3, ARRAY['1', '2', '3']);

Yours,
Laurenz Albe



Re: [GENERAL] pg_xlog - files are guaranteed to be sequentialy named?

2007-06-13 Thread Alvaro Herrera
Johannes Konert wrote:
 Hi pgsql-list-members,
 I currently write a small script that deletes outdated xlog-files from 
 my backup-location.
 Because I do not want to rely on creation-date, I found it usable to use 
 the result of
 ls | sort -g -r
 Thus the newest WAL xlog-file is on top and I can delete all not needed 
 files at the bottom of the list.

Warning, this is NOT SAFE to do.  You should NEVER delete outdated
xlog files, unless you appreciate RANDOM CORRUPTION of your data.


Not sure how those caps sneaked in there, sorry about that.

Have a nice day,

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] psql : Error: Cannot stat /pgdata/8.2/main

2007-06-13 Thread Tom Lane
Joost Kraaijeveld [EMAIL PROTECTED] writes:
 I have moved my database files from their default location to their own
 partition on with their own controller and disks. PostgreSQL works OK
 and I can connect with Pgadmin (Debian Lenny AMD64, PostgreSQL 8.2.4).

 When I want to connect with psql however (with a non-root account) I get
 the following:

 panoramix:~$ psql -d my_database
 Error: Cannot stat /pgdata/8.2/main

 /pgdata/8.2/main is the location where the database files are actually
 located.

psql itself has no business touching the database directory, and a quick
search of the source code shows no instance of Cannot stat anywhere in
released PG sources.

I think you are being burnt by some misbehavior of Debian's wrapper
patches, and a complaint directed there is the next step for you.

regards, tom lane



Re: [GENERAL] Use of PROFILE in Makefiles

2007-06-13 Thread Tom Lane
Mayuresh Nirhali [EMAIL PROTECTED] writes:
 I was wondering if there is any particular reason for having PROFILE 
 appended to CFLAGS.

The usual use-case is to build a profilable backend with
make PROFILE=-pg all

regards, tom lane



[GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Kevin Hunter

Hello List,

Short version:  I want pointers to feature comparisons of Postgres vs  
Oracle.  Can the list help?


Long version:

I'm working with a student on a project for school.  I'm trying to  
teach the right methods of thinking and doing things, such as making  
the database/data model the authoritative source rather than adding  
code to the application layer.


I originally had him code his project for Postgres, but for reasons  
beyond our control we've had to move to Oracle.  In designing the  
schema we need a constraint that checks values in other  
tables.  The way that I currently know how to do this in Postgres is  
with PL/pgSQL functions.  Then I add something like

CONSTRAINT away_team_is_playing CHECK ( NOT teamIsPlaying( awayteamid, timeid ) )


to the table schema.  No big deal, except that it seems Oracle can't  
use anything other than a simple column constraint.  He can't use any  
custom functions like he could in Postgres, and we've yet to find a  
solution to do what he needs.
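For comparison, a hedged sketch of the Postgres side — the constraint above is from the original message, but the `games` table, its columns, and the function body are assumptions for illustration:

```sql
-- Assumed schedule table; returns true when the given team already
-- has a game at the given time.
CREATE OR REPLACE FUNCTION teamIsPlaying(p_teamid integer, p_timeid integer)
RETURNS boolean LANGUAGE plpgsql STABLE AS $$
BEGIN
    RETURN EXISTS (
        SELECT 1
          FROM games g
         WHERE (g.hometeamid = p_teamid OR g.awayteamid = p_teamid)
           AND g.timeid = p_timeid);
END;
$$;
```

Note that a CHECK constraint calling such a function is only re-checked when the constrained row itself changes, so under concurrent updates it is weaker than a true cross-table constraint.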


I didn't immediately find anything last night on the postgresql.org  
website, or a wider Google search.


So, motivation aside, what I'm wanting is a couple of pointers to  
feature comparisons of Postgres vs Oracle.  What else is going to  
bite him while he works on this project?  Would be handy to have this  
reference since neither of us is really a DB wizard.  (Besides!   
Isn't it good to tout what Postgres does better than its  
competition? :-) )


Thanks,

Kevin



Re: [GENERAL] Plperl create contstraint trigger

2007-06-13 Thread Tom Lane
Chris Coleman [EMAIL PROTECTED] writes:
 When I run the trigger then this error appears in the logs:
 FATAL:  BeginInternalSubTransaction: unexpected state END

See
http://archives.postgresql.org/pgsql-committers/2007-05/msg00383.php

regards, tom lane



Re: [GENERAL] PostGreSQL for a small Desktop Application

2007-06-13 Thread Owen Hartnett

At 12:34 PM +0200 6/13/07, Magnus Hagander wrote:

[snip]


I'm in almost exactly the original poster's boat.  I introduced 
PostgreSQL into a rewrite of an old application that ran with an 
Access backend, and I'm certainly glad I moved up.


There are a couple of .Net interfaces.  The OLE one is OK, but seemed 
a lot slower than Npgsql, which really gives pretty good performance, 
at least for my usage.  I've used the ODBC interface with Crystal 
Reports, but not enough to measure its performance.  I'd 
recommend Npgsql.


Right now my backend is running on a Macintosh Xserve, which does a 
backup cron task every night, and rsyncs the backups over to another 
offsite location.  Since my database isn't that big (~30MB), this 
isn't a big deal.


I've run Postgres as a server on both Windows and Mac, and it seems 
to put a bigger strain on Windows performance than on Mac 
performance (probably because the Mac's multi-tasking scheduler 
suits it better).  I'll probably use Windows as the installed 
backend, but I might just drop in a Mac Mini if that's problematic 
(not because of Windows, but because of getting space on a client's 
server).


-Owen



[GENERAL] Regular expressions in procs

2007-06-13 Thread Steve Manes
I apologize if I'm having a rookie brain block, but is there a way to 
massage a string inside a proc to, for instance, strip it of all 
non-alpha characters using a regular expression?




Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Rodrigo De León
On Jun 13, 8:57 am, [EMAIL PROTECTED] (Kevin Hunter) wrote:
 So, motivation aside, what I'm wanting is a couple of pointers to
 feature comparisons of Postgres vs Oracle.  What else is going to
 bite him while he works on this project?  Would be handy to have this
 reference since neither of us is really a DB wizard.  (Besides!
 Isn't it good to tout what Postgres does better than its
 competition? :-) )

This might help a bit on the SQL side:

Comparison of different SQL implementations
http://troels.arvin.dk/db/rdbms/




Re: [GENERAL] psql : Error: Cannot stat /pgdata/8.2/main

2007-06-13 Thread Leif B. Kristensen
On Wednesday 13. June 2007 15:45, Tom Lane wrote:
psql itself has no business touching the database directory, and a
 quick search of the source code shows no instance of Cannot stat
 anywhere in released PG sources.

I think you are being burnt by some misbehavior of Debian's wrapper
patches, and a complaint directed there is the next step for you.

FWIW, 'Cannot stat' usually comes from the cp command, and indicates 
that it can't find the source file.
-- 
Leif Biberg Kristensen | Registered Linux User #338009
http://solumslekt.org/ | Cruising with Gentoo/KDE
My Jazz Jukebox: http://www.last.fm/user/leifbk/



Re: [GENERAL] Regular expressions in procs

2007-06-13 Thread Rodrigo De León
On Jun 13, 9:02 am, [EMAIL PROTECTED] (Steve Manes) wrote:
 I apologize if I'm having a rookie brain block, but is there a way to
 massage a string inside a proc to, for instance, strip it of all
 non-alpha characters using a regular expression?

regexp_replace() could work for you, see:
http://www.postgresql.org/docs/8.2/static/functions-string.html
http://www.postgresql.org/docs/8.2/static/functions-matching.html
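For instance (a minimal sketch; the fourth argument 'g' makes the replacement global, and the same expression can be used in a PL/pgSQL assignment):

```sql
-- Strip everything that is not a letter.
SELECT regexp_replace('abc-123_def', '[^a-zA-Z]', '', 'g');
-- yields: abcdef
```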




Re: [GENERAL] pg_xlog - files are guaranteed to be sequentialy named?

2007-06-13 Thread Scott Marlowe

Alvaro Herrera wrote:

Johannes Konert wrote:
  

Hi pgsql-list-members,
I currently write a small script that deletes outdated xlog-files from 
my backup-location.
Because I do not want to rely on creation-date, I found it usable to use 
the result of

ls | sort -g -r
Thus the newest WAL xlog-file is on top and I can delete all not needed 
files at the bottom of the list.



Warning, this is NOT SAFE to do.  You should NEVER delete outdated
xlog files, unless you appreciate RANDOM CORRUPTION of your data.
  
I think he's talking about deleting pg_xlog files that are being used 
for PITR from the backup machine after they've been applied.

But I'm not sure whether that's really what he meant.



Re: [GENERAL] pg_xlog - files are guaranteed to be sequentialy named?

2007-06-13 Thread Greg Smith

On Wed, 13 Jun 2007, Alvaro Herrera wrote:


Johannes Konert wrote:

I currently write a small script that deletes outdated xlog-files from
my backup-location.


Warning, this is NOT SAFE to do.  You should NEVER delete outdated
xlog files, unless you appreciate RANDOM CORRUPTION of your data.


He's talking about wiping out the ones on the backup server, so I think 
Johannes means erasing the old archived logs on the secondary here.  That 
can screw up your backup if you do it wrong, but it's not an all-caps 
worthy mistake.


On Wed, 13 Jun 2007, Johannes Konert wrote:

Because I do not want to rely on creation-date,


No, you want to rely on creation date, because then this problem goes 
away.  The idea you should be working toward is that you identify when 
your last base backup was started after it's copied to the secondary, and 
then you can safely delete any archived log files on the secondary from 
before that time.  Instead of doing ls | sort -g -r you should be doing 
something like looping over the files in a bash shell script and using

[ -ot first xlog in base backup ] to determine which files to delete.
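The `-ot` approach can be sketched like this (a hedged sketch: the directory and segment names are made up here, and a real script would compare against the first segment referenced by the last base backup's `.backup` file):

```shell
#!/bin/bash
# Sketch: delete archived WAL segments strictly older (by mtime) than the
# first segment needed by the last base backup. All names are illustrative.
ARCHIVE=$(mktemp -d)

# Simulate three archived segments with increasing modification times.
touch -t 200706010000 "$ARCHIVE/000000010000000000000001"
touch -t 200706020000 "$ARCHIVE/000000010000000000000002"
touch -t 200706030000 "$ARCHIVE/000000010000000000000003"

# Assume segment ...02 is the first xlog of the last base backup.
BASE="$ARCHIVE/000000010000000000000002"

for f in "$ARCHIVE"/*; do
    # [ file1 -ot file2 ] is true when file1 is older than file2.
    if [ "$f" -ot "$BASE" ]; then
        rm "$f"
    fi
done

ls "$ARCHIVE"   # only the base-backup segment and newer ones remain
```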

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Greg Smith
http://troels.arvin.dk/db/rdbms/ is where I go when I have to figure out 
how to cope with someone's MySQL mess [this week:  it lets you put an 
arbitrary integer into a boolean column?  seriously?]; it's also handy for 
comparing against Oracle.


There is a helpful table 
http://www-css.fnal.gov/dsg/external/freeware/mysql-vs-pgsql.html I refer 
to sometimes.  It's from March of 2005 so several pieces are out of date.


Kevin Kline's SQL in a Nutshell also has some helpful suggestions on 
syntax differences between the major SQL dialects, but it's even older.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] PostGreSQL for a small Desktop Application

2007-06-13 Thread David Gardner
I'm not much of a .Net guy, but the pgsql-ODBC driver works rather well for me. 
Take a look at:
http://psqlodbc.projects.postgresql.org/howto-csharp.html

As for Windows XP, isn't there some limit to the number of incoming network 
connections?

---
David Gardner, IT
The Yucaipa Companies
(310) 228-2855

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Gabriele
Sent: Monday, June 11, 2007 10:45 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] PostGreSQL for a small Desktop Application

[snip]




Re: [GENERAL] pg_xlog - files are guaranteed to be sequentialy named?

2007-06-13 Thread Johannes Konert

Greg Smith wrote:
He's talking about wiping out the ones on the backup server, so I 
think Johannes means erasing the old archived logs on the secondary 
here.  That can screw up your backup if you do it wrong, but it's not 
an all-caps worthy mistake.
yes, that's what I am talking about, related to 
http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html.

Sorry if that did not come out clearly enough.


On Wed, 13 Jun 2007, Johannes Konert wrote:

Because I do not want to rely on creation-date,


No, you want to rely on creation date, because then this problem goes 
away. 
True enough... if I can guarantee that the file dates of my archived 
WAL files have proper timestamps. If the timestamps get messed up and 
all end up the same (due to a Windows copy or something else foolish), 
then the delete script might delete the wrong files...
The idea you should be working toward is that you identify when your 
last base backup was started after it's copied to the secondary, and 
then you can safely delete any archived logs file on the secondary 
from before that time.  Instead of doing ls | sort -g -r you should 
be doing something like looping over the files in a bash shell script 
and using

[ -ot first xlog in base backup ] to determine which files to delete.

right; but as I said, then I rely on file dates.
But during the day I came up with a solution: I store the WAL files 
with the timestamp of archiving in their file names. That way I can 
order and delete them safely.
Your hint was the one that helped me find that solution - so thanks 
for that, Greg, and the others.
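One way to sketch that idea (the path is an assumption; archive_command is run through the shell, so command substitution works, and %p/%f are the standard placeholders for the segment's path and file name):

```
# postgresql.conf — prefix each archived segment with the epoch time of
# archiving, so archived names sort in archive order regardless of WAL naming
archive_command = 'cp %p /mnt/backup/wal/$(date +%s)_%f'
```

With names like this the sort key is the archiving time rather than the file's mtime, so later copies or moves cannot disturb the ordering.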


Regards,
Johannes




Re: [GENERAL] pg_xlog - files are guaranteed to be sequentialy named?

2007-06-13 Thread Johannes Konert

Johannes Konert wrote:
But during the day I came up with a solution: I store the WAL files 
with the timestamp of archiving in their file names. That way I can 
order and delete them safely.
Your hint was the one that helped me find that solution - so 
thanks for that, Greg, and the others. 
That solution still has a problem: it works fine in the case that the 
WAL naming wraps around and restarts, because the timestamp attached 
to the name still makes it possible to identify the file as the newer 
one, but there remains the problem of shifts in time itself.
If someone sets the server's clock back to a time before the current 
time (e.g. two hours back), then newer WAL files will have older 
timestamps and may be deleted by accident.


Thus now I let the archived file names grow without bound, with the 
last 24 characters being the WAL file name. That way the archived 
file names ~always~ increase, and all backup files from before the 
last base backup can be safely identified without relying on computer 
timestamps or risking a restart in naming by PostgreSQL.
I hope this solution's only limit is the 255-character restriction on 
file-name length... but if that is ever reached I am sure longer names 
will be possible :)


Regards Johannes




Re: [GENERAL] PostGreSQL for a small Desktop Application

2007-06-13 Thread Gabriele
I will probably try PostgreSQL for my needs, as it seems powerful enough
and easy to ship to my potential customers.
For the data provider I will try Npgsql, as you and others suggested;
it seems reliable and stable enough to be used efficiently.

I will probably use the newsgroups again later if and when I have
more specific issues; in the meanwhile I'll go back to the links you
have provided and read the FAQs for pgsql and npgsql.

Thanks everybody for your help and suggestions.







Re: [GENERAL] how to speed up query

2007-06-13 Thread Andrus

  delete from firma1.rid where dokumnr not in (select dokumnr from
firma1.dok)



For future reference, I believe the problem is the NOT IN. It has this
feature where if any of the rows it searches has a NULL, the predicate
evaluates to NULL (i.e. not true) for *all* rows. So the whole table has
to be scanned to check that there aren't any NULLs before a single row
can be returned.
This is why it can't be converted to a join.


Thank you.
As I understand it, the only way to optimize the statement

delete from firma1.rid where dokumnr not in (select dokumnr from
firma1.dok);

assuming that firma1.dok.dokumnr does not contain null values, is to change
it to

CREATE TEMP TABLE mydel AS
SELECT r.dokumnr
FROM rid r
LEFT JOIN dok d USING (dokumnr)
WHERE d.dokumnr IS NULL;
DELETE FROM rid USING mydel WHERE rid.dokumnr =mydel.dokumnr;
drop table mydel;


I run the following commands in my script (the first number is minutes from
script start):

18 Duration 2,9 minutes: ALTER TABLE dok ADD PRIMARY KEY (dokumnr)
...
81 Duration 9,6 minutes:  CREATE INDEX rid_dokumnr_idx ON rid (dokumnr)
...
101 Duration 10,5 minutes:  analyze
...
113  Duration 11 minutes: CREATE TEMP TABLE mydel AS
SELECT r.dokumnr
FROM rid r
LEFT JOIN dok d USING (dokumnr)
WHERE d.dokumnr IS NULL

122 Duration 9,6 minutes:  DELETE FROM rid USING mydel WHERE rid.dokumnr 
=mydel.dokumnr


133 Duration 11 minutes:  ALTER TABLE rid ADD FOREIGN KEY (dokumnr) 
REFERENCES dok

 ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE

When I run command

CREATE TEMP TABLE mydel AS
SELECT r.dokumnr
FROM rid r
LEFT JOIN dok d USING (dokumnr)
WHERE d.dokumnr IS NULL

from pgAdmin, it takes 1 second.
When I run this command from script it takes 11 minutes!

Any idea why running this command from script takes 11 minutes?

I have created
indexes on dok and rid and ran analyze before using CREATE TEMP TABLE

So I expect that the CREATE TEMP TABLE command should take the same time to run
from the script as from pgAdmin.

My script is running in a single transaction.
Should I use commit after index creation or after ANALYZE command?


In pgAdmin

explain analyze  SELECT r.dokumnr
FROM rid r
LEFT JOIN dok d USING (dokumnr)
WHERE d.dokumnr IS NULL

returns

Hash Left Join  (cost=7760.27..31738.02 rows=1 width=4) (actual
time=2520.904..2520.904 rows=0 loops=1)
  Hash Cond: (r.dokumnr = d.dokumnr)
  Filter: (d.dokumnr IS NULL)
  -  Seq Scan on rid r  (cost=0.00..17424.24 rows=202424 width=4) (actual
time=0.032..352.225 rows=202421 loops=1)
  -  Hash  (cost=6785.01..6785.01 rows=56101 width=4) (actual
time=211.150..211.150 rows=56079 loops=1)
-  Seq Scan on dok d  (cost=0.00..6785.01 rows=56101 width=4)
(actual time=0.021..147.805 rows=56079 loops=1)
Total runtime: 2521.091 ms

Andrus. 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] pg_xlog - files are guaranteed to be sequentialy named?

2007-06-13 Thread Greg Smith

On Wed, 13 Jun 2007, Johannes Konert wrote:

If someone corrects the servers computer-time/date to a date before current 
time (e.g. set the clock two hours back), then the newer WAL files will have 
an older timestamp and will be deleted by accident.


This should never happen; no one should ever touch the clock by hand on a 
production system.  The primary and backup server should both be 
synchronized via NTP.  If you're thinking about clock changes for daylight 
savings time, those shouldn't have any effect on timestamps, which should 
be stored in UTC.  If you're on Windows, I recommend reading 
http://searchwinit.techtarget.com/tip/0,289483,sid1_gci1241193,00.html and 
http://www.wilsonmar.com/1clocks.htm if you're not familiar with how 
UTC/NTP insulate you from this issue.  On many types of systems that 
process time-sensitive data, an administrator adjusting the clock manually 
is considered a dangerous event that is specifically scheduled so issues 
like you're concerned about don't happen--and someone who tinkers with the 
clock without following that procedure would be in serious trouble.


You're working hard to worry about problems that should be eliminated by 
the overall design of your system.  If you can't trust your system clocks 
and that files are being copied with their attributes intact, you should 
consider thinking about how to resolve those problems rather than working 
around them.  It's not just PostgreSQL that will suffer from weird, 
unpredictable behavior in a broken environment like that.  Giving a 
Windows example, if you're running in a Windows Domain configuration, if 
the client time drifts too far from the server you can get The system 
cannot log you on due to the following error:  There is a time difference 
between the Client and Server. when trying to login.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



[GENERAL] changing the /tmp/ lock file?

2007-06-13 Thread Ben
I'm trying to implement an automatic failover system, and am running into 
the problem that when I try to start multiple postgres clusters on the 
same box, postgres will not start if /tmp/.s.PGSQL.5432.lock exists. Can I 
change the file it's looking for via an option? Nothing seemed obvious 
from a quick review of the docs, short of (presumably) changing the port 
number, which I don't want to do.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] how to speed up query

2007-06-13 Thread Tom Lane
Andrus [EMAIL PROTECTED] writes:
 from pgAdmin, it takes 1 second.
 When I run this command from script it takes 11 minutes!

 Any idea why running this command from script takes 11 minutes?

Different plans maybe?  Try EXPLAIN ANALYZE in both cases.
Do you have work_mem set the same in both cases?

 My script is running in a single transaction.
 Should I use commit after index creation or after ANALYZE command?

Hmm, there are some extra cycles involved in examining not-yet-committed
tuples, but I hardly see how that would create a discrepancy of this
size.  Check the plans first.

regards, tom lane



Re: [GENERAL] changing the /tmp/ lock file?

2007-06-13 Thread Tom Lane
Ben [EMAIL PROTECTED] writes:
 I'm trying to implement an automatic failover system, and am running into 
 the problem that when I try to start multiple postgres clusters on the 
 same box, postgres will not start if /tmp/.s.PGSQL.5432.lock exists. Can I 
 change the file it's looking for via an option? Nothing seemed obvious 
 from a quick review of the docs, short of (presumably) changing the port 
 number, which I don't want to do.

That lock file exists specifically to keep you from doing that (ie,
having more than one postmaster listening on the same port).  Trying
to defeat the lock is not a good idea.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] changing the /tmp/ lock file?

2007-06-13 Thread Greg Smith

On Wed, 13 Jun 2007, Ben wrote:

when I try to start multiple postgres clusters on the same box, postgres 
will not start if /tmp/.s.PGSQL.5432.lock exists.


There can only be one program listening on a specific port at a time.  If 
you want multiple clusters on the same box, each of them has to be given 
their own port number.
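The underlying OS rule here - one listener per address/port pair - can be seen with a few lines of Python. This is only an illustration with plain TCP sockets, not PostgreSQL itself:

```python
import socket

# First "server" grabs an ephemeral port and listens:
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))
port = s1.getsockname()[1]
s1.listen(1)

# A second listener on the same address and port is refused by the OS:
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))
    s2.listen(1)
    second_listener_ok = True
except OSError:  # EADDRINUSE
    second_listener_ok = False

print(second_listener_ok)  # False
s1.close(); s2.close()
```

PostgreSQL's lock file simply enforces the same exclusivity earlier and more visibly, before a second postmaster gets as far as a failed bind().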


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] pg_xlog - files are guaranteed to be sequentialy named?

2007-06-13 Thread Frank Wittig
Hello Johannes,

Johannes Konert schrieb:
 Thus the newest WAL xlog-file is on top and I can delete all not needed
 files at the bottom of the list.

You're using pg_controldata to figure out which file's serial is older
than the latest redo checkpoint.
In case of restart of the slave server PgSQL needs all files that were
archived beginning with the one right after the latest redo checkpoint
or it will fail to sync to its master.


 What happens if the 24hexdigits reach upper bound?

Did you calculate your question? I assume not.

24 hex digits means 24^16 unique file names. Assuming your server saves
a WAL file each second (you should review your config if it does), it
takes (24^16)/(60*60*24*365)=3.84214066×10^14 years to reach the upper
bound. (Please forgive me for ignoring leap years ;))
I assume that there will be a system change before that date, so counting
will start over again. ;)

Greetings,
Frank Wittig



signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] changing the /tmp/ lock file?

2007-06-13 Thread Ben
Why would that be a problem if each is configured to listen on different 
addresses?


But maybe a better question to ask would be how people are doing failover 
in the case where you have two servers, each handling a separate set of 
data and acting as backup for each other. I fully expect things to go 
slower during failover periods, but in my case, that's better than 
doubling my hardware.


On Wed, 13 Jun 2007, Tom Lane wrote:


Ben [EMAIL PROTECTED] writes:

I'm trying to implement an automatic failover system, and am running into
the problem that when I try to start multiple postgres clusters on the
same box, postgres will not start if /tmp/.s.PGSQL.5432.lock exists. Can I
change the file it's looking for via an option? Nothing seemed obvious
from a quick review of the docs, short of (presumably) changing the port
number, which I don't want to do.


That lock file exists specifically to keep you from doing that (ie,
having more than one postmaster listening on the same port).  Trying
to defeat the lock is not a good idea.

regards, tom lane





[GENERAL] recursive function

2007-06-13 Thread Karen Springer

Hi,

I am struggling to write my first recursive function and think I'm 
missing something basic.  I have written 2 functions that work if I pass 
over one parameter, but if I try to pass over a series of parameters say 
in a view for every field in the table it seems to run but never 
displays data.


I have a table of built units.

tbl_BuiltAssemblies
ParentBarCode varchar(12)
ChildBarCode varchar(12)

I need to find the end ParentBarCode (Top Level) for each child.  So if 
I have


Parent   Child
12
23
24
35

If I feed the function child 5, 4, 3, or 2, I need to get parent 1.  
Since these are built units, each child can only be in one parent.


So far I've written this function which works great when passing over 
one parameter.


CREATE OR REPLACE FUNCTION 
Production_Tracking.GetTopLevelParent_WithView(varchar)

RETURNS TEXT AS '

DECLARE
   childBarCode ALIAS FOR $1;
   parentBarCode TEXT;
   topLevelParentBarCode TEXT;
BEGIN

   SELECT INTO parentBarCode
  tbl_BuiltAssemblies.ParentBarCode
   FROM Production_Tracking.tbl_BuiltAssemblies
   WHERE tbl_BuiltAssemblies.ChildBarCode = childBarCode;

   topLevelParentBarCode = parentBarCode;

   WHILE FOUND LOOP
   SELECT INTO parentBarCode
  tbl_BuiltAssemblies.ParentBarCode
   FROM Production_Tracking.tbl_BuiltAssemblies
   WHERE tbl_BuiltAssemblies.ChildBarCode = parentBarCode;
   IF NOT(parentBarCode IS NULL) THEN
   topLevelParentBarCode = parentBarCode;
   END IF;
   END LOOP;

   RETURN (topLevelParentBarCode)::TEXT;

END;
' LANGUAGE 'plpgsql';

I have also written this too which again works great if I pass over one 
parameter.  (I would add a Level field to this and get the max level 
eventually, but I don't want to spend more time on it until I know I'm 
on the right track.)


CREATE OR REPLACE FUNCTION 
Production_Tracking.GetTopLevelParent_WithView_1(varchar)

RETURNS SETOF Production_Tracking.cattree AS'

DECLARE
   childbarcode ALIAS FOR $1;
   parentbarcode Production_Tracking.cattree%ROWTYPE;
   toplevelparentbarcode Production_Tracking.cattree%ROWTYPE;
BEGIN

   FOR parentbarcode IN SELECT ParentBarCode, childbarcode AS 
InitialChild
   FROM Production_Tracking.tbl_BuiltAssemblies WHERE 
tbl_BuiltAssemblies.ChildBarCode = childbarcode LOOP


   RETURN NEXT parentbarcode;

   FOR toplevelparentbarcode IN SELECT ParentBarCode, 
childbarcode AS InitialChild
   FROM 
Production_Tracking.GetTopLevelParent_WithView_1(parentbarcode.ParentBarCode) 
LOOP


   RETURN NEXT toplevelparentbarcode;

   END LOOP;
  
   END LOOP;


   RETURN;

END;
' LANGUAGE 'plpgsql';

Here are examples of the views I've tried

SELECT tbl_BuiltAssemblies.ChildBarCode,
  
Production_Tracking.GetTopLevelParent_WithView(tbl_BuiltAssemblies.ChildBarCode) 
AS TopLevelParent

  FROM Production_Tracking.tbl_BuiltAssemblies;

and

SELECT tbl_BuiltAssemblies.ChildBarCode,
  
Production_Tracking.GetTopLevelParent_WithView_1(tbl_BuiltAssemblies.ChildBarCode) 
AS parents

  FROM Production_Tracking.tbl_BuiltAssemblies;

These views seem to run, but never display data.

Any help would be most appreciated.

Thanks,
Karen
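As a side note, the parent-chasing loop in the first function can be sketched in a few lines outside the database. The dict below stands in for tbl_BuiltAssemblies (child -> parent) with the sample rows from the post:

```python
# child -> parent mapping, taken from the example table in the post
parent_of = {2: 1, 3: 2, 4: 2, 5: 3}

def top_level_parent(child):
    """Walk upward until no parent row is found (each child has one parent)."""
    node = child
    while node in parent_of:
        node = parent_of[node]
    return node

# Every child in the sample resolves to the top-level parent, 1:
print([top_level_parent(c) for c in (2, 3, 4, 5)])  # [1, 1, 1, 1]
```

The PL/pgSQL function does the same walk, with each `SELECT INTO ... WHERE ChildBarCode = ...` playing the role of one dict lookup.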



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] PostGreSQL for a small Desktop Application

2007-06-13 Thread Oisin Glynn

David Gardner wrote:

I'm not much of a .Net guy, but the pgsql-ODBC driver works rather well for me. 
Take a look at:
http://psqlodbc.projects.postgresql.org/howto-csharp.html

As for Windows XP, isn't there some limit to the number of incoming network 
connections?

---
David Gardner, IT
The Yucaipa Companies
(310) 228-2855

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Gabriele
Sent: Monday, June 11, 2007 10:45 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] PostGreSQL for a small Desktop Application

I'm going to develop a medium sized business desktop client server
application which will be deployed mostly on small sized networks and
later eventually, hopefully, on medium sized networks.
It will probably be developed using C#.

I do need a solid DBMS wich can work with .Net framework. I do know
PostGreSQL is a good DBMS in general (it sports most of the advanced
DBMS features, transactions and stored procedure included) but i
wonder if it is suited for my application.

Knowledge base of my users is very low and servers will be standard
class desktop computers most probably ran on Windows XP (and Vista
later on, i suspect).
The service should be lightweight enough to be run on such a server,
and I need silent installation and configuration because I can't
expect my users to be able to configure a DBMS.
Additionally i need a passable to good data provider to interface
PostGreSQL with .Net which possibly provide better performance than
ODBC (don't know if it exists and i hope it is free).

Anyway, performance shouldn't be a big issue; I expect low concurrency
level (less than 10 users) and low to medium volume of rows and
queries. If more users and more data are needed for especially big
customer i can simply suggest bigger and dedicated server. (different
problems will arise for the aggregated data which will feed the web
application, but for these we will have a real server).

Is PostGreSQL suited for such use? If not which alternatives are there
to be used? When using PostGreSQL in such a way is there any
suggestion to be followed? Links to sources which i may find
interesting (how to make a silent install, basic hardware
requirements, so on).

Thank you!


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


  
On the network connections there are limits but the only one that I have 
hit is open file shares or named pipe connections. I have been able to 
get the number of open sockets very high in testing (though I may be 
breaking the license agreement!) to the point where I get an odd buffer 
space error (I actually posted on one of the lists about it previously) 
but that was under extreme load to breakpoint conditions.


FYI
We are running MS IIS, PostgreSQL, Oracle XE, all of our code(7-8 C++ 
Services) and a VM image of FC6 on a good XP Pro machine with no 
complaints so far and the system is providing a real time service in 
which delays/interruptions would be noticed.


For high end systems we use Windows 2003 Server

Oisin Glynn  


smime.p7s
Description: S/MIME Cryptographic Signature


Re: [GENERAL] how to speed up query

2007-06-13 Thread Andrus

I cannot make much sense of this information. I can see no reason why
your script should take 11 minutes, while executing it from pgAdmin
would take only a second. How do you run the script?


I'm running my script from a VFP client application.
The application sends every statement to the server separately using the ODBC driver.

Table creation, data loading, primary key creation, index creation, ANALYZE
and the problematic CREATE TEMP TABLE command
all run in a single transaction.
Should I commit the transaction after the ANALYZE command or after index creation?

server logs shows:

2007-06-13 03:19:43 LOG:  checkpoints are occurring too frequently (21
seconds apart)
2007-06-13 03:19:43 HINT:  Consider increasing the configuration parameter
checkpoint_segments.
2007-06-13 03:20:02 LOG:  checkpoints are occurring too frequently (19
seconds apart)
2007-06-13 03:20:02 HINT:  Consider increasing the configuration parameter
checkpoint_segments.
2007-06-13 03:20:22 LOG:  checkpoints are occurring too frequently (20
seconds apart)
2007-06-13 03:20:22 HINT:  Consider increasing the configuration parameter
checkpoint_segments.
2007-06-13 03:21:18 LOG:  checkpoints are occurring too frequently (23
seconds apart)
2007-06-13 03:21:18 HINT:  Consider increasing the configuration parameter
checkpoint_segments.
2007-06-13 03:49:10 ERROR:  deadlock detected
2007-06-13 03:49:10 DETAIL:  Process 3280 waits for AccessExclusiveLock on
relation 233893 of database 233756; blocked by process 2508.
Process 2508 waits for ShareUpdateExclusiveLock on relation 233988 of
database 233756; blocked by process 3280.
2007-06-13 03:49:10 STATEMENT:  ALTER TABLE desktop ADD FOREIGN KEY
(alamklass) REFERENCES andmetp ON UPDATE CASCADE  DEFERRABLE

and script terminates after 5.5 hours running yesterday night.
I will re-start computer and try again.

Can increasing checkpoint_segments increase speed significantly?
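For reference, the change the HINT suggests is a one-line edit to postgresql.conf. The value below is only an illustrative guess for a bulk load, not a recommendation from this thread:

```ini
# postgresql.conf -- illustrative setting for a bulk load (8.x era);
# the default of 3 segments forces a checkpoint every few seconds
# under heavy write load, as the log excerpt above shows.
checkpoint_segments = 32
```

Each segment is 16 MB of WAL on disk, so raising this trades disk space (and longer crash recovery) for fewer checkpoints during the load.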


After your suggested change my database creation script runs 6 hours.


Is that down from the 14 hours you mentioned before? Which would be an
amazing 8 hours faster?


I had a number of DELETE .. WHERE NOT IN commands.
I changed all them to CREATE TEMP TABLE ...   DELETE


I used query

SELECT relname as Table_Name, relpages * 8/1024 as size_in_mb
 FROM pg_class
 where relpages * 8/1024 > 0
 ORDER BY relpages DESC


Looks like a useful query. Compare with:
SELECT pg_size_pretty(pg_database_size('bilkaib'))


SELECT pg_size_pretty(pg_database_size('mydb'))

returns

828 MB



SELECT pg_size_pretty(pg_relation_size('bilkaib'))


returns 100 MB


SELECT pg_size_pretty(pg_total_relation_size('bilkaib'))


returns 171 MB

relpages * 8/1024 and pg_relation_size(oid) return in some cases very
different result, no idea why.

For one index relpages returns size about 6 MB  but pg_relation_size returns
only 2152 kB



Biggest database (bilkaib) load time is 8 minutes, it contains 329000
records.
Total data loading time is approx 49 minutes.


You mean table, not database?


Yes, I meant table.


Remaining 5 hours are used for index and key creation. This seems too
much.

It might be worth checking the order in which you create objects.
Creating relevant indices before using complex queries is one thing to
look for.


I created primary key on dok(dokumnr), index on rid(dokumnr) and  ran
analyze before running
this CREATE TEMP TABLE command.


If that still runs so slow it's probably indication that your RDBMS is
in dire need of more RAM. Look to your setup in postgresql.conf.
As everything runs slow, you should look to your hardware, system
configuration and PostgreSQL setup. Do you have enough RAM (you
mentioned 2 GB) and does PostgreSQL get its share? (- setup in
postgresql.conf). There is probably a bottleneck somewhere.


If CREATE TEMP TABLE from pgAdmin takes 1 sec and from the script 11 minutes, I
don't think this is hardware related.


If that does not solve your problem, post your setup or your script -
whichever you suspect to be the problem


The script which creates the 800 MB database is big.
I can provide this script, but is anybody interested in looking into it?

Andrus.




Re: [GENERAL] recursive function

2007-06-13 Thread Pavel Stehule

Hello

please, look on
http://people.planetpostgresql.org/merlin/index.php?/archives/2-Dealing-With-Recursive-Sets-With-PLPGSQL.html

Regards
Pavel Stehule

2007/6/13, Karen Springer [EMAIL PROTECTED]:






Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Stefan Kaltenbrunner
Kevin Hunter wrote:

[...]
 I originally had him code his project for Postgres, but for reasons
 beyond our control we've had to move to Oracle.  In designing the schema
 we have need of a constraint that checks values in other tables.  The
 way that I currently know how to do this in Postgres is with PLpgSQL
 functions.  Then I add something like
 
 CONSTRAINT away_team_is_playing CHECK ( NOT teamIsPlaying( awayteamid,
 timeid ) )
 
 to the table schema.  No big deal, except that it seems Oracle can't use
 anything other than a simple column constraint.  He can't use any custom
 functions like he could in Postgres, and we've yet to find a solution to
 do what he needs.

Well, doing it that way is usually not a good idea at all (you cannot
actually use arbitrary queries in a CHECK constraint in pg either -
using a function to hide that is cheating the database - Oracle might
actually be more(!) clever here, not less ...). This is why you can get into
all kinds of weird situations, losing the integrity of your data or
running into serious issues during dump/restore, for example.

What you need to do here is to use a trigger.


Stefan
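A minimal sketch of the trigger approach Stefan suggests, using Python's bundled sqlite3 rather than PostgreSQL (the trigger syntax differs across databases, and the table names here are made up), showing a cross-table rule that a plain column CHECK cannot express:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dept (id INTEGER PRIMARY KEY);
CREATE TABLE emp  (id INTEGER PRIMARY KEY, dept_id INTEGER);

-- The cross-table rule lives in a trigger, not a CHECK constraint:
CREATE TRIGGER emp_dept_must_exist
BEFORE INSERT ON emp
BEGIN
    SELECT RAISE(ABORT, 'no such department')
    WHERE NOT EXISTS (SELECT 1 FROM dept WHERE id = NEW.dept_id);
END;
""")

con.execute("INSERT INTO dept VALUES (1)")
con.execute("INSERT INTO emp VALUES (1, 1)")        # passes the trigger

try:
    con.execute("INSERT INTO emp VALUES (2, 99)")   # no dept 99
    rejected = False
except sqlite3.DatabaseError:                       # RAISE(ABORT, ...) fires
    rejected = True

print(rejected)
```

In PostgreSQL the equivalent would be a BEFORE INSERT trigger calling a PL/pgSQL function that does the lookup and RAISEs an exception; as the quoted Oracle docs note, MVCC visibility still makes this weaker than a true declarative constraint.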



Re: [GENERAL] changing the /tmp/ lock file?

2007-06-13 Thread Greg Smith

On Wed, 13 Jun 2007, Ben wrote:

Why would that be a problem if each is configured to listen on different 
addresses?


I'm not sure if you can even get bind() to work like that portably, but 
ultimately it doesn't matter anyway.  The benefit of how the current lock 
scheme keeps people from screwing up and running the server twice 
outweighs the value of satisfying the odd case you're asking for here even 
if it were technically possible to do.


But maybe a better question to ask would be how people are doing 
failover in the case where you have two servers, each handling a 
separate set of data and acting as backup for each other.


A sample might be:

Server 1:  Primary for DB#1 on 5432, secondary for DB#2 on 5433
Server 2:  Primary for DB#2 on 5433, secondary for DB#1 on 5432

Then you would always know to reach DB#1 on port 5432 and DB#2 on port 
5433, regardless of which IP address was active.  Once you've got that in 
place, have incoming connects go to two virtual IP address that normally 
map to the two servers, but which collapses to the same underlying address 
in the case of a failure.  Then you can use any number of IP-based 
failover schemes to implement that.


If you wanted to only expose port 5432 as being the public one, but with 
two addresses, you might remap the port via the capabilities of the 
virtual interface.  In that case, you could actually keep both servers 
running their primary on 5432, but then you'd be stuck with issues like 
making sure the port number was changed every time you moved the entire 
primary config to the secondary. It's just easier to manage in several 
aspects if you fix the port numbers per cluster, rather than always have 
5432 be the active one and the IP address determining which cluster you 
get to.


You'd need to give some more OS and general network setup information to 
get any more specific or clever than that.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD




Re: [GENERAL] pg_xlog - files are guaranteed to be sequentialy named?

2007-06-13 Thread Frank Wittig
Frank Wittig schrieb:

 24 Hex digits means 24^16 unique file names. Assuming your server saves
 a WAL file each second (you should review your config it it does) it
 takes (24^16)/(60*60*24*365)=3.84214066×10^14 years to reach the upper
 bound.

How embarrassing - I messed up the calculation. It has to be 16^24.
But pg does forge filenames other than that. It uses 2 hex digits to
count segments. After 256 segments counting starts over and the serial
is increased by one. The first 8 positions are the time line, which I
will ignore for my new calculation.

So there is an eight-hex-digit serial for each time line, and each serial
value covers 256 segments. So there are 16^8*256 unique file names. If I
assume one new serial a second (that is, 256 WAL files a second), this
would reach the upper bound (for a single time line) after slightly more
than 136 years; at one WAL file a second it takes 256 times longer still.

Please correct me if my assumptions are wrong. But I would say one can
rely on serial file names to increase steadily.
The attached restore.pl uses this assumption to delete all files which
are older than the last redo checkpoint.

Greetings,
Frank Wittig


restore.pl
Description: Perl program
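The naming scheme Frank describes can be sketched in a few lines; `wal_name` below is a hypothetical helper (not part of PostgreSQL) mirroring the 24-hex-digit layout of timeline (8) + serial (8) + segment (8), and showing why lexicographic order of the names tracks creation order within a timeline:

```python
# Hypothetical helper mirroring pre-9.x WAL segment file naming:
# 24 hex digits = timeline (8) + serial/"log" (8) + segment (8).
def wal_name(timeline, serial, segment):
    return "%08X%08X%08X" % (timeline, serial, segment)

print(wal_name(1, 0, 0))       # 000000010000000000000000

# Within one timeline, lexicographic order equals creation order, so
# comparing names is a safe way to find files older than the redo point:
older = wal_name(1, 0, 255)    # last segment before the rollover
newer = wal_name(1, 1, 0)      # serial bumped after the segment rollover
print(older < newer)           # True
```

This is the property restore.pl relies on: string comparison against the last redo checkpoint's file name identifies everything that can be deleted, with no dependence on file timestamps.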




Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Kevin Hunter

At 3:26p -0400 on 13 Jun 2007, Stefan Kaltenbrunner wrote:
The way that I currently know how to do this in Postgres is with  
PLpgSQL functions.  Then I add something like


CONSTRAINT away_team_is_playing CHECK ( NOT teamIsPlaying 
( awayteamid, timeid ) )


to the table schema.


well doing it that way is usually not a good idea at all (you  
cannot actually use arbitrary queries in a CHECK constraint in pg  
either - using a function to hide that is cheating the database -  
oracle might actually be more(!) clever here, not less ...). This is  
why you can get into all kinds of weird situations, losing the  
integrity of your data or running into serious issues during  
dump/restore, for example.


I was /hoping/ for a response like this!  Thanks!  Okay.  I'll bite.   
Why can't they be used in general?  Is it the same problem that the  
trigger has (below)?



What you need to do here is to use a trigger.


From online docs regarding Oracle, this is not 100% safe either:

(http://download-east.oracle.com/docs/cd/B14117_01/appdev.101/b10795/adfns_co.htm)
'To enforce this rule without integrity constraints, you can use a  
trigger to query the department table and test that each new  
employee's department is valid. But this method is less reliable than  
the integrity constraint. SELECT in Oracle Database uses consistent  
read, so the query might miss uncommitted changes from other  
transactions.'


It seems to me that there are certain situations, especially in  
a highly normalized data model, where you'd /have/ to have multiple  
checks of even other tables.  What theory am I missing if this is not  
the case?


(I'm curious as well for another project on which I'm working that  
does use pg and currently uses a function in just this fashion.)


Thanks,

Kevin



Re: [GENERAL] pg_xlog - files are guaranteed to be sequentialy named?

2007-06-13 Thread Johannes Konert

Greg Smith wrote:

On Wed, 13 Jun 2007, Johannes Konert wrote:

If someone corrects the servers computer-time/date to a date before 
current time (e.g. set the clock two hours back), then the newer WAL 
files will have an older timestamp and will be deleted by accident.


This should never happen; no one should ever touch the clock by hand 
on a production system.  The primary and backup server should both be 
synchronized via NTP.  If you're thinking about clock changes for 
daylight savings time, those shouldn't have any effect on timestamps, 
which should be stored in UTC.  If you're on Windows,

Its not Windows; it will be Debian Linux.
I completely agree with you that of course our servers synchronize 
themselves via NTP with global time, but we have already had the case that - 
for some reason - NTP did not work and times drifted away from each 
other. If you have to manage several servers you might not notice that an 
NTP daemon no longer works, or that a new firewall blocks those 
packets... and time goes by, because everything seems to work 
just fine.
Then one nice day you realize that one, two or many of your servers 
just have their own time and you need to bring them back to synchronized 
time while they are online. If you made your applications aware of 
such effects and use system nanotime or global counters where possible, 
then even these time corrections can be handled.
But I agree with you: of course normally this will never happen...but it 
happened once.


You're working hard to worry about problems that should be eliminated 
by the overall design of your system.  If you can't trust your system 
clocks and that files are being copied with their attributes intact, 
you should consider thinking about how to resolve those problems 
rather than working around them.

yes, but still there is a remaining risk in my opinion.
It's not just PostgreSQL that will suffer from weird, unpredictable 
behavior in a broken environment like that.  Giving a Windows example, 
if you're running in a Windows Domain configuration, if the client 
time drifts too far from the server you can get The system cannot log 
you on due to the following error:  There is a time difference between 
the Client and Server. when trying to login.
If we add a new server to the cluster, the application will check times 
as in your Windows example, but if a server is already in and working, 
then it cannot simply shut down in case of time diffs.


Greg, thanks for your sophisticated hints.
But the thread is going a little off-topic now, I guess :)
The issue with the time-dependency of WAL archiving and deletion 
is solved for me by now by using a global infinite counter to rely on.
I am sure next questions will come before long and I look forward to 
read any hints then, if you and others have time to read them.

Regards Johannes

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] pg_xlog - files are guaranteed to be sequentialy named?

2007-06-13 Thread Johannes Konert

Frank Wittig wrote:

24 Hex digits means 24^16 unique file names. Assuming your server saves
a WAL file each second (you should review your config if it does) it
takes (24^16)/(60*60*24*365)=3.84214066×10^14 years to reach the upper
bound.



(..) It has to be 16^24.
But pg does forge filenames other than that. It uses 2 hex digits to
count segments. After 256 segments counting starts over and the serial
is increased by one. The first 8 positions are the time line, which I
will ignore for my new calculation.

So there is an eight hex digits serial for each time line which takes
256 segments. So there are 16^8*256 unique file names. If I assume one
WAL file a second this would reach upper bound (for a single time line)
after slightly more than 136 years.

Please correct me if my assumptions are wrong. But I would say one can
rely on serial file names to increase steadily.
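
That estimate can be rechecked directly in psql (a rough sketch; note that the 136-year figure corresponds to the 8-hex-digit serial alone ticking once per second, while at one WAL *segment* per second the full 16^8 * 256 name space lasts far longer):

```sql
-- Back-of-the-envelope check of the filename-space arithmetic above
SELECT round((16.0 ^ 8) / (60 * 60 * 24 * 365))       AS years_one_serial_per_second,
       round((16.0 ^ 8 * 256) / (60 * 60 * 24 * 365)) AS years_one_segment_per_second;
-- roughly 136 and 34865 years respectively
```

Either way the conclusion stands: the names increase monotonically and will not wrap in any realistic timeframe.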
  
Thanks for that answer. That was exactly what I could not immediately 
find mentioned in the documentation.
If it is guaranteed - and I understood your comments this way - that the 
naming follows a sequential order, then I agree with you that this is 
enough for a long time.
I was not sure whether or not the naming follows this rule. Of course I 
calculated the number of possible filenames before, but as I said, I was 
not sure that PostgreSQL follows a guaranteed naming convention of 
always-increasing WAL filenames.

Anyway, this is now for sure and I will rely on that now.
Regards Johannes

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] how to enforce index usage with +0

2007-06-13 Thread Andrew Sullivan
On Sun, Jun 10, 2007 at 05:32:55AM -0700, Timasmith wrote:
 select s.order_id
 from small_orders_table s, orders o
 where s.find_these_id in
  (select some_id from some_table where some_indexed_field = 'ABC')
  and s.order_id+0 = o.order_id
  and date_trunc('microseconds', o.valid_until_dt) > now()
 
 This should essentially use the s.some_indexed_field as the primary
 index and hit the orders table on the order id.

It will do this automatically if the selectivity of your
some_indexed_field values leans that way.  I think you're probably
trying to outsmart the planner/optimiser here, and that's _usually_
not a good idea.  It shouldn't make any difference whether you add
that +0 or not, assuming the database is tuned correctly.

I'd be rather more worried about the date_trunc stuff.  You probably
want a functional index on there.
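
A hedged sketch of such an index, using the exact expression from the query (this assumes valid_until_dt is timestamp WITHOUT time zone; with time zone, date_trunc is only STABLE and PostgreSQL will refuse to index the expression directly):

```sql
-- Expression index matching the query's date_trunc(...) call
CREATE INDEX orders_valid_until_trunc_idx
    ON orders (date_trunc('microseconds', valid_until_dt));
```

Since truncating to microseconds barely changes a timestamp, simply comparing the raw column (and indexing it plainly) may be the easier fix.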

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
This work was visionary and imaginative, and goes to show that visionary
and imaginative work need not end up well. 
--Dennis Ritchie

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Ron Johnson

On 06/13/07 15:02, Kevin Hunter wrote:
[snip]


'To enforce this rule without integrity constraints, you can use a 
trigger to query the department table and test that each new employee's 
department is valid. But this method is less reliable than the integrity 
constraint. SELECT in Oracle Database uses consistent read, so the 
query might miss uncommitted changes from other transactions.'


Isn't it *supposed* to miss UNcommitted changes from other transactions?

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] changing the /tmp/ lock file?

2007-06-13 Thread Tom Lane
Ben [EMAIL PROTECTED] writes:
 Why would that be a problem if each is configured to listen on different 
 addresses?

Because if they're all on the same port number, they're all trying to
use the same Unix socket.

Now, if you are mostly interested in communicating with them over IP,
you could use unix_socket_directory to force their socket files into
different places.  This'll be a PITA for local use of Unix socket
connections, but if that's what you want ...
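
For example, each instance's postgresql.conf might look like this (paths and addresses are hypothetical; in 8.x the setting is the singular unix_socket_directory):

```
# instance A -- all instances can share port 5432 if each binds its own address
listen_addresses = '192.0.2.10'
port = 5432
unix_socket_directory = '/var/run/postgresql-a'
```

A local client then reaches a given instance with psql -h /var/run/postgresql-a.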

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] how to speed up query

2007-06-13 Thread Tom Lane
Andrus [EMAIL PROTECTED] writes:
 2007-06-13 03:49:10 ERROR:  deadlock detected
 2007-06-13 03:49:10 DETAIL:  Process 3280 waits for AccessExclusiveLock on
 relation 233893 of database 233756; blocked by process 2508.
  Process 2508 waits for ShareUpdateExclusiveLock on relation 233988 of
 database 233756; blocked by process 3280.
 2007-06-13 03:49:10 STATEMENT:  ALTER TABLE desktop ADD FOREIGN KEY
  (alamklass) REFERENCES andmetp ON UPDATE CASCADE  DEFERRABLE

 and script terminates after 5.5 hours running yesterday night.

This might be a good reason not to run the script as a single long
transaction --- it's probably accumulating locks on a lot of different
tables.  Which would be fine if it was the only thing going on, but
evidently it isn't.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Joris Dobbelsteen
-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Kevin Hunter
Sent: woensdag 13 juni 2007 22:03
To: Stefan Kaltenbrunner
Cc: PostgreSQL General List
Subject: Re: [GENERAL] pointer to feature comparisons, please

At 3:26p -0400 on 13 Jun 2007, Stefan Kaltenbrunner wrote:
 The way that I currently know how to do this in Postgres is with 
 PLpgSQL functions.  Then I add something like

 CONSTRAINT away_team_is_playing CHECK ( NOT teamIsPlaying ( 
 awayteamid, timeid ) )

 to the table schema.

 well doing it that way is usually not a good idea at all (you cannot 
 actually use arbitrary queries in a CHECK constraint in pg either - 
 using a function to hide that is cheating the database - oracle might 
 actually be more(!) clever here not less ...). this is why you can get 
 into all kinds of weird situations with losing the integrity of your 
 data or running into serious issues during dump/restore for example.

I was /hoping/ for a response like this!  Thanks!  Okay.  I'll bite.   
Why can't they be used in general?  Is it the same problem 
that the trigger has (below)?

 What you need to do here is to use a trigger.

 From online docs regarding Oracle, this is not 100% safe either:

(http://download-east.oracle.com/docs/cd/B14117_01/appdev.101/b10795/
adfns_co.htm)
'To enforce this rule without integrity constraints, you can 
use a trigger to query the department table and test that each 
new employee's department is valid. But this method is less 
reliable than the integrity constraint. SELECT in Oracle 
Database uses consistent read, so the query might miss 
uncommitted changes from other transactions.'

For constraints, you obviously don't want that to happen...
In fact, if you run serializable the problems are even bigger.

In Oracle you should use SELECT FOR UPDATE for such constraints. It does
interfere with concurrency a bit, but you can in fact guarantee your
constraints (up to a certain point). It does require a lot of
thought nevertheless, and it's troublesome to get right.

In PostgreSQL there are more limitations to guaranteeing such a
constraint. You can get a long way with SELECT FOR SHARE, but you can
run into problems when using serializable isolation. It's a bit better
on concurrency (it seems), but cannot enforce the constraint up to the
level Oracle can.

It's a tricky subject; it requires a lot of work for a single
constraint. Also you must be very aware of the limitations of such
constructs, since many are impossible to guarantee at this point in
time. In general, the world is less concerned with it.

It seems to me that there are certain situations where, 
especially in a highly normalized data model, that you'd 
/have/ to have multiple checks of even other tables.  What 
theory am I missing if this is not the case?

(I'm curious as well for another project on which I'm working 
that does use pg and currently uses a function in just this fashion.)

They should use triggers. Also, sometimes it is possible to transform the
database schema in a way that lets you enforce the constraint with
built-in (foreign key) constraints.

The general problem with these types of constraints is that they are
assumed to be true at ALL times. However, it is possible to violate the
constraint, contradicting the assumption we just made. For triggers
no such assumptions exist.
Unless the database is going to support constraints with subqueries
(which is very hard to achieve and quite involved), we cannot rely on
the assumption that constraints are always true. In addition, don't
expect this type of support anytime soon in any open-source or
commercial database.
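
A minimal sketch of the SELECT FOR SHARE approach in PostgreSQL, using the employee/department example from the quoted Oracle docs (table and column names are hypothetical):

```sql
-- Lock the referenced department row so a concurrent DELETE cannot
-- slip past the check before this transaction commits
CREATE OR REPLACE FUNCTION check_emp_dept() RETURNS trigger AS $$
BEGIN
    PERFORM 1 FROM department WHERE dept_id = NEW.dept_id FOR SHARE;
    IF NOT FOUND THEN
        RAISE EXCEPTION 'department % does not exist', NEW.dept_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER employee_dept_check
    BEFORE INSERT OR UPDATE ON employee
    FOR EACH ROW EXECUTE PROCEDURE check_emp_dept();
```

As noted above, this still cannot match a real foreign key under serializable isolation - it only narrows the window.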

- Joris


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread PFC

Isn't it *supposed* to miss UNcommitted changes from other transactions?


	Well, if the uncommitted change is a DELETE of the row that allowed the  
constraint check to pass, then when this delete is committed, your data is  
no longer consistent.


Consider this:

CREATE TABLE A( attributes INT[], CHECK( is_valid_attributes( attributes ) ) )

CREATE TABLE valid_attributes ( attribute_id INTEGER )

	You want to check that A.attributes is an array of values, the only  
allowed values being those stored in the valid_attributes table. If you delete a row  
in valid_attributes, many rows in A can become invalid unless you use some  
form of trigger on valid_attributes, which would start to look a lot like a  
foreign key ON DELETE trigger. If you insert rows into A while concurrently  
deleting a row in valid_attributes, you have problems. This is why foreign  
key checks take share locks on referenced tables...
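
A normalized alternative (a sketch, not from the thread): store each attribute as its own row, so a real foreign key - with its share locks and ON DELETE behavior - does the policing instead of a CHECK function:

```sql
CREATE TABLE valid_attributes (attribute_id integer PRIMARY KEY);
CREATE TABLE a (id serial PRIMARY KEY);
-- one row per (a, attribute) pair replaces the INT[] column
CREATE TABLE a_attributes (
    a_id         integer NOT NULL REFERENCES a,
    attribute_id integer NOT NULL REFERENCES valid_attributes ON DELETE RESTRICT
);
```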



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Using the GPU

2007-06-13 Thread Alejandro Torras

Billings, John wrote:
Does anyone think that PostgreSQL could benefit from using the video 
card as a parallel computing device?  I'm working on a project using 
Nvidia's CUDA with an 8800 series video card to handle non-graphical 
algorithms.  I'm curious if anyone thinks that this technology could 
be used to speed up a database?  If so which part of the database, and 
what kind of parallel algorithms would be used?
 


Looking at nvidia's cuda homepage 
(http://developer.nvidia.com/object/cuda.html), I see that the parallel 
bitonic sorting could be used instead of qsort/heapsort/mergesort (I 
don't know which is used)


--
Alejandro Torras


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Ron Johnson

On 06/13/07 16:59, PFC wrote:

Isn't it *supposed* to miss UNcommitted changes from other transactions?


Well, if the uncommitted change is a DELETE of the row that allowed 
the constraint check to pass, then when this delete is committed, your 
data is no longer consistent.


The DELETE should block, no?

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] inner join problem with temporary tables

2007-06-13 Thread guillermo arias
Hi people, i have a problem with inner join and temporary tables.

I have 2 tables: articles and existences.

articles:

CREATE TABLE public.articles (
  art_cod character varying(5) NOT NULL DEFAULT ''::character varying,
  art_descri character varying(20) DEFAULT ''::character varying,
  CONSTRAINT articles_pkey PRIMARY KEY (art_cod)
)

"1";"nails"
"2";"hammers"
"3";"wood"

existences:

CREATE TABLE public.existences (
  art_cod character varying(5) DEFAULT ''::character varying,
  exis_ubic character varying(20) DEFAULT ''::character varying,
  exis_qty numeric(8) DEFAULT 0
)

"1";"new york";100
"1";"dallas";130
"2";"miami";1390
"3";"baltimore";390
"3";"louisiana";20

And a function that is supposed to relate both tables and give me a list of articles with location and quantity. What I do in the function is first load 2 temporary tables, then the inner join. I know this is not the best way, but I would like to know why it does not work. Notice that in MS SQL Server it works fine.

CREATE OR REPLACE FUNCTION public.test1 (out art_cod varchar, out art_descri varchar, out exis_ubic varchar, out exis_qty numeric)
returns setof record as
$body$
select * into temp table t_arti from public.articles;
select * into temp table t_exis from public.existences;
select a.art_cod, a.art_descri, e.exis_ubic, e.exis_qty
from t_arti a inner join t_exis e on a.art_cod = e.art_cod;
$body$
LANGUAGE 'sql' VOLATILE;

When i call the function with this line:

select * from modelo.test1()

This message appears:

ERROR: relation "t_arti" does not exist
SQL state: 42P01
Context: SQL function "test1"

Why it does not work??? thanks for your help

_________________________________
Get your FREE, LinuxWaves.com Email Now! -- http://www.LinuxWaves.com
Join Linux Discussions! -- http://Community.LinuxWaves.com

Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread PFC
On Thu, 14 Jun 2007 00:09:20 +0200, Ron Johnson [EMAIL PROTECTED]  
wrote:



On 06/13/07 16:59, PFC wrote:

Isn't it *supposed* to miss UNcommitted changes from other transactions?
 Well, if the uncommitted change is a DELETE of the row that  
allowed the constraint check to pass, then when this delete is  
committed, your data is no longer consistent.


The DELETE should block, no?


Why ?

	Foreign keys put an ON DELETE trigger on the referenced table besides  
checking the referencing column on insert/update... If you just implement  
a constraint, you only get half the functionality.




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] inner join problem with temporary tables

2007-06-13 Thread PFC



This message appears:

ERROR: relation t_arti does not exist
SQL state: 42P01
Context: SQL function test1


Why it does not work???
thanks for your help


	Because plpgsql functions are compiled on first execution and all queries  
are then prepared. All tables are referenced directly in prepared  
statements, not by name. Any prepared statement that refers to dropped  
tables (even dropped temp tables) is thus unfit for consumption.


	This allows queries in plpgsql functions to be extremely fast, but it  
isn't smart enough (yet) to recompile functions when a table the function  
depends on is dropped.


	Just disconnect and reconnect, all prepared plans will be lost, and it  
will work. Or issue your queries directly instead of using a function.
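
A third option (a hedged sketch; the function and table names here are made up) is dynamic SQL: in PL/pgSQL, EXECUTE builds and plans the statement at run time, so a re-created temp table is looked up fresh on every call instead of through a stale cached plan:

```sql
CREATE OR REPLACE FUNCTION count_t_arti() RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    -- planned at execution time, so it survives DROP/CREATE of t_arti
    EXECUTE 'SELECT count(*) FROM t_arti' INTO n;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```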


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Ron Johnson

On 06/13/07 17:23, PFC wrote:
On Thu, 14 Jun 2007 00:09:20 +0200, Ron Johnson [EMAIL PROTECTED] 
wrote:



On 06/13/07 16:59, PFC wrote:

Isn't it *supposed* to miss UNcommitted changes from other transactions?
 Well, if the uncommitted change is a DELETE of the row that 
allowed the constraint check to pass, then when this delete is 
committed, your data is no longer consistent.


The DELETE should block, no?


Why ?

Foreign keys put an ON DELETE trigger on the referenced table 


Foreign keys that silently, automatically DELETE records?

Did I read that correctly?

besides checking the referencing column on insert/update... If you just 
implement a constraint, you only get half the functionality.


But when I define a FK *constraint*, that's all I *want*!

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Using the GPU

2007-06-13 Thread Alejandro Torras

Alejandro Torras wrote:

Billings, John wrote:
Does anyone think that PostgreSQL could benefit from using the video 
card as a parallel computing device?  I'm working on a project using 
Nvidia's CUDA with an 8800 series video card to handle non-graphical 
algorithms.  I'm curious if anyone thinks that this technology could 
be used to speed up a database?  If so which part of the database, 
and what kind of parallel algorithms would be used?
 


Looking at nvidia's cuda homepage 
(http://developer.nvidia.com/object/cuda.html), I see that the 
parallel bitonic sorting could be used instead of 
qsort/heapsort/mergesort (I don't know which is used)


I think that the function cublasIsamax() explained at 
http://developer.download.nvidia.com/compute/cuda/0_8/NVIDIA_CUBLAS_Library_0.8.pdf 
can be used to find the maximum of a single-precision vector, but 
according to a previous post by Alexander Staubo, this function is 
best suited for fixed-length tuple values.


But could the data be separated into two zones, one for varying-length 
data and another for fixed-length data?
With this approach, fixed-length data may be amenable to more and 
deeper optimizations such as parallel processing.


--
Alejandro Torras


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] inner join problem with temporary tables

2007-06-13 Thread Alvaro Herrera
PFC wrote:
 
 This message appears:
 
 ERROR: relation t_arti does not exist
 SQL state: 42P01
 Context: SQL function test1
 
 
 Why it does not work???
 thanks for your help
 
   Because plpgsql functions are compiled on first execution and all 
 queries are then prepared. All tables are referenced directly in 
 prepared statements, not by name. Any prepared statement that refers 
 to dropped tables (even dropped temp tables) is thus unfit for 
 consumption.

This is correct but it's not the problem at hand -- notice how the
error message is not talking about an unknown OID.  I think the problem
here is that he is using SELECT INTO, which is different in PL/pgSQL
than what is in plain SQL.  I bet using CREATE TABLE AS instead of
SELECT INTO would work.
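
A hedged illustration of the difference, using the poster's table names:

```sql
-- In a plain SQL function (or psql), create the temp table explicitly:
CREATE TEMP TABLE t_arti AS SELECT * FROM public.articles;

-- In PL/pgSQL, SELECT INTO assigns to *variables* instead of creating
-- a table, which is why the two languages behave differently:
--   SELECT art_cod INTO v_art_cod FROM t_arti LIMIT 1;
```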

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Stefan Kaltenbrunner
Kevin Hunter wrote:
 At 3:26p -0400 on 13 Jun 2007, Stefan Kaltenbrunner wrote:
 The way that I currently know how to do this in Postgres is with
 PLpgSQL functions.  Then I add something like

 CONSTRAINT away_team_is_playing CHECK ( NOT teamIsPlaying(
 awayteamid, timeid ) )

 to the table schema.

 well doing it that way is usually not a good idea at all (you cannot
 actually use arbitrary queries in a CHECK constraint in pg either -
 using a function to hide that is cheating the database - oracle might
 actually be more(!) clever here not less ...). this is why you can get
 into all kinds of weird situations with losing the integrity of your
 data or running into serious issues during dump/restore for example.
 
 I was /hoping/ for a response like this!  Thanks!  Okay.  I'll bite. 
 Why can't they be used in general?  Is it the same problem that the
 trigger has (below)?

ok consider the following scenario:

you have table A with a check constraint saying that a given column
depends on the result of some arbitrary queries on table B (maybe
something as simple as "this flag can only be set to true if there are
at least 5 items of this kind in table B").
So after a while you have added some data to both tables and A has set
that column to true for a few rows. Now you manipulate B in a way that
would cause maybe one constraint to fail in A IF and only IF it gets
reevaluated.
One day later your server decides to commit suicide - so you get the
spare one, dig out the backup from the day before and try to restore
that dump.
First you will get the schema restored from the backup - next all the
data will be put into the tables and then? Boom.
The third major step in restoring the database is re-adding all
constraints - when the CHECK constraint gets re-added it will get
evaluated by the database fully (ie row-by-row) - but because your data
is inconsistent you will get a constraint violation on the now
inconsistent data ...
The trigger would avoid that restore problem because it would not be
invoked in a similar fashion (ie triggers fire on UPDATE/DELETE/INSERT,
which are already done at that point of the restore).
That is just one of the most obvious failures - trying to implement that
kind of arbitrarily complex query-based foreign-key-like constraint is
always very difficult.
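
The scenario can be sketched with a pair of hypothetical tables (all names invented for illustration):

```sql
CREATE TABLE b (kind text);
CREATE FUNCTION enough_items(k text) RETURNS boolean AS
    $$ SELECT count(*) >= 5 FROM b WHERE kind = $1 $$ LANGUAGE sql;
-- hiding the cross-table query behind a function "cheats" the CHECK machinery
CREATE TABLE a (kind text, flagged boolean,
                CHECK (NOT flagged OR enough_items(kind)));
-- insert 5 matching rows into b, set flagged = true in a, then delete rows
-- from b: nothing complains now, but a dump/restore re-adds the CHECK and
-- re-evaluates it row by row, failing on the now-inconsistent data.
```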

 
 What you need to do here is to use a trigger.
 
 From online docs regarding Oracle, this is not 100% safe either:
 
 (http://download-east.oracle.com/docs/cd/B14117_01/appdev.101/b10795/adfns_co.htm)
 
 'To enforce this rule without integrity constraints, you can use a
 trigger to query the department table and test that each new employee's
 department is valid. But this method is less reliable than the integrity
 constraint. SELECT in Oracle Database uses consistent read, so the
 query might miss uncommitted changes from other transactions.'
 
 It seems to me that there are certain situations where, especially in a
 highly normalized data model, that you'd /have/ to have multiple checks
 of even other tables.  What theory am I missing if this is not the case?

well - the main point why the trigger is better is that it will cause
you fewer operational issues; what it cannot guarantee you either is that
you cannot manipulate table B in a way that would violate the checking
logic of your constraint in table A (well, you could use a trigger on B
to validate that in the other direction - but that opens up another big
can of worms).
The best way is to design your schema in a way that lets you use real
foreign key constraints to enforce various things, or CHECK constraints
that only deal with data in the very same row (ie neither do queries on
other tables nor depend on OTHER rows than the one being manipulated
in the same table).



Stefan

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] inner join problem with temporary tables

2007-06-13 Thread guillermo arias
Could you please give me an example?
How could I make an inner join select with temporary tables?

This function does not work:

CREATE OR REPLACE FUNCTION modelo.test2(OUT xart_cod character varying, OUT 
xart_descri character varying)
  RETURNS SETOF record AS
$BODY$
begin
create temp table t_arti as (select art_cod,art_descri from modelo.articulos);
select $1,$2 from t_arti ;
end; 
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;


this is the error message:

ERROR: query has no destination for result data
SQL state: 42601
Hint: If you want to discard the results of a SELECT, use PERFORM instead.
Context: PL/pgSQL function test2 line 4 at SQL statement
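
For what it's worth, a hedged sketch of a version that avoids the error: in PL/pgSQL a bare SELECT needs an explicit destination, so rows are emitted with RETURN NEXT; the temp table is skipped entirely, which also sidesteps the cached-plan problem discussed earlier in the thread:

```sql
CREATE OR REPLACE FUNCTION modelo.test2(OUT xart_cod character varying,
                                        OUT xart_descri character varying)
RETURNS SETOF record AS
$BODY$
BEGIN
    FOR xart_cod, xart_descri IN
        SELECT art_cod, art_descri FROM modelo.articulos
    LOOP
        RETURN NEXT;  -- emits the current OUT-parameter values as one row
    END LOOP;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;
```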



_
Get your FREE, LinuxWaves.com Email Now! -- http://www.LinuxWaves.com
Join Linux Discussions! -- http://Community.LinuxWaves.com

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] [SQL] setof or array as input parameter to postgresql 8.2 functions

2007-06-13 Thread Jyoti Seth
Thanks for the solution. With this I am able to pass arrays and
multidimensional arrays to PostgreSQL functions. 

One of my problems is still left: I want to pass a set of values with
different datatypes. For example, I want to pass the following values to
the function:
1 ajay 1000.12 
2 rita 2300.24
3 leena 1230.78
4 jaya 3432.45

As the values have different data types I would have to create three
different arrays. Is there any way with which I can pass this as a single
set of values?

Thanks,
Jyoti
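
One possible direction (a hedged sketch - arrays of composite types require PostgreSQL 8.3 or later, so on 8.2 a temp table is the usual workaround; all names here are invented):

```sql
CREATE TYPE emp_row AS (id integer, name text, salary numeric);

CREATE OR REPLACE FUNCTION load_emps(emps emp_row[]) RETURNS integer AS $$
BEGIN
    -- a real function would loop over the array; this one just counts
    RETURN array_upper(emps, 1);
END;
$$ LANGUAGE plpgsql;

SELECT load_emps(ARRAY[ROW(1, 'ajay', 1000.12)::emp_row,
                       ROW(2, 'rita', 2300.24)::emp_row]);
```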


-Original Message-
From: Pavel Stehule [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 13, 2007 6:25 PM
To: Jyoti Seth
Cc: pgsql-general@postgresql.org; [EMAIL PROTECTED]
Subject: Re: [SQL] setof or array as input parameter to postgresql 8.2
functions

Hello

maybe:

create function foo(varchar[][]) returns void as $$ begin end$$
language plpgsql;

postgres=# select foo(array[array[1,2], array[2,2]]::varchar[][]);
 foo
-

(1 row)

Regards
Pavel Stehule

2007/6/13, Jyoti Seth [EMAIL PROTECTED]:




 Hi,



 I have to pass a set of values and arrays in postgresql 8.2 functions. But
I
 am not getting any help on that. Please let me know if any one has idea.



 Thanks,

 Jyoti


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings