Re: [GENERAL] Suggestion for parameterized queries

2005-03-01 Thread Sebastien FLAESCH
Richard Huxton wrote:
Sebastien FLAESCH wrote:
Hi,
Parameterized queries (PREPARE/EXECUTE) are a great thing, but it 
would be even better if DECLARE CURSOR could use a PREPAREd 
statement, to declare a server-side cursor with a parameterized 
query and get the benefits of DECLAREd cursors (no fetching of the 
whole result set on the client, binary mode, the scrollable 
option, the FOR UPDATE option).

For now we cannot use DECLARE CURSOR in our PostgreSQL driver because
of this limitation. I could build the SQL statement with literal 
values, but since you now have parameterized queries I would prefer to 
use those...

You can define a function that returns a cursor, if that is of any use 
to you.

Thanks for the idea, but we actually need this as a basic SQL feature, 
because we write a database driver.

How would one write a generic function that can execute any SELECT
statement? I can imagine that one parameter of the function would be
the SQL text, but what about passing a variable list of parameters?
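For illustration, the cursor-returning function Richard alludes to might be sketched like this (hypothetical; it takes the query as text, so parameter values still have to be interpolated into the string by the caller, which is exactly the limitation being discussed):

```sql
-- Sketch: open a server-side cursor for an arbitrary query string.
CREATE OR REPLACE FUNCTION open_cursor(query text) RETURNS refcursor AS '
DECLARE
    cur refcursor;
BEGIN
    OPEN cur FOR EXECUTE query;  -- query text is fixed here, not parameterized
    RETURN cur;
END;
' LANGUAGE plpgsql;

-- Usage (inside a transaction): the function returns the cursor name,
-- which can then be FETCHed from.
-- BEGIN;
-- SELECT open_cursor('SELECT * FROM mytable');
-- FETCH 10 FROM "<unnamed portal 1>";
-- COMMIT;
```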

Seb
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] multicolumn GIST index question

2005-03-01 Thread Teodor Sigaev
Did anyone get multi-column GIST indexes working using both
the gist_btree and postgis modules?
It must work:
fl=# -- sessionid is a text;  the_geom is a GEOMETRY
fl=# create index testidx2 on user_point_features using gist 
(sessionid,the_geom);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
fl=#
fl=#
fl=#
fl=# create table test3(a text, b text, c text);
CREATE TABLE
fl=# select addgeometrycolumn ('','test3','the_geom','-1','POINT',2);
addgeometrycolumn
--
 public.test3.the_geom SRID:-1 TYPE:POINT DIMS:2
 geometry_column fixed:0
(1 row)
fl=# create index idx_text_text on test3 using gist (a,b);
CREATE INDEX
fl=# create index idx_text_geom on test3 using gist (a,the_geom);
CREATE INDEX
fl=#

Is sessionid in the user_point_features table unique?
Please try to compile PostgreSQL with --enable-cassert --enable-debug and send 
the gdb output.


--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/


[GENERAL] Replication suggestions (weak multi-master)

2005-03-01 Thread Martijn van Oosterhout
Hi,

I've gone through the list of replication solutions on the PostgreSQL
website, but I think I'm looking for something rather specialised.

The situation is that there are little satellite machines around the
place, separated by public internet, each writing to their own log
tables. What I want is for each satellite to be able to query an
aggregate table which is simply the UNION ALL of all the little
tables. Timeliness is not critical (although nice) and it needs to
handle satellites going down and rejoining. I'm ok with a master
machine somewhere.

The problems I see with the ones I can find are:
- They require direct remote access to the pg server, not possible in my case
- Use Java, also not an option
- Don't handle the aggregating (need more triggers)

I'm impressed by Slony and I could probably make it do what I want with
enough trickery. But again, there's the need to be able to connect from
any server to any other. To put it in Slony terms, the origin never needs
to move. If a machine is down, nothing needs to be updated.

I'm considering starting with dbmirror and simply writing a script to
make it do what I want. But I'm asking in case someone has a simple
solution I've overlooked.

Thanks in advance,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] invalid multibyte character for locale

2005-03-01 Thread Frank van Vugt
Hi Tatsuo / Tom,

[TI]
 Apparently your hack does not kill #define USE_WIDE_UPPER_LOWER.

Mmm, I think it does, but mind you, the hack was applied to the first machine 
only (since that was the one with the 'original' buggy glibc causing a 
postmaster crash when using upper() and stuff), while it was the second one 
producing the error. This second machine didn't seem to have problems using 
upper() in earlier versions, but it looks like it does now.

Using the hack on the second machine obviously solves the problem there as 
well, I agree ;)

[TI]
 BTW, the current code for upper/lower etc. seems to be broken.
 PostgreSQL should not use wide-character method if LC_CTYPE is C.

[TL]
 Yeah, we came to that same conclusion a few days ago in another thread.
 I am planning to install the fix cut

Great, no rush, it's an easily avoided issue ;)

-- 
Best,

Frank.



[GENERAL] Problem with pg_hba.conf

2005-03-01 Thread Sumit Rohatgi
In the pg_hba.conf file, I have the following entry:

host    db1    client1    192.168.150.234/32    md5

Now the problem is: 
   The client having IP 192.168.150.234 is trying to
use phpPgAdmin, but is unable to connect.

The corresponding config.inc.php has this entry:
$conf['servers'][0]['desc'] = 'PostgreSQL';
$conf['servers'][0]['host'] = '192.168.150.234';
$conf['servers'][0]['port'] = 5432;
$conf['servers'][0]['defaultdb'] = 'db';



If the entry is:
host db1 client1  192.168.150.234/0  md5

then any client (with any IP address) can log in,
since the /0 mask matches every address.
Thank you
celerity12





Re: [GENERAL] Clay Shirky observation regarding MySQL

2005-03-01 Thread Marco Colombo
On Mon, 28 Feb 2005, Martijn van Oosterhout wrote:
On Mon, Feb 28, 2005 at 01:46:16PM -0600, [EMAIL PROTECTED] wrote:
Hello!
Clay Shirky made a comment about MySQL that I thought the PostgreSQL
community should be aware of:
   http://www.shirky.com/writings/situated_software.html
It's the section (mostly toward the bottom) entitled, The Nature of
Programming, and the Curious Case of MySQL. The whole article is, as
normal, interesting and thought-provoking.
Interesting article, but w.r.t. the MySQL statement, I read: If you
don't need any of the things that databases are good for (ACID,
transactions, triggers, views) then MySQL is an acceptable choice.
Interesting article, but I'm not very forgiving when a teacher
is so imprecise, especially on software history and qualities.
It is plain wrong that Apache made it easy, and that before Apache
a web server was rocket science.
 [...] In the mid-90s, getting a web server running was such a messy
  endeavor that it was a project goal in and of itself. Then Apache came
  along, and so simplified the process that the web server became a simple
  building block for larger things.
I've used both cernd and NCSA httpd, and when I finally switched
to Apache the choice was based on project activity: faster evolution,
more features, better stability, certainly not because it was easier.
Apache is quite a beast to configure from scratch even nowadays,
I'd even say it's _more_ complicated than it used to be (think of the
different server models it supports, and the large number of modules
available). For sure, running cernd was not any harder than running
Apache is today. Mr. Shirky completely missed the point here.
 [...] MySQL makes the job much easier, so much easier in fact that
  after MySQL, it becomes a different kind of job. There are complicated
  technical arguments for and against using MySQL vs. other databases,
  but none of those arguments matter anymore.
Same goes for MySQL. I don't think MySQL is easier to install,
configure and administer: the tasks are almost the same as for other
open source databases. And it's _definitely_ not easier to use!
I'm not following their development much, but when I had to use it
I've _always_ found that missing features do make it harder to use MySQL.
Hell, even NATURAL JOIN is nice sugar for small projects, where you
can name columns and tables so that you write queries so naturally,
hardly having to stop and think. So are foreign keys. These are 
_not_ advanced features that only SQL gurus use: they reduce development
time and save headaches for _most_ application programmers. They
make programming faster, safer, easier.
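As a small illustration of the conveniences being described (hypothetical table names):

```sql
-- With consistent column naming, NATURAL JOIN spares the explicit join
-- condition, and the foreign key guards integrity without application code.
CREATE TABLE customers (customer_id int PRIMARY KEY, name text);
CREATE TABLE orders (
    order_id    int PRIMARY KEY,
    customer_id int NOT NULL REFERENCES customers
);

SELECT name, order_id
FROM customers NATURAL JOIN orders;   -- joins on customer_id implicitly
```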
So, MySQL's success is _not_ based on user-friendliness. Again, a wrong
example.

Back in '95, on the Microsoft side, they didn't even know about TCP/IP.
Linux and i386/NetBSD were quite hard to get and install, not to mention
completely unknown to the public. No wonder running a web server was not
for everyone.
Today, we have distributions that come with a ready-to-run web server.
On the Microsoft side, they turned to Unix (NT/2000/2003 is POSIX, and
even Bill used to claim NT is Unix) and to Internet services. You
can run many Unix daemons, and they have their own web server and SQL
server.
So, Mr. Shirky is right, installing and running a web server, or a RDBMS,
today is a matter of a few mouse clicks. But _not thanks to Apache
and MySQL_ (and to do that _professionally_ is still totally another
matter). They're only small bricks in the comfortable house built by the
open source movement as a whole.
Development teams behind projects such as Apache (which today is a lot
more than a HTTP server) and PostgreSQL pursue the goal of making
good products for _professionals_ to use effectively. Most of the burden
of making technologies available to as many non-guru users as possible
is on distribution makers. If Mr. Shirky wants to set a date and
say before that and after that, it's the day open source
distributions hit the masses. Certainly there's no after Apache and
no after MySQL.
.TM.
--
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/ [EMAIL PROTECTED]


Re: [GENERAL] Problem with pg_hba.conf

2005-03-01 Thread Richard Huxton
Sumit Rohatgi wrote:
In the pg_hba.conf file, I have the following entry:
host    db1    client1    192.168.150.234/32    md5
Now the problem is: 
   The client having IP 192.168.150.234 is trying to
use phpPgAdmin, but is unable to connect.

The corresponding config.inc.php has this entry:
$conf['servers'][0]['desc'] = 'PostgreSQL';
$conf['servers'][0]['host'] = '192.168.150.234';
You're trying to connect from/to the same IP address. Might it be using 
the loopback (127.0.0.1) instead?

Try adding a definition for 127.0.0.1 and see how that goes.
--
  Richard Huxton
  Archonet Ltd


[GENERAL] Index size

2005-03-01 Thread Ioannis Theoharis


Hi,

I have created a btree index on a 'int4' attribute of a table.

After I inserted 1,000,000 rows in my table, I can see from pg_class
that my index size is 2745 blocks (8KB each). That means about
21,960 KB.

I try to understand how this number is generated, because I thought
that for each new entry in the table there is a new entry in the index,
and that each index entry is:

4 Bytes for the int4 attribute
and
40 Bytes for oid

So 44 * 1,000,000 ~ 42,969 KB

Can anybody tell me where my mistake is?




[GENERAL] Problem with pg_hba.conf

2005-03-01 Thread celerity12
Please disregard my previous mail 

 In the pg_hba.conf file, I have the following entry:
 
 host    db1    client1    192.168.150.234/32    md5
 
 Now the problem is: 
The client having IP 192.168.150.234 is trying to
 use phpPgAdmin, but is unable to connect.
 
 The corresponding config.inc.php has this entry:
 $conf['servers'][0]['desc'] = 'PostgreSQL';
 $conf['servers'][0]['host'] = '192.168.150.245';

Please disregard my previous mail 
Thank you
celerity





Re: [GENERAL] GUI

2005-03-01 Thread Hrishikesh Deshmukh
Hi All,

I want to spend less time coding and more time running queries against
the DB which I am building. What about Qt from Trolltech? Can that be
used to whip up a GUI really fast? Dreamweaver?

Hrishi


On Tue, 01 Mar 2005 07:55:04 +, Richard Huxton dev@archonet.com wrote:
 Hrishikesh Deshmukh wrote:
  A GUI for the web running on linux.
 
 Don't forget to CC: the mailing list too. I don't read this email
 address very often - lots of spam.
 
 Your problem isn't a lack of choices, but rather too many.
 
 Perl has plenty of HTML templating systems from the simple
 HTML::Template through to HTML::Mason - more of a website-building
 system. For database access, DBI with DBD::Pg is the standard way to go.
 CPAN is your friend here, of course.
 
 Python has its own set of modules. Start looking here:
   http://www.python.org/moin/WebProgramming
   http://www.python.org/topics/web/HTML.html
 You might want to look at Zope as an application framework too.
 
 Ruby has many people who swear by it, and there's been a lot of fuss
 recently over Ruby on Rails which is supposed to be a very fast
 development setup
   http://www.rubyonrails.org/
 
 Java and its various servlet/JSP frameworks are too big a topic to
 cover here.
 
 In addition, there are a lot of content-management systems and toolkits
 that might be useful for you. Two worth looking at are Plone and Bricolage.
 
 HTH
 --
   Richard Huxton
   Archonet Ltd




Re: [GENERAL] GUI

2005-03-01 Thread Sean Davis
If you are into perl, you should definitely look at Class::DBI 
(http://www.class-dbi.com) which is freely available from CPAN (despite 
the .com site).  It treats table rows as objects with methods.  It can 
model foreign key relationships based entirely on the database schema 
(and entirely automatically).  Then, there are simple extensions that 
add methods for generating an HTML form directly from the database, 
validate the resulting queries, etc.  It can be a very fast way to 
develop a web-based application.

Sean
On Mar 1, 2005, at 9:07 AM, Hrishikesh Deshmukh wrote:
Hi All,
I want to spend less time coding and more time running queries against
the DB which i am building. What about QT from TrollTech!! Can that be
used to whip up gui real fast! Dreamweaver?
Hrishi
On Tue, 01 Mar 2005 07:55:04 +, Richard Huxton dev@archonet.com 
wrote:
Hrishikesh Deshmukh wrote:
A GUI for the web running on linux.
Don't forget to CC: the mailing list too. I don't read this email
address very often - lots of spam.
Your problem isn't a lack of choices, but rather too many.
Perl has plenty of HTML templating systems from the simple
HTML::Template through to HTML::Mason - more of a website-building
system. For database access, DBI with DBD::Pg is the standard way to 
go.
CPAN is your friend here, of course.

Python has its own set of modules. Start looking here:
  http://www.python.org/moin/WebProgramming
  http://www.python.org/topics/web/HTML.html
You might want to look at Zope as an application framework too.
Ruby has many people who swear by it, and there's been a lot of fuss
recently over Ruby on Rails which is supposed to be a very fast
development setup
  http://www.rubyonrails.org/
Java and its various servlet/JSP frameworks are too big a topic to
cover here.
In addition, there are a lot of content-management systems and 
toolkits
that might be useful for you. Two worth looking at are Plone and 
Bricolage.

HTH
--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] GUI

2005-03-01 Thread Richard Huxton
Hrishikesh Deshmukh wrote:
Hi All,
I want to spend less time coding and more time running queries against
the DB which i am building. What about QT from TrollTech!! Can that be
used to whip up gui real fast! Dreamweaver?
Neither of these is really an application tool.
Give Ruby on Rails a look - that's probably about as fast as you're 
going to get for simple tools.

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Problem with pg_hba.conf

2005-03-01 Thread Richard Huxton
celerity12 wrote:
Please disregard my previous mail 
 In the pg_hba.conf file, I have the following entry:
 
 host    db1    client1    192.168.150.234/32    md5
 
 Now the problem is: 
The client having IP 192.168.150.234 is trying to
 use phpPgAdmin, but is unable to connect.
Make sure connection logging is turned on in your postgresql.conf file, 
and let us know what that says.

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] GUI

2005-03-01 Thread James Thompson
On Tuesday 01 March 2005 08:07 am, Hrishikesh Deshmukh wrote:
 Hi All,

 I want to spend less time coding and more time running queries against
 the DB which i am building. What about QT from TrollTech!! Can that be
 used to whip up gui real fast! Dreamweaver?

 Hrishi

You could use gnue-designer and gnue-forms from www.gnuenterprise.org.  It 
lets you paint the input/query forms by dragging and dropping fields from 
the db.  You can create multi-table master/detail forms from existing tables 
in seconds that give you insert, update, delete and query capabilities.  You can 
assign triggers to forms to do custom processing.  And if you need more 
customization you can write python scripts using the gnue-common library 
which is the core of our other tools.  I've done that to create custom apps 
that tie our database access system to a ui built with pyqt and qt-designer.

Several of the developers hang out in #gnuenterprise on irc.freenode.net if 
you'd like more info.

Take Care,
James



Re: [GENERAL] Index size

2005-03-01 Thread Tatsuo Ishii
 I have created a btree index on an 'int4' attribute of a table.
 
 After I inserted 1,000,000 rows in my table, I can see from pg_class
 that my index size is 2745 blocks (8KB each). That means about
 21,960 KB.
 
 I try to understand how this number is generated, because I thought
 that for each new entry in the table there is a new entry in the
 index, and that each index entry is:
 
 4 Bytes for the int4 attribute
 and
 40 Bytes for oid
 
 So 44 * 1,000,000 ~ 42,969 KB
 
 Can anybody tell me where my mistake is?

There's no oid in index tuples. There is an 8-byte header for each
index tuple. Since you are inserting 4-byte user data, your index
tuples are 12 bytes each. Each index tuple also needs a pointer in its
block, called an item pointer, which is 4 bytes long. So each block can
hold up to floor((8192 - 24 (page header) - 16 (special
data)) / (12 + 4)) = 509 tuples, and ceil(1,000,000/509) = 1965 is the
number of blocks you need for your index. In addition, you need a meta
page and a root page, so it becomes 1965 + 1 + 1 = 1967. You also need
internal pages, whose number is hard to guess since it depends on the
actual index tree structure (for example, tree height). From my limited
experience, for 1,000,000 tuples you will need at least 7 internal
pages. Now the number becomes 1967 + 7 = 1974. Still, it's different from
2745. If you don't have deleted tuples, the difference probably comes
from the fact that a btree index can never be 100% occupied. IMO
1974/2745 = 0.71 seems not so bad.
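The arithmetic can be checked mechanically (a sketch in Python; the block size, header and special-data figures are the ones quoted above, not independently verified):

```python
import math

BLOCK = 8192          # default PostgreSQL block size in bytes
PAGE_HEADER = 24      # page header, per the figures quoted above
SPECIAL = 16          # btree special data at the end of each page
TUPLE = 8 + 4         # 8-byte index tuple header + 4-byte int4 key
ITEM_POINTER = 4      # line pointer per index tuple

tuples_per_block = (BLOCK - PAGE_HEADER - SPECIAL) // (TUPLE + ITEM_POINTER)
leaf_blocks = math.ceil(1_000_000 / tuples_per_block)

print(tuples_per_block)          # 509
print(leaf_blocks)               # 1965
print(leaf_blocks + 1 + 1 + 7)   # 1974: plus meta page, root page, ~7 internal pages
```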
--
Tatsuo Ishii



[GENERAL] Field count in a RECORD variable - plpgsql

2005-03-01 Thread Mike Preston



I am building a generic crosstab function in plpgsql, where I can pass in a 
SQL statement and get back the data pivoted, with the values of one column 
serving as headers of the return columns. If I use a RECORD variable to 
scroll through the data, is there a way for me to tell the number of 
fields returned by the SQL statement parameter?

thanks in advance for any suggestions,
Mike


Re: [GENERAL] Index size

2005-03-01 Thread Ioannis Theoharis


Thanks a lot.

Another question:

Is there any way to prevent duplicates on a btree index attribute
while PERMITTING them in the table?



On Tue, 1 Mar 2005, Tatsuo Ishii wrote:

  I have created a btree index on a 'int4' attribute of a table.
 
  After i have inserted 1,000,000 raws in my table, i can see that my index
  size is 2745 Blocks (8KB each) from pg_class. That means about 21,960 KB
  size.
 
  I try to understand hows is this number generated, because thought that
  for each new entry in table, there is a new entry in index and that each
  entry of the index is:
 
  4 Bytes for the int4 attribute
  and
  40 Bytes for oid
 
  So 44 * 1,000,000 ~ 42,969 KB
 
  Can anybody inform me where I do the mistake?

 There's no oid in index tuples. There is an 8-byte long header for
 each index tuple. Since you are inserting 4-byte long user data, you
 index tuples are 12-byte each. Each index tuple needs a pointer in a
 block, which is called item pointer and that is 4-byte long. Each
 block can hold up to floor((8192-24(page header)-16(special
 data))/(12+4)) = 509 tuples. ceil(1,000,000/509) = 1965 is the blocks
 you need for your index. In addition to this, you need a meta page
 and a root page. So it becomes 1965+1+1 = 1967. Also you need
 internal pages, whose numer is hard to guess since it depends on the
 actual index tree structure(for example, tree height). From my limited
 experience, for 1,000,000 tuples, you will need at least 7 internal
 pages. Now the number becomes 1967+7 = 1974. Still it's different from
 2745. If you don't have deleted tuples, the difference probably comes
 from the fact that a btree index can never be 100% occupied. IMO
 1974/2745 = 0.71 seems not so bad.
 --
 Tatsuo Ishii





Re: [GENERAL] Problem with pg_hba.conf

2005-03-01 Thread Richard Huxton
Don't forget to cc the list.
celerity12 wrote:
I turned on the connection logging:
The server IP is 192.168.160.213
The client IP is 192.168.160.212
It's generating the entry for the client in the log file,
but the IP of the client is different (.212, not .213 ...
.213 is the server's IP)
Getting this message:

2005-03-01 20:51:36 LOG:  connection received:
host=192.168.160.213 port=3150
2005-03-01 20:51:36 FATAL:  no pg_hba.conf entry for
host 192.168.160.213, user postgres_db, database
template1
It's not likely to be lying - you're getting the connection from 
192.168.160.213 - now you need to find out why.

Have you got a firewall redirecting connections?
Are you running some sort of connection proxy?
--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Backupping the table values

2005-03-01 Thread Vitaly Belman
I tried looking into your solution. However, the DEFERRABLE
INITIALLY DEFERRED constraint doesn't seem to act as I expect it to.
I made two sample tables:


-- temp2 must exist before temp1 can reference it
CREATE TABLE functions.temp2
(
  id2 int4 NOT NULL,
  CONSTRAINT temp2_pk PRIMARY KEY (id2),
  CONSTRAINT temp2_id2_key UNIQUE (id2)
) 
WITHOUT OIDS;

CREATE TABLE functions.temp1
(
  id1 int4 NOT NULL,
  id2 int4,
  CONSTRAINT pk_temp1 PRIMARY KEY (id1),
  CONSTRAINT temp2_id2 FOREIGN KEY (id2) REFERENCES functions.temp2
(id2) ON UPDATE RESTRICT ON DELETE RESTRICT DEFERRABLE INITIALLY
DEFERRED
) 
WITHOUT OIDS;


Then I tried to run the following SQL:

begin;
delete from temp2;
delete from temp1;
end;

Based on what the documentation says, this transaction should've
worked, but instead all I get is:

ERROR:  update or delete on temp2 violates foreign key constraint
temp2_id2 on temp1
DETAIL:  Key (id2)=(1) is still referenced from table temp1.

Have I done anything wrong?
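A likely cause, though it is not spelled out at this point in the thread: in PostgreSQL, ON DELETE RESTRICT is checked immediately even when the constraint is DEFERRABLE INITIALLY DEFERRED; only NO ACTION (the default) honours deferral. A sketch of the constraint with deferral actually taking effect:

```sql
CONSTRAINT temp2_id2 FOREIGN KEY (id2) REFERENCES functions.temp2 (id2)
    ON UPDATE NO ACTION ON DELETE NO ACTION  -- NO ACTION, unlike RESTRICT, can be deferred
    DEFERRABLE INITIALLY DEFERRED
```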

On Sat, 26 Feb 2005 19:56:32 -0600, George Essig [EMAIL PROTECTED] wrote:
 On Sat, 26 Feb 2005 16:47:38 +0200, Vitaly Belman [EMAIL PROTECTED] wrote:
  So basically what I have is:
 
  public schema, in which there are two tables, A and B.
  backup schema, in which there are two tables, A and B.
 
  On table A and B in public I add a trigger On Delete which inserts
  the deleted data to the matching tables in the backup scehma.
 
  That'd work fine except the foreign keys problem. In A I have a
  column, B_id that is a foreign key to an id in the B table. Thus
  it means that I have to delete from A before I delete from B.
 
  Inserting into the backup folders, on the other hand, should be
  reversed, from the same reasons.
 
  Anyone has an idea how to solve this?
 
 
 Maybe a DEFERRABLE INITIALLY DEFERRED foreign key would work.  This
 will cause the foreign key constraint to be checked at the end of the
 transaction.  See the manual at:
 
 http://www.postgresql.org/docs/8.0/static/sql-createtable.html
 
 George Essig
 


-- 
 ICQ: 1912453
 AIM: VitalyB1984
 MSN: [EMAIL PROTECTED]
 Yahoo!: VitalyBe



Re: [GENERAL] Field count in a RECORD variable - plpgsql

2005-03-01 Thread Richard Huxton
Mike Preston wrote:
I am building a generic crosstab function in plpgsql, where I can pass
in a SQL statement and return back the data pivoted with the values of
one column serving as headers of the return columns.  If I use a RECORD
variable to scroll through the data, is there a way for me to tell the
number of fields returned by the SQL statement parameter?
No. You'll want to use pltcl or plperl or some similar more-dynamic 
language.
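For illustration, a plperl sketch of counting the fields returned by an arbitrary query (the function name and approach are hypothetical; spi_exec_query returns each row as a hash, so the keys of one row give the column names):

```sql
CREATE OR REPLACE FUNCTION count_fields(text) RETURNS integer AS '
    my $rv = spi_exec_query($_[0], 1);       # run the query, fetch at most one row
    return 0 unless $rv->{processed};        # no rows returned
    return scalar keys %{ $rv->{rows}[0] };  # number of fields in a row
' LANGUAGE plperl;
```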

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Vacuum time degrading

2005-03-01 Thread Wes
On 2/28/05 6:53 PM, Tom Lane [EMAIL PROTECTED] wrote:

 If you are suffering bloat, the fastest route to a solution would
 probably be to CLUSTER your larger tables.  Although VACUUM FULL
 would work, it's likely to be very slow.

How can there be bloat if there are no deletes or modifies?

Even if there were deletes or modifies (there will be in about another year
and a half), if a vacuum is being performed every night, how can there be
bloat?  The vacuum should release the dead space and it should be reused.
Am I missing something?

 There are currently no deletes or modifies to the database - only inserts.
 
 You *certain* about that?  It's hard to see how the vacuum time wouldn't
 be linear in table size if there's nothing to do and no dead space.

Absolutely sure.  The only case that would approach a delete is if a batch
load fails and the transaction is rolled back.  That very seldom happens.

Why am I running vacuum nightly if I have no deletes or updates, you ask?
Two reasons - to have it in the cron schedule for when there are deletes
(there will never be updates), and as a check on database integrity.  If
there is a database problem, vacuum at least has a chance of flagging it
since it reads the entire database.  This was instigated after we had a
couple of instances of corruption a while back that went undetected for too
long.  I'm also doing a weekly pg_dumpall as an additional check/fallback.

 Again, VACUUM VERBOSE info would be informative (it's sufficient to look
 at your larger tables for this).

I'll set that up to run tonight and see if it gives any clues.  Last night,
vacuum ran over 5 hours.

Wes





Re: [GENERAL] Vacuum time degrading

2005-03-01 Thread Wes
On 2/28/05 6:53 PM, Tom Lane [EMAIL PROTECTED] wrote:

 It's hard to see how the vacuum time wouldn't
 be linear in table size if there's nothing to do and no dead space.

I am doing 'vacuum analyze' rather than just 'vacuum'.  Could that have
anything to do with the non-linear behavior?

Wes





Re: [GENERAL] multicolumn GIST index question

2005-03-01 Thread Greg Stark

 Ron Mayer wrote:
  Did anyone get multi-column GIST indexes working using both
  the gist_btree and postgis modules?

Multi-column GiST indexes are basically useless at this point. The index pages
are split based entirely on the first column, so the index becomes essentially
an index on the first column.

There was some discussion on changing this but there wasn't consensus on which
direction to head with it. It may come up again for 8.1 if someone wants to
look at it.

-- 
greg




Re: [GENERAL] row numbering

2005-03-01 Thread Karsten Hilbert
  There are 5 vaccinations in a given vaccination schedule.
  
  Patient had 3 shots.
  
  I want the view to show me that shot 4 and 5 are missing
  without having to enter the cardinality of the vaccination in
  the original data.
 
 For this kind of task you usually want to use a left (or right) join.
I thought so. I tried to get it done that way several
times. I asked on this list more than once, too.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



Re: [GENERAL] row numbering

2005-03-01 Thread Karsten Hilbert
  There are 5 vaccinations in a given vaccination schedule.
 
  Patient had 3 shots.
 
  I want the view to show me that shot 4 and 5 are missing
  without having to enter the cardinality of the vaccination in
  the original data.
 
 That sounds like you are trying to abuse the data model, so I'm not 
 surprised that it isn't easily possible.  As the data stored in a table 
 is inherently unordered,
I know. I don't expect rows in tables to be ordered in any
way. 

 you can't really talk about order unless you 
 impose it yourself by way of assigning ordinal numbers or some other 
 sort key to your rows.
Here is the bit of data that I forgot to mention: Those
consecutive immunization rows *are* ordered by a sort key that
the application assigns -- the date of application. So,
basically, what I want to do is the following (high level):

1) read given vaccinations from table
2) order by date_given
3) assign ordinals to the rows in the order obtained by 2)
4) deduce missing shots by joining to another table that
   defines the number of vaccinations in a schedule

Sure, I can do this in client code just fine. I would prefer
to put that into a view, however. Having output row numbers
doesn't make that *easy* but it seems it's the missing link to
making it *possible* in SQL. I am not asking for row numbers
for tables - which is nonsense - but rather for optional
numbering of query result rows.
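For illustration, the ordinal in step 3 can be produced in a view with a correlated subquery (table and column names hypothetical):

```sql
CREATE VIEW numbered_shots AS
SELECT v1.patient_id,
       v1.date_given,
       (SELECT count(*)
          FROM vaccinations v2
         WHERE v2.patient_id = v1.patient_id
           AND v2.date_given <= v1.date_given) AS shot_number
  FROM vaccinations v1;
```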

 Even if you could, say, assign a fixed order to tables or views or 
 actually had some kind of automatic row number available, that would 
 still make the semantics of your data dependent of the particularities 
 of the queries that you use to access it, which doesn't sound like a 
 good idea to me.
Understood. That wasn't what I was after. I poorly presented
the case.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] Splitting tables or databases? That's the problem

2005-03-01 Thread v . demartino2
On a 128MB RAM, 450 MHz pentium 3 server with linux gentoo and postgresql
7.4.6 on an office lan we can manage satisfactorily a db containing a few
tables with a cumbersome amount of data (each table is around 650 thousand
records with 98 columns) relating to the year 2002.
We use M$-Access and ODBC to access those data.
Now similar data and amounts of records for the year 2003 need to be added.
Then queries to contrast 2003 data vs. 2002 ones will be needed.

In view of the poor hardware at our disposal, is it better from the standpoint
of efficiency  to:

1) Create a new database for 2003 with the same structure as that for 2002;
2) Append new 2003 data to the 2002 data in the same tables;
3) Create in the original database new, separate tables to contain data
for 2003.

Thanks
Vittorio


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Backupping the table values

2005-03-01 Thread George Essig
On Tue, 1 Mar 2005 17:48:44 +0200, Vitaly Belman [EMAIL PROTECTED] wrote:
 
 CREATE TABLE functions.temp1
 (
   id1 int4 NOT NULL,
   id2 int4,
   CONSTRAINT pk_temp1 PRIMARY KEY (id1),
   CONSTRAINT temp2_id2 FOREIGN KEY (id2) REFERENCES functions.temp2
 (id2) ON UPDATE RESTRICT ON DELETE RESTRICT DEFERRABLE INITIALLY
 DEFERRED
 )
 WITHOUT OIDS;
 
 

Remove 'ON UPDATE RESTRICT ON DELETE RESTRICT' from your create table
statement.

The manual says the following about RESTRICT:

Produce an error indicating that the deletion or update would create
a foreign key constraint violation. This is the same as NO ACTION
except that the check is not deferrable.
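So if the deferred check is what matters, dropping the RESTRICT clauses falls back to NO ACTION, which does honor DEFERRABLE. A sketch of the same table with that change:

```sql
CREATE TABLE functions.temp1
(
  id1 int4 NOT NULL,
  id2 int4,
  CONSTRAINT pk_temp1 PRIMARY KEY (id1),
  -- NO ACTION (the default) instead of RESTRICT, so the check can be deferred
  CONSTRAINT temp2_id2 FOREIGN KEY (id2) REFERENCES functions.temp2 (id2)
    DEFERRABLE INITIALLY DEFERRED
)
WITHOUT OIDS;
```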

George Essig

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Index size

2005-03-01 Thread Tom Lane
Tatsuo Ishii [EMAIL PROTECTED] writes:
 ... Now the number becomes 1967+7 = 1974. Still it's different from
 2745. If you don't have deleted tuples, the difference probably comes
 from the fact that a btree index can never be 100% occupied. IMO
 1974/2745 = 0.71 seems not so bad.

In fact the traditional figure for the steady-state load factor of a
btree index is 2/3rds; that is, after a long sequence of inserts and
deletes you can expect about one-third of each page to be empty space.

If Ioannis' number was taken immediately after a CREATE INDEX operation,
then his index size isn't reflective of any settling to a steady-state
load factor; rather it happens because the CREATE INDEX command
deliberately loads the index leaf pages only 2/3rds full, to avoid a
disproportionate amount of page splitting when normal inserts commence.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] row numbering

2005-03-01 Thread josue
I figured it out; maybe it's not the most elegant way but it works for my 
case where only small sets are retrieved

create table foo2 (pk int, valor numeric(12,2), porce numeric(5,2));
insert into foo2 values (1,7893.45,0.4);
insert into foo2 values (5,7893.45,0.3);
insert into foo2 values (9,7893.45,0.3);
select *,
(select count(*) from foo2 as f2
where f2.oid <= foo2.oid) as counter
from  foo2;
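If the table has a usable sort key -- here pk -- the same trick works without depending on oid, which isn't guaranteed unique and is absent on tables created WITHOUT OIDS (a variation, not from the original post):

```sql
-- Counter derived from the pk ordering rather than from oid.
select *,
       (select count(*) from foo2 as f2
         where f2.pk <= foo2.pk) as counter
  from foo2
 order by pk;
```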
--
Sinceramente,
Josué Maldonado.
... Un científico es un hombre tan endeble y humano como cualquiera; 
sin embargo, la búsqueda científica puede ennoblecerle, incluso en 
contra de su voluntad. -- Isaac Asimov

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: Fast major-version upgrade (was: [GENERAL] postgresql 8.0 advantages)

2005-03-01 Thread Jim C. Nasby
On Mon, Feb 28, 2005 at 09:27:46PM +0100, Martijn van Oosterhout wrote:
 On Mon, Feb 28, 2005 at 01:36:59PM -0600, Jim C. Nasby wrote:
   I used a straight copy of the filesystem with running database
   (over the net in my case) and immediately after that,
   stop the db and rsync for the last changes. This took only
   10 minutes (compared to 1.5h for the full filesystem copy)
   and I could start up the db in new location.
   
   this could work for you too.
  
  I hadn't thought about using rsync; that's a great idea!
  
  Is there somewhere this could be documented? In an FAQ maybe?
 
 It works only in the special case where the PostgreSQL version number
 is the same and you're running on the same platform. How often are you
 transferring databases like that. Even transferring from i386 to amd64
 wouldn't work like this AFAIUI.

Absolutely true, although in the case of database version PostgreSQL
will check that itself. But in the context this was originally brought
up in (using Slony to upgrade a machine from 7.4.x to 8.x), it would
work great, and rsync would make a huge difference in downtime.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: Where do you want to go today?
Linux: Where do you want to go tomorrow?
FreeBSD: Are you guys coming, or what?

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] pgpool

2005-03-01 Thread Brian Maguire
Is anyone currently using pgpool in production with high-volume use? I am
interested in your successes and challenges.

Brian

[GENERAL] Row fields by position in a cursor

2005-03-01 Thread Mike Preston
I am passing a SQL statement to a function as a parameter and then 
executing it via a call like 

OPEN curs1 FOR EXECUTE sql_str;

Since I won't know in advance the fieldnames being passed by the SQL 
string, I'd like to dereference the column fields by position. Can anybody 
tell me the syntax for doing this? Doesn't seem to be much documentation 
on dereferencing info returned by a cursor. I can fetch into either a 
record variable or a row variable, whichever works.

TIA,
Mike


[GENERAL] Novice Question

2005-03-01 Thread Michael Romagnoli
I am new to postgresql, having previously worked with mysql mostly.
What kind of command would I run if I wanted to copy an entire table 
(along with renaming it, and, of course, all data from the first table - 
some of which is binary)?

Thanks,
-Mike
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Novice Question

2005-03-01 Thread Michael Romagnoli
Sorry, I meant to ask about copying databases, not tables (including all
data in  the database as per below).
Thanks,
-Mike
Michael Romagnoli wrote:
I am new to postgresql, having previously worked with mysql mostly.
What kind of command would I run if I wanted to copy an entire table 
(along with renaming it, and, of course, all data from the first table 
- some of which is binary)?

Thanks,
-Mike
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Novice Question

2005-03-01 Thread Sean Davis
On Mar 1, 2005, at 4:23 PM, Michael Romagnoli wrote:
I am new to postgresql, having previously worked with mysql mostly.
What kind of command would I run if I wanted to copy an entire table 
(along with renaming it, and, of course, all data from the first table 
- some of which is binary)?

Thanks,
-Mike
---(end of 
broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly
select * into table new_table from old_table;
That's it.
Sean
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Row fields by position in a cursor

2005-03-01 Thread Tom Lane
Mike Preston [EMAIL PROTECTED] writes:
 Since I won't know in advance the fieldnames being passed by the SQL
 string, I'd like to dereference the column fields by position.  Can
 anybody tell me the syntax for doing this?

There isn't one, at least not in plpgsql.  You might have some success
in one of the other PL languages --- certainly pltcl or 8.0 plperl can
do this.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] sql join question

2005-03-01 Thread Scott Frankel
I want to return all records that match criteria across three separate tables
and, in spite of reading up on joins, have so far been unable to design a
solution that doesn't require caching a hash table of intermediate results.

Here's the situation:
Let's say color names belong to a set of tones, each of which belongs to a
palette.  A palette can be comprised of multiple tones.  Each tone can
contain multiple color names.  i.e.:

palette palette1
tones   red, green
colors  rose madder, crimson, red ochre, phthalocyanine, leaf green
palette palette2
tones   blue
colors  cerulean
palette palette3
tones   yellow
colors  chrome
Task:  find all color names in each of palette1's tones.
Can this be done in a single SQL statement?  Or does it require storing
the results of a select to find each of the tones that belong to palette1,
then separate selects on each resultant tone to yield the 5 color names?

Thanks in advance!
Scott
p.s. Here's my test case sql:
CREATE TABLE palettes (palette_pkey SERIAL PRIMARY KEY,
palette_name text UNIQUE DEFAULT NULL);
INSERT INTO  palettes (palette_name) VALUES ('plt1');
INSERT INTO  palettes (palette_name) VALUES ('plt2');
INSERT INTO  palettes (palette_name) VALUES ('plt3');
CREATE TABLE tones(tone_pkey SERIAL PRIMARY KEY,
tone_name text UNIQUE DEFAULT NULL,
palette_pkey integer REFERENCES palettes);
INSERT INTO  tones(tone_name, palette_pkey) VALUES ('red', 1);
INSERT INTO  tones(tone_name, palette_pkey) VALUES ('green', 1);
INSERT INTO  tones(tone_name, palette_pkey) VALUES ('blue', 2);
INSERT INTO  tones(tone_name, palette_pkey) VALUES ('yellow', 3);

CREATE TABLE colors   (color_pkey SERIAL PRIMARY KEY,
color_name text UNIQUE DEFAULT NULL,
tone_pkey integer REFERENCES tones);
INSERT INTO  colors   (color_name, tone_pkey) VALUES ('rose madder', 1);
INSERT INTO  colors   (color_name, tone_pkey) VALUES ('crimson', 1);
INSERT INTO  colors   (color_name, tone_pkey) VALUES ('red ochre', 1);
INSERT INTO  colors   (color_name, tone_pkey) VALUES ('phthalocyanine', 2);
INSERT INTO  colors   (color_name, tone_pkey) VALUES ('leaf green', 2);
INSERT INTO  colors   (color_name, tone_pkey) VALUES ('cerulean', 3);
INSERT INTO  colors   (color_name, tone_pkey) VALUES ('chrome', 4);

# -1- [ cache results in a hash table for further processing ]
SELECT * FROM tones WHERE palette_pkey = 1;
# yields
# tone_pkey | tone_name | palette_pkey
# ---+---+--
#  1 | red   |1
#  2 | green |1
# -2- [ for each tone returned from step 1 ]
SELECT * FROM colors WHERE tone_pkey = 1;
# yields
# color_pkey | color_name  | tone_pkey
# +-+---
# 1 | rose madder | 1
# 2 | crimson | 1
# 3 | red ochre   | 1
SELECT * FROM colors WHERE tone_pkey = 2;
# yields
# color_pkey |   color_name   | tone_pkey
# ++---
#   4 | phthalocyanine | 2
#   5 | leaf green     | 2


---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Novice Question

2005-03-01 Thread Bricklen Anderson
Sean Davis wrote:
On Mar 1, 2005, at 4:23 PM, Michael Romagnoli wrote:
I am new to postgresql, having previously worked with mysql mostly.
What kind of command would I run if I wanted to copy an entire table 
(along with renaming it, and, of course, all data from the first table 
- some of which is binary)?

Thanks,
-Mike
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly

select * into table new_table from old_table;
That's it.
Sean
you sure about that syntax?
How about:
create table new_table as select * from old_table;
--
___
This e-mail may be privileged and/or confidential, and the sender does
not waive any related rights and obligations. Any distribution, use or
copying of this e-mail or the information it contains by other than an
intended recipient is unauthorized. If you received this e-mail in
error, please advise me (by return e-mail or otherwise) immediately.
___
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Splitting tables or databases? That's the problem

2005-03-01 Thread Greg Patnude
Get a better computer to run it on in the long term -- that will be your 
best investment.



[EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 On a 128MB RAM, 450 MHz pentium 3  server with linux gentoo and postgresql
 7.4.6 on an office lan we can manage satisfactorily a db containing a few
 tables with a cumbersome amount of data (each table is around 650 thousand
 records with 98 columns) relating to the year 2002.
 We use M$-Access and ODBC to access those data.
 Now similar data and amounts of records for the year 2003 need to be 
 added.
 Then queries to contrast  2003 data vs. 2002 ones will be needed.

 In view of the poor hardware at our disposal, is it better from the 
 standpoint
 of efficiency  to:

 1) Create a new database for 2003 with the same structure as that for 
 2002;
 2) Append new 2003 data to the 2002 data in the same tables;
 3) Create in the original database new, separate tables to contain data
 for 2003.

 Thanks
 Vittorio


 ---(end of broadcast)---
 TIP 8: explain analyze is your friend
 



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Novice Question

2005-03-01 Thread javier wilson
On Tue, 01 Mar 2005 16:30:19 -0500, Michael Romagnoli
[EMAIL PROTECTED] wrote:
 
 Sorry, I meant to ask about copying databases, not tables (including all
 data in  the database as per below).

you can do a pg_dump your_database > your_database.dump.sql
and then createdb to create your new database, and finally
psql -f your_database.dump.sql new_database

and that's it. you should probably use -U postgres 
depending on what kind of security you use.

javier

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Trigger email?

2005-03-01 Thread CSN
Is it possible to setup a trigger so that every time a
certain field is changed, an email is sent? Using
pl/pgsql.

Thanks,
CSN




__ 
Do you Yahoo!? 
Yahoo! Mail - You care about security. So do we. 
http://promotions.yahoo.com/new_mail

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] sql join question

2005-03-01 Thread Ragnar Hafstað
On Tue, 2005-03-01 at 13:42 -0800, Scott Frankel wrote:

 [snip problem]

 Task:  find all color names in each of palette1's tones.
 
 Can this be done in a single SQL statement? 

 [snip table examples]

looks like a job for NATURAL JOIN

test=# select color_name
   from palettes 
natural join tones
natural join colors
   where palette_name='plt1';

   color_name

 rose madder
 crimson
 red ochre
 phthalocyanine
 leaf green
(5 rows)


gnari



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Novice Question

2005-03-01 Thread Chris Kratz
create database newdb template olddb; 

works as well.

-Chris

On Tuesday 01 March 2005 05:08 pm, javier wilson wrote:
 On Tue, 01 Mar 2005 16:30:19 -0500, Michael Romagnoli

 [EMAIL PROTECTED] wrote:
  Sorry, I meant to ask about copying databases, not tables (including all
  data in  the database as per below).

 you can do a pg_dump your_database > your_database.dump.sql
 and then createdb to create your new database, and finally
 psql -f your_database.dump.sql new_database

 and that's it. you should probably use -U postgres
 depending on what kind of security you use.

 javier

 ---(end of broadcast)---
 TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] JDBC and Portals . Clarification

2005-03-01 Thread Kris Jurka


On Fri, 25 Feb 2005, Dave Smith wrote:

 Is a portal using the V3 protocol the same as a cursor? I am trying to
 debug a slow query and I want the JDBC driver to use the cursor syntax.
 It says it is using a portal but the performance seems like just a
 regular sql statement. 
 

Yes, portals and cursors are pretty much the same thing.  Depending on 
your query, using a cursor may not actually speed it up; consider SELECT * 
FROM tab ORDER BY col.  The whole result must be sorted before any row is 
returned.  What it will do in this case is just reduce the memory 
consumption on the client by not fetching all the rows at once.  Also 
please check the list of restrictions for the JDBC driver to actually use 
a portal/cursor-backed ResultSet:
 
http://jdbc.postgresql.org/documentation/80/query.html#query-with-cursor
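For reference, those documented preconditions translate to JDBC code roughly like this (a sketch, not from the thread; connection setup omitted):

```java
conn.setAutoCommit(false);              // a cursor only lives inside a transaction
Statement st = conn.createStatement();  // must be TYPE_FORWARD_ONLY (the default)
st.setFetchSize(50);                    // non-zero fetch size asks for a cursor/portal
ResultSet rs = st.executeQuery("SELECT * FROM tab ORDER BY col");
while (rs.next()) {
    // rows are fetched from the backend 50 at a time
}
```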

Kris Jurka

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Trigger email?

2005-03-01 Thread Christopher Browne
[EMAIL PROTECTED] (CSN) writes:
 Is it possible to setup a trigger so that every time a
 certain field is changed, an email is sent? Using
 pl/pgsql.

Something _like_ that is possible.

What I would do instead is for the trigger to cause a record to be put
into a table that might be called something like submit_email.

An asynchronous process (that might look for NOTIFY requests!) would
then rummage thru submit_email to find messages it needs to send.

That allows you to keep this sort of thing under some degree of
control.

- It means you're not concurrently spawning 157 MTA connections; the
  mail manager only needs to open _one_ connection

- It means you have some ability to manage how much mail gets sent out
  at once.
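A minimal sketch of that queue-table approach (all names here are invented; the watched column is called watched_field):

```sql
CREATE TABLE submit_email (
    id        serial PRIMARY KEY,
    recipient text,
    subject   text,
    body      text,
    queued_at timestamp DEFAULT now()
);

CREATE FUNCTION queue_email() RETURNS trigger AS '
BEGIN
    IF NEW.watched_field IS DISTINCT FROM OLD.watched_field THEN
        INSERT INTO submit_email (recipient, subject, body)
        VALUES (''someone@example.org'', ''field changed'',
                ''new value: '' || NEW.watched_field);
        NOTIFY email_pending;  -- wake the asynchronous mailer process
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER watch_field
    AFTER UPDATE ON some_table FOR EACH ROW
    EXECUTE PROCEDURE queue_email();
```

The mailer then LISTENs for email_pending, drains submit_email, and deletes the rows it has sent.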
-- 
let name=cbbrowne and tld=cbbrowne.com in String.concat @ [name;tld];;
http://www.ntlug.org/~cbbrowne/linuxxian.html
A VAX is virtually a computer, but not quite.

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] basic temp table question

2005-03-01 Thread L. Fletcher
Hello,

The first time I run a query against this function (using Npgsql):

declare
r_cursor1 cursor for SELECT * from tmp_table;
begin
CREATE TEMPORARY TABLE tmp_table (
testcol integer
) ON COMMIT DROP;

INSERT INTO tmp_table SELECT 0;

open r_cursor1;
return 0;
end;

I get this error:

ERROR: XX000: relation 1090457025 is still open

All subsequent times I get this error:

ERROR: 42P01: relation with OID 1090457025 does not exist

What am I doing wrong?

(Sorry if this is a basic question.)

Thanks in advance...


Re: [GENERAL] basic temp table question

2005-03-01 Thread Tom Lane
L. Fletcher [EMAIL PROTECTED] writes:
 I get this error:
 ERROR: XX000: relation 1090457025 is still open

This is an 8.0 bug fixed in 8.0.1.

 All subsequent times I get this error:
 ERROR: 42P01: relation with OID 1090457025 does not exist=20

This is because plpgsql caches plans and therefore can't cope with
tmp_table not being the identical same table from run to run of the
function.  Sooner or later we will probably fix that, but in the
meantime consider using an ON COMMIT DELETE ROWS temp table, created
only once per connection, instead.
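Concretely, that workaround might look like this (a sketch using the names from the original post):

```sql
-- Issued once per connection, outside the function, so the cached plan
-- inside plpgsql always refers to the same relation:
CREATE TEMPORARY TABLE tmp_table (testcol integer) ON COMMIT DELETE ROWS;

-- The function body then only needs:
--   INSERT INTO tmp_table SELECT 0;
--   OPEN r_cursor1;
-- and the rows vanish automatically at commit.
```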

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Trigger email?

2005-03-01 Thread Jonathan Hedstrom
Christopher Browne wrote:
[EMAIL PROTECTED] (CSN) writes:
 

Is it possible to setup a trigger so that every time a
certain field is changed, an email is sent? Using
pl/pgsql.
   

Something _like_ that is possible.
 

you can also do it directly with a trigger if you prefer:
CREATE TRIGGER alert_insert
   AFTER INSERT OR UPDATE ON alert FOR EACH ROW
   EXECUTE PROCEDURE alert_notify();
Where the alert_notify() function would then call pgmail() with a 
meaningful subject and body etc -- see below

What I would do instead is for the trigger to cause a record to be put
into a table that might be called something like submit_email.
An asynchronous process (that might look for NOTIFY requests!) would
then rummage thru submit_email to find messages it needs to send.
 

at which point you could then use pgmail() to send the email
http://sourceforge.net/projects/pgmail/


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] sql join question

2005-03-01 Thread Scott Frankel
Sweet!  And not so sweet.
The natural join worked beautifully with my test schema; but it failed
to yield any rows with my real-world schema.  I think I've tracked down
why:  duplicate column names.  i.e.:
-1- these tables yield rows from a NATURAL JOIN query
CREATE TABLE palettes (palette_pkey SERIAL PRIMARY KEY,
palette_name text UNIQUE DEFAULT NULL);
CREATE TABLE tones(tone_pkey SERIAL PRIMARY KEY,
tone_name text UNIQUE DEFAULT NULL,
palette_pkey integer REFERENCES palettes);
-2- these tables yield NO rows from a NATURAL JOIN query
CREATE TABLE palettes (palette_pkey SERIAL PRIMARY KEY,
palette_name text UNIQUE DEFAULT NULL,
qwe text);
CREATE TABLE tones(tone_pkey SERIAL PRIMARY KEY,
tone_name text UNIQUE DEFAULT NULL,
palette_pkey integer REFERENCES palettes,
qwe text);
Are the 'qwe' columns in both tables clobbering each other and preventing
the join from succeeding?  The offending columns are inconsequential for
what I'm trying to do with this operation.  Can they be suppressed from the
query for this SQL statement to function properly?  Or am I SOL?
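One way around it is to spell out the join columns with ON clauses instead of letting NATURAL JOIN pick up every same-named column (a sketch against the test schema from earlier in the thread; the shared qwe column is then ignored):

```sql
SELECT c.color_name
  FROM palettes p
  JOIN tones  t ON t.palette_pkey = p.palette_pkey
  JOIN colors c ON c.tone_pkey = t.tone_pkey
 WHERE p.palette_name = 'plt1';
```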

Thanks again!
Scott


On Mar 1, 2005, at 2:28 PM, Ragnar Hafstað wrote:
On Tue, 2005-03-01 at 13:42 -0800, Scott Frankel wrote:
[snip problem]

Task:  find all color names in each of palette1's tones.
Can this be done in a single SQL statement?

[snip table examples]
looks like a job for NATURAL JOIN
test=# select color_name
   from palettes
natural join tones
natural join colors
   where palette_name='plt1';
   color_name

 rose madder
 crimson
 red ochre
 phthalocyanine
 leaf green
(5 rows)
gnari

---(end of 
broadcast)---
TIP 8: explain analyze is your friend


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] howto? Fine Debugging control at cmd line

2005-03-01 Thread OpenMacNews
hi all,
per the docs, i understand i *can* define/change the global pgsql debugging 
level at the cmd line by passing, say, pg_ctl the '-o -d 5' option,

AND,
that i can finely change loglevel options in postgresql.conf, e.g.:
client_min_messages = debug5  # debug5, debug4, debug3, debug2, debug1,
  # log, notice, warning, error
log_min_messages =debug5  # debug5, debug4, debug3, debug2, debug1,
  # info, notice, warning, error, log, fatal, 
panic
log_error_verbosity = verbose # terse, default, verbose
log_min_error_statement = debug5  # debug5, debug4, debug3, debug2, debug1,

can I, however, *individually* change these fine logging controls at the 
command line, and simply re-HUP?  e.g., for just a quick debug session?

seems like it might be reasonable to do so, but i think i'm missing the 
(possibly?) obvious ...

or, does one make the change in the .conf file, and then rehup?
any pointers?
thanx!
richard
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Index size

2005-03-01 Thread Tatsuo Ishii
 Tatsuo Ishii [EMAIL PROTECTED] writes:
  ... Now the number becomes 1967+7 = 1974. Still it's different from
  2745. If you don't have deleted tuples, the difference probably comes
  from the fact that a btree index can never be 100% occupied. IMO
  1974/2745 = 0.71 seems not so bad.
 
 In fact the traditional figure for the steady-state load factor of a
 btree index is 2/3rds; that is, after a long sequence of inserts and
 deletes you can expect about one-third of each page to be empty space.
 
 If Ioannis' number was taken immediately after a CREATE INDEX operation,
 then his index size isn't reflective of any settling to a steady-state
 load factor; rather it happens because the CREATE INDEX command
 deliberately loads the index leaf pages only 2/3rds full, to avoid a
 disproportionate amount of page splitting when normal inserts commence.

Interesting. Right after CREATE INDEX for an int4 column using pgbench
-s 10 (1,000,000 tuples), I got 2184 leaf pages. From my calculation
the number of leaf pages is expected to be 1965, which is the 100% full
case assumption of course. So 1965/2184 = 0.8997 = 90% is actually used?
--
Tatsuo Ishii

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Index size

2005-03-01 Thread Tatsuo Ishii
 An other question:
 
 Is there any way to prevent duplicates on a btree index attribute,
 PERMITTING them in the table?

I can't think of any useful usage for such an index. Can you explain
why you need it?
--
Tatsuo Ishii

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Index size

2005-03-01 Thread Tom Lane
Tatsuo Ishii [EMAIL PROTECTED] writes:
 ... rather it happens because the CREATE INDEX command
 deliberately loads the index leaf pages only 2/3rds full, to avoid a
 disproportionate amount of page splitting when normal inserts commence.

 Interesting. Right after CREATE INDEX for an int4 column using pgbench
 -s 10 (1,000,000 tuples), I got 2184 leaf pages. From my calculation
 the number of leaf pages is expected to be 1965, which is the 100% full
 case assumption of course. So 1965/2184 = 0.8997 = 90% is actually used?

Shoulda read the code rather than going by memory ;-).  What nbtsort.c
actually says is

 * It is not wise to pack the pages entirely full, since then *any*
 * insertion would cause a split (and not only of the leaf page; the need
 * for a split would cascade right up the tree).  The steady-state load
 * factor for btrees is usually estimated at 70%.  We choose to pack leaf
 * pages to 90% and upper pages to 70%.  This gives us reasonable density
 * (there aren't many upper pages if the keys are reasonable-size) without
 * incurring a lot of cascading splits during early insertions.

and indeed the code seems to do that:

/* set full threshold based on level.  See notes at head of file. */
if (level > 0)
    state->btps_full = (PageGetPageSize(state->btps_page) * 3) / 10;
else
    state->btps_full = PageGetPageSize(state->btps_page) / 10;

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Index size

2005-03-01 Thread Tatsuo Ishii
  Interesting. Right after CREATE INDEX for an int4 column using pgbench
  -s 10 (1,000,000 tuples), I got 2184 leaf pages. From my calculation
  the number of leaf pages is expected to be 1965, which is the 100% full
  case assumption of course. So 1965/2184 = 0.8997 = 90% is actually used?
 
 Shoulda read the code rather than going by memory ;-).  What nbtsort.c
 actually says is
 
  * It is not wise to pack the pages entirely full, since then *any*
  * insertion would cause a split (and not only of the leaf page; the need
  * for a split would cascade right up the tree).  The steady-state load
  * factor for btrees is usually estimated at 70%.  We choose to pack leaf
  * pages to 90% and upper pages to 70%.  This gives us reasonable density
  * (there aren't many upper pages if the keys are reasonable-size) without
  * incurring a lot of cascading splits during early insertions.
 
 and indeed the code seems to do that:
 
 /* set full threshold based on level.  See notes at head of file. */
 if (level > 0)
 state->btps_full = (PageGetPageSize(state->btps_page) * 3) / 10;
 else
 state->btps_full = PageGetPageSize(state->btps_page) / 10;
 

Thanks for the explanation.

So it seems Ioannis' number was not taken immediately after a CREATE
INDEX operation?
--
Tatsuo Ishii

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Index size

2005-03-01 Thread Tom Lane
Tatsuo Ishii [EMAIL PROTECTED] writes:
 So it seems Ioannis' number was not taken immediately after a CREATE
 INDEX operation?

I would guess not, but it's up to him to say.  If it is a number derived
after some period of normal operation, then his result agrees with the
theory that says 70% is the steady-state figure ...

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Replication from other SQL Server

2005-03-01 Thread Keith Tsao
Hi,

I am new to postgresql!
We have a M$SQL server and would like to do a replication from this
server to postgresql.
Would this be possible? 

If so, what would be the appropriate method.
Any suggestion?

Thx!

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Performance of Views

2005-03-01 Thread Steffen Boehme
Hello there,

i have a short question ...

I have a few tables (at the moment only 3 for testing), over which
will by made a query in this form:

SELECT
a.orderitem_id,
a.transaction_id,
a.order_id,
a.shop_id,
a.quantity,
a.price,
b.affiliate_id,
c.type
FROM
ss_order_orderitems a
LEFT JOIN ss_order_affiliate_tracking b ON a.order_id = b.order_id,
ss_shops c
WHERE
(a.order_id = b.order_id OR b.order_id IS NULL) AND
a.shop_id = c.shop_id;

The query will get later a few more conditions ...

Now the problem is that the performance is not really good ... and i
think about the best solution for such a query. I found three possibilities:

1. a simple select over the three tables (one of them contains 16
entries at the moment and it's growing) in the form like above (the db is
mysql 4.1.x)
I think this solution is not very performant ...

2. move the data to a postgresql-db with the same structure and create a
view, which makes the same query ...
Is the performance for the same query different between a simple select
and a view!?
If so, i can forget the view ...

3. put the data with the above query in one big table ...
I know, that's no good db-structure, but i don't know how i could make it
better ...


The main question at the moment is ...
Is the performance of the view method better than the first method on
the existing tables!?

I hope of a view hints ...

Thanks
Steffen

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Database Name

2005-03-01 Thread Dorian Büttner
Envbop wrote:
Can someone tell me where I can find the database names.
login using psql and type \l ;-)
Chances are you have root access to the Linux box; then you could modify
pg_hba.conf to gain some access.
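For reference, the same information psql's \l shows can be queried from the system catalogs once any connection is possible (a sketch; assumes you can connect to some database, e.g. template1):

```sql
-- List all non-template databases on the server (equivalent to \l):
SELECT datname FROM pg_database WHERE NOT datistemplate;

-- List the users known to the server:
SELECT usename FROM pg_user;
```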

Do you have any account information at all?
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Novice Question

2005-03-01 Thread Edmund Bacon
[EMAIL PROTECTED] (Michael Romagnoli) writes:

 What kind of command would I run if I wanted to copy an entire table
 (along with renaming it, and, of course, all data from the first table
 -
 some of which is binary)?

SELECT * INTO newtable FROM oldtable;

Note that this doesn't construct indexes, Foreign keys, constraints,
etc.

If by 'binary data' you mean BLOBs, I'd expect the above to work.
Other than that, AFAIUI you have no reasonable expectation that your data is
stored in any meaningful binary format by the database.  All data
could be internally stored as strings (though that might be very
slow).
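To make the caveat concrete, a sketch of the two-step alternative that also carries over column defaults (the LIKE clause; further INCLUDING options for constraints and indexes exist only in newer versions, so check your release's docs):

```sql
-- One statement: copies structure and data, but no indexes,
-- foreign keys, or constraints:
SELECT * INTO newtable FROM oldtable;

-- Two steps: copy the definition first (with column defaults),
-- then the rows; indexes/constraints still need re-creating by hand:
CREATE TABLE newtable2 (LIKE oldtable INCLUDING DEFAULTS);
INSERT INTO newtable2 SELECT * FROM oldtable;
```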



-- 
Remove -42 for email

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] to_char bug?

2005-03-01 Thread Ben Trewern
From the docs:

FM suppresses leading zeroes and trailing blanks that would otherwise be 
added to make the output of a pattern be fixed-width

It works now, but (one) I don't understand why the space is added in the
first place, and (two) I think the docs don't tell the whole story, i.e.
leading blanks and, I assume, trailing zeros if applicable.
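For illustration: the leading space is the sign column, which would hold a minus for negative input, and FM suppresses it:

```sql
SELECT to_char(1, '000');    -- ' 001'  (space reserved for the sign)
SELECT to_char(-1, '000');   -- '-001'
SELECT to_char(1, 'FM000');  -- '001'   (FM drops the fill/sign padding)
```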

Regards,

Ben

Tom Lane [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 Ben Trewern [EMAIL PROTECTED] writes:
 It seems that to_char(1, '000') gives a string  001 with a space in 
 front.
 Is this a bug?

 No.

 Possibly you want 'FM000'.

 regards, tom lane

 ---(end of broadcast)---
 TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
 



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Database Name

2005-03-01 Thread Envbop
Hi

I've just inherited a PostgreSQL database, for which I do not have any
details of, like database name or the users.
This used to be a library database which was managed via a web page written
in php.
Its running on a Linux box.
The front end was also written in php. The original writers of this
application is no longer.

I've just started to get familiarisd with postgresql in the last week. and
have installed PostgreSQL v8.0.1 on a Windows server.
And I am trying to connect to the linux box via the phppgadmin tool. as well
as trying an odbc connection.

Can someone tell me where I can find the database names.



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] sql join question

2005-03-01 Thread Ragnar Hafstað
On Tue, 2005-03-01 at 16:51 -0800, Scott Frankel wrote:
 Sweet!  And not so sweet.
 
 The natural join worked beautifully with my test schema; but it failed
 to yield any rows with my real-world schema.  I think I've tracked down
 why:  duplicate column names.  i.e.:
 ...
   CREATE TABLE palettes (palette_pkey SERIAL PRIMARY KEY,
   palette_name text UNIQUE DEFAULT NULL,
   qwe text);
 
   CREATE TABLE tones(tone_pkey SERIAL PRIMARY KEY,
   tone_name text UNIQUE DEFAULT NULL,
   palette_pkey integer REFERENCES palettes,
   qwe text);
 
 Are the 'qwe' columns in both tables clobbering each other and 
 preventing the
 join from succeeding?

the docs really explain this better than I can, but a
  table1 NATURAL JOIN table2
is shorthand for a
  table1 JOIN table2 USING (list_of_common_columns)

so:
select color_name from palettes
   join tones USING (palette_pkey)
   join colors USING (tone_pkey) 
  where palette_name='plt1';
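To spell out why the duplicate qwe columns break the NATURAL JOIN with the schema above (a sketch):

```sql
-- palette_pkey AND qwe are both common to the two tables, so this:
SELECT * FROM palettes NATURAL JOIN tones;
-- expands to:
SELECT * FROM palettes JOIN tones USING (palette_pkey, qwe);
-- i.e. rows only match where qwe is ALSO equal (and non-NULL) in both
-- tables, which is why the real-world schema yields no rows.
```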

see:
http://www.postgresql.org/docs/8.0/interactive/sql-select.html

gnari




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Performance of Views

2005-03-01 Thread Greg Stark
Steffen Boehme [EMAIL PROTECTED] writes:

 FROM
   ss_order_orderitems a
   LEFT JOIN ss_order_affiliate_tracking b ON a.order_id = b.order_id, 
 ss_shops c
 WHERE
   (a.order_id = b.order_id OR b.order_id IS NULL) AND

What is that last line doing there? It's completely redundant and could very
well be the source of your problems.

For useful help you should post the \d output for the three tables and the
result of EXPLAIN ANALYZE SELECT ...

 The main-question at the moment iss ...
 Is the performance of the View-Method better then the first Method on
 the existing tables!?

A view doesn't change performance at all. It's exactly the same as writing the
query in the view directly into your query.
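That equivalence can be checked directly: the rewriter expands the view before planning, so EXPLAIN shows the same plan either way (a sketch; the view name is hypothetical, the body is the query from the original post):

```sql
CREATE VIEW order_report AS
SELECT a.orderitem_id, a.transaction_id, a.order_id, a.shop_id,
       a.quantity, a.price, b.affiliate_id, c.type
FROM ss_order_orderitems a
LEFT JOIN ss_order_affiliate_tracking b ON a.order_id = b.order_id
JOIN ss_shops c ON a.shop_id = c.shop_id;

-- Produces the same plan as running the SELECT above inline:
EXPLAIN SELECT * FROM order_report;
```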

-- 
greg




[GENERAL] cursor already in use error

2005-03-01 Thread Sim Zacks
PostgreSQL 8.0beta1
I have a function that uses a cursor, and it is giving me the error: cursor
"crsr" already in use when the parameters I pass in come from another table.

The function works fine when I call it by itself, such as select
PartNeedsReschedule(100,1) or select * from PartNeedsReschedule(100,1)
but when I try select PartNeedsReschedule(PartID,1) from Parts then I get
the error.

It seems that when I qualify the query, such as select
PartNeedsReschedule(PartID,1) from Parts where partid=100,
then it works, because it only returns one row.

Is this a known problem with cursor based functions that they can not be
called by set based queries?
Is there a workaround?
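One common workaround (a sketch; it assumes the function declares a bound cursor named crsr in its DECLARE section): close the cursor before returning, so the portal name is free when the function is called again for the next row of the same statement:

```sql
-- Sketch of the relevant part of the PL/pgSQL function body:
BEGIN
    OPEN crsr;      -- bound cursor from the DECLARE section
    -- ... FETCH crsr INTO ...; process rows ...
    CLOSE crsr;     -- without this, the per-row call for the next row
                    -- finds the portal "crsr" still open
    RETURN result;
END;
```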

Thank You
Sim


