Re: [GENERAL] Tabbed data in tab-separated output

2007-01-04 Thread felix
On Wed, Jan 03, 2007 at 10:31:46PM -0600, mike wrote:
 How are you attempting to restore the table after using psql?  Psql
 insert statements? Pgdump? COPY FROM?

Actually, right now I feed the dump file into a program which mangles
it and analyzes it in various ways.  It will eventually be fed into
psql for restoration elsewhere.  The problem isn't restoring it.  It's
not knowing how to tell which tabs are field separators and which are
part of the data.

 On Wed, 2007-01-03 at 17:59 -0800, [EMAIL PROTECTED] wrote:
  pg_dump apparently is smart enough to print embedded tabs as escaped
  chars, but not psql.  Is there a fix for this?  I thought of reverting
  to standard output, without the -t option, and analyzing the first two
  lines to tell exactly how many spaces are assigned to each column, but
  that gives me the shudders.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] no unpinned buffers available ? why? (hstore and plperl involved)

2007-01-04 Thread hubert depesz lubaczewski

On 1/3/07, Richard Huxton dev@archonet.com wrote:


hubert depesz lubaczewski wrote:
 On 1/3/07, Richard Huxton dev@archonet.com wrote:

 If you do that separately at the start of the process, (one query per
 custom column in the old table) then it becomes straightforward.


 no, because the meaning of col1 in advert_custom_fields is different for
 each record.
 for one record its codename might be email, for another record it
 might be engine size.
And is that not what's stored in v_category_custom_fields? So you can
do the transformation and get (advert_id=1, codename='col1',
value='vvv') then use v_category_custom_fields to update the 'col1' part.



this information is stored there, yet i have no clue how you would do
it with standard sql statements. for every advert there are about
20-30 custom fields (in one record in advert_custom_fields). to do it your
way i would need to make approximately 30 (number of custom fields) times
30 (number of adverts) queries. that would be way slower and definitely
not automatic.

best regards,

depesz


--
http://www.depesz.com/ - nowy, lepszy depesz


Re: [GENERAL] no unpinned buffers available ? why? (hstore and

2007-01-04 Thread Richard Huxton

hubert depesz lubaczewski wrote:

On 1/3/07, Richard Huxton dev@archonet.com wrote:


hubert depesz lubaczewski wrote:
 On 1/3/07, Richard Huxton dev@archonet.com wrote:

 If you do that separately at the start of the process, (one query per
 custom column in the old table) then it becomes straightforward.


 no, because the meaning of col1 in advert_custom_fields is different for
 each record.
 for one record its codename might be email, for another record it
 might be engine size.
And is that not what's stored in v_category_custom_fields? So you can
do the transformation and get (advert_id=1, codename='col1',
value='vvv') then use v_category_custom_fields to update the 'col1' part.



this information is stored there, yet i have no clue how you would do
it with standard sql statements. for every advert there are about
20-30 custom fields (in one record in advert_custom_fields). to do it your
way i would need to make approximately 30 (number of custom fields) times
30 (number of adverts) queries. that would be way slower and definitely
not automatic.


Show me the table definitions and some sample data and I'll see if the 
SQL is do-able.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Backup Restore

2007-01-04 Thread Richard Huxton

Bob Pawley wrote:
Found it in template 1. This seems strange as both servers and pgadmins 
are the same version and I haven't opened the template until today.


The only thing I can think of is that you accidentally restored into 
template1. Probably easier to do with pgadmin than from the command-line.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] no unpinned buffers available ? why? (hstore and

2007-01-04 Thread Richard Huxton

hubert depesz lubaczewski wrote:

On 1/4/07, Richard Huxton dev@archonet.com wrote:


Show me the table definitions and some sample data and I'll see if the
SQL is do-able.


technically - i can, but please - believe me, it is not possible.
advert_custom_fields table has approx. 1200 columns (for reasons i was
explaining some time ago).
sample data would look like:
# select id, category_id from adverts order by id desc limit 5;
    id    | category_id
----------+-------------
 35161391 |          35
 35161390 |          35
 35161389 |         230
 35161388 |          34
 35161387 |          37
(5 rows)

# select * from v_category_custom_fields limit 5;
 category_id | codename  | custom_field_name
-------------+-----------+-------------------
           1 | contact   | text_6
           1 | web       | text_5
           1 | mail      | text_4
           1 | phone     | text_3
           1 | price_usd | number_3
(5 rows)

advert_custom_fields basically has id, advert_id, and then 128 columns per
type (text, number, boolean, integer, date, time, timestamp).


OK, let's look at it one type at a time. You'd obviously generate the 
following query via a script then save it as a view/prepared query.


SELECT advert_id, 'text_1'::text as colname, text_1 AS value
FROM advert_custom_fields
UNION ALL
SELECT advert_id, 'text_2'::text as colname, text_2 AS value
FROM advert_custom_fields
UNION ALL
...
SELECT advert_id, 'text_128'::text as colname, text_128 AS value
FROM advert_custom_fields;

Now that's going to run a set of seq-scans, so if the table's not going 
to fit in RAM then you'll probably want to add a WHERE advert_id=xxx 
part to each clause. Then call it once per advert-id in a loop as you 
are at present. Or, you could do it in batches of e.g. 100 with a 
partial index.


I'd be tempted to create a TEMP TABLE from that query, then join to the 
table for the codename lookup via v_category_custom_fields. Of course, 
you could do it all in the giant UNION ALL query if you wanted to.
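A rough sketch of that approach (table and column names are taken from the thread; the exact column list, and the adverts join key, are assumptions):

```sql
-- Unpivot one type's columns into (advert_id, colname, value) rows,
-- then resolve each generic colname to its per-category codename.
CREATE TEMP TABLE unpivoted AS
SELECT advert_id, 'text_1'::text AS colname, text_1 AS value
  FROM advert_custom_fields
UNION ALL
SELECT advert_id, 'text_2'::text, text_2
  FROM advert_custom_fields;
-- ... repeat for text_3 .. text_128, generated by a script

SELECT u.advert_id, c.codename, u.value
  FROM unpivoted u
  JOIN adverts a ON a.id = u.advert_id
  JOIN v_category_custom_fields c
    ON c.category_id = a.category_id
   AND c.custom_field_name = u.colname
 WHERE u.value IS NOT NULL;
```

The join through adverts.category_id is what turns the generic text_1 slot into its per-record meaning (email, engine size, etc.) in a single pass rather than one query per advert per field.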


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Accessing a custom FileSystem (as in Mysql Custom Engine)

2007-01-04 Thread Scara Maccai

Hi,

I have a custom database (a very fast select/slow insert db) written 
in c/c++.
I can access it with mysql writing a Custom Engine. That is pretty 
cool because now all my custom db tables can be joined with tables in 
mysql's format.
I only need read access to my custom table format db. Would read access 
be possible with the C-Language Functions - Returning Sets API? I 
would like to read my custom tables and join them with postgres tables...






[GENERAL] Table inheritance implementation.

2007-01-04 Thread Grzegorz Nowakowski
Hi.

I'm developing an application using PostgreSQL and it happened table
inheritance is THE solution to some design problems I have.
Unfortunately the feature doesn't exactly work as true class/object
inheritance would.  Main problems are well recognized and documented:
child table doesn't inherit parent constraints and parent's index
doesn't get updated with child's keys.  While I didn't dig in the
Postgres internals, from the symptoms I guess the inheritance is
implemented as implicit UNION of the tables.  To be more specific, I
have:

CREATE TABLE parent (
p int PRIMARY KEY
);
CREATE TABLE child (
c int
) INHERITS (parent);

If I'm right, in the backend there are two tables: parent(p) and
child(p,c), and
INSERT INTO child ...
just goes to child.  Then when I
SELECT ... FROM parent
Postgres does
SELECT ... FROM parent UNION SELECT ... FROM child
for me (might be a syntax error, I'm not so familiar with SQL).
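The missing-constraint behaviour described here can be reproduced directly (a minimal sketch using standard INHERITS syntax):

```sql
CREATE TABLE parent (
    p int PRIMARY KEY
);
CREATE TABLE child (
    c int
) INHERITS (parent);

INSERT INTO parent VALUES (1);
-- Not rejected: parent's unique index does not cover rows stored in
-- child, so the "primary key" is only enforced per physical table.
INSERT INTO child VALUES (1, 2);

-- Returns p = 1 twice: scanning parent appends child's rows.
SELECT p FROM parent;
```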

This scenario of course explains these problems and I understand solving
them won't be easy.  But I have another question: why can't
inheritance be implemented as an implicit JOIN?

I mean, in the backend there would be tables parent(p) and child(c) plus
some glue added (if oids/tids are not enough).  So
INSERT INTO child VALUES (1,2)
would do
INSERT INTO parent VALUES (1)
INSERT INTO child VALUES (2)
And
SELECT ... FROM parent
would work as is, but
SELECT ... FROM child
would result in
SELECT ... FROM parent JOIN child ON glue

It seems to me that it would solve both mentioned problems in one shot:
parent contains all keys it should have (and so index does) and parent's
constraints are enforced at the same time.

The glue may or may not be an issue.  The real issue would be with
overriding parent's constraints (from my point of view a minor one
compared to the current problems).  There may be other deficiencies I'm
not aware of.  On the bright side, I think this implementation (or at
least some of its functionality) could be built with rules.

Anyone care to share thoughts about the whole idea?  Or details?

Best regards.
-- 
Grzegorz Nowakowski




Re: [GENERAL] could not open file xxxx for writing: Permission

2007-01-04 Thread Bill Moran
Tomas Lanczos [EMAIL PROTECTED] wrote:

 Hello,
 
 Using PostgreSQL 8.1.4, pgadmin III 1.6.0 in WinXP I tried to export a table
 using COPY (first time in postgresql, did many times in Sybase):
 
 COPY ml50jtsk_datum_v TO 'c:/postgresql/ml50jtsk.out';
 
 I got the following errormessage:
 
 ERROR: could not open file c:/postgresql/ml50jtsk.out for writing:
 Permission denied
 SQL state: 42501
 
 What's wrong? Does it mean that the database user has no write permission
 outside of the database? But then how did I manage to do a pg_dump a few weeks ago?

When you run a pg_dump, the file is created with the perms of the user
calling the pg_dump program.

When you do a copy, the file is created with the perms of the user
running the PostgreSQL _server_ process.  Probably a user called
postgres.

If the user the server runs as doesn't have permission to write to the
specified directory, you'll see this error.

As Magnus stated, you can do \copy, which executes the copy command in
the client instead of in the server, and it will have the perms of
the user who started the client -- assuming you're using the psql
program.

You can also change that directory's perms to allow the server user to
write to it, or choose a directory that the server user already has
rights to.
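To make that concrete (table and path taken from the original post; the \copy form runs inside psql):

```sql
-- Server-side COPY: the account running the PostgreSQL server
-- (typically "postgres") must be able to write this file.
COPY ml50jtsk_datum_v TO 'c:/postgresql/ml50jtsk.out';

-- Client-side alternative: the file is written with the permissions
-- of whoever is running psql, just like pg_dump.
\copy ml50jtsk_datum_v TO 'c:/postgresql/ml50jtsk.out'
```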

HTH,
Bill

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Tabbed data in tab-separated output

2007-01-04 Thread Martijn van Oosterhout
On Thu, Jan 04, 2007 at 12:16:17AM -0800, [EMAIL PROTECTED] wrote:
 On Wed, Jan 03, 2007 at 10:31:46PM -0600, mike wrote:
  How are you attempting to restore the table after using psql?  Psql
  insert statements? Pgdump? COPY FROM?
 
 Actually, right now I feed the dump file into a program which mangles
 it and analyzes it in various ways.  It will eventually be fed into
 psql for restoration elsewhere.  The problem isn't restoring it.  It's
 not knowing how to tell which tabs are field separators and which are
 part of the data.

psql is pretty dumb that way, being designed for people, not programs. I
think recent versions will escape the tab.

I suggest you use \copy instead, which on recent versions will allow you
to copy from a query. Alternatively, use a delimiter that doesn't exist
in your data.
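For example (hypothetical table name; COPY text format escapes special characters, so field-separator tabs stay unambiguous):

```sql
-- \copy writes COPY text format: an embedded tab in the data comes out
-- as the two characters \t, a newline as \n, a backslash as \\ --
-- only literal tab bytes are field separators.
\copy mytable TO 'mytable.tsv'

-- Or sidestep the problem with a delimiter that never occurs in the data:
\copy mytable TO 'mytable.out' WITH DELIMITER '|'
```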

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




[GENERAL] LargeObjects Total Size

2007-01-04 Thread veejar

My pg_largeobject table filenode is 16404:
(select relfilenode from pg_class where relname='pg_largeobject')

Row (record) count in table pg_largeobject: 73574 - 144 MB (73574 * 2 KB)
(select count(loid) from pg_largeobject)

Relation pages count (in table pg_class) for table pg_largeobject:
23713 - 185 MB (23713 * 8 Kb)
(select relpages from pg_class where relname='pg_largeobject')

Total:
144 MB - data in table pg_largeobject
185 MB - size on disk for table pg_largeobject

Question:
185 MB - 144 MB = 41 MB - what is in this extra space on disk?



[GENERAL] Pure PostgreSQL unit tests - test and debug pgsql constraints and procedures/functions

2007-01-04 Thread BigSmoke
Following up on recent experiences I had with unit tests, I've written
an article on unit testing in PostgreSQL. If anyone is interested, it
can be found at http://www.bigsmoke.us/postgresql-unit-testing/

Of course, comments and criticisms are ever welcome.




[GENERAL] How to use the SQL parser subsystem

2007-01-04 Thread Ravindra Jaju

Hi.

Is it possible to use the SQL parser embedded inside the postgres code via
some public API?
I wish to get a parsed SQL query in any format which lends itself to easy
analysis (structural, for example) so as to be able to find similarities
between queries.

I tried peeking around a bit, and tried pulling out src/backend/parser - but
that depends on other parts of the code, making it a bit non-trivial. Plus,
any changes in the base code upstream would not be available to this
pulled-out application over time. Looking further, I noticed on my Fedora
distribution that the pg devel package has a parser/parser.h file which
exposes the function 'raw_parser' which is the function that seemed most
interesting to me while exploring the code too.

I have some questions:
1] is 'raw_parser' the right function to parse an SQL statement (for mainly
structural/syntactic analysis)?
2] If yes, I found that none of the shared library files expose this
function - I could not link a simple C program with this function
successfully. (I tried all the .so files related to postgres! :-()
3] If not, what would be a better mechanism to re-use all the work already
done in postgres for SQL analysis?

regards,
jaju


Re: [GENERAL] How to use the SQL parser subsystem

2007-01-04 Thread Martijn van Oosterhout
On Thu, Jan 04, 2007 at 08:01:17PM +0530, Ravindra Jaju wrote:
 I tried peeking around a bit, and tried pulling out src/backend/parser - but
 that depends on other parts of the code, making it a bit non-trivial. Plus,
 any changes in the base code upstream would not be available to this
 pulled-out application over time. Looking further, I noticed on my Fedora
 distribution that the pg devel package has a parser/parser.h file which
 exposes the function 'raw_parser' which is the function that seemed most
 interesting to me while exploring the code too.

Note that that's a *backend* header file.

 I have some questions:
 1] is 'raw_parser' the right function to parse an SQL statement (for mainly
 structural/syntactic analysis)?

I believe it's the function used by postgres internally, not sure
though.

 2] If yes, I found that none of the shared library files expose this
 function - I could not link a simple C program with this function
 successfully. (I tried all the .so files related to postgres! :-()

It's inside the server; it's not in a separate library. To use it you
need to be inside the server.

 3] If not, what would be a better mechanism to re-use all the work already
 done in postgres for SQL analysis?

No idea.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] How to use the SQL parser subsystem

2007-01-04 Thread Tatsuo Ishii
 Hi.
 
 Is it possible to use the SQL parser embedded inside the postgres code via
 some public API?
 I wish to get a parsed SQL query in any format which lends itself to easy
 analysis (structural, for example) so as to be able to find similarities
 between queries.
 
 I tried peeking around a bit, and tried pulling out src/backend/parser - but
 that depends on other parts of the code, making it a bit non-trivial. Plus,
 any changes in the base code upstream would not be available to this
 pulled-out application over time. Looking further, I noticed on my Fedora
 distribution that the pg devel package has a parser/parser.h file which
 exposes the function 'raw_parser' which is the function that seemed most
 interesting to me while exploring the code too.
 
 I have some questions:
 1] is 'raw_parser' the right function to parse an SQL statement (for mainly
 structural/syntactic analysis)?

Yes.

 2] If yes, I found that none of the shared library files expose this
 function - I could not link a simple C program with this function
 successfully. (I tried all the .so files related to postgres! :-()

It's not exported.

 3] If not, what would be a better mechanism to re-use all the work already
 done in postgres for SQL analysis?

One idea would be writing a PostgreSQL function which calls
raw_parser() and returns the parse tree as a textual representation.

Another idea would be pulling out raw_parser() along with related
functions from PostgreSQL. pgpool-II
(http://pgfoundry.org/projects/pgpool/) has already done this.
--
Tatsuo Ishii
SRA OSS, Inc. Japan



Re: [GENERAL] How to use the SQL parser subsystem

2007-01-04 Thread Ravindra Jaju

Hi.

That was informative. Thanks.

The reason I started pursuing this path was that this file
(parser/parser.h) happens to be distributed as part of some 'devel'
package, making me think that it might be possible to re-use the SQL
parsing functionality that exists within postgres.

Even if it is in the backend, I guessed that there might be a .so
related to the backend which I could link against. Not so...

If someone can confirm that there is no known utility which uses the
postgres codebase to play with SQL statements, then it would make sense
for me to try and hack something.
regards,
jaju

On 1/4/07, Martijn van Oosterhout kleptog@svana.org wrote:


On Thu, Jan 04, 2007 at 08:01:17PM +0530, Ravindra Jaju wrote:
 I tried peeking around a bit, and tried pulling out src/backend/parser - but
 that depends on other parts of the code, making it a bit non-trivial. Plus,
 any changes in the base code upstream would not be available to this
 pulled-out application over time. Looking further, I noticed on my Fedora
 distribution that the pg devel package has a parser/parser.h file which
 exposes the function 'raw_parser' which is the function that seemed most
 interesting to me while exploring the code too.

Note that that's a *backend* header file.

 I have some questions:
 1] is 'raw_parser' the right function to parse an SQL statement (for mainly
 structural/syntactic analysis)?

I believe it's the function used by postgres internally, not sure
though.

 2] If yes, I found that none of the shared library files expose this
 function - I could not link a simple C program with this function
 successfully. (I tried all the .so files related to postgres! :-()

It's inside the server; it's not in a separate library. To use it you
need to be inside the server.

 3] If not, what would be a better mechanism to re-use all the work already
 done in postgres for SQL analysis?

No idea.

Have a nice day,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to
litigate.







[GENERAL] self-referential UPDATE problem on 7.4

2007-01-04 Thread Karsten Hilbert
Hello !

I am trying to run this query in psql:

update clin.episode
set fk_patient = clin.health_issue.fk_patient
from clin.health_issue
where
clin.episode.fk_patient is NULL and
clin.episode.fk_health_issue = clin.health_issue.pk;

It returns UPDATE 2 which is what I expect from the data.
However, the rows in question are not actually updated.

What I am trying to do:

- clin.episode has a nullable foreign key fk_health_issue to 
clin.health_issue.pk
- clin.health_issue has a not-nullable fk_patient
- clin.episode also has an fk_patient which is nullable

I want to transfer the value of clin.health_issue.fk_patient
to clin.episode.fk_patient to those clin.episodes for which
fk_patient is NULL.

I'm sure I am doing something wrong in a fairly basic way.

I tried with a subselect, too, but get the same result:

update clin.episode
set fk_patient = (
select fk_patient from clin.health_issue chi where 
chi.pk=clin.episode.fk_health_issue
)
where fk_patient is NULL;

This is on 7.4.14 on Debian/Etch.

Thanks,
Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] How to use the SQL parser subsystem

2007-01-04 Thread Ravindra Jaju

On 1/4/07, Tatsuo Ishii [EMAIL PROTECTED] wrote:


 3] If not, what would be a better mechanism to re-use all the work
already
 done in postgres for SQL analysis?

One idea would be writing a PostgreSQL function which calls
raw_parser() and returns the parse tree as a textual representation.

Another idea would be pulling out raw_parser() along with related
functions from PostgreSQL. pgpool-II
(http://pgfoundry.org/projects/pgpool/) has already done this.




The first idea would mean having a postgres instance always accessible
for SQL analysis, which would be overkill.

I shall check out pgpool-II.

Is there any specific reason why the SQL parsing portion is not exposed?
I expect a lot of people to do structure-level analysis of SQL
statements, hence I do not really get why it should not be exposed.

Thanks a lot, and thanks for bearing with my questions! :-)

regards,
jaju


Re: [GENERAL] How to use the SQL parser subsystem

2007-01-04 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 On Thu, Jan 04, 2007 at 08:01:17PM +0530, Ravindra Jaju wrote:
 2] If yes, I found that none of the shared library files expose this
 function - I could not link a simple C program with this function
 successfully. (I tried all the .so files related to postgres! :-()

 It's inside the server, it's not in a seperate library. To use it you
 need to be inside the server.

By and large there is no part of the backend that is designed to be run
standalone --- almost everything relies on palloc and elog, for instance.
I concur with the suggestion to consider doing this as a backend
function rather than in a standalone program.

Note that what raw_parser gives you is the raw grammar output, which is
probably not really what you want.  For almost any sort of interesting
analysis, I'd think you'd want to run the syntax tree through
parse_analyze() or one of its siblings, so that semantic interpretation
gets done.  There is definitely no hope of pulling out parse_analyze(),
because it has to consult the catalogs ...

regards, tom lane



[GENERAL] Moving from 7.2.1 to 8.1.5 - looking for jdbc

2007-01-04 Thread Arindam

Hello,

I work with a product which uses postgresql internally. I build
postgresql from source for Solaris SPARC and Linux. So far I have been
using 7.2.1 but lately I realized that it has long been obsolete and I
chose to move to 8.1.5.

In my earlier build system, I was patching a couple of source files in
the source tree before carrying out the build. Currently I see, there
is no src/interface/jdbc directory. One of the files I was patching
was the Statement.java source.

I am in a fix. Do I need to separately download the jdbc drivers and
build them into a jar file? More importantly, I am a little concerned
about what all major changes have happened that will likely affect my
build.

What all has moved out of the tar that is available for download ( I
am talking about postgresql-ver.tar.gz ).


- Arindam

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Accessing a custom FileSystem (as in Mysql Custom Engine)

2007-01-04 Thread Tom Lane
Scara Maccai [EMAIL PROTECTED] writes:
 I only need read access to my custom table format db. Would read access 
 be possible with the C-Language Functions - Returning Sets API?

Probably.  Take a look at contrib/dblink for ideas.

regards, tom lane



Re: [GENERAL] Table inheritance implementation.

2007-01-04 Thread Tom Lane
Grzegorz Nowakowski [EMAIL PROTECTED] writes:
 But I have another question: why can't
 inheritance be implemented as an implicit JOIN?

Interesting thought, but joins are expensive --- this would be quite a
lot slower than the current way, I fear, especially when you consider
more than one level of inheritance.  Also, switching over to this would
destroy the current usefulness of inheritance for partitioning.

regards, tom lane



Re: [GENERAL] self-referential UPDATE problem on 7.4

2007-01-04 Thread Tom Lane
Karsten Hilbert [EMAIL PROTECTED] writes:
 I am trying to run this query in psql:

 update clin.episode
 set fk_patient = clin.health_issue.fk_patient
 from clin.health_issue
 where
   clin.episode.fk_patient is NULL and
   clin.episode.fk_health_issue = clin.health_issue.pk;

 It returns UPDATE 2 which is what I expect from the data.
 However, the rows in question are not actually updated.

That seems very strange.  Could you perhaps have a BEFORE UPDATE trigger
that's changing the values back to null again?
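A quick way to check for such a trigger is a catalog query (a sketch; should work on 7.4-era catalogs):

```sql
-- List any triggers defined on clin.episode
SELECT tgname, tgtype
  FROM pg_trigger
 WHERE tgrelid = 'clin.episode'::regclass;
```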

regards, tom lane



Re: [GENERAL] Moving from 7.2.1 to 8.1.5 - looking for jdbc

2007-01-04 Thread Tom Lane
Arindam [EMAIL PROTECTED] writes:
 I am in a fix. Do I need to separately download the jdbc drivers and
 build them into a jar file?

The jdbc driver is distributed separately now, yes.  See
http://jdbc.postgresql.org/

regards, tom lane



Re: [GENERAL] Table inheritance implementation.

2007-01-04 Thread Vlad

Speaking of partitioning, I see there some improvements planed for
this feature in 8.3 - any info on what exactly users can expect? Any
possibility to improve it so we don't have to add insert trigger that
selects the right table for operation? Also, propagation of Alter
table on inherited tables is a sweat feature... :)


Interesting thought, but joins are expensive --- this would be quite a
lot slower than the current way, I fear, especially when you consider
more than one level of inheritance.  Also, switching over to this would
destroy the current usefulness of inheritance for partitioning.

regards, tom lane


-- vlad



Re: [GENERAL] Moving from 7.2.1 to 8.1.5 - looking for jdbc

2007-01-04 Thread Arindam

On 1/4/07, Tom Lane [EMAIL PROTECTED] wrote:

Arindam [EMAIL PROTECTED] writes:
 I am in a fix. Do I need to separately download the jdbc drivers and
 build them into a jar file?

The jdbc driver is distributed separately now, yes.  See
http://jdbc.postgresql.org/

   regards, tom lane



Thank you. I was actually looking for a more extensive list of changes
which have happened between these two versions. Is there a place that
can help?

Thanks,
Arindam



Re: [GENERAL] Table inheritance implementation.

2007-01-04 Thread Vlad

oops. alter table seems to be propagating OK in 8.2...

On 1/4/07, Vlad [EMAIL PROTECTED] wrote:

Speaking of partitioning, I see there are some improvements planned for
this feature in 8.3 - any info on what exactly users can expect? Any
possibility of improving it so we don't have to add an insert trigger
that selects the right table for the operation? Also, propagation of
ALTER TABLE to inherited tables is a sweet feature... :)



-- vlad



Re: [GENERAL] Moving from 7.2.1 to 8.1.5 - looking for jdbc

2007-01-04 Thread Tom Lane
Arindam [EMAIL PROTECTED] writes:
 Thank you. I was actually looking for a more extensive list of changes
 which have happened between these two versions. Is there a place that
 can help?

Read the release notes ...
http://www.postgresql.org/docs/8.2/static/release.html

regards, tom lane



Re: [GENERAL] Moving from 7.2.1 to 8.1.5 - looking for jdbc

2007-01-04 Thread Devrim GUNDUZ
Hi,

On Thu, 2007-01-04 at 22:10 +0530, Arindam wrote:
 I was actually looking for a more extensive list of changes
 which have happened between these two versions. Is there a place that
 can help? 

See the release notes:

http://www.postgresql.org/docs/8.1/static/release.html

You will need to do quite a bit of digging to track all the changes
between 7.2 and 8.1.5.

Regards,
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/







[GENERAL] pg_dump question

2007-01-04 Thread Madison Kelly

Hi all,

  I've created a database (pgsql 8.1 on Debian Etch) that uses 
triggers/functions to keep all changes for various tables in a history 
schema. This is the first time I've done this (captured and stored 
changes in a different schema) so I was hoping for some backup/restore 
advice.


  As far as I can tell, you can only dump one schema at a time. Is this 
true? If so, can I dump 'public' first and then append the dump of 
'history' to the same file and be okay? Also, when I restore from this 
file, can I prevent the triggers from running just during the reload of 
the data?


  I hope these aren't too junior questions. :)

Madi

PS - In case it helps, here's an example of a table/function I am using:


CREATE TABLE files (
file_id int default(nextval('id_seq')),
file_for_table  textnot null,
file_ref_id int not null,
file_desc   text,
file_name   textnot null,
file_file_name  textnot null,
file_type   textnot null,
file_os textnot null,
file_vertext,
file_active boolean not nulldefault 't',
added_date  timestamp without time zone not null
default now(),
added_user  int not null,
modified_date   timestamp without time zone not null
default now(),
modified_user   int not null
);
ALTER TABLE files OWNER TO digimer;

CREATE TABLE history.files (
file_id int not null,
file_for_table  textnot null,
file_ref_id int not null,
file_desc   text,
file_name   textnot null,
file_file_name  textnot null,
file_type   textnot null,
file_os textnot null,
file_vertext,
file_active boolean not null,
added_date  timestamp without time zone not null,
added_user  int not null,
modified_date   timestamp without time zone not null,
modified_user   int not null
);
ALTER TABLE history.files OWNER TO digimer;

CREATE FUNCTION history_files() RETURNS trigger
AS $$
DECLARE
hist_files RECORD;
BEGIN
SELECT INTO hist_files * FROM public.files WHERE 
file_id=new.file_id;
INSERT INTO history.files
			(file_id, file_for_table, file_ref_id, file_desc, file_name, 
file_file_name, file_type, file_os, file_ver, file_active, added_date, 
added_user, modified_date, modified_user)

VALUES
			(hist_files.file_id, hist_files.file_for_table, 
hist_files.file_ref_id, hist_files.file_desc, hist_files.file_name, 
hist_files.file_file_name, hist_files.file_type, hist_files.file_os, 
hist_files.file_ver, hist_files.file_active, hist_files.added_date, 
hist_files.added_user, hist_files.modified_date, hist_files.modified_user);

RETURN NULL;
END;$$
LANGUAGE plpgsql;
ALTER FUNCTION history_files() OWNER TO digimer;

CREATE TRIGGER trig_files AFTER INSERT OR UPDATE ON files FOR EACH ROW 
EXECUTE PROCEDURE history_files();
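Incidentally, since this is an AFTER INSERT OR UPDATE row-level trigger, the NEW record already carries the row that was just written, so the extra SELECT from public.files can likely be dropped. An untested sketch of that variant (added_date is included, since history.files declares it NOT NULL):

```sql
CREATE OR REPLACE FUNCTION history_files() RETURNS trigger
    AS $$
BEGIN
    -- NEW holds the freshly inserted/updated row; no lookup needed.
    INSERT INTO history.files
        (file_id, file_for_table, file_ref_id, file_desc, file_name,
         file_file_name, file_type, file_os, file_ver, file_active,
         added_date, added_user, modified_date, modified_user)
    VALUES
        (NEW.file_id, NEW.file_for_table, NEW.file_ref_id, NEW.file_desc,
         NEW.file_name, NEW.file_file_name, NEW.file_type, NEW.file_os,
         NEW.file_ver, NEW.file_active, NEW.added_date, NEW.added_user,
         NEW.modified_date, NEW.modified_user);
    RETURN NULL;
END;$$
    LANGUAGE plpgsql;
```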


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Discovering time of last database write

2007-01-04 Thread Andy Dale

Hi,

I need to be able to determine the last time (and date) that a database was
written to.  I know it could be possible just to check the last modified
dates in the PGDATA directory, but I need to compare the last write times of
3 databases (connecting via JDBC).  Hopefully the last write date is
contained somewhere in a system table (information schema), but I have no
idea which table(s) I would need to query.

Thanks in advance,

Andy


Re: [GENERAL] Discovering time of last database write

2007-01-04 Thread Scott Marlowe
On Thu, 2007-01-04 at 11:11, Andy Dale wrote:
 Hi,
 
 I need to be able to determine the last time (and date) that a
 database was written to.  I know it could be possible just to check
 the last modified dates in the PGDATA directory, but i need to compare
 the last write time of 3 databases (connecting via JDBC).  Hopefully
 the last write date is contained somewhere in a system table
 (information schema) but i have no idea of the table(s) i would need
 to query. 

Bad news, it's not generally stored.

Good news, it's not that hard to implement.

Perhaps if you give us the bigger picture we can make more logical
suggestions on how to accomplish it.
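One way to implement it, sketched here with invented names: keep a small bookkeeping table and touch it from a statement-level trigger on each table whose writes matter. The last write time is then an ordinary SELECT away, which works fine over JDBC.

```sql
-- Illustrative sketch; table and trigger names are invented.
CREATE TABLE last_write (
    table_name text PRIMARY KEY,
    written_at timestamp with time zone NOT NULL
);

CREATE OR REPLACE FUNCTION note_write() RETURNS trigger AS $$
BEGIN
    -- Upsert the timestamp for the table that fired the trigger.
    UPDATE last_write SET written_at = now() WHERE table_name = TG_RELNAME;
    IF NOT FOUND THEN
        INSERT INTO last_write (table_name, written_at)
        VALUES (TG_RELNAME, now());
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- Repeat for each table of interest:
CREATE TRIGGER some_table_note_write
    AFTER INSERT OR UPDATE OR DELETE ON some_table
    FOR EACH STATEMENT EXECUTE PROCEDURE note_write();

-- Last write time for the whole database:
-- SELECT max(written_at) FROM last_write;
```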

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pg_dump question

2007-01-04 Thread Richard Huxton

Madison Kelly wrote:

Hi all,

  I've created a database (pgsql 8.1 on Debian Etch) that uses 
triggers/functions to keep all changes for various tables in a history 
schema. This is the first time I've done this (captured and stored 
changes in a different schema) so I was hoping for some backup/restore 
advice.


  As far as I can tell, you can only dump one schema at a time. Is this 
true? 


No, pg_dump dumps a whole database by default. You can dump just a 
single schema or table though.


 If so, can I dump 'public' first and then append the dump of

'history' to the same file and be okay?


No, someone might have updated public in-between.

 Also, when I restore from this
file, can I prevent the triggers from running just during the reload of 
the data?


Yes, there's a command-line setting when doing a data-only restore. When 
doing a full restore (schema+data) this is done for you.


Try the page below or man pg_dump/man pg_restore for full details:
http://www.postgresql.org/docs/8.1/static/reference-client.html
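Concretely, assuming a database named mydb (a placeholder), the relevant invocations look something like:

```shell
# Full dump (all schemas, public and history) in custom format:
pg_dump -Fc mydb > mydb.dump

# Full restore: the schema is recreated and data loaded before the
# triggers are created, so the history triggers do not refire:
pg_restore -d mydb_new mydb.dump

# Data-only restore into an existing schema, with triggers
# disabled during the load (needs superuser):
pg_restore -d mydb --data-only --disable-triggers mydb.dump
```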

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] self-referential UPDATE problem on 7.4

2007-01-04 Thread Karsten Hilbert
On Thu, Jan 04, 2007 at 11:36:35AM -0500, Tom Lane wrote:

  update clin.episode
  set fk_patient = clin.health_issue.fk_patient
  from clin.health_issue
  where
  clin.episode.fk_patient is NULL and
  clin.episode.fk_health_issue = clin.health_issue.pk;
 
  It returns UPDATE 2 which is what I expect from the data.
  However, the rows in question are not actually updated.
 
 That seems very strange.  Could you perhaps have a BEFORE UPDATE trigger
 that's changing the values back to null again?
I do but I drop that one in the script before doing the
above update. I may have an old one hanging around, though,
I'll double-check.

Thanks,
Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] pg_dump question

2007-01-04 Thread Devrim GUNDUZ
Hi,

On Thu, 2007-01-04 at 11:20 -0500, Madison Kelly wrote:

As far as I can tell, you can only dump one schema at a time. Is
 this true?

You can dump multiple schemas and multiple tables at a time with 8.2.

Regards,
 
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/







Re: [GENERAL] self-referential UPDATE problem on 7.4

2007-01-04 Thread Karsten Hilbert
On Thu, Jan 04, 2007 at 06:37:23PM +0100, Karsten Hilbert wrote:

   It returns UPDATE 2 which is what I expect from the data.
   However, the rows in question are not actually updated.
  
  That seems very strange.  Could you perhaps have a BEFORE UPDATE trigger
  that's changing the values back to null again?
 I do but I drop that one in the script before doing the
 above update. I may have an old one hanging around, though,
 I'll double-check.
I was missing a schema qualification in a drop statement,
hence the culprit trigger function wasn't dropped properly.

Sometimes a hint helps.

Thanks Tom,
Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] pg_dump question

2007-01-04 Thread Madison Kelly

Richard Huxton wrote:
  As far as I can tell, you can only dump one schema at a time. Is 
this true? 


No, pg_dump dumps a whole database by default. You can dump just a 
single schema or table though.


Hmm, I wonder why I thought this... Was this true in older versions or 
did I just imagine this? :)



  If so, can I dump 'public' first and then append the dump of

'history' to the same file and be okay?


No, someone might have updated public in-between.


Ah, of course.


  Also, when I restore from this
file, can I prevent the triggers from running just during the reload 
of the data?


Yes, there's a command-line setting when doing a data-only restore. When 
doing a full restore (schema+data) this is done for you.


Try the page below or man pg_dump/man pg_restore for full details:
http://www.postgresql.org/docs/8.1/static/reference-client.html


I had read the man pages, but I re-read them and I apparently went on a 
mental vacation and missed a fair chunk of it. *sigh*


Thanks kindly for your reply!

Madi

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pg_dump question

2007-01-04 Thread Richard Huxton

Madison Kelly wrote:

Richard Huxton wrote:
  As far as I can tell, you can only dump one schema at a time. Is 
this true? 


No, pg_dump dumps a whole database by default. You can dump just a 
single schema or table though.


Hmm, I wonder why I thought this... Was this true in older versions or 
did I just imagine this? :)


Be comforted, imagination is a trait shared by all highly intelligent 
people :-)



Try the page below or man pg_dump/man pg_restore for full details:
http://www.postgresql.org/docs/8.1/static/reference-client.html


I had read the man pages, but I re-read them and I apparently went on a 
mental vacation and missed a fair chunk of it. *sigh*


You'll almost certainly want the custom format for your dumps. You 
might find the --list and --use-list options useful for restoring sets 
of tables from a full dump.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] pg_dump problems

2007-01-04 Thread JTyrrell


Richard Huxton wrote:
 
 JTyrrell wrote:
 
 pg_dump -a -d testrig  testrig.data
 
 on a machine running postgresql 7.4.13 with database testirg i have no
 problem.
 
 Then, I want to do the same but on a different machine running postgresql
 7.4.6 with database root. So...
 
 pg_dump -a -d root  root.data
 
 For some reason this doesnt work! The command is running without error,
 but
 the file root.data is always empty. The database does exist and all the
 data
 is there, and I can run psql and do everything I normally do. I've tried
 pg_dump with other databases and still get an empty file.
 
 Should work just fine.
 1. Does pg_dump -s work?
 2. Does the user you run this as have permission to dump data from root?
 
 -- 
Richard Huxton
Archonet Ltd
 
 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster
 
 


1. pg_dump -s doesn't work. I've tried using a load of other options, and
none at all, but no luck.

2. I'm running as root, so that won't be a problem, and the file I'm dumping
to has full permissions for everyone.

What makes this even more confusing is that I tried pg_dump with a database
name that doesn't exist, and tried dumping from database root with a user
that doesn't have permission to do that. Both gave the same results as before.
It's basically creating the file I need, looking for the database, and not
finding it. When you dump an empty database the file at least has some info
on session authorization and the schema. I'm getting nothing!



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Need help in PGSQL

2007-01-04 Thread sbaskar

Hi,
I am doing a project on a database monitoring tool. I am planning to add 
pgsql as well, so I need the table details where I can get statistics like 
Request Rate, Bytes Received Rate, Bytes Sent Rate, Open Connections, 
Aborted Connections, Aborted Clients, Threads Used, Threads in Cache, 
Thread Cache Size, Database Details, Immediate Locks, Locks Wait, Key 
Hitrate, Key Buffer Used, Key Buffer Size, Total Memory, SQL Cache Memory, 
Lock Memory, Buffer Hit Ratio, Active Connections, Logins/Min, Cache 
Used/Min, Latch Details, Agents Statistics, Transaction Statistics, etc.


Any help on this would be greatly appreciated. Looking for positive and 
quick response from you.


Thanks in Advance,
Baskar.S

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Dependency conflicts on CentOS 4.4

2007-01-04 Thread Gunnar Wagenknecht
Hi!

I installed PostgreSQL on CentOS 4.4 using the packages provided at:
ftp://ftp.postgresql.org/pub/binary/v8.2.0/linux/rpms/redhat/rhel-es-4/

I installed using:
#yum localinstall compat-postgresql-libs-4-2PGDG.rhel4.i686.rpm
#yum localinstall postgresql-8.2.0-2PGDG.i686.rpm
postgresql-server-8.2.0-2PGDG.i686.rpm postgresql-libs-8.2.0-2PGDG.i686.rpm


However, I'm having dependency resolutions problems now when installing
'perl-DBD-Pg'.

# yum install perl-DBD-Pg
Setting up Install Process
Setting up repositories
Reading repository metadata in from local files
Parsing package install arguments
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Package perl-DBD-Pg.i386 0:1.31-6 set to be updated
--> Running transaction check
--> Processing Dependency: libpq.so.3 for package: perl-DBD-Pg
--> Restarting Dependency Resolution with new changes.
--> Populating transaction set with selected packages. Please wait.
---> Package postgresql-libs.i386 0:7.4.13-2.RHEL4.1 set to be updated
--> Running transaction check
--> Processing Dependency: libpq.so.5 for package: postgresql-server
--> Processing Conflict: compat-postgresql-libs conflicts with
postgresql-libs < 8.1.5
--> Processing Dependency: libpq.so.5 for package: postgresql
--> Finished Dependency Resolution
Error: Missing Dependency: libpq.so.5 is needed by package postgresql-server
Error: compat-postgresql-libs conflicts with postgresql-libs < 8.1.5
Error: Missing Dependency: libpq.so.5 is needed by package postgresql


Do I need to look for a different 'perl-DBD-Pg' that doesn't depend on
'libpq.so.3' or should 'libpq.so.3' be provided by
'postgresql-libs-8.2.0-2PGDG.i686.rpm'?


Thanks,
Gunnar

-- 
Gunnar Wagenknecht
[EMAIL PROTECTED]
http://wagenknecht.org/


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Update to 8.2 in openSUSE 10.2

2007-01-04 Thread Romulo Hunter

Hello everyone.
As we know, openSUSE comes with many packages preinstalled or available
to install. The point is that it ships with PostgreSQL version 8.1...
and when I download 8.2 from the PostgreSQL site it installs into
/usr/local/pgsql; the initial installation of the engine that ships with
openSUSE is in /var/lib/pgsql. Now: how do I make the new version replace
the old one, so that only a single version (8.2) remains on my machine?

I'm running openSUSE 10.2.

Thanks to whoever takes the time to reply.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Using duplicate foreign keys

2007-01-04 Thread Andrus
I have multi-company database.
Each company has its own chart of accounts table which are stored in each 
company schema.
Some account numbers are used in a common table which is stored in public 
schema.

So I need to create duplicate foreign keys like

create temp table company1.chartoffaccounts ( accountnumber int primary 
key);
create temp table company2.chartoffaccounts ( accountnumber int primary 
key);

create temp table public.commontable ( accountnumber int,
   FOREIGN KEY (accountnumber)   REFERENCES company1.chartoffaccounts 
(accountnumber),
   FOREIGN KEY (accountnumber)   REFERENCES company2.chartoffaccounts 
(accountnumber) );


Is it OK to use duplicate foreign keys ? What issues will they cause ?

Andrus. 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Dependency conflicts on CentOS 4.4

2007-01-04 Thread Devrim GUNDUZ
Hi,

On Thu, 2007-01-04 at 16:57 +0100, Gunnar Wagenknecht wrote:

 Do I need to look for a different 'perl-DBD-Pg' that doesn't depend on
 'libpq.so.3' or should 'libpq.so.3' be provided by
 'postgresql-libs-8.2.0-2PGDG.i686.rpm'? 

Install this package:

http://developer.postgresql.org/~devrim/rpms/compat/compat-postgresql-libs-3-3PGDG.i686.rpm

Regards,
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/







Re: [GENERAL] pg_dump problems

2007-01-04 Thread Tom Lane
JTyrrell [EMAIL PROTECTED] writes:
 1. pg_dump -s doesn't work. I've tried using a load of other options and
 none at all but no luck. 

 2. I'm running as root so wont be a problem there, and the file im dumping
 to has full permissions for everyone. 

 What makes this even more confusing is I tried pg_dump with a database name
 that doesnt exist and tried dumping from database root with a user that
 doesn't have permission to do that. Both gave the same results as before.
 Its basically creating the file i need, looking for the database and not
 finding it. When you dump an empty database the file at least has some info
 on session authorization and the schema. Im getting nothing!

It's really hard to believe that pg_dump would write nothing to either
the output file or stderr.  The only thought that comes to mind is that
if this machine has SELinux enabled, the SELinux policy might be
forbidding it from writing on the output file.  What's the platform,
exactly?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Need help in PGSQL

2007-01-04 Thread Dimitri Fontaine
On Thursday, January 4, 2007, at 06:51, sbaskar wrote:
 I am doing a project in Database Monitoring tool. I am planning to add
 pgsql also. So i need the table details, where i can get details like
 Request Rate, Bytes Received Rate, Bytes Send Rate, Open Connections,
[...]

Some of the SQL requests you're looking for can be found in munin postgresql 
plugins.

  http://munin.projects.linpro.no/wiki/PluginCat#Postgresql
  http://www.dalibo.org/Plug-ins-PostgreSQL-pour-Munin.html

Regards,
-- 
Dimitri Fontaine
http://www.dalibo.com/




Re: [GENERAL] Dependency conflicts on CentOS 4.4

2007-01-04 Thread Tony Caduto

Gunnar Wagenknecht wrote:



Do I need to look for a different 'perl-DBD-Pg' that doesn't depend on
'libpq.so.3' or should 'libpq.so.3' be provided by
'postgresql-libs-8.2.0-2PGDG.i686.rpm'?


  
Just make a symbolic link from the shipped libpq.so.5 to libpq.so.3 and 
it should work.


Yum kind of stinks because you can't do a --nodeps or force.
You might want to download the perl dbd rpm and install manually without 
Yum.


Later,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Any form of connection-level session variable ?

2007-01-04 Thread John McCawley
This is revisiting a problem I posed to this group a month or so ago 
regarding separating different users' data through schema views.  The 
solution we're using is based on a suggestion we received here:


http://archives.postgresql.org/pgsql-general/2006-12/msg00037.php

Everything is working great with the exception of performance.  One of 
our tables has close to a million records, and the overhead of calling 
the get_client_id() function per row is eating us alive, I assume 
because it is having to per-row call a select  on a table to retrieve 
the proper ID within the function.


Is there any way I could establish this ID initially in some sort of 
connection-level variable, and from this point on reference that 
variable?  What I'm thinking is something like the following:


select initialize_client_id(); //This would actually hit the DB to 
retrieve the proper ID for the logged in user


//Now, in the view get_client_id() retrieves the earlier established 
variable instead of hitting the DB
select foo,bar FROM tbl_foo WHERE client_id = get_client_id(); 


Am I incorrect in assuming that the statement:

select foo from tbl_bar WHERE client_id = get_client_id();

will call get_client_id() for every row?

John

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Dependency conflicts on CentOS 4.4

2007-01-04 Thread Brian Mathis

On 1/4/07, Tony Caduto [EMAIL PROTECTED] wrote:


Gunnar Wagenknecht wrote:



Do I need to look for a different 'perl-DBD-Pg' that doesn't depend on



 'libpq.so.3' or should 'libpq.so.3' be provided by
 'postgresql-libs-8.2.0-2PGDG.i686.rpm'?

Just make a symbolic link from the shipped libpq.so.5 to libpq.so.3 and
it should work.

Yum kind of stinks because you can't do a --nodeps or force.
You might want to download the perl dbd rpm and install manually without
Yum.

Later,

--
Tony Caduto



While this may work, it pretty much defeats the purpose of using rpm and yum
to manage your packages.  Devrim's suggestion is really the best way to go
(as are most of his suggestions).  I have used the compat rpm before, and it
works like a charm.

Dependency checks are there for a reason; defeating them with --nodeps or
--force is usually not what you want.


Re: [GENERAL] Any form of connection-level session variable ?

2007-01-04 Thread John McCawley

I think I got it:

CREATE FUNCTION new_get_emp_id() RETURNS INTEGER AS $$ select emp_id 
from secureview.tbl_employee where username = (SELECT current_user) $$ 
LANGUAGE SQL IMMUTABLE;


I made the function immutable so it only calls it once, therefore no 
longer requiring a call per-row.




John McCawley wrote:

This is revisiting a problem I posed to this group a month or so ago 
regarding separating different users' data through schema views.  The 
solution we're using is based on a suggestion we received here:


http://archives.postgresql.org/pgsql-general/2006-12/msg00037.php

Everything is working great with the exception of performance.  One of 
our tables has close to a million records, and the overhead of calling 
the get_client_id() function per row is eating us alive, I assume 
because it is having to per-row call a select  on a table to retrieve 
the proper ID within the function.


Is there any way I could establish this ID initially in some sort of 
connection-level variable, and from this point on reference that 
variable?  What I'm thinking is something like the following:


select initialize_client_id(); //This would actually hit the DB to 
retrieve the proper ID for the logged in user


//Now, in the view get_client_id() retrieves the earlier established 
variable instead of hitting the DB

select foo,bar FROM tbl_foo WHERE client_id = get_client_id();
Am I incorrect in assuming that the statement:

select foo from tbl_bar WHERE client_id = get_client_id();

will call get_client_id() for every row?

John

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] database design and refactoring

2007-01-04 Thread Michael Glaesemann


On Jan 3, 2007, at 5:24 , Luca Ferrari wrote:


Running the database, the users
decided to place numbers as strings, so values like 00110002 and so  
on.


Note that '00110002' is not a number (i.e., it's not equal to  
110002): it's a string of digits.



is there a tool or a
way to easily do such refactoring or should I write a program on my  
own to do

this?


Not that I know of, though ALTER TABLE <table> ALTER COLUMN <column>  
TYPE ... is pretty easy to use.


And moreover a database design question: is a better idea to choose  
always

(when possible) numeric keys?


Depends on your requirements. This is an oft-discussed topic about  
which you can find many more opinions by googling on natural and  
surrogate keys.


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Using duplicate foreign keys

2007-01-04 Thread Erik Jones

Andrus wrote:

I have multi-company database.
Each company has its own chart of accounts table which are stored in each 
company schema.
Some account numbers are used in a common table which is stored in public 
schema.


So I need to create duplicate foreign keys like

create temp table company1.chartoffaccounts ( accountnumber int primary 
key);
create temp table company2.chartoffaccounts ( accountnumber int primary 
key);


create temp table public.commontable ( accountnumber int,
   FOREIGN KEY (accountnumber)   REFERENCES c1hartoffaccounts 
(accountnumber)
   FOREIGN KEY (accountnumber)   REFERENCES c2hartoffaccounts 
(accountnumber) );



Is it OK to use duplicate foreign keys ? What issues will they cause ?

Andrus. 
  
The problem I see with that is that any value of accountnumber in 
public.commontable would need to exist in both company1.chartoffaccounts 
and company2.chartoffaccounts.  One key referencing two completely 
separate sets of data?  That sounds broken.  Can you give a more detailed 
example of how you want to use this?  Offhand, it sounds like what you 
may want is to have accountnumber as the primary key of public.commontable, 
with the accountnumber columns in the schema-specific tables referencing 
it instead.
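In other words, that reversed design would look roughly like this (untested sketch, reusing the names from the thread):

```sql
-- The shared table owns the account numbers...
CREATE TABLE public.commontable (
    accountnumber int PRIMARY KEY
);

-- ...and each company's chart of accounts references it.
CREATE TABLE company1.chartoffaccounts (
    accountnumber int PRIMARY KEY
        REFERENCES public.commontable (accountnumber)
);

CREATE TABLE company2.chartoffaccounts (
    accountnumber int PRIMARY KEY
        REFERENCES public.commontable (accountnumber)
);
```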


--
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] database design and refactoring

2007-01-04 Thread Erik Jones

Michael Glaesemann wrote:


On Jan 3, 2007, at 5:24 , Luca Ferrari wrote:
And moreover a database design question: is a better idea to choose 
always

(when possible) numeric keys?


Depends on your requirements. This is an oft-discussed topic about 
which you can find many more opinions by googling on natural and 
surrogate keys. 
Come on, it's been a whole, what?, three or four weeks since the last 
natural v. surrogate debate?  Seriously though, Luca, for information on 
that vein, peruse the archives or do as Michael suggested and Google.



--
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Any form of connection-level session variable ?

2007-01-04 Thread Tom Lane
John McCawley [EMAIL PROTECTED] writes:
 I think I got it:
 CREATE FUNCTION new_get_emp_id() RETURNS INTEGER AS $$ select emp_id 
 from secureview.tbl_employee where username = (SELECT current_user) $$ 
 LANGUAGE SQL IMMUTABLE;
 I made the function immutable so it only calls it once, therefore no 
 longer requiring a call per-row.

Since it's obviously *not* immutable, this will come back to bite you
sooner or later (probably sooner).  Labeling it STABLE would be
reasonable, although I'm not certain how much that helps you.  Do you
have indexes on the columns it's being compared to?

regards, tom lane
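For the archives, the STABLE variant plus a supporting index would look roughly like this (untested sketch; tbl_foo and client_id are the names used earlier in the thread):

```sql
-- STABLE: result won't change within a single statement, so the
-- planner may evaluate it once per query rather than once per row.
CREATE OR REPLACE FUNCTION new_get_emp_id() RETURNS integer AS $$
    SELECT emp_id FROM secureview.tbl_employee
    WHERE username = current_user
$$ LANGUAGE sql STABLE;

-- Index the column the function's result is compared against:
CREATE INDEX tbl_foo_client_id_idx ON tbl_foo (client_id);
```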

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Any form of connection-level session variable ?

2007-01-04 Thread Erik Jones

Tom Lane wrote:

John McCawley [EMAIL PROTECTED] writes:
  

I think I got it:
CREATE FUNCTION new_get_emp_id() RETURNS INTEGER AS $$ select emp_id 
from secureview.tbl_employee where username = (SELECT current_user) $$ 
LANGUAGE SQL IMMUTABLE;
I made the function immutable so it only calls it once, therefore no 
longer requiring a call per-row.



Since it's obviously *not* immutable, this will come back to bite you
sooner or later (probably sooner).  Labeling it STABLE would be
reasonable, although I'm not certain how much that helps you.  Do you
have indexes on the columns it's being compared to?
  

Besides, a temp table is pretty much a session variable.
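A sketch of that idea (untested; names reuse the thread's): create a per-connection temp table once at login, then reference it instead of calling a function per row.

```sql
-- Run once, right after connecting; a temp table is private
-- to this session and dropped when it ends.
CREATE TEMP TABLE session_vars AS
    SELECT emp_id AS client_id
    FROM secureview.tbl_employee
    WHERE username = current_user;

-- Later statements read the cached value:
SELECT foo, bar
FROM tbl_foo
WHERE client_id = (SELECT client_id FROM session_vars);
```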

--
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Any form of connection-level session variable ?

2007-01-04 Thread John McCawley
I tried stable, and that didn't help at all.  How long does PostgreSQL 
maintain the state of the function when using immutable?  Until a 
restart?  Until the end of a session?  Until the function is dropped and 
re-added? 

While this value isn't 100% absolutely positively guaranteed to never 
change, it is pretty dang close.  I have a strict correlation between 
the username that someone uses to login and the ID returned.  Changing 
someone's username would break a whole lot more than this one app on the 
project...




Erik Jones wrote:


Tom Lane wrote:


John McCawley [EMAIL PROTECTED] writes:
 


I think I got it:
CREATE FUNCTION new_get_emp_id() RETURNS INTEGER AS $$ select emp_id 
from secureview.tbl_employee where username = (SELECT current_user) 
$$ LANGUAGE SQL IMMUTABLE;
I made the function immutable so it only calls it once, therefore no 
longer requiring a call per-row.




Since it's obviously *not* immutable, this will come back to bite you
sooner or later (probably sooner).  Labeling it STABLE would be
reasonable, although I'm not certain how much that helps you.  Do you
have indexes on the columns it's being compared to?
  


Besides, a temp table is pretty much a session variable.



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Bug in 8.2 (8.1) dump restore

2007-01-04 Thread Scott Ribe
create database test;
\c test
create table base (foo int not null);
create table derived () inherits (base);
alter table derived alter foo drop not null;
insert into derived values(null);

Dump it, and the dump will not include any command to drop the not null
constraint on derived.foo, so restore will fail.

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Any form of connection-level session variable ?

2007-01-04 Thread Tom Lane
John McCawley [EMAIL PROTECTED] writes:
 While this value isn't 100% absolutely positively guaranteed to never 
 change, it is pretty dang close.

Counterexample: SET ROLE or SET SESSION AUTHORIZATION.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Bug in 8.2 (8.1) dump restore

2007-01-04 Thread Tom Lane
Scott Ribe [EMAIL PROTECTED] writes:
 create database test;
 \c test
 create table base (foo int not null);
 create table derived () inherits (base);
 alter table derived alter foo drop not null;
 insert into derived values(null);

 Dump it, and the dump will not include any command to drop the not null
 constraint on derived.foo, so restore will fail.

Actually, the bug there is that ALTER TABLE lets you set up a
self-inconsistent inheritance hierarchy.  The above should be illegal
because it would mean that "select foo from base" could return nulls,
contradicting the clear definition of the table.

We've been talking about fixing that, but it'll probably take catalog
changes (to be able to track which constraints were inherited from a
parent table) so this isn't ever going to be enforced by any existing
release.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Any form of connection-level session variable ?

2007-01-04 Thread John McCawley
I'm not trying to be argumentative, but I honestly don't know what you 
mean here...and the only reason I ask for clarification is that you are 
probably trying to tell me something important :)


Tom Lane wrote:


John McCawley [EMAIL PROTECTED] writes:
 

While this value isn't 100% absolutely positively guaranteed to never 
change, it is pretty dang close.
   



Counterexample: SET ROLE or SET SESSION AUTHORIZATION.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
 



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Interrupted pg_dump / pg_restore Upgrade

2007-01-04 Thread Thomas F. O'Connell
I just became involved in a scenario wherein a migration between
releases (8.1.x to 8.2) using pg_dumpall piped to psql (per section
23.5 of the 8.2 docs) was interrupted because of the duration of the
procedure. The interruption was green-lit because it was determined
that the data had been migrated and that indexes and constraints were
still to come (indexes were actually midway). A decision was made to
go ahead and move forward with the 8.2 database, with the intention of
rebuilding indexes and other constraints manually.


My big question is: Is there anything that happens late in the game  
in a pg_dumpall that affects system catalogs or other non-data  
internals in any critical ways that would make an interrupted  
pg_dumpall | psql sequence unstable?


There are a number of irregularities turning up with the upgraded  
database, and I'm trying to rule out as many root causes as possible.


The new database is 8.2 (as were all the client utilities used in the  
migration), built from source, running on Solaris:


SunOS x41-xl-01.int 5.10 Generic_118855-19 i86pc i386 i86pc

--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: [GENERAL] Interrupted pg_dump / pg_restore Upgrade

2007-01-04 Thread Tom Lane
Thomas F. O'Connell [EMAIL PROTECTED] writes:
 My big question is: Is there anything that happens late in the game  
 in a pg_dumpall that affects system catalogs or other non-data  
 internals in any critical ways that would make an interrupted  
 pg_dumpall | psql sequence unstable?

There's quite a lot of stuff that happens after the data load, yes.
One thought that comes to mind is that permissions aren't
granted/revoked until somewhere near the end.  But why don't you
look at the output of pg_dumpall -s and find out for yourself
what got lost?

regards, tom lane
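Tom's suggestion might be carried out along these lines (a sketch;
hostnames and file names are placeholders, and this assumes the old
cluster is still reachable):

```shell
# Compare schema-only dumps of the old and new clusters to see what
# the interrupted restore dropped.
pg_dumpall -s -h old-host > old-schema.sql
pg_dumpall -s -h new-host > new-schema.sql
diff -u old-schema.sql new-schema.sql | less
```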

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Any form of connection-level session variable ?

2007-01-04 Thread Scott Ribe
What about:

 create function set_emp_id() returns void as $$
 begin
    drop table if exists emp_1_id;
    create temp table emp_1_id as
      select emp_id from secureview.tbl_employee
      where username = current_user;
 end;
 $$ language plpgsql;

 create function get_emp_id() returns int as $$
 begin
    return (select emp_id from emp_1_id);
 end;
 $$ language plpgsql stable;

Call set_emp_id once on connection, then use get_emp_id thereafter. Would
that be any faster? (This is what Erik meant by "a temp table is pretty
much a session variable" in his earlier message.)
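A usage sketch, per connection (this assumes the two functions above
have been created):

```sql
-- Once, right after connecting: populate the session's temp table.
SELECT set_emp_id();

-- Thereafter, a cheap lookup wherever the employee id is needed.
SELECT get_emp_id();
```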

 -- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Any form of connection-level session variable ?

2007-01-04 Thread Joe Conway

Scott Ribe wrote:

What about:

 create function set_emp_id() returns void as $$
 begin
    drop table if exists emp_1_id;
    create temp table emp_1_id as
      select emp_id from secureview.tbl_employee
      where username = current_user;
 end;
 $$ language plpgsql;


 create function get_emp_id() returns int as $$
 begin
    return (select emp_id from emp_1_id);
 end;
 $$ language plpgsql stable;

Call set_emp_id once on connection, then use get_emp_id thereafter. Would
that be any faster? (This is what Erik meant by "a temp table is pretty
much a session variable" in his earlier message.)


Or, in similar fashion, use this:
http://www.joeconway.com/sessfunc.tar.gz
http://www.onlamp.com/pub/a/onlamp/2004/06/28/postgresql_extensions.html

HTH,

Joe

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Database Corruption - last chance recovery options?

2007-01-04 Thread Michael Best
Had some database corruption problems today.  Since they came on the 
heels of making some minor database changes yesterday, they may or may 
not be related to that.  Centos 4.x, Postgresql 8.1.4


I modified the following settings and then issued a reload. I hadn't 
turned up kernel.shmmax to allow for these bigger memory settings, 
but the database continued to run fine.


shared_buffers = 1
work_mem = 2048
autovacuum = on                        # enable autovacuum subprocess?
autovacuum_naptime = 60                # time between autovacuum runs, in secs
autovacuum_vacuum_threshold = 1000     # min # of tuple updates before vacuum
autovacuum_analyze_threshold = 500     # min # of tuple updates before analyze
autovacuum_vacuum_scale_factor = 0.4   # fraction of rel size before vacuum
autovacuum_analyze_scale_factor = 0.2  # fraction of rel size before analyze
autovacuum_vacuum_cost_delay = -1      # default vacuum cost delay for autovacuum
autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for autovacuum

After reloading I ran "vacuumdb -a -z" a number of times, and each run 
completed successfully.


Sometime after the vacuum or reload one of our clients started to have 
database problems.  The other databases in the same postgresql on the 
server don't seem to be affected.


When I finally got the error report in the morning the database was in 
this state:


$ psql dbname

dbname=# \dt
ERROR:  cache lookup failed for relation 20884

Doing a "select * from pg_tables" seemed to indicate that some of the 
tables were no longer in the database, and some other tables were 
inaccessible.


I made a backup, and then some functionality was restored by issuing a 
"reindex system dbname".


Using the broken database, pg_dump on all the tables listed in pg_tables 
gives this for some tables:


pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  cache lookup failed for 
relation 20871
pg_dump: The command was: SELECT tableoid, oid, conname, 
pg_catalog.pg_get_constraintdef(oid) as condef FROM 
pg_catalog.pg_constraint WHERE conrelid = '20876'::pg_catalog.oid AND 
contype = 'f'
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  relation 
"public.auth_groups_permissions" does not exist
pg_dump: The command was: LOCK TABLE public.auth_groups_permissions IN 
ACCESS SHARE MODE
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  relation with OID 21186 does 
not exist
pg_dump: The command was: LOCK TABLE public.ght_ght_shippingorders IN 
ACCESS SHARE MODE

Our backups failed of course, and we have made a good attempt at 
recovery, which we are willing to accept as all the recovery we can do.


As a last chance to get some of the data back, I would be interested if 
there is any way to read through the raw database files to see if I can 
recover some more data from them.
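For the raw-file question, one avenue (an assumption on my part, not
something tried here) is the separately distributed pg_filedump tool,
pointed at the relation's heap file under the data directory; the path
below is a placeholder for base/<database oid>/<relfilenode>:

```shell
# Formatted dump of one relation's raw heap file; inspect the output
# for recoverable tuples. Path is a placeholder, not a real OID pair.
pg_filedump -f /var/lib/pgsql/data/base/16384/20876 > recovered.txt
```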


I'm also curious if any of the settings/reload caused this problem, or 
perhaps the vacuum or autovacuum is what caused our error.  I'll file a 
bug report if it's somehow repeatable.


Oh, and make backups.

-Mike

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Dependency conflicts on CentOS 4.4

2007-01-04 Thread Gunnar Wagenknecht
Devrim GUNDUZ wrote:
 Install this package:
 
 http://developer.postgresql.org/~devrim/rpms/compat/compat-postgresql-libs-3-3PGDG.i686.rpm

Thanks, it worked. But I had to remove
compat-postgresql-libs-4-2PGDG.rhel4.i686.rpm for this one to work. What
if some client needs libpq.so.4?

Generally, I like to use yum localinstall wherever possible.

Cu, Gunnar

-- 
Gunnar Wagenknecht
[EMAIL PROTECTED]
http://wagenknecht.org/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Dependency conflicts on CentOS 4.4

2007-01-04 Thread Gunnar Wagenknecht
Michael Best wrote:
 8.1.4 is also in CentosPlus
 
 They are apparently following a RH RPM that is at revision 8.1.4
 http://mirror.centos.org/centos/4/centosplus/Readme.txt

Thanks for the tip!

Cu, Gunnar

-- 
Gunnar Wagenknecht
[EMAIL PROTECTED]
http://wagenknecht.org/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq