[GENERAL] Converting bytea to LargeObject

2010-02-17 Thread Satish Burnwal (sburnwal)
I am also having the problem as mentioned in
http://archives.postgresql.org/pgsql-general/2009-01/msg00771.php.

However, this SQL statement:

UPDATE tbl SET newoid = (
    SELECT oid FROM (
        SELECT oid, lowrite(lo_open(oid, 131072), byteafield)
        FROM lo_create(0) o(oid)) x);

 

This is not working for me:

- it puts the same newoid into all the records in the table, whereas
what is needed is a separate oid for each record. lo_create(0) seems to
be evaluated only once for all the records.

- can you please explain the syntax o(oid)? I do not see any
documentation for it.
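
What I am after is one new large object per row, i.e. something like
this helper function (a rough sketch I am considering, untested;
bytea_to_lo is just a name I made up):

CREATE OR REPLACE FUNCTION bytea_to_lo(data bytea) RETURNS oid AS $$
DECLARE
    loid oid;
    fd   integer;
BEGIN
    loid := lo_create(0);           -- server assigns a fresh OID
    fd   := lo_open(loid, 131072);  -- 131072 = INV_WRITE
    PERFORM lowrite(fd, data);
    PERFORM lo_close(fd);
    RETURN loid;
END;
$$ LANGUAGE plpgsql;

UPDATE tbl SET newoid = bytea_to_lo(byteafield);

Since the function is VOLATILE by default, I assume it would be
evaluated once per row.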

 

 

Thanks

Satish



Re: [GENERAL] Source RPMs for PostgreSQL 7.4.27 on RHEL4

2010-02-17 Thread Devrim GÜNDÜZ
On Wed, 2010-02-17 at 08:45 +0200, Devrim GÜNDÜZ wrote:
 I just started uploading SRPMs to ftp.postgresql.org. It will take a
 few hours to sync. 

...and the packages have hit the main FTP site.

Regards,
-- 
Devrim GÜNDÜZ, RHCE
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




[GENERAL] Persistent identifiers for Postgres users

2010-02-17 Thread Peter Geoghegan
Hello,

I maintain an app where database users correspond to actual users,
with privileges granted or denied to each. At the moment, records that
each user creates are identified as such by a text column that has a
default value of session_user. I don't need to tell you that this is
suboptimal, because db users (as far as I'm aware) lack persistent
identifiers - names may change, users may be dropped, etc. Also, there
is no way that I am aware of to fake row-level privileges other than by
adding ...AND id NOT IN (SELECT forbidden_department FROM user_privileges
WHERE user_id = current_user_id()) to relevant queries. Actually,
that approach is probably preferable to real row-level privileges,
as it allows me to deny access based on a domain-level concept,
departments.

Am I correct in my belief that postgres users lack persistent identifiers?

I believe that some other similar systems implement their own users
and privileges systems to achieve this, but I hesitate to do that. I
also hesitate to assume that the DB user name will never change and
simply use session_user in lieu of a real persistent identifier.
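
The closest workaround I can see is keying on the role's OID, which I
understand survives renames (though obviously not a drop and
re-create). Something like this, perhaps (untested sketch;
current_role_oid is a name I made up):

CREATE FUNCTION current_role_oid() RETURNS oid AS
    $$ SELECT oid FROM pg_roles WHERE rolname = session_user $$
    LANGUAGE sql STABLE;

CREATE TABLE records (
    id         serial PRIMARY KEY,
    payload    text,
    created_by oid NOT NULL DEFAULT current_role_oid()
);

But that still breaks if a user is dropped and recreated, hence the
question.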

Regards,
Peter Geoghegan

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Having a plpgsql function return multiple rows that indicate its progress in a cursor like fashion

2010-02-17 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 Peter Geoghegan peter.geoghega...@gmail.com writes:
 Aren't my requirements sufficiently common to justify developing a
 mechanism to report progress back to client applications during batch
 operations and the like?

 Have you experimented with RAISE NOTICE?  Using it this way is a bit of
 a hack maybe, but I think you are entirely unaware of what would be
 involved in producing something that's less of a hack.

Would returning a refcursor then using fetch in the application be
another solution?

As far as hacking is concerned, I think it boils down to materialise
versus value-per-call implementation, right? I'm not saying it's easy
to implement value-per-call support in plpgsql, but should the OP think
about what's involved, is that the track to follow?

  
http://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/backend/utils/fmgr/README;hb=HEAD#l380
  
http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=d9a319525591bc437e4770b4e796a7517844a784

The first link is the fmgr/README explaining the concepts, and the
second one is a recent enough patch dealing with materialise and
value-per-call in the context of SQL functions.
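
For completeness, the RAISE NOTICE hack Tom mentions can be as simple
as this (a toy sketch; the loop stands in for the real batch work):

CREATE OR REPLACE FUNCTION batch_job() RETURNS void AS $$
BEGIN
    FOR i IN 1..100 LOOP
        -- ... one unit of work here ...
        IF i % 10 = 0 THEN
            RAISE NOTICE 'batch_job: % of 100 steps done', i;
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- then: SELECT batch_job();

The notices are delivered to the client while the function is still
running, so the application can show progress without waiting for it to
return.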

Regards,
-- 
dim

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] error creating database

2010-02-17 Thread Geoffrey

I'm trying to reload a database and I'm receiving the following error:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 5181; 1262 45260 
DATABASE nev postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  encoding 
LATIN1 does not match server's locale en_US.UTF-8

DETAIL:  The server's LC_CTYPE setting requires encoding UTF8.
Command was:
CREATE DATABASE nev WITH TEMPLATE = template0 ENCODING = 'LATIN1';

This backup was created on another machine, so it must be that the 
environment on the two machines is different.


Where do I look to fix this?

Thanks.

--
Until later, Geoffrey

I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them.
- Thomas Jefferson

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] error creating database

2010-02-17 Thread Chris Barnes

 

This depends on your OS. If you are running (Linux) Red Hat or CentOS, you 
would edit the file /etc/sysconfig/i18n and change your locale, for example 
to the settings below. Save it and reboot.

 

There are probably ways around this when creating the database, but we install 
our OS with this in mind.

 

 

LANG=en_US.UTF-8
SYSFONT=latarcyrheb-sun16
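
If you would rather fix it at the database level and are on 8.4 or
later (which supports per-database locale settings), something like
this should let the restore proceed (a sketch; pick a LATIN1-compatible
locale that actually exists on your box):

CREATE DATABASE nev WITH TEMPLATE = template0
    ENCODING = 'LATIN1'
    LC_COLLATE = 'en_US' LC_CTYPE = 'en_US';

Then restore into the pre-created database (pg_restore -d nev) rather
than letting the dump issue its own CREATE DATABASE.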
 
 Date: Wed, 17 Feb 2010 09:15:56 -0500
 From: li...@serioustechnology.com
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] error creating database
 
 I'm trying to reload a database and I'm receiving the following error:
 
 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 5181; 1262 45260 
 DATABASE nev postgres
 pg_restore: [archiver (db)] could not execute query: ERROR: encoding 
 LATIN1 does not match server's locale en_US.UTF-8
 DETAIL: The server's LC_CTYPE setting requires encoding UTF8.
 Command was:
 CREATE DATABASE nev WITH TEMPLATE = template0 ENCODING = 'LATIN1';
 
 This backup was created on another machine, so it must be that the 
 environment on the two machines is different.
 
 Where do I look to fix this?
 
 Thanks.
 
 -- 
 Until later, Geoffrey
 
 I predict future happiness for America if they can prevent
 the government from wasting the labors of the people under
 the pretense of taking care of them.
 - Thomas Jefferson
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general
  

Re: [GENERAL] Having a plpgsql function return multiple rows that indicate its progress in a cursor like fashion

2010-02-17 Thread Peter Geoghegan
Hi dim,

 Would returning a refcursor then using fetch in the application be
 another solution?

I assume not, since nobody stepped forward and offered a way, even
though I suggested that returning a refcursor may be the way to go
(you'll recall that you suggested that to me in IRC - I'm sternocera
there. I believe we met in Paris too). As I said, I guess my question
boils down to: can I return a cursor that yields not the result of a
single select but of multiple different selects in succession, to
report progress as described, or, alternatively, can I do something
that will produce similar results?

I also said:

Cursors simply address the problem of impedance mismatch...You
don't have to fetch the result set all at once where that is
impractical. However, the entire result set is available on the server
from the first fetch.

Tom contradicted this, but I believe he just meant that my statement
was technically inaccurate, not that it was conceptually
inaccurate. My (perhaps incorrect) understanding is that once you open
a cursor, you cannot change what will later be fetched from the
same cursor - the rows that will be returned when everything is fetched
are determined when the cursor is opened. Also, I cannot very well open
a cursor twice because, as the docs say, the cursor cannot already be
open when opening a cursor. So, unless I'm mistaken, I don't think
returning a refcursor helps me here.

 As far as hacking is concerned, I think it boils down to materialise
 against value-per-call implementation, right? Not saying it's easy to
 implement value-per-call support in plpgsql, but should the OP think
 about what's involved, is that the track to follow?

  http://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/backend/utils/fmgr/README;hb=HEAD#l380
  http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=d9a319525591bc437e4770b4e796a7517844a784

 The first link is the fmgr/README explaining the concepts, and the
 second one is a recent enough patch dealing with materialise and
 value-per-call in the context of SQL functions.

I'll investigate. To be perfectly frank, I am probably overextending
myself in doing so, because a) I am not familiar with the PostgreSQL
codebase and b) Tom's admonition about the likely difficulty of doing
this indicates that it is probably quite an involved task.

I think it would be useful to articulate, in broad strokes, what this
feature should look like, if not for my benefit, then for the benefit
of whoever will eventually implement it (because, given the
aspirations and momentum of the postgres community, and the obvious
utility of what I've described, I think it's inevitable that *someone*
will).

Regards,
Peter Geoghegan

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY FROM wish list

2010-02-17 Thread Dimitri Fontaine
Scott Bailey arta...@comcast.net writes:
 PgFoundry has http://pgfoundry.org/projects/pgloader/
 It is a step in the right direction but definitely not as powerful as
 sql*loader.

Yeah, it only offers what I needed and what I've been asked to
add. So far there's support for INFORMIX UNLOAD files, mysqldump -t
files, fixed-width files, ragged files, and some more.

It also supports Python threading for parallel workloads, either
loading several files at once or several chunks of the same file, and
two modes are possible there.

If you need more, try asking, you never know. I'm still on the hook to
maintain it, though I've not received any bug report in a long
while. I guess it's not much used anymore.

It has been proposed to me to replace the pgloader.conf INI file with a
custom COPY command parser exposing all the options, and I will
consider that sometime in the future.

 I've been writing a Postgres equivalent that does Oracle SQL*Loader/DB2 Load
 style import, and external tables using CSV, fixed-width and XML data
 sources. But it's not production ready. I'm hoping SQL/MED makes it into the
 next release of Postgres so I can throw it all away :)

XML support in pgloader will certainly take the form of applying a
user-given XSLT filter that outputs CSV from the XML. That's the option
requiring the least code in pgloader, I guess :)

I'd be happy to see pgloader deprecated by PostgreSQL offering its
features.

Meantime, do you want to join the fun on the pgloader front?
-- 
dim

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_database_size(oid)

2010-02-17 Thread Dhimant Patel
Hi All,

I have just started with Postgres and, after going through the manual for
nearly 2 hours, I need help.

I have created a database, which I plan to load with several tables. I am
trying to find the size of the databases and came
across the pg_database_size(oid) function. Since it requires a database oid, I
thought there must be a view where you get it - then I came across the
pg_database table, and still all in vain. The pg_database table lists all
databases in the server but won't provide me the oid
for pg_database_size().

I noticed this being true for almost all reference views. For example,
pg_tablespace lists the tablespaces but no tablespace oid.
The function pg_tablespace_databases(tablespace_oid) needs a tablespace_oid,
but where could I get it from?



Please help!


-Thanks,
DP.


Re: [GENERAL] pg_database_size(oid)

2010-02-17 Thread Tom Lane
Dhimant Patel drp4...@gmail.com writes:
 I have created a database, which I plan to load with several tables. I am
 trying to find the size of the databases and came
 across the pg_database_size(oid) function. Since it requires a database oid, I
 thought there must be a view where you get it - then came across
 pg_database table and still all in vain. The pg_database table lists all
 databases in the server but won't provide me the oid
 for pg_database_size().

What you lack is the knowledge that oid is a system column in these
tables, meaning it isn't displayed by SELECT * FROM ....  It's there
though and you can select it explicitly:
select oid from pg_database where datname = 'foo';

http://www.postgresql.org/docs/8.4/static/ddl-system-columns.html

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_database_size(oid)

2010-02-17 Thread Reid Thompson
On Wed, 2010-02-17 at 11:59 -0500, Dhimant Patel wrote:

 
 I have created a database, which I plan to load with several tables. I
 am trying to find the size of the databases and came
 across the pg_database_size(oid) function. Since it requires a database oid,
 I thought there must be a view where you get it - then came across
 pg_database table and still all in vain. The pg_database table
 lists all databases in the server but won't provide me the oid
 for pg_database_size(). 
 
 
 I noticed this being true for almost all reference views. For example,
 pg_tablespace lists the tablespaces but no tablespace oid.
 Function pg_tablespace_databases (tablespace_oid) ,
 needs tablespace_oid, but where I could get it from?

Use the db name.

test=# select pg_size_pretty(pg_database_size('test'));
 pg_size_pretty 
----------------
 5392 kB
(1 row)

test=# 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Reading a table with composite type, into it equivalent c structure

2010-02-17 Thread mike stanton
Hello All,
Has anyone got a working program that reads a tuple from a table (defined
as a single composite type whose lower-level attributes are also composite
types) and converts the data into the corresponding C structure variables?
I've been looking for a working example, but haven't found that sort of
example.

Cheers
Michael Stanton W.
Santiago Chile

Re: [GENERAL] ERROR: permission denied to finish prepared transaction

2010-02-17 Thread David Kerr
On Tue, Feb 16, 2010 at 12:57:28PM -0800, David Kerr wrote:
- I'm seeing a bunch of these error messages:
- Feb 16 12:04:51 QA-HC01-DB01 postgres-hc01[26420]: [2-1] 
user=xy,db=x,pid=26420 ERROR:  permission denied to finish prepared transaction
- Feb 16 12:04:51 QA-HC01-DB01 postgres-hc01[26420]: [2-2] 
user=xy,db=x,pid=26420 HINT:  Must be superuser or the user that prepared the
- transaction.
- Feb 16 12:04:51 QA-HC01-DB01 postgres-hc01[26420]: [2-3] 
user=xy,db=x,pid=26420 STATEMENT:  ROLLBACK PREPARED
- Feb 16 12:04:51 QA-HC01-DB01 postgres-hc01[26420]: [2-4]
- 
'1197822575_uwQAAABHVE1JRA==_AQBHVE1JRA
- Feb 16 12:04:51 QA-HC01-DB01 postgres-hc01[26420]: [2-5] 
=='
- 
- 
- As far as I can tell, that means that user xy is trying to rollback a 
prepared TX that it didn't create. 
- is that the only reason this error would show up?
- 
- I'm asking because, as far as I can tell, xy is the only user on the box 
that would be doing prepared tx's
- 
- this is 8.3


Bump.  =)

Is it possible that this is a permissions error?

any ideas? 
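
For reference, here's what I'm using to look at the outstanding
prepared transactions and who owns them (a ROLLBACK PREPARED issued by
any other non-superuser role fails with that error):

select gid, owner, database, prepared from pg_prepared_xacts;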

Thanks

Dave

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dump of 700 GB database

2010-02-17 Thread karsten vennemann
 vacuum should clean out the dead tuples, then cluster on any large tables 
 that are bloated will sort them out without needing too much temporary 
 space.
 
Yes, OK, I am running a vacuum full on a large table (150GB) and will then 
cluster the spatial data by zip code. I understand that should get rid of any 
dead records and reclaim hard disk space. The system I'm running it on is a 
1.7 GB RAM Ubuntu jaunty machine, PostgreSQL 8.3.8.
 
I was hesitant to do any of this (vacuum, cluster, or dump and restore) because 
it might run for days or weeks (hopefully not). Here are some of my 
postgresql.conf settings in case this is not optimal and someone has a hint:
shared_buffers=160MB, effective_cache_size=1GB, maintenance_work_mem=500MB, 
wal_buffers=16MB, checkpoint_segments=100
 
Also, I just set up a new server (a mirror of the other one I need to clean 
out) specifically for the purpose of running a database dump, with enough 
storage space (2TB), so that is no issue right now.
I really need to find out what is wrong with my procedure for dumping the 
whole database, as I have never yet succeeded in dumping and restoring such a 
big db, and this will not be the last time I have to do something similar.
Here is what I tried (the test database is 350GB in size):

1. pg_dump -U postgres -Fc test > /ebsclean/testdb.dump
This gave me a dump of about 4GB in size (too small even if it is 
compressed?) after running 5 hours (not bad, I thought). But when I tried to 
restore it using pg_restore to another database (in a different tablespace) I 
got an error like "not a valid archive file" or something like that.
So I was wondering if 4GB is a problem on Ubuntu OS?

Thus I tried to split it during the dump operation:
2. pg_dump -U postgres -Fc test | split -b 1000m - /ebsclean/testdb.dump
This gave me 5 files with a total combined size of about 4GB. But when I 
tried to restore I got the same error as above...
This dump and restore procedure should be the fastest (compared to vacuum 
and/or cluster), from what I gathered in an IRC session with some gurus some 
weeks ago.
The main question now is: why is my dump/restore not working, and what am I 
doing wrong?
 
Thanks
Karsten 



From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Pavel Stehule
Sent: Tuesday, February 09, 2010 23:30
To: karsten vennemann
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] dump of 700 GB database


Hello


2010/2/10 karsten vennemann kars...@terragis.net


I have to write a 700 GB large database to a dump to clean out a lot of 
dead records on an Ubuntu server with postgres 8.3.8. What is the proper 
procedure to succeed with this - last time the dump stopped at 3.8 GB size I 
guess. Should I combine the -Fc option of pg_dump and the split command?
  

I thought something like
pg_dump -Fc test | split -b 1000m - testdb.dump 
might work ?
Karsten


vacuum full doesn't work?

Regards
Pavel Stehule
 



Terra GIS LTD
Seattle, WA, USA 

 




Re: [GENERAL] dump of 700 GB database

2010-02-17 Thread Scott Marlowe
On Wed, Feb 17, 2010 at 3:44 PM, karsten vennemann kars...@terragis.net wrote:

  vacuum should clean out the dead tuples, then cluster on any large tables 
  that are bloated will sort them out without needing too much temporary 
  space.

 Yes ok  am running a vacuum full on a large table (150GB) and will cluster 
 the spatial data by zip code then. Understand that should get rid of any dead 
 records and reclaim hard disk space then. The system I'm running it on is a 
 1.7 GB RAM Ubuntu jaunty machine,  PostgreSQL 8.3.8.

If you're going to cluster anyways, a vacuum full is wasted effort
there.  Unless you're running out of disk space.  Cluster uses the
same amount of space as the original table, minus dead rows, for the
destination table.

 I was hesitant to do any of this (vacuum, cluster, or dump and restore) 
 because it might run days or weeks (hopefully not). Here are some of my 
 PostgreSQL.conf settings in case this is not optimal and someone has a hint...

Note that cluster on a randomly ordered large table can be
prohibitively slow, and it might be better to schedule a short
downtime to do the following (pseudo code):

alter table tablename rename to old_tablename;
create table tablename (like old_tablename including defaults);
insert into tablename select * from old_tablename
  order by clustered_col1, clustered_col2;

(creating and moving over FK references as needed.)

 shared_buffers=160MB, effective_cache_size=1GB, maintenance_work_mem=500MB, 
 wal_buffers=16MB, checkpoint_segments=100

What's work_mem set to?

 Also I just set-up a new server (mirror of the other one I need to clean out) 
 specifically for the purpose of running a database dump with enough storage 
 space 2TB...So that is no issue right now
 I really need to find out what is wrong with my procedure dumping the whole 
 database as I never succeed yet to dump and restore such a bid db...
 That will not be the least time I will have to do something similar.
 Here is what I tried (test database is 350GB in size)

 1. pg_dump -U postgres -Fc test  /ebsclean/testdb.dump
 This gave me a dump of about 4GB in size (too smal in size even if its 
 compressed ?) after running 5 hours (not bad I thought). But when I tried to 
 restore it using pg_retore to another database (in a different table space)I 
 got an error like not an valid archive file or something like that
 So I was wondering if 4GB is a problem in Ubuntu OS ?

What ubuntu?  64 or 32 bit?  Have you got either a file system or a
set of pg tools limited to 4Gig file size?  64-bit Ubuntu can
definitely do files larger than 4G; pretty sure 4G files have been OK
for quite some time now.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dump of 700 GB database

2010-02-17 Thread karsten vennemann
 Note that cluster on a randomly ordered large table can be 
 prohibitively slow, and it might be better to schedule a 
 short downtime to do the following (pseudo code)
 alter table tablename rename to old_tablename; create table 
 tablename like old_tablename; insert into tablename select * 
 from old_tablename order by clustered_col1, clustered_col2;

That sounds like a great idea if that saves time.
 
 (creating and moving over FK references as needed.)
 shared_buffers=160MB, effective_cache_size=1GB, 
 maintenance_work_mem=500MB, wal_buffers=16MB, 
 checkpoint_segments=100
 
 What's work_mem set to?
work_mem = 32MB

 What ubuntu?  64 or 32 bit?  
It’s 32-bit. I don’t know whether 4GB isn't too small for a dump of an
originally 350GB database, nor why pg_restore fails... 

 Have you got either a file 
 system or a set of pg tools limited to 4Gig file size?  
Not sure what the problem is on my server - I'm trying to figure out what
makes pg_restore fail...


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tuning bgwriter in 8.4.2

2010-02-17 Thread Greg Smith

Ben Chobot wrote:
As I understand things, assuming I don't keep updating the same pages 
then buffers_backend should be a small percentage of buffers_alloc, 
and buffers_clean should be larger than it is compared to 
buffers_checkpoint. Is my understanding correct?


Sure; your buffers_clean is really low relative to the totals.  You 
should take a snapshot now that you've fixed bgwriter_lru_maxpages, with 
a timestamp, and then another sometime later to get really useful 
numbers.  A diff only considering the current setup and with a time 
interval to go along with it is much more useful than the aggregate 
numbers here (patch to make that easier already in 9.0:  
http://blog.2ndquadrant.com/en/2010/01/measuring-postgresql-checkpoin.html 
)  Keep taking regular snapshots with a timestamp:


select current_timestamp,* from pg_stat_bgwriter;

So you can compute a diff to measure what's changing as you go.
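
For example, something like this works as a scratch setup
(bgwriter_snapshots is just a made-up table name):

-- take a first snapshot
CREATE TABLE bgwriter_snapshots AS
    SELECT current_timestamp AS snapped_at, * FROM pg_stat_bgwriter;

-- append later snapshots periodically
INSERT INTO bgwriter_snapshots
    SELECT current_timestamp, * FROM pg_stat_bgwriter;

-- diff the two most recent snapshots (the counters only increase)
SELECT max(snapped_at) - min(snapped_at)                 AS elapsed,
       max(buffers_checkpoint) - min(buffers_checkpoint) AS buffers_checkpoint,
       max(buffers_clean) - min(buffers_clean)           AS buffers_clean,
       max(buffers_backend) - min(buffers_backend)       AS buffers_backend
FROM (SELECT * FROM bgwriter_snapshots
      ORDER BY snapped_at DESC LIMIT 2) latest;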

The only explanation I can offer is that your workload might be really 
bursty.  The method used for estimating how much the cleaner should do 
is most likely to break down when the load comes in narrow spikes.  The 
main way to improve response in that situation is by decreasing the 
interval, so it kicks in and does the "what's happened during the last 
N ms?" computations more often.  Right now, a burst that lasts less 
than 200ms can be completely missed, if the system was mostly idle 
before that.


You can try lowering bgwriter_delay and proportionally decreasing 
bgwriter_lru_maxpages to make response time to burst workloads better.  
In your situation, I'd try making the writer wake up 4X as often, doing 
only 1/4 as much maximum work as it currently does each time, and 
doubling the multiplier too; see if things move in the right direction, 
and maybe keep going from there afterwards.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tuning bgwriter in 8.4.2

2010-02-17 Thread Ben Chobot
On Feb 17, 2010, at 3:41 PM, Greg Smith wrote:

 Ben Chobot wrote:
 As I understand things, assuming I don't keep updating the same pages then 
 buffers_backend should be a small percentage of buffers_alloc, and 
 buffers_clean should be larger than it is compared to buffers_checkpoint. Is 
 my understanding correct?
 
 Sure; your buffers_clean is really low relative to the totals.  You should 
 take a snapshot now that you've fixed bgwriter_lru_maxpages, with a 
 timestamp, and then another sometime later to get really useful numbers.  A 
 diff only considering the current setup and with a time interval to go along 
 with it is much more useful than the aggregate numbers here (patch to make 
 that easier already in 9.0:  
 http://blog.2ndquadrant.com/en/2010/01/measuring-postgresql-checkpoin.html )  
 Keep taking regular snapshots with a timestamp:
 
 select current_timestamp,* from pg_stat_bgwriter;
 
 So you can compute a diff to measure what's changing as you go.
 
 The only explanation I can offer is that your workload might be really 
 bursty.  The method used for estimating how much the cleaner should do is 
 most likely to break down when the load comes in narrow spikes.  The main way 
 to improve response in that situation is by decreasing the interval, so it 
 kicks in and does the "what's happened during the last N ms?" computations 
 more often. Right now, a burst that lasts less than 200ms can be completely 
 missed, if the system was mostly idle before that.
 
 You can try lowering bgwriter_delay and proportionally decreasing 
 bgwriter_lru_maxpages to make response time to burst workloads better.  In 
 your situation, I'd try make the writer wake up 4X as often, only do 1/4 as 
 much maximum work as it currently does each time, and doubling the multiplier 
 too; see if things move in the right direction, and maybe keep going from 
 there afterwards.

Thanks for the suggestions, Greg. I'll monitor it closely over the next few 
days, but it doesn't really seem to have changed much so far. Is there a way to 
tell if I really am just keeping the same few pages dirty throughout every 
checkpoint? I wouldn't have expected that, but given our application I suppose 
it is possible.
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tuning bgwriter in 8.4.2

2010-02-17 Thread Greg Smith

Ben Chobot wrote:

Is there a way to tell if I really am just keeping the same few pages dirty 
throughout every checkpoint? I wouldn't have expected that, but given our 
application I suppose it is possible.


You can install pg_buffercache and look at what's in the cache to check 
your theory.  I have some sample queries that show neat things at 
http://www.westnet.com/~gsmith/content/postgresql/bufcache.sh
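
For instance, a quick way to see which relations the dirty buffers 
belong to (simplified: it ignores the database and tablespace columns, 
so run it connected to the busy database):

SELECT c.relname, count(*) AS dirty_buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
 WHERE b.isdirty
 GROUP BY c.relname
 ORDER BY dirty_buffers DESC
 LIMIT 10;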


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_database_size(oid)

2010-02-17 Thread Greg Smith

Dhimant Patel wrote:
I have created a database, which I plan to load with several tables. I 
am trying to find the size of the databases and came 
across pg_database_size(oid) function.


Here's what you probably want:

select datname,pg_size_pretty(pg_database_size(pg_database.oid)) from 
pg_database order by pg_database_size(pg_database.oid) desc;


The OID in these is a sort of hidden column; there's a list of them all 
at http://www.postgresql.org/docs/current/static/ddl-system-columns.html


Most tables will not have an OID nowadays, but many of the system 
catalog ones still do.  ctid is another handy one to know about--useful 
for removing duplicate records and some other fancy tricks.




--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Doubts about oid

2010-02-17 Thread Jayadevan M
Hi,
I was reading about OIDs and the default configuration of PostgreSQL. A 
couple of doubts:
1) Why is the use of OIDs considered deprecated? Is there something else 
that can be used in place of OIDs for user tables?
2) Is there a performance impact if we keep the default default_with_oids 
set to ON?
Googling, I came across this -
http://philmcrew.com/oid.html
But most of the links given at that page were broken and the page itself 
did not provide a lot of information.
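
For (1), is a plain sequence-backed surrogate key the recommended 
replacement? Something like this is what I have in mind (my own sketch):

CREATE TABLE t (
    id serial PRIMARY KEY,  -- instead of relying on the oid system column
    payload text
);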
Thanks,
Jayadevan










[GENERAL] DDL trigger kind functionality in PostGreSQL

2010-02-17 Thread dipti shah
Hi,

I was looking for SQL DDL trigger kind of functionality in PostgreSQL but
couldn't find any. Could anyone please tell me how to achieve similar
functionality in PostgreSQL. Basically, I want to make sure that no user
can use the DROP command directly on my database, even if he/she is the
owner of the table or any other database object. I want users to always
use stored procedures to drop or alter tables and any other database
objects.
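
For example, what I have in mind is wrapping DROP in a SECURITY DEFINER
function owned by an admin role, something like this (a rough sketch;
names are made up):

CREATE OR REPLACE FUNCTION drop_table_checked(t regclass) RETURNS void AS $$
BEGIN
    -- auditing / validation logic would go here
    EXECUTE 'DROP TABLE ' || t::text;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

But I still don't see how to stop owners from bypassing it with a plain
DROP.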

Any alternative way would be really helpful.

Thanks,
Dipti


[GENERAL] returning records from plpython

2010-02-17 Thread Scott Bailey

Does anyone know why plpython doesn't support returning record?
I couldn't find anything in the archives or source that indicated why it 
isn't supported. I'm trying to do Oracle-style external tables and 
xmltable, and it would be much easier if I could return setof record.
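
My fallback for now is declaring a named composite type per source and
returning SETOF that, along these lines (an untested sketch; names are
made up):

CREATE TYPE ext_row AS (a integer, b text);

CREATE OR REPLACE FUNCTION read_ext() RETURNS SETOF ext_row AS $$
    # stand-in for the real external-table reader
    return [(1, 'one'), (2, 'two')]
$$ LANGUAGE plpythonu;

SELECT * FROM read_ext();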


Scott

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgpool error, pid not found!

2010-02-17 Thread Tatsuo Ishii
Wow. This is pgpool, right? It's not maintained anymore (development
stopped almost 3 years ago). Please use pgpool-II. The latest version is
pgpool-II 2.3.2.1.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

 Sorry for the late reply. I work full time, so I work on Postgres at night.
 here is the command that I use.
 
 I started pgpool using this command.
 pgpool/bin/pgpool  -f /home/megha/pgpool/etc/pgpool.conf -a
 /home/megha/pgpool/etc/pool_hba.conf
 
 pgpool.conf is attached.
 
 I have been trying to configure postgres and pgpool for the past 2 months
 and I am getting too many problems with this.
 I hope you can help me solve my errors.
 
 Waiting for your reply.
 Thanks
 
 On Tue, Feb 16, 2010 at 10:20 PM, Tatsuo Ishii is...@postgresql.org wrote:
 
  How did you start pgpool exactly(command line)?
 
  Also, the number  dpends on pgpool.conf.
  Can you show me pgpool.conf?
  --
  Tatsuo Ishii
  SRA OSS, Inc. Japan
  English: http://www.sraoss.co.jp/index_en.php
  Japanese: http://www.sraoss.co.jp
 
   From the README file, I found that the psql -p  -c 'show pool_status'
   template1 command gives the internal status of pgpool.
   So when I start pgpool, I don't get any internal status.
   I get this error message
  
   psql: could not connect to server: No such file or directory
   Is the server running locally and accepting
   connections on Unix domain socket /tmp/.s.PGSQL.?
  
  
  
   On Tue, Feb 16, 2010 at 10:09 PM, Tatsuo Ishii is...@postgresql.org
  wrote:
  
The pgpool.pid file contains pgpool's parent process id.  That message
indicates that the file exists but the pgpool process id does not
exist. Probably pgpool was stopped in an unclean way. You may ignore the
message as long as pgpool starts ok.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
   
 Hi

 I have installed pgpool in /home/megha/pgpool dir.
 I found pgpool.pid file in /tmp folder.

 when I try to start pgpool I get this error.
 *pid file found but it seems bogus. Trying to start pgpool anyway..*

 Please help me!


 Thanks,

 --
 Megha
   
  
  
  
   --
   Megha
 
 
 
 
 -- 
 Megha

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Howto analyse sql statement.

2010-02-17 Thread Tatsuo Ishii
From: Muhammad Isnaini moch_isna...@yahoo.com
Subject: Re: Howto analyse sql statement.
Date: Wed, 17 Feb 2010 21:02:26 +0800 (SGT)
Message-ID: 525621.291...@web76202.mail.sg1.yahoo.com

      parse_tree_list = raw_parser(string);
 
 I have got it. I will send it later.
 The next challenge is how to write the query statement (including
 rewriting) from a node? Is that possible?

You could recursively search the parse tree and rewrite the query. See
pool_timestamp.c for a great example.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Trying to add a type modifier to user created type

2010-02-17 Thread Carsten Kropf
Hi *,
I have the following problem:
I wanted to add a new type that supports modifiers (comparable to 
numeric/varchar). I succeeded in adding the type modifier functions to my new 
type. These methods are called and the modifier is set. However the modifiers 
are not applied here. I don't know exactly how to get the modifiers to be 
applied to my type. I looked into the source code of varchar/numeric and a 
function called the same name as the type is added in there that gets called 
with a value of the particular type and the typmod, respectively. But I don't 
know, how to register this function properly in order to get called, when sth 
is to be stored in the database. I also set up such a function and registered 
it in pg_proc (just called create function and made it public like this). My 
problem is that I don't know how to tell the database system when to call my 
function. I don't see any possibility to do this here. Even reading the code of 
numeric/varchar didn't help, because it just works in there. Where do I have to 
set up that this function is to be called when the modifier is to be applied? I 
thought, there would be something magic that just calls the function named as 
the given type in order to apply the modifiers. My modifier function, however 
is never called. The type modifiers (typmodin and typmodout) are called 
properly when requesting a description of the particular table or setting the 
modifier. However, unfortunately, it is never applied to my column. How can I 
achieve this/What do I have to do to get it to work?
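
From reading the varchar code, my current guess is that the missing piece 
is registering the coercion function as a cast from the type to itself, 
roughly like this (untested; mytype stands for my type and mytype_scale 
for my C function):

CREATE FUNCTION mytype(mytype, integer, boolean) RETURNS mytype
    AS '$libdir/mytype', 'mytype_scale'
    LANGUAGE C IMMUTABLE STRICT;

CREATE CAST (mytype AS mytype)
    WITH FUNCTION mytype(mytype, integer, boolean)
    AS IMPLICIT;

Is that what the system looks up when it applies a typmod?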

Best regards
Carsten Kropf
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] One solution for Packages in Postgre

2010-02-17 Thread venkatrao . b
Hi Postgres Developers,

I am new to Postgres. We are migrating an Oracle db to Postgres. In Oracle 
we have used many packages. As per my understanding, there is no 
Oracle-package-like functionality in Postgres. I was just trying to find 
some way to migrate Oracle packages to Postgres.
Please let me know your comments on the solution below (this is for 
creating a package named thms_pack; the script is also attached). 




Common table for all packages

Table Package_Variable_Table:
For storing package public and private variables. This table will be 
common for all packages.
To distinguish between different sessions, it uses a unique session id. Get 
and Set functions are used to access these variables.
 
Common functions for all packages
1) function - Current_Session() 
   Function for getting the current unique session id. 
Reference:  
http://archives.postgresql.org/pgsql-hackers/2006-06/msg01557.php

2) function - IsValidSession( sid character varying )
Common for all packages. Function for checking whether the given session id 
is valid; returns true or false. If it returns false, then the public and 
private pack table data for that session can be deleted.
sid = unique session id

3) function - IsPackLoaded( sid character varying )
Common for all packages. Function for checking whether the given session 
id's package is initialized or not; returns true or false. If it returns 
false, the pack is initialized - global/private variable values are set.
sid = unique session id

4) function - Delete_invld_pack_data()
Common for all packages. Function for deleting invalid sessions' public and 
private variable data. 

Package specific additional functions 

In addition to the procedures and functions in the Oracle package (which 
will be created as functions), the additional functions below will be 
required for each package -

1) Pack_Spec_THMS_Pack() - function to initialize the package specification 
if not already done (for setting any public/private variable values 
initially). Internally this function calls Delete_invld_pack_data for 
deleting unnecessary data from the table Package_Variable_Table.

2) GetPublicVar_THMS_Pack( v_VariableName character varying ) - one 
function for getting the value of any public variable of the package. This 
will return a char type, which can be typecast if required. This function 
calls Pack_Spec_THMS_Pack().

3) SetPublicVar_THMS_Pack( v_VariableName character varying, 
v_VariableValue character varying ) - one function for setting the value of 
each public variable of the package. This function calls 
Pack_Spec_THMS_Pack().

Package functions/procedures 
Each function has to call Pack_Spec_THMS_Pack() at the start, to 
initialize the package data.

How it works:
Whenever any procedure/function of the package is called, or whenever any 
public variable is accessed (which is to be done through the get or set 
functions), the package initialization function Pack_Spec_THMS_Pack() gets 
called. It first removes unnecessary data from the Package_Variable_Table 
table (possibly for other packages too, whose sessions have ended). Then it 
checks in the table Package_Variable_Table whether an entry for the current 
session exists; if not, it inserts new rows for the public/private 
variables and initializes them.

Access to public variables of a package - through the get/set functions.
Access to private variables of a package - functions and procedures can 
directly access the table Package_Variable_Table for this.
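
To make this concrete, here is a rough sketch of the common table and the 
THMS_Pack get/set accessors (column types are illustrative; 
Current_Session() and Pack_Spec_THMS_Pack() are the helpers described above 
and are assumed to be defined separately):

CREATE TABLE Package_Variable_Table (
    session_id     character varying NOT NULL,
    package_name   character varying NOT NULL,
    variable_name  character varying NOT NULL,
    variable_value character varying,
    PRIMARY KEY (session_id, package_name, variable_name)
);

CREATE OR REPLACE FUNCTION SetPublicVar_THMS_Pack(
    v_VariableName character varying,
    v_VariableValue character varying) RETURNS void AS $$
BEGIN
    PERFORM Pack_Spec_THMS_Pack();   -- make sure the package is initialized
    UPDATE Package_Variable_Table
       SET variable_value = v_VariableValue
     WHERE session_id = Current_Session()
       AND package_name = 'THMS_PACK'
       AND variable_name = v_VariableName;
    IF NOT FOUND THEN
        INSERT INTO Package_Variable_Table
        VALUES (Current_Session(), 'THMS_PACK',
                v_VariableName, v_VariableValue);
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION GetPublicVar_THMS_Pack(
    v_VariableName character varying) RETURNS character varying AS $$
DECLARE
    v_val character varying;
BEGIN
    PERFORM Pack_Spec_THMS_Pack();
    SELECT variable_value INTO v_val
      FROM Package_Variable_Table
     WHERE session_id = Current_Session()
       AND package_name = 'THMS_PACK'
       AND variable_name = v_VariableName;
    RETURN v_val;
END;
$$ LANGUAGE plpgsql;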



Venkat Rao Buddu
Tata Consultancy Services
Mailto: venkatra...@tcs.com
Website: http://www.tcs.com

Experience certainty.   IT Services
Business Solutions
Outsourcing





Package_Postgre_dt17feb10.sql
Description: Binary data

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general