[GENERAL] Best practice for file storage?

2010-01-31 Thread Joe Kramer
Hi,

I need to store a lot of large files (thousands of 10-100 MB files)
uploaded through my web application, and I find that storing them in
the database as bytea fields is not practical for backup purposes.
My database has a full backup performed every 12 hours, and the backup is
encrypted and copied to a server on another continent. Having all the
heavy binary data in the database would make backups impossible.

So I am thinking of having a table just for metadata and a file id
sequence, and storing the files on the file system:

CREATE TABLE business_logo
(
  file_id bigserial NOT NULL,
  file_name varchar,
  file_date timestamp
);

Storing the file in a path composed from the serial id, e.g. the file with
id 2345 will be stored in
/webapp/files/2/3/4/5/2345
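
For illustration, a minimal sketch of how such a path could be derived in the
database itself (the function name file_path is invented; an equivalent helper
in the application layer would work just as well):

CREATE OR REPLACE FUNCTION file_path(p_id bigint) RETURNS text AS $$
DECLARE
    s    text := p_id::text;
    path text := '/webapp/files';
    i    int;
BEGIN
    -- one directory level per digit of the id, then the id as the file name
    FOR i IN 1 .. length(s) LOOP
        path := path || '/' || substr(s, i, 1);
    END LOOP;
    RETURN path || '/' || s;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- SELECT file_path(2345);  gives /webapp/files/2/3/4/5/2345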

So I can back up the files separately, and the database backup is still quick
and painless.

This is a very simplistic and straightforward method.
I suppose there are better ways of doing it, perhaps using some virtual file
system? Has anyone had a similar issue with avoiding storing large files in
the database, and how did you solve it?

Thanks.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best practice for file storage?

2010-01-31 Thread Mark Morgan Lloyd

Joe Kramer wrote:

Hi,

I need to store a lot of large files (thousands of 10-100 MB files)
uploaded through my web application, and I find that storing them in
the database as bytea fields is not practical for backup purposes.
My database has a full backup performed every 12 hours, and the backup is
encrypted and copied to a server on another continent. Having all the
heavy binary data in the database would make backups impossible.

So I am thinking of having a table just for metadata and a file id
sequence, and storing the files on the file system:


Wasn't one of the particle accelerator establishments using PostgreSQL 
in this way for their raw data?


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Ability to 'fork' a running transaction?

2010-01-31 Thread Craig Ringer

On 31/01/2010 9:06 PM, Alex Besogonov wrote:

On Sun, Jan 31, 2010 at 7:25 AM, Craig Ringer
cr...@postnewspapers.com.au  wrote:

However, here lies the problem: I need to use SERIALIZABLE transaction
isolation level, and AFAIK it's not possible to make several database
connections to share the same exact view of the database.

I've noticed some talk on -HACKERS of finding ways to make this possible.
It's needed for parallel pg_dump, among other things.

Actually, the program I'm writing behaves exactly like a parallel
pg_dump from PostgreSQL's point of view.

I've found this discussion in -HACKERS:
http://osdir.com/ml/pgsql-hackers/2009-11/msg00265.html It seems it's
exactly what I need to do. I might try to contribute a patch.


Well, if you're able to, that'd be absolutely brilliant :-)


It's not clear if it'd work for non-read-only transactions; I didn't notice
that being discussed, and don't know enough about it to have an opinion of
my own. Still, it's worth looking into for the future.

It should be possible to do this for read/write transactions as well.


So, is there a way to somehow stop all mutating operations?

Take explicit locks on the resources of interest that are permissive enough
to be shared with other read transactions, but not to permit writes.

I thought about it, but it's too deadlock-prone. I need to lock the
whole database, and if I do this table-by-table then I'll almost
certainly generate a deadlock.


Not if you specify, and stick to, a strict lock acquisition order and 
never try to upgrade a lock you already hold.
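
For instance, a sketch with invented table names: every such transaction takes
SHARE locks in one agreed order (say, alphabetical) and never upgrades them.
SHARE allows concurrent readers but conflicts with the ROW EXCLUSIVE lock that
INSERT/UPDATE/DELETE take, so writers are held off without the readers
deadlocking against each other.

BEGIN;
LOCK TABLE accounts IN SHARE MODE;
LOCK TABLE invoices IN SHARE MODE;
LOCK TABLE payments IN SHARE MODE;
-- ... read-only work across the locked tables ...
COMMIT;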


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best practice for file storage?

2010-01-31 Thread Craig Ringer

On 31/01/2010 6:46 PM, Joe Kramer wrote:

Hi,

I need to store a lot of large files (thousands of 10-100 MB files)
uploaded through my web application, and I find that storing them in
the database as bytea fields is not practical for backup purposes.
My database has a full backup performed every 12 hours, and the backup is
encrypted and copied to a server on another continent. Having all the
heavy binary data in the database would make backups impossible.

So I am thinking of having a table just for metadata and a file id
sequence, and storing the files on the file system:

CREATE TABLE business_logo
(
  file_id bigserial NOT NULL,
  file_name varchar,
  file_date timestamp
);

Storing the file in a path composed from the serial id, e.g. the file with
id 2345 will be stored in
/webapp/files/2/3/4/5/2345


Consider going a bit easier on your path component splitting. With only 10 
subdirectories per level the tree gets deep, directory traversal gets 
expensive, and it's a nightmare for admins to work with; it'll probably be 
much better to use 23/45/2345 or even bigger chunks.
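
As a rough sketch of that coarser split (two characters per level, assuming
the ids are zero-padded to at least four digits; purely illustrative):

SELECT substr(lpad('2345', 4, '0'), 1, 2) || '/'
    || substr(lpad('2345', 4, '0'), 3, 2) || '/2345' AS rel_path;
-- rel_path = '23/45/2345'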


This depends a lot on the file system, so testing it is probably best. 
ext3 with dir_index shouldn't need much in the way of such manipulation 
at all - I regularly store tens of thousands of files in a single 
directory without issues.



--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best practice for file storage?

2010-01-31 Thread Andy Colson

On 01/31/2010 04:46 AM, Joe Kramer wrote:

Hi,

I need to store a lot of large files (thousands of 10-100 MB files)
uploaded through my web application, and I find that storing them in
the database as bytea fields is not practical for backup purposes.
My database has a full backup performed every 12 hours, and the backup is
encrypted and copied to a server on another continent. Having all the
heavy binary data in the database would make backups impossible.

So I am thinking of having a table just for metadata and a file id
sequence, and storing the files on the file system:

CREATE TABLE business_logo
(
  file_id bigserial NOT NULL,
  file_name varchar,
  file_date timestamp
);

Storing the file in a path composed from the serial id, e.g. the file with
id 2345 will be stored in
/webapp/files/2/3/4/5/2345

So I can back up the files separately, and the database backup is still quick
and painless.

This is a very simplistic and straightforward method.
I suppose there are better ways of doing it, perhaps using some virtual file
system? Has anyone had a similar issue with avoiding storing large files in
the database, and how did you solve it?

Thanks.



A bonus you get from using a standard file system is rsync'able backups.  I'd 
bet most of those thousands of files don't change that often?  Your backup times 
with rsync will be outstanding.  If you use dark magic, you may be limiting 
your backup options.

And +1 to Craig's point: I've also stored thousands of files in the same dir (using 
XFS) and it was not slower than splitting them into smaller subdirs.  (It 
wasn't faster, but it wasn't slower either.)

-Andy

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Ability to 'fork' a running transaction?

2010-01-31 Thread Alex Besogonov
On Sun, Jan 31, 2010 at 4:02 PM, Craig Ringer
cr...@postnewspapers.com.au wrote:
 I've found this discussion in -HACKERS:
 http://osdir.com/ml/pgsql-hackers/2009-11/msg00265.html It seems it's
 exactly what I need to do. I might try to contribute a patch.
 Well, if you're able to, that'd be absolutely brilliant :-)
I have already checked out the PostgreSQL sources and started looking at
how to hook up the required functionality, and then I found this:
http://archives.postgresql.org/pgsql-hackers/2010-01/msg00916.php :)
It would be really nice to have it in the next PostgreSQL release.

I'll write a parallel variant of pg_dump so this functionality won't
be left unused.

 So, is there a way to somehow stop all mutating operations?
 Take explicit locks on the resources of interest that are permissive
 enough
 to be shared with other read transactions, but not to permit writes.
 I thought about it, but it's too deadlock-prone. I need to lock the
 whole database, and if I do this table-by-table then I'll almost
 certainly generate a deadlock.
 Not if you specify, and stick to, a strict lock acquisition order and never
 try to upgrade a lock you already hold.
That's not possible, I'm afraid. My code is essentially a 'parallel
pg_dump' and it needs to dump the whole database. So it's just not
possible to stick to the same locking order.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Ability to 'fork' a running transaction?

2010-01-31 Thread Alex Besogonov
On Sun, Jan 31, 2010 at 7:25 AM, Craig Ringer
cr...@postnewspapers.com.au wrote:
 However, here lies the problem: I need to use SERIALIZABLE transaction
 isolation level, and AFAIK it's not possible to make several database
 connections to share the same exact view of the database.
 I've noticed some talk on -HACKERS of finding ways to make this possible.
 It's needed for parallel pg_dump, among other things.
Actually, the program I'm writing behaves exactly like a parallel
pg_dump from PostgreSQL's point of view.

I've found this discussion in -HACKERS:
http://osdir.com/ml/pgsql-hackers/2009-11/msg00265.html It seems it's
exactly what I need to do. I might try to contribute a patch.

Thanks for the pointer!

 It's not clear if it'd work for non-read-only transactions; I didn't notice
 that being discussed, and don't know enough about it to have an opinion of
 my own. Still, it's worth looking into for the future.
It should be possible to do this for read/write transactions as well.

 So, is there a way to somehow stop all mutating operations?
 Take explicit locks on the resources of interest that are permissive enough
 to be shared with other read transactions, but not to permit writes.
I thought about it, but it's too deadlock-prone. I need to lock the
whole database, and if I do this table-by-table then I'll almost
certainly generate a deadlock.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problems backing up

2010-01-31 Thread Sebastien Boisvert


Hi all,

We have an OS X app which integrates postgres as its database backend, and 
recently we've had a couple of cases where users haven't been able to perform 
a backup of their database. The failure gets reported as a problem in a table 
(largedata) where we store large binary objects, with a bytea column. 
Normally when this happens, it's due to database corruption, and we delete/fix 
the affected data to repair the database. However, in these recent cases, it 
appears that database corruption is not the issue.

When we run into these backup problems we use a tool we've created that fetches 
all the data from each row of that table to verify it; in the cases of 
corruption there are always one or more rows that can't be fetched, but in these 
cases all the data can be fetched, so it doesn't appear to be a corruption issue.

The backup is done using pg_dump, and the logs show the failure details 
below; prior to the failure I've seen real/virtual memory balloon past 1 GB.

While one of these problem databases is itself very large (about 30 GB), the 
data in each row isn't too large (the biggest object was 138871354 bytes, and our 
application can read it from the database just fine).

I've attached the details of the error(s) and the pg/table configuration below. 
I'm hoping someone can point us in the right direction, as I'm not familiar 
enough with the intricacies and low-level implementation details of Postgres to 
know what to look for or what additional information to examine to determine 
the cause.

Error during backup:

 (31971) malloc: *** 
mmap(size=1073745920) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
TopMemoryContext: 129928 total in 15 blocks; 9600 free (19 chunks); 120328 used
  Type information cache: 8192 total in 1 blocks; 1800 free (0 chunks); 6392 
used
  Operator class cache: 8192 total in 1 blocks; 3848 free (0 chunks); 4344 used
  Operator lookup cache: 24576 total in 2 blocks; 14072 free (6 chunks); 10504 
used
  TopTransactionContext: 8192 total in 1 blocks; 7776 free (0 chunks); 416 used
  MessageContext: 8192 total in 1 blocks; 5008 free (1 chunks); 3184 used
  smgr relation table: 24576 total in 2 blocks; 11952 free (4 chunks); 12624 
used
  TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0 chunks); 16 
used
  Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
  PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 536951860 total in 6 blocks; 11016 free (12 chunks); 
536940844 used
  COPY TO: 564491844 total in 3 blocks; 8120 free (5 chunks); 564483724 used
  Relcache by OID: 24576 total in 2 blocks; 15584 free (3 chunks); 8992 used
  CacheMemoryContext: 4337488 total in 23 blocks; 1934512 free (3 chunks); 
2402976 used
pg_toast_30058_index: 1024 total in 1 blocks; 240 free (0 chunks); 784 used
pg_toast_29648_index: 1024 total in 1 blocks; 240 free (0 chunks); 784 used
pg_description_o_c_o_index: 1024 total in 1 blocks; 192 free (0 chunks); 
832 used
pg_shdescription_o_c_index: 1024 total in 1 blocks; 280 free (0 chunks); 
744 used
pg_database_datname_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 
used
pg_largeobject_loid_pn_index: 1024 total in 1 blocks; 280 free (0 chunks); 
744 used
pg_trigger_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
pg_trigger_tgconstrname_index: 1024 total in 1 blocks; 344 free (0 chunks); 
680 used
pg_constraint_contypid_index: 1024 total in 1 blocks; 344 free (0 chunks); 
680 used
pg_constraint_conrelid_index: 1024 total in 1 blocks; 344 free (0 chunks); 
680 used
pg_constraint_conname_nsp_index: 1024 total in 1 blocks; 280 free (0 
chunks); 744 used
pg_attrdef_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
pg_cast_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
pg_rewrite_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 280 free (0 chunks); 
744 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 240 free (0 
chunks); 784 used
pg_depend_reference_index: 1024 total in 1 blocks; 192 free (0 chunks); 832 

Re: [GENERAL] Best practice for file storage?

2010-01-31 Thread Steve Atkins

On Jan 31, 2010, at 2:46 AM, Joe Kramer wrote:

 Hi,
 
 I need to store a lot of large files (thousands of 10-100 MB files)
 uploaded through my web application, and I find that storing them in
 the database as bytea fields is not practical for backup purposes.
 My database has a full backup performed every 12 hours, and the backup is
 encrypted and copied to a server on another continent. Having all the
 heavy binary data in the database would make backups impossible.

 So I am thinking of having a table just for metadata and a file id
 sequence, and storing the files on the file system:

 CREATE TABLE business_logo
 (
   file_id bigserial NOT NULL,
   file_name varchar,
   file_date timestamp
 );

 Storing the file in a path composed from the serial id, e.g. the file with
 id 2345 will be stored in
 /webapp/files/2/3/4/5/2345

 So I can back up the files separately, and the database backup is still quick
 and painless.

 This is a very simplistic and straightforward method.
 I suppose there are better ways of doing it, perhaps using some virtual file
 system? Has anyone had a similar issue with avoiding storing large files in
 the database, and how did you solve it?

Works fine, though you probably want more than 10 entries in each directory,
depending on which filesystem you use.

If you're ever going to delete the files, you can do that transactionally by
having a trigger on the table that queues the filenames in a table for
deletion by an external process.
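
A minimal sketch of that idea, with invented table and trigger names (the
metadata table is assumed to be the business_logo table from the original
post):

CREATE TABLE file_delete_queue (file_id bigint, queued_at timestamp DEFAULT now());

CREATE OR REPLACE FUNCTION queue_file_delete() RETURNS trigger AS $$
BEGIN
    -- remember which file id was removed; an external job deletes the
    -- corresponding file from disk and then clears the queue row
    INSERT INTO file_delete_queue (file_id) VALUES (OLD.file_id);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER business_logo_delete
    AFTER DELETE ON business_logo
    FOR EACH ROW EXECUTE PROCEDURE queue_file_delete();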

If you can store some information about the file contents - size at least,
maybe a cheap hash (md5) - then you'll find it easier to sanity check
filesystem vs database when something goes wrong. It also lets you
find duplicate files more easily.
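
For example (a sketch; the column names are invented), the metadata table could
carry those two extra fields, and duplicates then show up as repeated hashes:

ALTER TABLE business_logo ADD COLUMN file_size bigint;
ALTER TABLE business_logo ADD COLUMN file_md5 char(32);

SELECT file_md5, count(*)
FROM business_logo
GROUP BY file_md5
HAVING count(*) > 1;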

Cheers,
  Steve


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Versions RSS page is missing version(s)

2010-01-31 Thread Magnus Hagander
On Fri, Jan 29, 2010 at 18:34, Greg Sabino Mullane g...@turnstep.com wrote:

 yet, so that page should be listing 7.4.27. Further, shouldn't we be keeping
 even 'unsupported' versions on this page, so (e.g. case of 
 check_postgres.pl)
 clients can check if they have the latest revision, even if the major/minor
 combo is super old?

 No, I don't think we should. We should list supported versions only.
 And check_postgres could be advised to throw a warning at least if
 you're running an unsupported version ;)

 I'm not sure how useful that is. Surely while we encourage people to run
 a recent major version, we also want to encourage people who will not
 or cannot upgrade to at least be running the latest revision of a branch,
 no matter how old it is?

We don't support 7.3. Not even if you run the latest version.


 How about a compromise? We add a new field to that XML so we can state
 that it is unsupported, but leave it in there. That way, programs such
 as check_postgres can not only distinguish between old but valid versions
 and invalid versions (e.g. 7.typo.oops) but can act in a more intelligent
 way for unsupported versions. Heck, maybe an estimated end-of-life date
 field for all versions as well?

How do you add that field in a backwards compatible way? Meaning that
people or tools relying on it should *not* see 7.3 or 6.1 or whatever.
And it needs to be done within the RSS spec (which does allow custom
namespaces though, so that may not be a problem)

As for an estimated end-of-life, yes, we could definitely add that.
Now that we finally have it :-)


 Either way, please add 7.4 back in. :)

Done, will be on in the next site rebuild.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problems backing up

2010-01-31 Thread Tom Lane
Sebastien Boisvert sebastienboisv...@yahoo.com writes:
 [ COPY fails to dump a 138MB bytea column ]

If you can't switch to a 64-bit build of Postgres, you might need to
think about converting those byteas to large objects.  It's expected for
COPY to require memory space equal to several times the width of the row
it's trying to dump, because it has to have multiple copies of the row
data in order to do format conversion.  In this case the required space
seems to be going well past 1GB, so I wonder whether you are doing
anything that exacerbates the memory requirement, for instance by
forcing an encoding conversion to something other than the database's
server_encoding.
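
One quick way to check whether such a conversion is in play (a suggestion, not
part of the original reply) is to compare the two settings from the session
that runs the dump; if they differ, COPY converts its output between them:

SHOW server_encoding;
SHOW client_encoding;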

But having said that, it seems a bit odd that it is failing at what
seems to be only around 2GB of memory demand.  Even in a 32-bit build
I'd expect there to be enough address space for that, especially in
view of your very small (arguably too small) shared_buffers setting.
Are you sure the postmaster is really being started with 'unlimited'
ulimits?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Which version will this run on?

2010-01-31 Thread Mads Lie Jensen
Hi

I have this:

SELECT  pg_catalog.pg_get_constraintdef(r.oid, true) AS condef
FROM pg_catalog.pg_constraint r,
  pg_catalog.pg_class c
WHERE c.oid=r.conrelid
  AND r.contype = 'f'
  AND c.relname = 'table_name'

which gives me the foreign keys of a given table in the database.
It is working perfectly on my 8.4.2 version of PostgreSQL.

But what other versions of postgresql will it run on?

It's used in a PHP project, and I would like to know which minimum
version of Postgres I should recommend.

Thanks in advance.

-- 
Mads Lie Jensen - m...@gartneriet.dk - ICQ #25478403
Gartneriet - http://www.gartneriet.dk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Determine if postmaster can accept new connections

2010-01-31 Thread Sebastien Boisvert
I'm not sure if this is the best list to ask on... I need to know whether the 
server is able to accept connections - is there a way to call 
canAcceptConnections() from the front end somehow?

Thanks.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Which version will this run on?

2010-01-31 Thread Tom Lane
Mads Lie Jensen m...@gartneriet.dk writes:
 SELECT  pg_catalog.pg_get_constraintdef(r.oid, true) AS condef
 FROM pg_catalog.pg_constraint r,
   pg_catalog.pg_class c
 WHERE c.oid=r.conrelid
   AND r.contype = 'f'
   AND c.relname = 'table_name'

 which gives me the foreign keys of a given table in the database.
 It is working perfectly on my 8.4.2-version of pstgresql.

 But what other versions of postgresql will it run on?

A quick test says that it works back to 7.4, which is the oldest
version that is supported at all anymore.  I don't think you need
to worry too much.
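
(Editorial aside: if the PHP side wants to double-check at run time, a plain

SELECT version();

returns the server version string on all of those releases and can be parsed
on the client.)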

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] back out configure options

2010-01-31 Thread zhong ming wu
Thanks.  That works nicely.

On Tue, Jan 26, 2010 at 8:00 PM, Greg Smith g...@2ndquadrant.com wrote:
 zhong ming wu wrote:

 Is there a way to figure out from binaries what options were used to
 compile/config?  For example with apache I can do httpd -l


 pg_config is what you're looking for.

 In some distributions, this may not be installed by default with the rest of
 the server.  For example, in the RPM version you need the postgresql-devel
 package to have it available.

 --
 Greg Smith    2ndQuadrant   Baltimore, MD
 PostgreSQL Training, Services and Support
 g...@2ndquadrant.com  www.2ndQuadrant.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] dynamic crosstab

2010-01-31 Thread Florent THOMAS
Hello everybody,

I'm trying to find out how to have a dynamic crosstab as in Excel,
iReport, etc.
As I understand the manual here:
http://docs.postgresqlfr.org/8.4/tablefunc.html
I can have multiple columns.

Unfortunately, it seems indispensable to name the columns in the AS
clause.
Am I right, or is there a way to let the query generate the columns and
their names without naming them explicitly?

Best regards

Florent THOMAS


[GENERAL] problem with triggers

2010-01-31 Thread Florent THOMAS
Hi everybody,

I have a problem with 2 triggers.

I work with 3 tables:
Table A ==> with one AFTER INSERT trigger that inserts values into table B
Table B ==> with one AFTER INSERT trigger that inserts values into table C
Table C
When I insert values into table A, I get a message indicating that the
EXECUTE statement is null.
I wonder if it is because the 2nd insert is sent before the first one has
finished.
In this case, how do I configure PostgreSQL to accept this second insertion?

Best regards



Re: [GENERAL] problem with triggers

2010-01-31 Thread Scott Marlowe
On Sun, Jan 31, 2010 at 4:53 PM, Florent THOMAS mailingl...@tdeo.fr wrote:
 Hi everybody,

 I have a problem with 2 triggers.

 I work with 3 tables:
 Table A ==> with one AFTER INSERT trigger that inserts values into table B
 Table B ==> with one AFTER INSERT trigger that inserts values into table C
 Table C
 When I insert values into table A, I get a message indicating that the EXECUTE
 statement is null.
 I wonder if it is because the 2nd insert is sent before the first one has
 finished.
 In this case, how do I configure PostgreSQL to accept this second insertion?

OK, that's a good overview, but it would help if you had a simple
self-contained test case to post so we could reproduce what you're
seeing.
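
By way of illustration only (names invented, not Florent's actual schema), a
self-contained skeleton of the arrangement being described would look something
like this - and chained AFTER INSERT triggers of this kind are allowed by
default:

CREATE TABLE a (val integer);
CREATE TABLE b (val integer);
CREATE TABLE c (val integer);

CREATE OR REPLACE FUNCTION a_to_b() RETURNS trigger AS $$
BEGIN
    INSERT INTO b VALUES (NEW.val);   -- fires b's trigger in turn
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION b_to_c() RETURNS trigger AS $$
BEGIN
    INSERT INTO c VALUES (NEW.val);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER a_ai AFTER INSERT ON a FOR EACH ROW EXECUTE PROCEDURE a_to_b();
CREATE TRIGGER b_ai AFTER INSERT ON b FOR EACH ROW EXECUTE PROCEDURE b_to_c();

INSERT INTO a VALUES (1);   -- should end up in a, b and c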

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dynamic crosstab

2010-01-31 Thread Joe Conway
On 01/31/2010 03:52 PM, Florent THOMAS wrote:
 Hello everybody,
 
 I'm trying to find out how to have a dynamic crosstab as in Excel,
 iReport, etc.
 As I understand the manual here:
 http://docs.postgresqlfr.org/8.4/tablefunc.html
 I can have multiple columns.

 Unfortunately, it seems indispensable to name the columns in the AS clause.
 Am I right, or is there a way to let the query generate the columns and
 their names without naming them explicitly?

Wow, second time this week this has come up. Maybe it ought to be an FAQ.

Anyway, your best bet is to use crosstab from contrib/tablefunc, and
wrap it with application code that dynamically executes the query with
the needed column definitions. It is a simple two step process:

Using crosstab(text source_sql, text category_sql),

 - first execute category_sql to get a list of columns
 - dynamically build the complete crosstab SQL including the columns
 - execute the crosstab SQL

The parser/planner requires the column type information because the
result is potentially filtered (WHERE clause) or joined (FROM CLAUSE)
with other relations. There is no way around this, at least not
currently, and probably not ever in this form. If PostgreSQL ever
supports true procedures (i.e. CALL sp_crosstab(...)), then it would be
possible to forego the column definitions as joining and filtering would
not be possible in that scenario.
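
As a concrete sketch of that two-step process (the table and column names are
invented for the example, and month is assumed to be an integer column):

-- Step 1, run by the application: find out what the columns will be.
SELECT DISTINCT month FROM sales ORDER BY 1;
-- say it returns 1, 2, 3

-- Step 2, built and run by the application from that list:
SELECT *
FROM crosstab(
    $$SELECT product, month, sum(amount)
      FROM sales GROUP BY product, month ORDER BY product, month$$,
    $$SELECT DISTINCT month FROM sales ORDER BY 1$$
) AS ct(product text, m1 numeric, m2 numeric, m3 numeric);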

Joe



signature.asc
Description: OpenPGP digital signature


[GENERAL] How to test my new install

2010-01-31 Thread ray
I have just installed 8.4 on an XP machine.  My intent is to use it with Trac
and Apache.

I would like to validate the installation of pgsql.  What would be a
good method to make sure that pgsql is installed right?

Ray

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Can LISTEN/NOTIFY deal with more than 100 every second?

2010-01-31 Thread Gavin Mu
Hi,

I am prototyping a system which sends all INSERT/UPDATE/DELETE events
to a third-party piece of software. I do:

CREATE TABLE data (id Serial PRIMARY KEY, data VARCHAR(255));
CREATE TABLE log (op CHAR(6), id integer, data VARCHAR(255));
CREATE OR REPLACE RULE send_notify AS ON INSERT TO log DO ALSO NOTIFY logevent;
CREATE OR REPLACE FUNCTION log_event() RETURNS TRIGGER AS $$
BEGIN
IF (TG_OP = 'DELETE') THEN
INSERT INTO log VALUES ('DELETE', OLD.id, OLD.data);
ELSIF (TG_OP = 'UPDATE') THEN
INSERT INTO log VALUES ('UPDATE', NEW.id, NEW.data);
ELSIF (TG_OP = 'INSERT') THEN
INSERT INTO log VALUES ('INSERT', NEW.id, NEW.data);
END IF;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_event_trigger AFTER INSERT OR UPDATE OR DELETE ON
data FOR EACH ROW EXECUTE PROCEDURE log_event();

A simple client program is used to wait for the NOTIFY logevent, query the
log table to send the changes, and then delete what it has sent.
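
For reference, a sketch of the SQL such a client might issue (assuming
PostgreSQL 8.2 or later for DELETE ... RETURNING); the waiting loop and the
forwarding to the third party live in the client program itself:

LISTEN logevent;

-- then, each time a notification arrives, drain the queue atomically:
BEGIN;
DELETE FROM log RETURNING op, id, data;  -- forward the returned rows,
COMMIT;                                  -- or ROLLBACK if forwarding fails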

When I insert data into TABLE data at a rate of about 25 rows every
second, the client receives the notifications without any problem, but
when I use 3 similar programs to feed data, which means about 75
events every second, I found that Postgres didn't deliver the NOTIFY
promptly: the client ended up running its SELECT query only every several
hundred seconds, which is too long to be acceptable.

So what I want to know is: is there anything wrong with my idea, and what
notification frequency can LISTEN/NOTIFY support? Thanks.

Regards,
Gavin Mu

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] how to look for duplicate rows?

2010-01-31 Thread Jayadevan M
select distinct a.* from test a, test b where a.fname = b.fname
and a.lname = b.lname
and a.sn <> b.sn;
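
A variation along the same lines (just a sketch against the same test table)
that lists each duplicated name pair once, with a count:

select fname, lname, count(*) as copies
from test
group by fname, lname
having count(*) > 1;
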
Regards,
Jayadevan




From:   zach cruise zachc1...@gmail.com
To: pgsql-general@postgresql.org
Date:   01/29/2010 10:09 PM
Subject:[GENERAL] how to look for duplicate rows?
Sent by:pgsql-general-ow...@postgresql.org



I have to clean a table that looks like so:

create table test (sn integer, fname varchar(10), lname varchar(10));
insert into test values (1, 'adam', 'lambert');
insert into test values (2, 'john', 'mayer');
insert into test values (3, 'john', 'mayer');
insert into test values (4, 'mary', 'kay');
insert into test values (5, 'john', 'mayer');
insert into test values (6, 'susan', 'boyle');
insert into test values (7, 'susan', 'boyle');
insert into test values (8, 'mark', 'ingram');

For that, I need to run a query that returns something like this:

result:
is_not_distinct
2, 3, 5
6, 7

using 8.1.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problem with triggers

2010-01-31 Thread Florent THOMAS
Thank you for answering so fast,

After a good night, I found the solution.
There was a problem with a variable that had the same name as a field
referenced in the query I execute.
So everything looks fine now!

Thanks a lot

On Sunday 31 January 2010 at 16:55 -0700, Scott Marlowe wrote:

 On Sun, Jan 31, 2010 at 4:53 PM, Florent THOMAS mailingl...@tdeo.fr wrote:
  Hi everybody,

  I have a problem with 2 triggers.

  I work with 3 tables:
  Table A ==> with one AFTER INSERT trigger that inserts values into table B
  Table B ==> with one AFTER INSERT trigger that inserts values into table C
  Table C
  When I insert values into table A, I get a message indicating that the EXECUTE
  statement is null.
  I wonder if it is because the 2nd insert is sent before the first one has
  finished.
  In this case, how do I configure PostgreSQL to accept this second insertion?
 
 OK, that's a good overview, but it would help if you had a simple
 self-contained test case to post so we could reproduce what you're
 seeing.