Re: [GENERAL] PHP and PostgreSQL boolean data type

2010-02-11 Thread Torsten Zühlsdorff

Thom Brown wrote:


A long-standing problem we've had with PostgreSQL queries in PHP is
that the returned data for boolean columns is the string 'f' instead
of the native boolean value of false.


This problem has been solved for nearly five years now by PDO. If you 
want to save time while using PDO, you can use an abstraction layer like 
DDDBL (see my signature).


Greetings from Germany,
Torsten

--
http://www.dddbl.de - a database layer that abstracts working with 8 
different database systems,
separates queries from the application, and can automatically evaluate 
query results.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] xpath

2010-02-11 Thread MOLINA BRAVO FELIPE DE JESUS
Try casting to xml: xml_payload::xml
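
A minimal sketch of that fix against the table from the quoted message
(untested here; xpath() is declared to take xml, not text, which is why
the server reports that no function matches):

SELECT a.id, xpath('/doc/name/@first', a.xml_payload::xml)
FROM staging.simple_table a
WHERE a.id = 1;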




On Wed, 2010-02-10 at 12:39 +0300, Allan Kamau wrote:
 As advised by Peter,
 Below is an example (including the DDL and DML statements); it _drops_
 and creates a table called simple_table and a sequence called
 simple_table_seq, both in the public schema.
 
 DROP SEQUENCE IF EXISTS simple_table_seq CASCADE;
 CREATE SEQUENCE simple_table_seq;
 DROP TABLE IF EXISTS simple_table CASCADE;
 CREATE TABLE simple_table
 (id INTEGER NOT NULL DEFAULT NEXTVAL('simple_table_seq')
 ,xml_payload TEXT
 ,PRIMARY KEY(id)
 )
 ;
 INSERT INTO simple_table
 (
 id
 ,xml_payload
 )
 SELECT
 nextval('simple_table_seq') AS id
 ,'<doc><name first="David" last="Marston"/>some text</doc>' AS xml_payload
 ;
 SELECT a.id,a.xml_payload FROM simple_table a LIMIT 1;
 SELECT xpath('/doc/name/@first',SELECT a.xml_payload FROM simple_table
 a LIMIT 1);
 SELECT a.id,xpath('/doc/name/@first',a.xml_payload) FROM simple_table a LIMIT 
 1;
 
 SELECT xpath('/doc/name/@first','<doc><name first="David"
 last="Marston"/>some text</doc>');
 
 DROP SEQUENCE IF EXISTS simple_table_seq CASCADE;
 DROP TABLE IF EXISTS simple_table CASCADE;
 
 
 Allan.
 
 On Wed, Feb 10, 2010 at 11:34 AM, Otandeka Simon Peter
 sotand...@gmail.com wrote:
  Allan,
 
  Postgres is very strict on variable types and char conversion.  I have a
  feeling you are trying to access data from a varchar field using an
  integer...
 
  Can you paste here your schema for that table?
 
  P.
 
  On Wed, Feb 10, 2010 at 11:06 AM, Allan Kamau kamaual...@gmail.com wrote:
 
  Hi,
  I am running postgreSQL-8.4.2. I have a table that stores a single xml
  document per row in one of its fields. I would like to use xpath to
  retrieve portions of these xml documents.
  Is there a way to do so? (I am running postgreSQL 8.4.2 configured
  (built) with --with-libxml and --with-libxslt options)
 
  I have looked at 'xpath' but I am unable to get it to work for table fields.
 
  The command below works.
  SELECT xpath('/doc/name/@first','<doc><name first="David"
  last="Marston"/>...</doc>');
 
  The command below seems not to execute successfully
  SELECT a.id,xpath('/doc/name/@first',a.xml_payload) FROM
  staging.simple_table a WHERE a.id=1;
 
  HINT:  No function matches the given name and argument types. You
  might need to add explicit type casts.
 
 
  Allan.
 
 
 
 



Re: [GENERAL] Multiple buffer cache?

2010-02-11 Thread Alexei Vladishev

Greg,

Let's look at this from an application developer's perspective. Suppose my 
application has a few hundred tables. I know _very_well_ how the 
tables are used. I'd like to tune PostgreSQL so that it respects 
how the application works, in order to get the best possible performance.


My database is 500GB. One set of tables, configuration and operational 
data, takes around 20GB. Historical data takes 480GB.


Configuration: 20GB, lots of selects and updates. Latency is important.
History: 480GB, frequent inserts, selects. Latency is important as well.

Since PostgreSQL has only one buffer cache, operations on one set of 
tables affect the other set. A bunch of selects from the historical 
tables could easily cause cache pollution.


Pinning buffers is a nice feature, but IMHO it would be hard to tune and 
control. However, I think that pinning relations (tables) would help a 
lot. In this case all configuration tables could be pinned, so 
selects and updates would be very fast.


Alternatively, if PostgreSQL supported multiple buffer caches, I would 
assign the relatively small configuration tables to one buffer and the 
historical tables to another, so they wouldn't affect each other. It 
could also work at the database level, so that a database could be 
assigned to one of the available buffers.


Does it make sense?

Kind regards,
Alexei

Greg Stark wrote:


I doubt pinning buffers would ever improve performance on any halfway 
modern system. It will often *look* like it has improved performance 
because it improves the performance of the queries you're looking at -- 
but at the expense of slowing down everything else.


There is a use case it would be useful for, though: when you have 
queries that are latency-critical. Then you might want to pin the 
buffers those queries use, to avoid having larger, less urgent queries 
purge those buffers.


If we had a way to mark latency-critical queries, that might be a more 
flexible interface, but we would need some way to control just how 
critical they are. We wouldn't want to keep those buffers pinned forever.


greg

On 7 Feb 2010 07:36, Tom Lane t...@sss.pgh.pa.us 
mailto:t...@sss.pgh.pa.us wrote:


Greg Smith g...@2ndquadrant.com mailto:g...@2ndquadrant.com writes:
 ...  Part of the reason this feature hasn't been perceived as more
 valuable is because just letting the two cache levels involv...

Or to put it even more clearly: the only way that pinning particular
tables into RAM could beat letting the caching algorithm take care of
it is if the caching algorithm is seriously suboptimal.  Therefore,
it would make more sense to put development effort into improving
the cache algorithm than to put it into giving the DBA a rather blunt
instrument for overriding the cache algorithm.  We've already made
several rounds of improvements of that kind, and are quite happy to
consider more.

   regards, tom lane








Re: [GENERAL] Multiple buffer cache?

2010-02-11 Thread Alexei Vladishev

Greg,


Alexei Vladishev wrote:
Is there a way of configuring PostgreSQL so that one specific table would
use, say, 4GB of buffer cache while other tables would use the rest?


It sounds like you're looking for what other databases call 
pinning.  It's not supported in PostgreSQL right now, and as far as 
I know it's not on anybody's hotlist of features they're working on.  
It would be straightforward to add actually; I know exactly where the 
code that evicts pages from the buffer cache would need to be tweaked 
to support this.  See the "Inside the PostgreSQL Buffer Cache" 
presentation at http://www.westnet.com/~gsmith/content/postgresql/ for 
more details about how the current implementation works.  I'd be happy to 
talk about what it would take to sponsor the bit of development 
required if this is something you really need for your app--it's not a 
giant feature to build, just not one that anyone has needed badly 
enough so far to bother writing.

Thank you for the link. Lots of useful information there.

I would like to keep the table and its indexes always in a hot state, 
so that other queries won't pollute this part of the buffer cache. It 
would ensure reliable performance and far fewer disk IOPS when working 
with the table.


The PostgreSQL design presumes that the OS cache is significantly 
larger than the RAM dedicated to the database, so a database cache 
miss doesn't necessarily turn into physical I/O.  Part of the reason 
this feature hasn't been perceived as more valuable is because 
just letting the two cache levels involved here sort out what's really 
valuable or not can often outperform what an application developer 
thinks the optimal configuration will be.

Interesting! I assumed it was common practice to dedicate most of the 
RAM to the database engine, leaving only a small percentage to the OS.


Kind regards,
Alexei



Re: [GENERAL] Multiple buffer cache?

2010-02-11 Thread Alexei Vladishev

Bret,


And then, even if the support is there, you'd need to outline exactly
how you're planning on pushing this button.

Specifically, what's your usage pattern that would make this a
win for you?
  

Let me explain. I have a very busy application generating thousands
of SQL statements per second.
There is an application-level cache built into the application already.

The important part is that once per hour the application writes
collected data to huge historical
tables (100M up to billions of records, partitioned). Since this
happens every hour, the database buffer
cache has already been overwritten by data and indexes of other
tables, so the write operation is very
slow and requires a huge number of disk seeks, causing a 50-100x drop
in performance.



The disk seeks will happen regardless of what Postgres does, as the
OS pulls in new disk blocks to perform the write. If your OS' buffer
cache is large enough to hold all the data you need, then your
best bet is likely partitioning data across multiple disks, so that
queuing the archive reads doesn't get in the way of production reads.

As I'm a unix admin mostly, I'm not qualified to give advice on whether
or not that's possible, or how to do it if it is ;)
  
I was talking about read seeks, obviously caused by index-related 
searches. Write operations do not cause latency issues, as they are 
handled quite well by the OS, controller, HDD, or whatever write cache.


So, my idea is to assign a separate buffer cache to the historical
tables. It would guarantee that index data is always cached, so the
write operation would be very fast.

Is this possible? Are there any other techniques available?



If it were at all possible, I'd actually set up a secondary archiving
server (unless you need the historical data on tap for the production
system as well), either on another port on the same machine, or on
another machine which won't impact your production system if it has to
suddenly do a bunch of disk I/O, and log the history to that.
  
I agree. Two separate servers would be a nice solution, as the usage 
patterns are absolutely different, so the servers could be tuned 
differently. Lack of transactional integrity is an obvious drawback of 
such an approach.

Kind regards,
Alexei



[GENERAL] Help Join Tables

2010-02-11 Thread BlackMage

Hey,

I'm having some trouble trying to join tables because I'm trying to join
data in a column to a column name. An example looks like this:

Table 1:
field_names (varchar 255) | field_title (varchar 255)
name_field                | User Name
interest_field            | User Interest
number_field              | User Number

Table 2:
user_id | name_field | interest_field | number_field
1       | John       | Baseball       | 555-

For the beginning of a sql query I have SELECT table2.name_field,
table2.interest_field, table2.number_field, table1.field_title JOIN table1
ON ?? WHERE table2.userid=1

So basically I am trying to match up the column name with the field. Can
anyone help me with this?
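
One way to express such a match, sketched with the names from the example
above (the subquery unpivots table2 so its column names become joinable
data; untested):

SELECT t1.field_title, v.field_value
FROM (SELECT user_id, 'name_field' AS field_names, name_field AS field_value
      FROM table2
      UNION ALL
      SELECT user_id, 'interest_field', interest_field FROM table2
      UNION ALL
      SELECT user_id, 'number_field', number_field FROM table2) v
JOIN table1 t1 ON t1.field_names = v.field_names
WHERE v.user_id = 1;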

Thanks





Re: [Pgsqlrpms-hackers] [GENERAL] weird bug in rebuilding RPMs

2010-02-11 Thread Devrim GÜNDÜZ
On Mon, 2010-02-08 at 10:33 -0500, Tom Lane wrote:
 But having said that, I don't get the point of trying to build a
 nonstandard installation from the RPM.  That seems more or less
 antithetical to most of the design concepts of RPM-based distros;
 and it certainly seems pretty silly if your intent is to transition
 to the standard RPM later. 

Right.

Also, I think using _prefix is not supported? Isn't it considered
relocation of the postgresql rpm?
-- 
Devrim GÜNDÜZ
Professional PostgreSQL Services, Training, 24x7 Support
Authors: PostgreSQL RPMs, PITRTools, PostgreSQL Replicator 
http://www.commandprompt.com/ XMPP: dgun...@jabber.commandprompt.com
Twitter: http://twitter.com/devrimgunduz




[GENERAL] Cache lookup failed for relation message in PG 8.3.7

2010-02-11 Thread Keaton Adams
Any ideas why we would be receiving this cache lookup failed message?

PostgreSQL 8.3.7 64 bit, RHEL 5 64 bit OS

Linux hostname.net 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64 
x86_64 x86_64 GNU/Linux


2010-02-07 08:05:36 MST ERROR:  cache lookup failed for relation 391262678
2010-02-07 08:05:36 MST STATEMENT:  SELECT n.nspname as "Schema",
  c.relname as "Name",
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' 
THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
  r.rolname as "Owner"
FROM pg_catalog.pg_class c
 JOIN pg_catalog.pg_roles r ON r.oid = c.relowner
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','')
  AND n.nspname <> 'pg_catalog'
  AND n.nspname !~ '^pg_toast'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;

Thanks.



[GENERAL] error migrating database from 8.4 to 8.3

2010-02-11 Thread Marc Lustig
Due to a server issue we needed to reinstall Ubuntu, along with a 
downgrade to Ubuntu Hardy.


So this is what we did:

- copied all from /var/lib/postgresql/8.4/main/ to the new server 
/var/lib/postgresql/8.3/main/
- edited /var/lib/postgresql/8.3/main/postmaster.opts to correct the path

Now trying to start the server results in 

* Error: The server must be started under the locale : which does not exist any 
more.

I googled and found that people ran into this problem due to different 
architectures (32- vs 64-bit).
In this case, the architecture is definitely the same.

The only difference is that the database comes from an 8.4 installation and the 
server is 8.3.

Please help me solve this locale issue.

thanks!


[GENERAL] trouble with unique constraint

2010-02-11 Thread Khin, Gerald
The following SQL leads to a unique constraint violation error message
(PostgreSQL 8.4.1):

create table test (val integer);

create unique index test_uni on test(val);

insert into test (val) values (1);
insert into test (val) values (2);
insert into test (val) values (3);

update test set val = val + 1;

But it works fine with Oracle, MSSQL and DB2. Any idea how to make it
work with PostgreSQL as well?



[GENERAL] migrating data from 8.4 to 8.3

2010-02-11 Thread Marc Lustig
Hi,
can we safely put the data from /var/lib/postgresql/8.4/main/ of server x to 
/var/lib/postgresql/8.3/main/ of server y, considering that the new target 
machine is running 8.3 whereas the old one ran 8.4 ?

Regards
Marc


Re: [GENERAL] problems maintaining boolean columns in a large table

2010-02-11 Thread Timo Klecker
Hi Ben,

could you post your trigger function? When you need to rebuild the index,
you could disable the trigger that sets the flag when the article is
modified. This could speed up your UPDATE.



Kind regards,
Timo Klecker 




-----Original Message-----
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On behalf of Ben Campbell
Sent: Tuesday, 9 February 2010 12:26
To: pgsql-general@postgresql.org
Subject: [GENERAL] problems maintaining boolean columns in a large table

I've got a database that holds a bunch of articles in a table called 
'article'. It has a bunch of columns, and each row might hold a few KB 
of data, say.
I'm maintaining a separate fulltext database, and so I added a boolean 
flag, 'needs_indexing' to my 'article' table to keep track of which 
articles have been indexed (and I have some trigger functions on 
'article' to automatically set the flag if the article is modified).

It all works fine.
Except when I want to rebuild my index from scratch. I need to set all 
those flags, but it takes _ages_ to do UPDATE article SET 
needs_indexing=true; (many hours at least - I've never let it run to 
completion).

I _think_ the reason it takes so long is that postgresql doesn't modify 
rows in place - it creates an entry for the modified row and zaps the 
old one. So by touching _every_ row I'm basically forcing it to rebuild 
my whole database... I've got about 2 million rows in 'articles'.
There are a few indexes on columns in 'articles' which obviously will 
slow things down too.

I've had a minor attempt at tuning (increased checkpoint_segments) and 
I'm sure there are a bunch of other tricks I could use to bulk-set that 
flag in much less time...

But my gut feeling is that the flag would be better off in its own 
table anyway, e.g.:

CREATE TABLE needs_indexing (
   article_id integer references article(id)
);

So, if an article is listed in this table, it needs indexing.
(maybe with a constraint to ensure uniqueness - I only need articles 
entered once in this table)
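
A sketch of that table with the uniqueness constraint included, plus a
bulk re-queue that avoids rewriting 'article' itself (untested; assumes
the column names above):

CREATE TABLE needs_indexing (
   article_id integer PRIMARY KEY REFERENCES article(id)
);

-- queue every article for re-indexing without updating 'article':
INSERT INTO needs_indexing (article_id)
SELECT id FROM article;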

Does this sound like a reasonable way to go?
Any advice or insight welcome!

Thanks,
Ben.






[GENERAL] recovering fs-data from previous installation

2010-02-11 Thread Marc Lustig
I managed to install postgresql-8.4 again from hardy-backports.
Now the installation is identical to the previous one.

With the fresh database, the server starts up fine.

Now I copied all from the backup to 
/usr/lib/postgresql/8.4
/var/lib/postgresql/8.4

Then the startup fails like this:

 * Starting PostgreSQL 8.4 database server  

  * Error: could not exec 
/usr/lib/postgresql/8.4/bin/pg_ctl /usr/lib/postgresql/8.4/bin/pg_ctl start -D 
/var/lib/postgresql/8.4/main -l /var/log/postgresql/postgresql-8.4-main.log -s 
-o  -c config_file=/etc/postgresql/8.4/main/postgresql.conf : 

the log-file is empty.

Can you please help me get postgresql to start again using the previous 
database?
I suppose there should be no serious issues, as the version of postgresql is 
identical now.


[GENERAL] Manipulating Large Object datatype in Postgresql

2010-02-11 Thread Sharmila Jothirajah
Hi,
We have tables with data types CLOB and BLOB (in Oracle). These need to be 
migrated to PostgreSQL. What data types can be used for this? I've done some 
research/searching on this and found that (correct me if I'm wrong):
1. For CLOBs the equivalents are TEXT and OID (lob),
but streaming (through JDBC) is possible only with OIDs.
2. For BLOBs the equivalents are OID (lob) and bytea (again, streaming is 
possible only with OID).

Questions:
1. Which is the better approach for CLOB/BLOB? I need streaming, since 
the data are pretty big.
2. How is UPDATE/DELETE handled with the lob data type?


Would it update the referenced data?  Or would it create a new lob
and update the pointer, keeping the old data somewhere?  Or would that
depend on our implementation?  Would it make a difference if the new value
was null? 

Thanks




  



[GENERAL] Fwd: [pgadmin-support] a quick question

2010-02-11 Thread Farrer, Rhys
Hi, I was told to email you about this question...

Cheers,
Rhys

Begin forwarded message:

From: Guillaume Lelarge guilla...@lelarge.infomailto:guilla...@lelarge.info
Date: 8 February 2010 10:41:57 GMT
To: Farrer, Rhys r.farre...@imperial.ac.ukmailto:r.farre...@imperial.ac.uk
Cc: pgadmin-supp...@postgresql.orgmailto:pgadmin-supp...@postgresql.org 
pgadmin-supp...@postgresql.orgmailto:pgadmin-supp...@postgresql.org
Subject: Re: [pgadmin-support] a quick question

On 05/02/2010 18:06, Farrer, Rhys wrote:
[...]

I have installed PostgreSQL 8.4 on my Mac OS X. Is there a postgresql-libs 
package that I can install? I can only seem to find those made for Unix.


I really don't know. You should better ask on the pgsql-general list.


--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com



[GENERAL] weird bug in rebuilding RPMs

2010-02-11 Thread kz win
Hello,

I'm a long-time user and I had always installed postgres by compiling
from source.
I recently moved to a new environment where everything is installed as
an rpm, and in addition
I need to install a newer postgres for testing while keeping the
existing version in place.
That led me to repackage the postgres rpm.

Because I will eventually be installing the Red Hat rpm, I took their
latest srpm and modified their rpm spec file
in order to let me install the rpm.  After trial and error I have
managed to rebuild the rpm to my satisfaction,
but I would like to report a minor bug.

I can now rebuild rpms so that all files go under a specific directory
specified by the _prefix directive in ~/.rpmmacros.
Forgetting lots of other bugs that I managed to fix, this weird bug is
forcing me to use a _prefix that does not include
the word pgsql anywhere in the path name.  For example, if the _prefix
path has the word pgsqL or pgsq or pgsqA then rebuilding is successful,
but if the path includes the word pgsql or pgsql-8.1.18 or pgsqll then
rebuilding breaks down.

It is failing at the install stage.  This is the command:
$ rpmbuild --bi spec
I always do rpmbuild --clean spec before any rebuilding.

What follows is the error message before it quits (the prefix is
/usr/local/pgsql/bla):
-
make: Leaving directory `/home/me/build/BUILD/postgresql-8.1.18/contrib/xml2'
+ case `uname -i` in
++ uname -i
++ uname -i
+ mv 
/var/tmp/postgresql-8.1.18-2.1-root//usr/local/pgsql/bla/include/pg_config.h
/var/tmp/postgresql-8.1.18-2.1-root//usr/local/pgsql/bla/include/pg_config
_x86_64.h
+ install -m 644 /home/me/build/SOURCES/pg_config.h
/var/tmp/postgresql-8.1.18-2.1-root//usr/local/pgsql/bla/include
++ uname -i
+ mv 
/var/tmp/postgresql-8.1.18-2.1-root//usr/local/pgsql/bla/include/pgsql/server/pg_config.h
/var/tmp/postgresql-8.1.18-2.1-root//usr/local/pgsql/bla/incl
ude/pgsql/server/pg_config_x86_64.h
mv: cannot stat
`/var/tmp/postgresql-8.1.18-2.1-root//usr/local/pgsql/bla/include/pgsql/server/pg_config.h':
No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.56667 (%install)


RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.56667 (%install)
-
If anyone else would find it useful, I can also post a diff of the
spec file, which will make it possible to repackage it, including
renaming it so that it does not override the existing version.  Other
bugs I encountered are of the nature: paths that should not be
hard-coded are hard-coded, and vice versa.

Thanks

mr.wu



Re: [GENERAL] Warning when selecting column from pg_stat_user_tables.

2010-02-11 Thread Greg Smith

Chris Barnes wrote:
 
I got this error when selecting from pg_stat_user_tables. I 
restarted postgres and the error went away.

Has anyone else seen this error?
[postg...@preventdb02 londiste]$ cat 
/data/pgsql/data/pg_log/postgresql-Tue.log

WARNING:  pgstat wait timeout


There have been periodic reports of these wait timeouts messing with the 
statistics results returned by the database for a while now; some examples:


http://archives.postgresql.org/pgsql-bugs/2009-07/msg00081.php
http://archives.postgresql.org/pgsql-bugs/2009-12/msg00175.php

I just ran into one of them myself recently, but wasn't able to reproduce 
it.  There seems to be something subtle going wrong in statistics 
collection; nobody seems exactly sure what it is yet.


I wouldn't get stressed about it unless these become frequent, in which 
case we'd probably want to ask you to collect more data about your 
system to try and catch more information about one of them when it shows up.


--
Greg Smith   2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [GENERAL] migrating data from 8.4 to 8.3

2010-02-11 Thread Thomas Kellerer

Marc Lustig, 08.02.2010 11:36:

Hi, can we safely put the data from /var/lib/postgresql/8.4/main/ of
server x to /var/lib/postgresql/8.3/main/ of server y, considering
that the new target machine is running 8.3 whereas the old one ran
8.4 ?


No, a dump & restore is required (as stated in the release notes).

You can also try pg_migrator. It was introduced with 8.4 and will upgrade the data 
in-place (i.e. without a dump and restore).
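
The dump & restore route can be sketched as follows (host names and the
database name are placeholders; an 8.4 dump may still fail to restore
into 8.3 if it uses 8.4-only features):

pg_dump -h oldhost -U postgres mydb > mydb.sql
createdb -h newhost -U postgres mydb
psql -h newhost -U postgres -d mydb -f mydb.sql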

Regards
Thomas




Re: [GENERAL] trouble with unique constraint

2010-02-11 Thread A. Kretschmer
In response to Khin, Gerald :
 The following SQL leads to a unique constraint violation error message
 (PostgreSQL 8.4.1):
 
 create table test (val integer);
 
 create unique index test_uni on test(val);
 
 insert into test (val) values (1);
 insert into test (val) values (2);
 insert into test (val) values (3);
 
 update test set val = val + 1;

update test set val = val + 10;
update test set val = val - 9;

That's the only way for now, but the upcoming release 9.0 contains
deferrable unique constraints.
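
For reference, a sketch of what the 9.0 feature looks like (assumes a
9.0 or later server; with a deferrable constraint the check is no
longer applied row by row, so the original UPDATE can succeed):

create table test (val integer,
                   unique (val) deferrable initially immediate);
insert into test (val) values (1), (2), (3);
update test set val = val + 1;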

Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] migrating data from 8.4 to 8.3

2010-02-11 Thread A. Kretschmer
In response to Marc Lustig :
 Hi,
 can we safely put the data from /var/lib/postgresql/8.4/main/ of server x to 
 /var/lib/postgresql/8.3/main/ of server y, considering that the new target 
 machine is running 8.3 whereas the old one ran 8.4 ?

No!

Make a regular backup and restore that backup. Note that 8.4 contains
features that are not in 8.3, so you may not be able to do this.

Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] migrating data from 8.4 to 8.3

2010-02-11 Thread A. Kretschmer
In response to Thomas Kellerer :
 Marc Lustig, 08.02.2010 11:36:
 Hi, can we safely put the data from /var/lib/postgresql/8.4/main/ of
 server x to /var/lib/postgresql/8.3/main/ of server y, considering
 that the new target machine is running 8.3 whereas the old one ran
 8.4 ?
 
 No, a dump & restore is required (as stated in the release notes)
 
 You can also try pg_migrator. It was introduced with 8.4 and will upgrade 
 the data in-place (i.e. without a dump and restore)

I don't believe that pg_migrator can do a downgrade ...

Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] migrating data from 8.4 to 8.3

2010-02-11 Thread Thomas Kellerer

A. Kretschmer, 11.02.2010 09:42:

In response to Thomas Kellerer :

Marc Lustig, 08.02.2010 11:36:

Hi, can we safely put the data from /var/lib/postgresql/8.4/main/ of
server x to /var/lib/postgresql/8.3/main/ of server y, considering
that the new target machine is running 8.3 whereas the old one ran
8.4 ?


No, a dump & restore is required (as stated in the release notes)

You can also try pg_migrator. It was introduced with 8.4 and will upgrade
the data in-place (i.e. without a dump and restore)


I don't believe that pg_migrator can do a downgrade ...


Ooops!

I read it the wrong way round (I thought the OP wanted to migrate from 8.3 to 
8.4...)

Regards
Thomas






Re: [GENERAL] Postgres Triggers issue

2010-02-11 Thread Albe Laurenz
u235sentinel wrote:
 I have a strange problem we noticed the other day with triggers.  We're 
 running 8.3.3 on Solaris 10 (Intel) and have a feed that comes in 
 regularly to populate a table we're working on.  The feed works just 
 fine inserting rows; however, the following trigger stops the feed until 
 we remove the trigger.  Any thoughts on what I'm doing wrong here?
 
 Thanks!
 
 ---
 
 CREATE OR REPLACE FUNCTION r.m_t()
 RETURNS trigger AS
 $BODY$
 BEGIN
  INSERT INTO temp_m_t VALUES (NEW.*,1+1);
 RETURN NULL;
 END;
 $BODY$
 LANGUAGE 'plpgsql';
 
 
 CREATE TRIGGER tafter
 AFTER INSERT OR UPDATE
 ON r.m_a
 FOR EACH ROW
 EXECUTE PROCEDURE r.m_t();

What do you mean by "stops the feed"?

Can you describe the behaviour in database terms?
What exactly happens, and how does it differ from what you expect?
Are there error messages? If yes, could you quote them?

Yours,
Laurenz Albe



[GENERAL] PostgreSQL Installation

2010-02-11 Thread db . subscriptions

Hi,

Please why is it that we must register at EnterpriseDB and register  
each Windows installation of postgreSQL these days?


At times, I need to install on a server that is not on the internet,  
and I have had to bear the pain of configuring the server for internet  
access before I can install PostgreSQL.


Please can't we bypass this or better still, we have a binary that has  
no 'strings' attached?


Please let me know.

Chris.





Re: [GENERAL] PostgreSQL Installation

2010-02-11 Thread Dave Page
On Thu, Feb 11, 2010 at 11:00 AM,  db.subscripti...@shepherdhill.biz wrote:
 Hi,

 Please why is it that we must register at EnterpriseDB and register each
 Windows installation of postgreSQL these days?

There is no requirement to register to use or download PostgreSQL.


-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com



Re: [GENERAL] PostgreSQL Installation

2010-02-11 Thread A. Kretschmer
In response to db.subscripti...@shepherdhill.biz :
 Hi,
 
 Please why is it that we must register at EnterpriseDB and register  
 each Windows installation of postgreSQL these days?

That's not true.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] PostgreSQL Installation

2010-02-11 Thread Ashesh Vashi
You can always download PostgreSQL installer for any supported platform
without any registration from
http://www.enterprisedb.com/products/pgdownload.do.


On Thu, Feb 11, 2010 at 4:30 PM, db.subscripti...@shepherdhill.biz wrote:

 Hi,

 Please why is it that we must register at EnterpriseDB and register each
 Windows installation of postgreSQL these days?

 At times, I need to install on a server that is not on the internet and
 have had to bear the pain of configuring a server for internet before I can
 install postgreSQL.

 Please can't we bypass this or better still, we have a binary that has no
 'strings' attached?

 Please let me know.

 Chris.






Re: [GENERAL] PostgreSQL Installation

2010-02-11 Thread db . subscriptions

Thanks.

I think I mistook postgres for postgres-plus.

Quoting Ashesh Vashi ashesh.va...@enterprisedb.com:


You can always download PostgreSQL installer for any supported platform
without any registration from
http://www.enterprisedb.com/products/pgdownload.do.


On Thu, Feb 11, 2010 at 4:30 PM, db.subscripti...@shepherdhill.biz wrote:


Hi,

Please why is it that we must register at EnterpriseDB and register each
Windows installation of postgreSQL these days?

At times, I need to install on a server that is not on the internet and
have had to bear the pain of configuring a server for internet before I can
install postgreSQL.

Please can't we bypass this or better still, we have a binary that has no
'strings' attached?

Please let me know.

Chris.












[GENERAL] COPY FROM wish list

2010-02-11 Thread Marc Mamin
Hello,

Looking at the TODO list, I feel that only some aspects of the COPY FROM
command are addressed.
Could a discussion trigger some activity on this topic  :o)  ?

Best regards,

Marc Mamin


Here my wish list:

COPY tablename [ ( column [, ...] ) ]
FROM { 'filename' | STDIN }
[ [ WITH ]

   [ CHECKONLY (DATA_TYPES,CONSTRAINTS) ]  # do not import, just check the content

   [ SKIPLINES (n) ]

   [ HEADERLINE (n) ]  # line containing the column names; must be within the lines to skip
                       # must match the column list when both are given
   [ DEFERCONSTRAINTS ]

   [ SKIPINVALID [FILE file] SKIPLIMIT (n) ]  # when set, invalid lines are skipped and possibly stored in file
                                              # an exception is triggered when more than SKIPLIMIT lines are found

   [ NULLREPLACEMENT (column = value [, ...]) ]  # allow a per-column handling of null values
                                                 # cannot be set along with NULL [ AS ]
                                                 # not usable for columns that are missing in the input file?

   [ FEEDBACK (n) ]  # display progress every n rows  # for ORACLE users...

   [ COMMIT (n) ]    # just an idea: commit each n rows

   [
      [ BINARY ]
      [ OIDS ]
      [ DELIMITER [ AS ] 'delimiter' ]
      [ NULL [ AS ] 'null string' ]
      [ CSV [ HEADER ]
            [ QUOTE [ AS ] 'quote' ]
            [ ESCAPE [ AS ] 'escape' ]
            [ FORCE NOT NULL column [, ...] ] ]
   ]



[GENERAL] Inserting exported bytea value

2010-02-11 Thread seiliki
I am trying to build SQL commands that insert values exported from a bytea 
column.  Output facilities do not escape single quotes for bytea column. As 
such, the built INSERT SQL can be invalid because single quotes can appear in 
values. For example, the built (invalid) INSERT SQL command can look like this:

INSERT INTO t1 VALUES ('ABC\\000C'DE');

Note the apostrophe between C and D.

Are there straightforward approaches to do this job using version 8.x?
Does the bytea hex export feature provided by version 9 help solve this problem?

Thank you in advance!

CN



Re: [GENERAL] Inserting exported bytea value

2010-02-11 Thread Pavel Stehule
Hello

Why don't you use the PQescapeBytea function?

http://www.postgresql.org/docs/8.4/interactive/libpq-exec.html#LIBPQ-EXEC-ESCAPE-BYTEA
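
For readers without libpq at hand, here is a rough sketch (Python, purely for
illustration) of the kind of escaping PQescapeBytea performs for the pre-9.0
"escape" bytea format, assuming the result is embedded in an E'...' literal:

```python
def escape_bytea(data: bytes) -> str:
    # Rough sketch of the 8.x "escape" bytea format, for an E'...' literal.
    out = []
    for b in data:
        if b == 0x27:                 # single quote -> \'  (safe inside E'...')
            out.append("\\'")
        elif b == 0x5C:               # backslash -> \\\\ (E-string level + bytea level)
            out.append("\\\\\\\\")
        elif b < 0x20 or b > 0x7E:    # non-printable byte -> \\ooo octal escape
            out.append("\\\\%03o" % b)
        else:
            out.append(chr(b))
    return "".join(out)

# The problematic value from the post: a NUL byte and an embedded quote.
val = b"ABC\x00C'DE"
sql = "INSERT INTO t1 VALUES (E'%s');" % escape_bytea(val)
print(sql)  # INSERT INTO t1 VALUES (E'ABC\\000C\'DE');
```

In real code, call PQescapeBytea/PQescapeByteaConn rather than hand-rolling
this; and yes, the 9.0 hex format ('\x' followed by hex digits) makes the
whole problem simpler.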

Regards
Pavel Stehule



2010/2/11  seil...@so-net.net.tw:
 I am trying to build SQL commands that insert values exported from a bytea 
 column.  Output facilities do not escape single quotes for bytea column. As 
 such, the built INSERT SQL can be invalid because single quotes can appear in 
 values. For example, the built (invalid) INSERT SQL command can look like 
 this:

 INSERT INTO t1 VALUES ('ABC\\000C'DE');

 Note the apostrophe between C and D.

 Are there straight approaches to do this job using version 8.x?
 Does the bytea hex export feature provided by version 9 help solve this 
 problem?

 Thank you in advance!

 CN





[GENERAL] Handling Numeric Datums in C

2010-02-11 Thread Alban Hertroys
Hey all,

I'm working on some server-side C code that involves a few calculations on a 
numeric value. The calculation is along the lines of:
(numeric) result = (numeric) value * ((int) base ^ (int) power);

What's the usual approach to write functions like these?

This is how far I got:

#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"
#include "executor/executor.h"
#include "utils/numeric.h"
#include "utils/builtins.h"

PG_FUNCTION_INFO_V1(unit_product__aggregate_state);

Datum
unit_product__aggregate_state(PG_FUNCTION_ARGS)
{
Datum state = PG_GETARG_DATUM(0);
HeapTupleHeader t   = PG_GETARG_HEAPTUPLEHEADER(1);

Datum base, scale, exponent;
Datum result;
bool isNull;

    base    = GetAttributeByName(t, "base", &isNull);
    if (isNull)
        PG_RETURN_NULL();

    scale   = GetAttributeByName(t, "scale", &isNull);
    if (isNull)
        PG_RETURN_NULL();

    exponent = GetAttributeByName(t, "exponent", &isNull);
    if (isNull)
        PG_RETURN_NULL();


/* state *= base ^ (scale * exponent) */
result = DirectFunctionCall2(int4mul, scale, exponent);
result = DirectFunctionCall2(numeric_power, base, result);
result = DirectFunctionCall2(numeric_mul, state, result);

PG_RETURN_NUMERIC(result);
}

But it crashes the server, probably because I'm passing ints where numerics are 
expected. Is this the right approach at all? And if so, how am I supposed to 
cast those ints to numerics?

Are there any examples around for dealing with numerics? The stuff in the 
documentation (http://www.postgresql.org/docs/8.4/interactive/xfunc-c.html) 
conveniently omits numerics, so I had to dig through doxygen to get as far as I 
am now...
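
A likely cause of the crash: scale and exponent are int4 Datums, but
numeric_power and numeric_mul expect numeric Datums (varlena pointers), so the
raw int values get dereferenced as pointers. Converting first — something like
result = DirectFunctionCall1(int4_numeric, result) before the numeric_power
call — should help (untested sketch, not a verified fix). The arithmetic the
function intends, mirrored in Python with Decimal and hypothetical values:

```python
from decimal import Decimal

def aggregate_state(state, base, scale, exponent):
    # state *= base ^ (scale * exponent)  -- the computation the C function intends
    return state * (Decimal(base) ** (scale * exponent))

print(aggregate_state(Decimal("2.5"), 10, 1, 3))  # prints 2500.0
```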

Regards,

Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.







[GENERAL] Fwd: Handling Numeric Datums in C

2010-02-11 Thread Alban Hertroys
I guess it'd be useful to mention what types the different fields are, doh!

Begin forwarded message:

 Hey all,
 
 I'm working on some server-side C code that involves a few calculations on a 
 numeric value. The calculation is along the lines of:
   (numeric) result = (numeric) value * ((int) base ^ (int) power);
 
 What's the usual approach to write functions like these?
 
 This is how far I got:
 
 #include "postgres.h"
 #include "fmgr.h"
 #include "executor/spi.h"
 #include "executor/executor.h"
 #include "utils/numeric.h"
 #include "utils/builtins.h"
 
 PG_FUNCTION_INFO_V1(unit_product__aggregate_state);
 
 Datum
 unit_product__aggregate_state(PG_FUNCTION_ARGS)
 {
Datum state= PG_GETARG_DATUM(0);

This is a numeric.

HeapTupleHeader t  = PG_GETARG_HEAPTUPLEHEADER(1);
 
Datum base, scale, exponent;

These are all ints.

Datum result;

This is a numeric again.

bool isNull;
 
base   = GetAttributeByName(t, "base", &isNull);
if (isNull)
   PG_RETURN_NULL();
 
scale  = GetAttributeByName(t, "scale", &isNull);
if (isNull)
   PG_RETURN_NULL();
 
exponent   = GetAttributeByName(t, "exponent", &isNull);
if (isNull)
   PG_RETURN_NULL();
 
 
/* state *= base ^ (scale * exponent) */
result = DirectFunctionCall2(int4mul, scale, exponent);
result = DirectFunctionCall2(numeric_power, base, result);
result = DirectFunctionCall2(numeric_mul, state, result);
 
PG_RETURN_NUMERIC(result);
 }
 
 But it crashes the server, probably because I'm passing ints where numerics 
 are expected. Is this the right approach at all? And if so, how am I supposed 
 to cast those ints to numerics?
 
 Are there any examples around for dealing with numerics? The stuff in the 
 documentation (http://www.postgresql.org/docs/8.4/interactive/xfunc-c.html) 
 conveniently omits numerics, so I had to dig through doxygen to get as far as 
 I am now...

Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.







[GENERAL] trouble with unique constraint

2010-02-11 Thread Khin, Gerald
The following SQL leads to a unique constraint violation error message
(PostgreSQL 8.4.1):

create table test (val integer);

create unique index test_uni on test(val);

insert into test (val) values (1);
insert into test (val) values (2);
insert into test (val) values (3);

update test set val = val + 1;

But it works fine with Oracle, MSSQL and DB2. Any idea how to make it
work with PostgreSQL as well?
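
For context: PostgreSQL 8.x checks a unique constraint row by row as the
UPDATE proceeds; DEFERRABLE unique constraints only arrived with 9.0. A common
workaround is to update the rows in descending order of val (or flip signs in
two steps). The row-at-a-time failure and the workaround can be illustrated
with SQLite, which checks the constraint the same way (illustration only, not
PostgreSQL itself):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (val INTEGER UNIQUE)")
con.executemany("INSERT INTO test (val) VALUES (?)", [(1,), (2,), (3,)])

# The single-statement UPDATE trips over the still-present value 2:
failed = False
try:
    con.execute("UPDATE test SET val = val + 1")
except sqlite3.IntegrityError:
    failed = True  # same per-row check that PostgreSQL 8.x performs

# Shifting rows from the highest value downwards never collides:
for (v,) in con.execute("SELECT val FROM test ORDER BY val DESC").fetchall():
    con.execute("UPDATE test SET val = val + 1 WHERE val = ?", (v,))

final = [r[0] for r in con.execute("SELECT val FROM test ORDER BY val")]
print(failed, final)  # True [2, 3, 4]
```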


Re: [GENERAL] problems maintaining boolean columns in a large table

2010-02-11 Thread Ben Campbell

Timo Klecker wrote:

could you post your trigger function? When you need to rebuild the index,
you could disable the trigger setting the flag if the article is modified.
This could speed up your UPDATE.


Embarrassingly, when I checked, I found that I'd never gotten around to 
writing that particular trigger function... (It's just being handled at 
the app level).
However, there _is_ a trigger function which sets another flag somewhere 
which I bet is responsible for a lot of the time... it sets a modified 
flag on any journalist associated with the article:



---

-- article table trigger
CREATE OR REPLACE FUNCTION article_setjournomodified_onupdate() RETURNS 
TRIGGER AS $$

BEGIN
-- whenever article is modified, set the modified flag on any 
attributed journos
UPDATE journo SET modified=true WHERE id IN (SELECT journo_id FROM 
journo_attr WHERE article_id=NEW.id);

return NULL;
END;
$$ LANGUAGE 'plpgsql';

CREATE TRIGGER article_update AFTER UPDATE ON article FOR EACH ROW 
EXECUTE PROCEDURE article_setjournomodified_onupdate();


---

(excuse the bad linebreaks!)
I bet the subselect in that trigger slows things down.
article_id in journo attr is a foreign key:

journo_attr_article_id_fkey FOREIGN KEY (article_id) REFERENCES 
article(id) ON DELETE CASCADE


Can the SELECT use such a foreign key index to speed things up? Or do I 
need to explicitly add another index? (and yes, I know that's a stupid 
newbie question!)
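
Not a stupid question: PostgreSQL does not automatically create an index on
the referencing column of a foreign key — only the referenced side needs (and
has) a unique index — so journo_attr.article_id needs an explicit CREATE INDEX
to speed up that subselect. SQLite behaves the same way here, so the effect
can be sketched with its EXPLAIN QUERY PLAN (tiny hypothetical schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE article (id INTEGER PRIMARY KEY);
    CREATE TABLE journo_attr (
        journo_id  INTEGER,
        article_id INTEGER REFERENCES article(id)  -- no index is created for this column
    );
""")

def plan():
    # Last column of the first EXPLAIN QUERY PLAN row is the plan detail.
    row = con.execute(
        "EXPLAIN QUERY PLAN SELECT journo_id FROM journo_attr WHERE article_id = 1"
    ).fetchone()
    return row[-1]

before = plan()  # e.g. "SCAN journo_attr" -- a full table scan
con.execute("CREATE INDEX journo_attr_article_id ON journo_attr (article_id)")
after = plan()   # e.g. "SEARCH journo_attr USING INDEX journo_attr_article_id ..."
print(before)
print(after)
```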


Either way, I'll have a go at disabling the trigger to see what impact 
it has on the bulk update of 'article.needs_indexing'!


Actually, I think it's a good argument for moving the needs_indexing 
flag out of the article table - modifying any other article fields 
should cause attributed journos to be marked 'modified', but the 
'needs_indexing' doesn't need to do this - it's just a little 
implementation detail rather than real data...

(and the same goes for 'journo.modified'!)

Thanks,
Ben.



Re: [GENERAL] windows7 login- user account

2010-02-11 Thread Justin Graf
On 2/10/2010 7:15 PM, paul e wrote:
 Before Installed postgresql Windows7 went straight to my user account.
 Now when it boots I have to go to a selection page where I choose 
 between my user account and a postgresql user account. Is there any 
 way to bypass this so it boots directly to my user account?
 

go here for instructions
http://www.howtogeek.com/howto/windows-vista/make-windows-vista-log-on-automatically/




Re: [GENERAL] Problem with pg_compresslog'd archives

2010-02-11 Thread Karl Denninger
Will this come through as a commit on the pgfoundry codebase?  I've
subscribed looking for it

The last edit, if I read the release notes and tracebacks on the
codebase correctly, goes back to the early part of 2009 - which strongly
implies that there are a **LOT** of people out there that could be
running this code with un-restoreable archives!

That, for obvious reasons, could be VERY, VERY bad if someone was to
suffer a system crash


Koichi Suzuki wrote:
 I found it's pg_compresslog problem (calculation of XNOOP record
 length used in pg_decompresslog).I'm fixing the bug and will
 upload the fix shortly.

 Sorry for inconvenience.

 --
 Koichi Suzuki

 2010/2/8 Karl Denninger k...@denninger.net:
   
 This may belong in a bug report, but I'll post it here first...

 There appears to be a **SERIOUS** problem with using pg_compresslog and
 pg_uncompresslog with Postgresql 8.4.2.

 Here's my configuration snippet:

 full_page_writes = on   # recover from partial page writes
 wal_buffers = 256kB # min 32kB
# (change requires restart)
 #wal_writer_delay = 200ms   # 1-10000 milliseconds

 #commit_delay = 0   # range 0-100000, in microseconds
 #commit_siblings = 5# range 1-1000

 # - Checkpoints -

 checkpoint_segments = 64# in logfile segments, min 1,
 16MB each
 #checkpoint_timeout = 5min  # range 30s-1h
 checkpoint_completion_target = 0.9  # checkpoint target duration,
 0.0 - 1.0
 #checkpoint_warning = 30s   # 0 disables

 archive_command = 'test ! -f /dbms/pg_archive/%f.bz2 && pg_compresslog
 %p | bzip2 - > /dbms/pg_archive/%f.bz2'   #command to use to
 archive a logfile segment

 All appears to be fine with the writes, and they are being saved off on
 the nightly backups without incident.

 I take a full dump using the instructions in the documentation and make
 sure I copy the proper must have file for consistency to be reached.

 The problem comes when I try to restore.

 recovery_conf contains:

 restore_command = '/usr/local/pgsql/recovery.sh %f %p'

 And that file contains:


 #! /bin/sh

 infile=$1
 outfile=$2

 if test -f /dbms/pg_archive/$infile.bz2
 then
bunzip2 -c /dbms/pg_archive/$infile.bz2 |
 /usr/local/pgsql/bin/pg_decompresslog - $outfile
exit 0
 else
exit 1
 fi

 ==

 The problem is that it appears that some of the segments being saved are
 no good!  On occasion I get this when trying to restore...

 Feb  7 12:43:51 dbms2 postgres[2001]: [210-1] LOG:  restored log file
 00010171009A from archive
 Feb  7 12:43:52 dbms2 postgres[2001]: [211-1] LOG:  restored log file
 00010171009B from archive
 Feb  7 12:43:52 dbms2 postgres[2001]: [212-1] LOG:  restored log file
 00010171009C from archive
 Feb  7 12:43:52 dbms2 postgres[2001]: [213-1] LOG:  restored log file
 00010171009D from archive
 Feb  7 12:43:53 dbms2 postgres[2001]: [214-1] LOG:  restored log file
 00010171009E from archive
 Feb  7 12:43:53 dbms2 postgres[2001]: [215-1] LOG:  restored log file
 00010171009F from archive
 Feb  7 12:43:54 dbms2 postgres[2001]: [216-1] LOG:  restored log file
 0001017100A0 from archive
 Feb  7 12:43:54 dbms2 postgres[2001]: [217-1] LOG:  restored log file
 0001017100A1 from archive
 Feb  7 12:43:55 dbms2 postgres[2001]: [218-1] LOG:  restored log file
 0001017100A2 from archive
 Feb  7 12:43:55 dbms2 postgres[2001]: [219-1] LOG:  restored log file
 0001017100A3 from archive
 Feb  7 12:43:56 dbms2 postgres[2001]: [220-1] LOG:  restored log file
 0001017100A4 from archive
 Feb  7 12:43:56 dbms2 postgres[2001]: [221-1] LOG:  restored log file
 0001017100A5 from archive
 Feb  7 12:43:57 dbms2 postgres[2001]: [222-1] LOG:  restored log file
 0001017100A6 from archive
 Feb  7 12:43:57 dbms2 postgres[2001]: [223-1] PANIC:  corrupted page
 pointers: lower = 772, upper = 616, special = 0
 Feb  7 12:43:57 dbms2 postgres[2001]: [223-2] CONTEXT:  xlog redo
 hot_update: rel 1663/616245/1193269; tid 53/93; new 53/4
 Feb  7 12:43:57 dbms2 postgres[2000]: [1-1] LOG:  startup process (PID
 2001) was terminated by signal 6: Abort trap
 Feb  7 12:43:57 dbms2 postgres[2000]: [2-1] LOG:  terminating any other
 active server processes

 Eek.

 I assume this means that either A6 or A7 is corrupt.  But I have the
 file both in the restore AND ON THE MACHINE WHERE IT ORIGINATED:

 On the SOURCE machine (which is running just fine):
 tickerforum# cksum *171*A[67]*
 172998591 830621 0001017100A6.bz2
 1283345296 1541006 0001017100A7.bz2

 And off the BACKUP archive, which is what I'm trying to restore:

 # cksum *171*A[67]*
 172998591 830621 0001017100A6.bz2
 1283345296 1541006 0001017100A7.bz2
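
Corruption like this is far cheaper to catch at archive time than at restore
time. A hedged sketch (Python, hypothetical paths; it guards against bit-rot
in the stored archive, not against bugs in pg_compresslog itself) of recording
a checksum for each archived segment and re-checking it before restore:

```python
import bz2, hashlib, os, tempfile

def archive(segment_path, archive_dir):
    """Compress a WAL segment and record a checksum of the compressed file."""
    dest = os.path.join(archive_dir, os.path.basename(segment_path) + ".bz2")
    with open(segment_path, "rb") as f:
        data = bz2.compress(f.read())
    with open(dest, "wb") as f:
        f.write(data)
    with open(dest + ".sha256", "w") as f:
        f.write(hashlib.sha256(data).hexdigest())
    return dest

def verify(dest):
    """At restore time: check the archive still matches its recorded checksum."""
    with open(dest, "rb") as f:
        data = f.read()
    with open(dest + ".sha256") as f:
        want = f.read()
    if hashlib.sha256(data).hexdigest() != want:
        raise RuntimeError("archived segment corrupted: " + dest)
    return bz2.decompress(data)

with tempfile.TemporaryDirectory() as d:
    seg = os.path.join(d, "000000010000017100A6")
    with open(seg, "wb") as f:
        f.write(os.urandom(1 << 16))  # stand-in for a 16MB WAL segment
    restored = verify(archive(seg, d))
    with open(seg, "rb") as f:
        assert restored == f.read()
    print("round-trip OK")
```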

Re: [GENERAL] trouble with unique constraint

2010-02-11 Thread A. Kretschmer
In response to Khin, Gerald :
 The following SQL leads to a unique constraint violation error message

You have already got the answer ... for the same question from you.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] Large Objects: Sizeof and Deleting Unlinked LOs

2010-02-11 Thread Harald Fuchs
In article 4b72aeb3.4000...@selestial.com,
Howard Cole howardn...@selestial.com writes:

 Is there an SQL function to determine the size of a large object?

I'm using a pgsql helper function for that:


  CREATE FUNCTION lo_size(oid oid) RETURNS integer
  LANGUAGE plpgsql
  AS $$
  DECLARE
fd int;
res int;
i int;
  BEGIN
    fd = lo_open(oid, 262144); -- INV_READ
    IF fd < 0 THEN
      RAISE EXCEPTION 'lo_open returns %', fd;
    END IF;
    res = lo_lseek(fd, 0, 2); -- SEEK_END
    IF res < 0 THEN
      RAISE EXCEPTION 'lo_lseek returns %', res;
    END IF;
    i = lo_close(fd);
    IF i < 0 THEN
      RAISE EXCEPTION 'lo_close returns %', i;
    END IF;
RETURN res;
  END;
  $$;
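
The function above is the classic seek-to-the-end size idiom: the return value
of a seek to SEEK_END is the object's size in bytes. For comparison, the same
pattern on a plain POSIX file descriptor (illustrative sketch):

```python
import os, tempfile

def fd_size(path):
    # Same trick as lo_lseek(fd, 0, SEEK_END) in the plpgsql function:
    # seeking to the end returns the size in bytes.
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 12345)
    name = f.name
print(fd_size(name))  # 12345
os.remove(name)
```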

 Also, can I safely delete all the large objects in
 pg_catalog.pg_largeobject? For example:

 select lo_unlink(loid) from (select distinct loid from
 pg_catalog.pg_largeobject) as loids where loid not in (select my_oid
 from my_only_table_that_uses_large_objects)

Did you have a look at contrib/lo?




Re: [GENERAL] error migrating database from 8.4 to 8.3

2010-02-11 Thread Filip Rembiałkowski
2010/2/8 Marc Lustig m...@marclustig.com

 Due to a server issue we needed a reinstallation of Ubuntu along with a
 downgrade to Ubuntu Hardy.


 So this is what we did:

 - copied all from /var/lib/postgresql/8.4/main/ to the new server
 /var/lib/postgresql/8.3/main/
 - edited /var/lib/postgresql/8.3/main/postmaster.opts to correct the path



No, that will not work. On-disk formats are not binary compatible.
You have to make backups from 8.4 and restore them on empty 8.3.

See Notes in http://www.postgresql.org/docs/8.4/static/app-pgdump.html



 Now trying to start the server results in

 * Error: The server must be started under the locale : which does not exist
 any more.

 I googled and found that people ran into this problem due to different
 architectures (32 vs 64 bit).
 In this case, the architecture is definiately the same.

 The only difference is that the database comes from a 8.4 installation and
 the server is 8.3

 Please help solving this locale issue.

 thanks!




-- 
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/


Re: [GENERAL] problems maintaining boolean columns in a large table

2010-02-11 Thread Filip Rembiałkowski
2010/2/10 Ben Campbell b...@scumways.com

 I settled on:

 CREATE TABLE needs_indexing (
  article_id integer REFERENCES article(id) PRIMARY KEY
 );

 The primary key-ness enforces uniqueness, and any time I want to add an
 article to the queue I just make sure I do a DELETE before the INSERT. Bound
 to be more efficient ways to do it, but it works.

 better use

INSERT INTO needs_indexing (article_id)
SELECT NEW.id
WHERE NOT EXISTS ( SELECT 42 FROM needs_indexing WHERE article_id = NEW.id
);
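
The INSERT ... WHERE NOT EXISTS above makes enqueueing idempotent, with no
DELETE-before-INSERT dance; note that two concurrent sessions can still race
past the NOT EXISTS check, in which case the primary key rejects the second
insert. A small demonstration (SQLite stands in for PostgreSQL, table names
taken from the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE article (id INTEGER PRIMARY KEY);
    CREATE TABLE needs_indexing (
        article_id INTEGER PRIMARY KEY REFERENCES article(id)
    );
""")
con.executemany("INSERT INTO article (id) VALUES (?)", [(1,), (2,)])

def enqueue(article_id):
    # Insert only if not already queued.
    con.execute(
        """INSERT INTO needs_indexing (article_id)
           SELECT ?
           WHERE NOT EXISTS (SELECT 42 FROM needs_indexing WHERE article_id = ?)""",
        (article_id, article_id),
    )

enqueue(1)
enqueue(1)  # second call is a silent no-op
enqueue(2)
print(con.execute("SELECT count(*) FROM needs_indexing").fetchone()[0])  # 2
```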





-- 
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/


Re: [GENERAL] problems maintaining boolean columns in a large table

2010-02-11 Thread Timo Klecker
Hi Ben,

you can check whether one of your indexes is used within the query by simply
using EXPLAIN ANALYZE.

EXPLAIN ANALYZE SELECT * from journo WHERE id IN (SELECT journo_id FROM
journo_attr WHERE article_id=$AnyExistingIdHere$);

Maybe you have another trigger on the journo table that is triggered on
update? This would indeed slow everything down. As you mentioned, you should
move the needs_indexing flag out of the article table into a separate table
that simply holds all the article_ids that need indexing.


Greetings
Timo Klecker 




-----Original Message-----
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Ben Campbell
Sent: Thursday, 11 February 2010 14:45
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] problems maintaining boolean columns in a large table

Timo Klecker wrote:
 could you post your trigger function? When you need to rebuild the index,
 you could disable the trigger setting the flag if the article is modified.
 This could speed up your UPDATE.

Embarrassingly, when I checked, I found that I'd never gotten around to 
writing that particular trigger function... (It's just being handled at 
the app level).
However, there _is_ a trigger function which sets another flag somewhere 
which I bet is responsible for a lot of the time... it sets a modified 
flag on any journalist associated with the article:


---

-- article table trigger
CREATE OR REPLACE FUNCTION article_setjournomodified_onupdate() RETURNS 
TRIGGER AS $$
BEGIN
 -- whenever article is modified, set the modified flag on any 
attributed journos
 UPDATE journo SET modified=true WHERE id IN (SELECT journo_id FROM 
journo_attr WHERE article_id=NEW.id);
 return NULL;
END;
$$ LANGUAGE 'plpgsql';

CREATE TRIGGER article_update AFTER UPDATE ON article FOR EACH ROW 
EXECUTE PROCEDURE article_setjournomodified_onupdate();

---

(excuse the bad linebreaks!)
I bet the subselect in that trigger slows things down.
article_id in journo attr is a foreign key:

journo_attr_article_id_fkey FOREIGN KEY (article_id) REFERENCES 
article(id) ON DELETE CASCADE

Can the SELECT use such a foreign key index to speed things up? Or do I 
need to explicitly add another index? (and yes, I know that's a stupid 
newbie question!)

Either way, I'll have a go at disabling the trigger to see what impact 
it has on the bulk update of 'article.needs_indexing'!

Actually, I think it's a good argument for moving the needs_indexing 
flag out of the article table - modifying any other article fields 
should cause attributed journos to be marked 'modified', but the 
'needs_indexing' doesn't need to do this - it's just a little 
implementation detail rather than real data...
(and the same goes for 'journo.modified'!)

Thanks,
Ben.





Re: [GENERAL] Problem with pg_compresslog'd archives

2010-02-11 Thread Koichi Suzuki
I understand the situation.   I'll upload the improved code ASAP.

--
Koichi Suzuki



2010/2/11 Karl Denninger k...@denninger.net:
 Will this come through as a commit on the pgfoundry codebase?  I've
 subscribed looking for it

 The last edit, if I read the release notes and tracebacks on the codebase
 correctly, goes back to the early part of 2009 - which strongly implies that
 there are a **LOT** of people out there that could be running this code with
 un-restoreable archives!

 That, for obvious reasons, could be VERY, VERY bad if someone was to suffer
 a system crash


 Koichi Suzuki wrote:

 I found it's pg_compresslog problem (calculation of XNOOP record
 length used in pg_decompresslog).I'm fixing the bug and will
 upload the fix shortly.

 Sorry for inconvenience.

 --
 Koichi Suzuki

 2010/2/8 Karl Denninger k...@denninger.net:


 This may belong in a bug report, but I'll post it here first...

 There appears to be a **SERIOUS** problem with using pg_compresslog and
 pg_uncompresslog with Postgresql 8.4.2.

 Here's my configuration snippet:

 full_page_writes = on                   # recover from partial page writes
 wal_buffers = 256kB                     # min 32kB
                                        # (change requires restart)
 #wal_writer_delay = 200ms               # 1-10000 milliseconds

 #commit_delay = 0                       # range 0-100000, in microseconds
 #commit_siblings = 5                    # range 1-1000

 # - Checkpoints -

 checkpoint_segments = 64                # in logfile segments, min 1,
 16MB each
 #checkpoint_timeout = 5min              # range 30s-1h
 checkpoint_completion_target = 0.9      # checkpoint target duration,
 0.0 - 1.0
 #checkpoint_warning = 30s               # 0 disables

 archive_command = 'test ! -f /dbms/pg_archive/%f.bz2 && pg_compresslog
 %p | bzip2 - > /dbms/pg_archive/%f.bz2'           #command to use to
 archive a logfile segment

 All appears to be fine with the writes, and they are being saved off on
 the nightly backups without incident.

 I take a full dump using the instructions in the documentation and make
 sure I copy the proper must have file for consistency to be reached.

 The problem comes when I try to restore.

 recovery_conf contains:

 restore_command = '/usr/local/pgsql/recovery.sh %f %p'

 And that file contains:


 #! /bin/sh

 infile=$1
 outfile=$2

 if test -f /dbms/pg_archive/$infile.bz2
 then
        bunzip2 -c /dbms/pg_archive/$infile.bz2 |
 /usr/local/pgsql/bin/pg_decompresslog - $outfile
        exit 0
 else
        exit 1
 fi

 ==

 The problem is that it appears that some of the segments being saved are
 no good!  On occasion I get this when trying to restore...

 Feb  7 12:43:51 dbms2 postgres[2001]: [210-1] LOG:  restored log file
 00010171009A from archive
 Feb  7 12:43:52 dbms2 postgres[2001]: [211-1] LOG:  restored log file
 00010171009B from archive
 Feb  7 12:43:52 dbms2 postgres[2001]: [212-1] LOG:  restored log file
 00010171009C from archive
 Feb  7 12:43:52 dbms2 postgres[2001]: [213-1] LOG:  restored log file
 00010171009D from archive
 Feb  7 12:43:53 dbms2 postgres[2001]: [214-1] LOG:  restored log file
 00010171009E from archive
 Feb  7 12:43:53 dbms2 postgres[2001]: [215-1] LOG:  restored log file
 00010171009F from archive
 Feb  7 12:43:54 dbms2 postgres[2001]: [216-1] LOG:  restored log file
 0001017100A0 from archive
 Feb  7 12:43:54 dbms2 postgres[2001]: [217-1] LOG:  restored log file
 0001017100A1 from archive
 Feb  7 12:43:55 dbms2 postgres[2001]: [218-1] LOG:  restored log file
 0001017100A2 from archive
 Feb  7 12:43:55 dbms2 postgres[2001]: [219-1] LOG:  restored log file
 0001017100A3 from archive
 Feb  7 12:43:56 dbms2 postgres[2001]: [220-1] LOG:  restored log file
 0001017100A4 from archive
 Feb  7 12:43:56 dbms2 postgres[2001]: [221-1] LOG:  restored log file
 0001017100A5 from archive
 Feb  7 12:43:57 dbms2 postgres[2001]: [222-1] LOG:  restored log file
 0001017100A6 from archive
 Feb  7 12:43:57 dbms2 postgres[2001]: [223-1] PANIC:  corrupted page
 pointers: lower = 772, upper = 616, special = 0
 Feb  7 12:43:57 dbms2 postgres[2001]: [223-2] CONTEXT:  xlog redo
 hot_update: rel 1663/616245/1193269; tid 53/93; new 53/4
 Feb  7 12:43:57 dbms2 postgres[2000]: [1-1] LOG:  startup process (PID
 2001) was terminated by signal 6: Abort trap
 Feb  7 12:43:57 dbms2 postgres[2000]: [2-1] LOG:  terminating any other
 active server processes

 Eek.

 I assume this means that either A6 or A7 is corrupt.  But I have the
 file both in the restore AND ON THE MACHINE WHERE IT ORIGINATED:

 On the SOURCE machine (which is running just fine):
 tickerforum# cksum *171*A[67]*
 172998591 830621 0001017100A6.bz2
 1283345296 1541006 0001017100A7.bz2

 And off the BACKUP archive, which 

[GENERAL] Bug on pg_lesslog

2010-02-11 Thread Koichi Suzuki
Dear Folks;

A very serious bug was reported in pg_lesslog.   So far, I have found it is
a bug in pg_compresslog.   Please do not use pg_compresslog or
pg_decompresslog until an improved version is uploaded.

I strongly advise taking a base backup of your database.

I apologize for the inconvenience.   I'll upload the new version ASAP.

Warmest Regards;

--
Koichi Suzuki



Re: [GENERAL] recovering fs-data from previous installation

2010-02-11 Thread Adrian Klaver
On Monday 08 February 2010 7:11:28 am Marc Lustig wrote:
 I managed to install again postgresql-8.4 from hardy-backports.
 Now the installation is identical with the previous one.

 With the fresh database, the server starts up fine.

 Now I copied all from the backup to
 /usr/lib/postgresql/8.4
 /var/lib/postgresql/8.4

 Then the startup fails like this:

  * Starting PostgreSQL 8.4 database server 

   * Error: could not exec
 /usr/lib/postgresql/8.4/bin/pg_ctl /usr/lib/postgresql/8.4/bin/pg_ctl start
 -D /var/lib/postgresql/8.4/main -l
 /var/log/postgresql/postgresql-8.4-main.log -s -o  -c
 config_file=/etc/postgresql/8.4/main/postgresql.conf :

 the log-file is empty.

 Can you please help to get postgresql to start again using the previous
 database. I suppose there should be no serious issues, as the version of
 postgresql is identical now.

Where are you starting this from? This startup script looks wrong. It looks 
like a combination of scripts. For a comparison here is mine for 8.4 on Ubuntu:

/usr/lib/postgresql/8.4/bin/postgres -D /var/lib/postgresql/8.4/main -c 
config_file=/etc/postgresql/8.4/main/postgresql.conf


-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Extending SQL in C using VARIABLE length type

2010-02-11 Thread Tom Lane
Carsten Kropf ckro...@fh-hof.de writes:
 Thanks a lot so far. I adopted my structures and am now storing two fields 
 (v_len_ and dimensions) and the storage is now working properly. If I now 
 would try to combine two of these points to a range (like cube) including an 
 upper and a lower bound n-dimensional point structure, I don't get the point 
 how to achieve this.

Well, you can either make that a separate data type with its own
specialized functions for extracting the points, or you can make it into
an array, or possibly a composite type (record).  The example code seems
to be headed towards the first of these but maybe you should consider
the alternatives.

The main problem with your example code seems to be that it's confusing
a struct with a pointer to a struct.  If you made the struct members
be struct PointND rather than pointer to same, it would probably
work, as long as points aren't actually variable-length.  Otherwise
you'd need to deal with the fact that lower isn't really at a fixed
offset in the larger struct.

regards, tom lane



Re: [GENERAL] Cache lookup failed for relation message in PG 8.3.7

2010-02-11 Thread Tom Lane
Keaton Adams keaton_ad...@mcafee.com writes:
 Any ideas why we would be receiving this cache lookup failed message?
 PostgreSQL 8.3.7 64 bit, RHEL 5 64 bit OS

 [ query applying pg_table_is_visible() to most of pg_class ]

Is it repeatable, or just something that shows up occasionally with
different OIDs mentioned?  If the latter, it's probably a known race
condition against table drops: the pg_class scan can find rows that
belong to tables that are already deleted by the time
pg_table_is_visible gets called.  There's a hack in 8.4 to avoid
the errors by having pg_table_is_visible (and related functions)
return null instead of failing, but I don't believe we changed it
in previous releases.

regards, tom lane



Re: [GENERAL] Postgres Triggers issue

2010-02-11 Thread Adrian Klaver
On Thursday 11 February 2010 1:57:39 am Albe Laurenz wrote:
 u235sentinel wrote:
  I have a strange problem we noticed the other day with
  triggers.  We're
  running 8.3.3 on Solaris 10 (intel) and have a feed that comes in
  regularly to populate a table we're working on.  The feed works just
  fine inserting rows however the following trigger stops the feed until
  we remove the trigger.  Any thoughts on what I'm doing wrong here?
 
  Thanks!
 
  ---
 
  CREATE OR REPLACE FUNCTION r.m_t()
  RETURNS trigger AS
  $BODY$
  BEGIN
   INSERT INTO temp_m_t VALUES (NEW.*,1+1);
  RETURN NULL;
  END;
  $BODY$
  LANGUAGE 'plpgsql';
 
 
  CREATE TRIGGER tafter
  AFTER INSERT OR UPDATE
  ON r.m_a
  FOR EACH ROW
  EXECUTE PROCEDURE r.m_t();

 What do you mean stops the feed?

 Can you describe the behaviour in database terms?
 What exactly happens, and how does it differ from what you expect?
 Are there error messages? If yes, could you quote them?

 Yours,
 Laurenz Albe

In addition to the above I am not quite sure about this:

INSERT INTO temp_m_t VALUES (NEW.*,1+1)

Are you trying to have an incrementing number for the last value? As it stands 
you are always going to get 2 inserted into that field. 

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Versions RSS page is missing version(s)

2010-02-11 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


 No, but I imagine we still would encourage people to run the latest revision
 of it. Come this time next year, I hope that we'll tell people on 7.4.2 to

 Do we really, officially, care?

Well, yes, we certainly should. Just because a branch is no longer officially 
supported doesn't mean we want to discourage people from running the latest 
available revision.

 upgrade to 9.0 as soon as possible, but to upgrade to 7.4.27 *immediately*.

 We should be, and afaik are, telling people to upgrade away from 7.4
 immediately *already*.

Well, sure, but there's a world of difference from upgrading from 7.4.2 to 
7.4.27 and from upgrading from 7.4.2 to 8.4.2.

 The *meaning* has always been supported versions, but if you read the
 contents of the feed it does say latest.

Well, I'd prefer to have the old versions, but I can handle the status quo. As 
long as nobody pulls versions before they are really dead again. :)

- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 20100228
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iEYEAREDAAYFAkt0MCUACgkQvJuQZxSWSsjKlQCgr1H+rp14YYhnByAOz2CaGqCp
+IAAoN00KX9OVwnxAOZIJpAyGgx6qCv1
=1D4+
-END PGP SIGNATURE-





Re: [GENERAL] Extending SQL in C using VARIABLE length type

2010-02-11 Thread Carsten Kropf
Thanks for this hint; I already got it to work in the meantime.
My approach now (based on the fact that PointND is indeed a variable-length
type) is to use the following structure:
struct Range
{
int vl_len_;
struct PointND limits[1];
};
Here vl_len_ stores the total size of the structure (including the size of the
limiting points), and the points themselves keep the layout described in one of
my previous mails.
I wrote some macros to access the lower point (limits[0]) and the upper point
(limits[1]), respectively.
This approach also works when putting the type into a table, although it will
not stay like this in the production implementation. Each of the two points
contains a vl_len_ field and a dimension field, and the dimensions have to be
the same for both points. Storing the range structure therefore costs two
vl_len_ fields plus one dimension field, each of which is an int32. In total,
that is an overhead of 12 bytes for each Range stored, compared to a Range that
just keeps one-dimensional offsets to the upper/lower bound, as seen in cube.
regards
Carsten Kropf
Am 11.02.2010 um 16:39 schrieb Tom Lane:

 Carsten Kropf ckro...@fh-hof.de writes:
 Thanks a lot so far. I adopted my structures and am now storing two fields 
 (v_len_ and dimensions) and the storage is now working properly. If I now 
 would try to combine two of these points to a range (like cube) including an 
 upper and a lower bound n-dimensional point structure, I don't get the point 
 how to achieve this.
 
 Well, you can either make that a separate data type with its own
 specialized functions for extracting the points, or you can make it into
 an array, or possibly a composite type (record).  The example code seems
 to be headed towards the first of these but maybe you should consider
 the alternatives.
 
 The main problem with your example code seems to be that it's confusing
 a struct with a pointer to a struct.  If you made the struct members
 be struct PointND rather than pointer to same, it would probably
 work, as long as points aren't actually variable-length.  Otherwise
 you'd need to deal with the fact that lower isn't really at a fixed
 offset in the larger struct.
 
   regards, tom lane
 




Re: [GENERAL] Searching a DB index.. possible?

2010-02-11 Thread Moe
Bump

On Fri, Feb 5, 2010 at 5:48 PM, Moe mohamed5432154...@gmail.com wrote:

 I am trying to figure out how I can introduce full text search tips...
 tips meaning, like if you start typing Mer ..
 then a box with a couple of words underneath comes up, with *Mercedes,
 Mercury *and so on :P

 I am thinking that the word suggestions needs to come from the already
 existing tsearch2 index.
 Words that are frequent in the index could be ranked higher as a search
 tip.

 Is there a way to query the actual existing index?

 How else could one approach this? There are many sites that uses this, so
 perhaps there is an easier way I haven't thought of yet.

 Thank you / Moe



Re: [GENERAL] Postgres Triggers issue

2010-02-11 Thread Igor Neyman

 -Original Message-
 From: u235sentinel [mailto:u235senti...@gmail.com] 
 Sent: Wednesday, February 10, 2010 11:15 PM
 To: pgsql-general@postgresql.org
 Subject: Postgres Triggers issue
 
 I have a strange problem we noticed the other day with 
 triggers.  We're running 8.3.3 on Solaris 10 (intel) and have 
 a feed that comes in regularly to populate a table we're 
 working on.  The feed works just fine inserting rows however 
 the following trigger stops the feed until we remove the 
 trigger.  Any thoughts on what I'm doing wrong here?
 
 Thanks!
 
 ---
 
 CREATE OR REPLACE FUNCTION r.m_t()
 RETURNS trigger AS
 $BODY$
 BEGIN
  INSERT INTO temp_m_t VALUES (NEW.*,1+1); RETURN NULL; END; 
 $BODY$ LANGUAGE 'plpgsql';
 
 
 CREATE TRIGGER tafter
 AFTER INSERT OR UPDATE
 ON r.m_a
 FOR EACH ROW
 EXECUTE PROCEDURE r.m_t();
 
 

Trigger function for an insert/update trigger should return NEW, not
NULL (OLD - for on delete trigger):

 CREATE OR REPLACE FUNCTION r.m_t()
 RETURNS trigger AS
 $BODY$
 BEGIN
  INSERT INTO temp_m_t VALUES (NEW.*,1+1); RETURN NEW; END; 
 $BODY$ LANGUAGE 'plpgsql';
 

Igor Neyman



Re: [GENERAL] Searching a DB index.. possible?

2010-02-11 Thread Richard Huxton

On 05/02/10 15:48, Moe wrote:

I am trying to figure out how I can introduce full text search tips...
tips meaning, like if you start typing Mer ..
then a box with a couple of words underneath comes up, with *Mercedes,
Mercury *and so on :P

I am thinking that the word suggestions needs to come from the already
existing tsearch2 index.
Words that are frequent in the index could be ranked higher as a search tip.


This could get expensive if you have a large index. You might want to 
summarize down to a separate (word,frequency) table. Particularly if you 
don't want stemming to interfere with your suggestions.


See the manuals (full text search) for details on prefix searches - 
introduced in 8.4 if you really want to go the tsearch route.
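For reference, 8.4's tsquery prefix matching looks roughly like this (the table
and column names are invented for the sketch):

```sql
-- Matches any indexed word starting with 'mer' (Mercedes, Mercury, ...):
SELECT title
FROM cars
WHERE to_tsvector('english', title) @@ to_tsquery('english', 'mer:*');
```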


--
  Richard Huxton
  Archonet Ltd



[GENERAL] pg_dump superflous warning message

2010-02-11 Thread Bill Moran

If I do:
pg_dump -a --disable-triggers

I get a warning message:
pg_dump: NOTICE: there are circular foreign-key constraints among these 
table(s):
pg_dump:   directory
pg_dump: You may not be able to restore the dump without using 
--disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this 
problem.

Of course, this was confusing as hell -- we spent a good bit of time
trying to figure out why pg_dump wasn't honoring --disable-triggers.

It turns out that it is, it's just that it prints the warning even
when it's not applicable.

At least from my perspective, I don't think this warning should be
printed if the proper command line switches have been given.  Is there
a good reason to print it anyway?

This is with pg 8.4.2 (both client and server).

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Searching a DB index.. possible?

2010-02-11 Thread Moe
On Thu, Feb 11, 2010 at 6:56 PM, Richard Huxton d...@archonet.com wrote:

 On 05/02/10 15:48, Moe wrote:

 I am trying to figure out how I can introduce full text search tips...
 tips meaning, like if you start typing Mer ..
 then a box with a couple of words underneath comes up, with *Mercedes,
 Mercury *and so on :P

 I am thinking that the word suggestions needs to come from the already
 existing tsearch2 index.
 Words that are frequent in the index could be ranked higher as a search
 tip.


 This could get expensive if you have a large index.



 You might want to summarize down to a separate (word,frequency) table.
 Particularly if you don't want stemming to interfere with your suggestions.


Stemming is ok. That's fine.

How would this normally be implemented? Should I go through my text word by
word myself, or walk the already existing index and copy it over to a new
table? I'm not sure how that would be manageable: every change would require
counting a word down, and then counting a word up afterwards.


 See the manuals (full text search) for details on prefix searches -
 introduced in 8.4 if you really want to go the tsearch route.


I'll look into that.

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Postgres Triggers issue

2010-02-11 Thread A. Kretschmer
In response to Igor Neyman :
  
  CREATE TRIGGER tafter
  AFTER INSERT OR UPDATE
  ON r.m_a
  FOR EACH ROW
  EXECUTE PROCEDURE r.m_t();
  
  
 
 Trigger function for an insert/update trigger should return NEW, not
 NULL (OLD - for on delete trigger):

It's an AFTER trigger, so the return value is ignored.

It works with NULL, see my other posting (the example there).


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] Searching a DB index.. possible?

2010-02-11 Thread Richard Huxton

On 11/02/10 17:11, Moe wrote:

You might want to summarize down to a separate (word,frequency) table.
Particularly if you don't want stemming to interfere with your suggestions.


Stemming is ok. That's fine.

How would this normally be implemented?  Should I go through my text word by
word myself, or the already existing index and copy over to a new table? Not
sure how this would be managable. Every change to something would require a
word count down, and then a word up after.


I wouldn't bother keeping it 100% up-to-date. You only want the 
most-plausible suggestions anyway. Have a look at the ts_stat() function.


http://www.postgresql.org/docs/8.4/interactive/textsearch-features.html

Run it overnight, and summarise to a separate table, then just use 
LIKE against it.


Otherwise, like you say, your database will spend most of its time 
adding up word counts etc.
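A rough sketch of that summarisation, assuming a table "documents" with a
tsvector column "vec" (all names made up):

```sql
-- Nightly: rebuild a (word, frequency) summary from the tsvector column.
DROP TABLE IF EXISTS word_freq;
CREATE TABLE word_freq AS
  SELECT word, nentry
  FROM ts_stat('SELECT vec FROM documents');

-- text_pattern_ops lets LIKE 'mer%' use the index.
CREATE INDEX word_freq_word_idx ON word_freq (word text_pattern_ops);

-- At suggestion time:
SELECT word
FROM word_freq
WHERE word LIKE 'mer%'
ORDER BY nentry DESC
LIMIT 5;
```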


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Postgres Triggers issue

2010-02-11 Thread Andreas Kretschmer
A. Kretschmer andreas.kretsch...@schollglas.com wrote:

 In response to Igor Neyman :
   
   CREATE TRIGGER tafter
   AFTER INSERT OR UPDATE
   ON r.m_a
   FOR EACH ROW
   EXECUTE PROCEDURE r.m_t();
   
   
  
  Trigger function for an insert/update trigger should return NEW, not
  NULL (OLD - for on delete trigger):
 
 It's an AFTER TRIGGER, so the RETURN-Value ignored.

According the doc:

The return value of a BEFORE or AFTER statement-level trigger or an
AFTER row-level trigger is always ignored; it might as well be null.

http://www.postgresql.org/docs/current/static/plpgsql-trigger.html


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] [HACKERS] Bug on pg_lesslog

2010-02-11 Thread Joshua D. Drake
On Thu, 2010-02-11 at 23:39 +0900, Koichi Suzuki wrote:
 Dear Folks;
 
 A very serious bug was reported on pg_lesslog.   So far, I found it's
 a bug in pg_compresslog.   Please do not use pg_compresslog and
 pg_decompresslog until improved version is uploaded.
 
 I strongly advise to take base backup of your database.
 
 I apologize for inconvenience.   I'll upload the new version ASAP.

Should this go out on announce?

 
 Warmest Regards;
 
 --
 Koichi Suzuki
 


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use of Mr. or 
Sir.




Re: [GENERAL] questions about a table's row estimates

2010-02-11 Thread Ben Chobot
On Feb 10, 2010, at 10:28 PM, Greg Smith wrote:

 Ben Chobot wrote:
 I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about 
 n_live_tup. Shouldn't that be at least fairly close to (n_tup_ins - 
 n_tup-del)? It doesn't seem to be, but I'm unclear why.
  
 Insert 2000 tuples.
 Delete 1000 tuples.
 vacuum
 Insert 1000 tuples. These go into the free space the deleted tuples used to 
 be in.
 analyze
 
 n_tup_ins=3000
 n_tup_del=1000
 n_live_tup=3000

Say what? It's not documented terribly well 
(http://www.postgresql.org/docs/8.4/interactive/monitoring-stats.html is the 
best I can find) but I thought n_live_tup was basically the number of tuples 
visible to a new transaction. If my assumption is wrong, that might explain 
things.

And unfortunately, Tom, we're not resetting stats counters. :(

Re: [GENERAL] pg_dump superflous warning message

2010-02-11 Thread Scott Marlowe
On Thu, Feb 11, 2010 at 10:08 AM, Bill Moran wmo...@potentialtech.com wrote:

 If I do:
 pg_dump -a --disable-triggers

 I get a warning message:
 pg_dump: NOTICE: there are circular foreign-key constraints among these 
 table(s):
 pg_dump:   directory
 pg_dump: You may not be able to restore the dump without using 
 --disable-triggers or temporarily dropping the constraints.
 pg_dump: Consider using a full dump instead of a --data-only dump to avoid 
 this problem.

 Of course, this was confusing as hell -- we spent a good bit of time
 trying to figure out why pg_dump wasn't honoring --disable-triggers.

 It turns out that it is, it's just that it prints the warning even
 when it's not applicable.

But it's not a warning, it's a NOTICE.



Re: [GENERAL] [HACKERS] Bug on pg_lesslog

2010-02-11 Thread Karl Denninger
Joshua D. Drake wrote:
 On Thu, 2010-02-11 at 23:39 +0900, Koichi Suzuki wrote:
   
 Dear Folks;

 A very serious bug was reported on pg_lesslog.   So far, I found it's
 a bug in pg_compresslog.   Please do not use pg_compresslog and
 pg_decompresslog until improved version is uploaded.

 I strongly advise to take base backup of your database.

 I apologize for inconvenience.   I'll upload the new version ASAP.
 

 Should this go out on announce?
   
I certainly think so.  Anyone who gets caught by surprise on this
could quite possibly lose all their data!

I (fortunately) caught it during TESTING of my archives - before I
needed them.

-- Karl Denninger

attachment: karl.vcf


[GENERAL] Truncate and delete adds wal logs for slave to process.

2010-02-11 Thread Chris Barnes

 

 

I have a question regarding delete & truncate versus a drop of the tables and
recreating them.

We have a database that gets recreated each week that is 31 GB in size.

The way that it is currently being done is to truncate all of the tables.

I would like to confirm: because both truncate and delete can be rolled back,
I would think that these actions would be written out as WAL log files and,
when complete, shipped to the standby to be processed?

To reduce this logging, shipping and processing, would it be smarter to have
the tables dropped and recreated?



Re: [GENERAL] problems maintaining boolean columns in a large table

2010-02-11 Thread John R Pierce

Ben Campbell wrote:
I _think_ the reason it takes so long is that postgresql doesn't 
modify rows in place - it creates an entry for the modified row and 
zaps the old one. So by touching _every_ row I'm basically forcing it 
to rebuild my whole database... I've got about 2 million rows in 
'articles'.
There are a few indexes on columns in 'articles' which obviously will 
slow things down too.


at the expense of disk space, try setting the fillfactor for that table to 
something like 70 instead of the default 100. 
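For example (a sketch; the index name is made up, and note that a new
fillfactor only applies to pages written afterwards, so the table has to be
rewritten for existing rows to benefit):

```sql
ALTER TABLE articles SET (fillfactor = 70);
-- Rewrite the table so existing pages pick up the new fill factor:
CLUSTER articles USING articles_pkey;
```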






Re: [GENERAL] questions about a table's row estimates

2010-02-11 Thread Tom Lane
Ben Chobot be...@silentmedia.com writes:
 And unfortunately, Tom, we're not resetting stats counters. :(

Mph.  Well, the other thing that comes to mind is that n_live_tup
(and n_dead_tup) is typically updated by ANALYZE, but only to an
estimate based on ANALYZE's partial sample of the table.  If the
sample isn't very representative then it might be far off.
How does n_live_tup compare to an actual COUNT(*) count?
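The comparison Tom suggests, roughly (the table name is an example):

```sql
-- Stats-collector estimate, plus when it was last refreshed by ANALYZE:
SELECT n_live_tup, n_dead_tup, last_analyze
FROM pg_stat_user_tables
WHERE relname = 'some_table';

-- The real count, for comparison:
SELECT count(*) FROM some_table;
```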

regards, tom lane



Re: [GENERAL] pg_dump superflous warning message

2010-02-11 Thread Tom Lane
Bill Moran wmo...@potentialtech.com writes:
 If I do:
 pg_dump -a --disable-triggers

 I get a warning message:
 pg_dump: NOTICE: there are circular foreign-key constraints among these 
 table(s):
 pg_dump:   directory
 pg_dump: You may not be able to restore the dump without using 
 --disable-triggers or temporarily dropping the constraints.
 pg_dump: Consider using a full dump instead of a --data-only dump to avoid 
 this problem.

 Of course, this was confusing as hell -- we spent a good bit of time
 trying to figure out why pg_dump wasn't honoring --disable-triggers.

 It turns out that it is, it's just that it prints the warning even
 when it's not applicable.

Yeah, the low-level routine that prints the warning doesn't have ready
access to knowledge of whether that switch has been given.  I'm not
sure it's worth doing anything about it, considering all the different
corner cases involved (in particular text vs archive output).  It'd
be way too easy to suppress the message in cases where it was actually
relevant :-(

regards, tom lane



[GENERAL] help with SQL join

2010-02-11 Thread Neil Stlyz
Hello, 

I need help with an advanced SQL JOIN. I think I can get this accomplished
with a LEFT JOIN... perhaps some of you may be able to offer guidance. I
appreciate any suggestions you may have. 


Consider the following SQL statement: 


SELECT customerid, count(distinct count1) AS "TODAYS_USERS",
count(distinct count2) AS "LAST 7 DAYS",
count(distinct count3) AS "LAST 30 DAYS" FROM
(SELECT distinct case when modified >= '2010-02-11' then modelnumber else null
end as count1,
   case when modified >= '2010-02-04' then modelnumber else null end as count2,
   case when modified >= '2010-01-11' then modelnumber else null end as count3,
   customerid
   FROM sales WHERE modelnumber LIKE 'GH77%')
   AS results GROUP BY results.customerid


Now, the above SQL query retrieves results that will look something like this: 


 customerid | TODAYS_USERS | LAST 7 DAYS | LAST 30 DAYS
 bigint     | bigint       | bigint      | bigint
------------+--------------+-------------+--------------
 8699       | 1            | 1           | 1
 8700       | 1            | 12          | 17
 8701       | 3            | 5           | 19


Now... here is the problem I am having... the above SQL query is retrieving 
results from one table: sales
I have another table called customers with a couple of fields (customerid, and 
customername are two of the fields).
I want to join on the customerid in both tables to retrieve the customername in 
the query. 
So I need the results to look something like this: 

 customerid | customername | TODAYS_USERS | LAST 7 DAYS | LAST 30 DAYS
 bigint     | varchar      | bigint       | bigint      | bigint
------------+--------------+--------------+-------------+--------------
 8699       | Joe Smith    | 1            | 1           | 1
 8700       | Sara Olson   | 1            | 12          | 17
 8701       | Mike Jones   | 3            | 5           | 19

Can someone show me how to use a JOIN with the above SQL statement? I need to
bring the customername field into the query from the other table, and I have
been having issues writing the query... can this even be done? 

Thanks, 
Neil
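
One possible shape for the join (untested, and assuming customers(customerid,
customername) matches sales.customerid): move the aggregation's derived table
into a LEFT JOIN against customers, then group by both customer columns.

```sql
SELECT c.customerid,
       c.customername,
       count(DISTINCT r.count1) AS "TODAYS_USERS",
       count(DISTINCT r.count2) AS "LAST 7 DAYS",
       count(DISTINCT r.count3) AS "LAST 30 DAYS"
FROM customers c
LEFT JOIN (
    -- the same derived table as in the original query
    SELECT DISTINCT
           CASE WHEN modified >= '2010-02-11' THEN modelnumber END AS count1,
           CASE WHEN modified >= '2010-02-04' THEN modelnumber END AS count2,
           CASE WHEN modified >= '2010-01-11' THEN modelnumber END AS count3,
           customerid
    FROM sales
    WHERE modelnumber LIKE 'GH77%'
) r ON r.customerid = c.customerid
GROUP BY c.customerid, c.customername;
```

With a LEFT JOIN, customers with no matching sales still appear, with counts
of zero.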



  

[GENERAL] subscribe

2010-02-11 Thread janandith jayawardena





Re: [GENERAL] Memory Usage and OpenBSD

2010-02-11 Thread Jeff Ross

Greg Smith wrote:

Jeff Ross wrote:

pgbench is run with this:
pgbench -h varley.openvistas.net -U _postgresql -t 2 -c $SCALE 
pgbench
with scale starting at 10 and then incrementing by 10.  I call it 
three times for each scale.  I've turned on logging to 'all' to try 
and help figure out where the system panics, so that may lower the 
TPS somewhat but I have not been very favorably impressed with the 
speed of these U320 15K disks in RAID10 yet.


-c sets the number of clients active at once.  pgbench has a 
database scale option when you're initializing, -s, that sets how 
many records are in the tables, and therefore how large the database 
is.  If you don't set the scale to a large enough number, so that 
-c <= -s, you'll get bad performance results.  The way you're saying 
scale but changing the client numbers is a little confusing.




My bad.  I know that more information is better than too little.

I think I'm doing it right.  Here's the whole script.  I run it from 
another server on the lan.


#!/bin/sh
#use this to automate pg_bench load
MAX_SCALE=150
HOST=varley.openvistas.net
SCALE=70
rm -rf /tmp/pgbench_results_scale*
while [ $SCALE -ne $MAX_SCALE ] ; do
  pgbench -i -s $SCALE -h $HOST -U _postgresql pgbench | tee -a /tmp/pgbench_results_scale_$SCALE.txt
  vacuumdb --analyze -h $HOST -U _postgresql pgbench | tee -a /tmp/pgbench_results_scale_$SCALE.txt
  psql -h $HOST -U _postgresql -c "checkpoint;" pgbench | tee -a /tmp/pgbench_results_scale_$SCALE.txt
  psql -h $HOST -U _postgresql -c "SELECT relname, reltuples, pg_size_pretty(relpages * 8192) as size FROM pg_class where relname like ('%account%');" pgbench | tee -a /tmp/pgbench_results_scale_$SCALE.txt
  psql -h $HOST -U _postgresql -c "select pg_size_pretty(pg_database_size(oid)) from pg_database where datname = 'pgbench';" pgbench | tee -a /tmp/pgbench_results_scale_$SCALE.txt
  time pgbench -h $HOST -U _postgresql -t 2 -c $SCALE pgbench | tee -a /tmp/pgbench_results_scale_$SCALE.txt
  time pgbench -h $HOST -U _postgresql -t 2 -c $SCALE pgbench | tee -a /tmp/pgbench_results_scale_$SCALE.txt
  time pgbench -h $HOST -U _postgresql -t 2 -c $SCALE pgbench | tee -a /tmp/pgbench_results_scale_$SCALE.txt
  cat /tmp/pgbench_results_scale_$SCALE.txt | mail -s "Results for Scale $SCALE" jr...@openvistas.net

  let SCALE=$SCALE+10
done


I can't comment how whether yours are good or bad numbers without 
knowing the actual database scale number.  When reporting a pgbench 
result, it's handy to include the complete output from one of the 
runs, just so people can see exactly what test was run.  After that 
you can just show the TPS values.  Showing the command used to 
initialize the pgbench database can also be helpful.



And here's the whole output of scale = 70.  It made it through this 
last time without panicking, but then did panic on scale=80.


CHECKPOINT
        relname        |  reltuples  |  size
-----------------------+-------------+--------
 pgbench_accounts_pkey |       7e+06 | 120 MB
 pgbench_accounts      | 6.99985e+06 | 868 MB
(2 rows)

 pg_size_pretty
----------------
 994 MB
(1 row)

transaction type: TPC-B (sort of)
scaling factor: 70
query mode: simple
number of clients: 70
number of transactions per client: 2
number of transactions actually processed: 140/140
tps = 293.081245 (including connections establishing)
tps = 293.124705 (excluding connections establishing)
transaction type: TPC-B (sort of)
scaling factor: 70
query mode: simple
number of clients: 70
number of transactions per client: 2
number of transactions actually processed: 140/140
tps = 332.154154 (including connections establishing)
tps = 332.219275 (excluding connections establishing)
transaction type: TPC-B (sort of)
scaling factor: 70
query mode: simple
number of clients: 70
number of transactions per client: 2
number of transactions actually processed: 140/140
tps = 354.983013 (including connections establishing)
tps = 355.181403 (excluding connections establishing)




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres Triggers issue

2010-02-11 Thread u235sentinel

Trigger function for an insert/update trigger should return NEW, not

NULL (OLD - for on delete trigger):
  

It's an AFTER trigger, so the return value is ignored.



According to the docs:

The return value of a BEFORE or AFTER statement-level trigger or an
AFTER row-level trigger is always ignored; it might as well be null.

http://www.postgresql.org/docs/current/static/plpgsql-trigger.html


Andreas
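As a minimal sketch of the distinction (table names invented): in a BEFORE row-level trigger the returned row matters, while in an AFTER trigger like the one under discussion the return value is ignored, so RETURN NULL is harmless:

```sql
-- Hypothetical audit table and trigger; 8.x syntax (EXECUTE PROCEDURE)
CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO change_log(tab, op, changed_at)
    VALUES (TG_TABLE_NAME, TG_OP, now());
    RETURN NULL;   -- ignored here: this fires AFTER, the row is already written
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_log
AFTER INSERT OR UPDATE ON some_table
FOR EACH ROW
EXECUTE PROCEDURE log_change();
```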
  
We found the problem.  I did some additional digging and learned the 
admin in question was trying to trigger on a schema.table that didn't 
exist!  Yeah I did slap him around a bit ;-)


remembering the schema part of the name can be important!!  ::grinz::

One further question, so we're doing inserts from a remote source (it's 
a radware system feeding us data).  Why would it stop the system from 
inserting data when it's an after statement?  I noticed a bunch of 
'connection time out' messages in our logs.


It is working so I'm good.  Still it is interesting the feed just 
stopped when the trigger was enabled.


Thanks!



[GENERAL] C function to create tsquery not working

2010-02-11 Thread Ivan Sergio Borgonovo
I'm still having trouble making this work:

http://pgsql.privatepaste.com/14a6d3075e

CREATE OR REPLACE FUNCTION tsvector_to_tsquery(IN tsv tsvector,
IN op char(1), IN weights varchar(4), IN maxpos smallint
)
RETURNS tsquery
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;

What I expect is:

tsvector_to_tsquery('java tano', '&', 'ABCD', 100) ->
java & tano

tsvector_to_tsquery('java:1A,2B tano:3C,4D', '&', 'ABC', 100) ->
java:A & java:B & tano:C

tsvector_to_tsquery('java:1A,2B tano:3C,4D', '|', 'ABC', 100) ->
java:AB | tano:C

I've made some improvement compared to previous version I've posted
but still it returns an empty tsquery.

Things that work:
- tsvector_tsquery_size returns reasonable total length of strings
  and total number of (operand + operator)
- curout is actually filled with a lexeme
- filters (wf, posmax) work



-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] pg_dump superflous warning message

2010-02-11 Thread Bill Moran
In response to Scott Marlowe scott.marl...@gmail.com:

 On Thu, Feb 11, 2010 at 10:08 AM, Bill Moran wmo...@potentialtech.com wrote:
 
  If I do:
  pg_dump -a --disable-triggers
 
  I get a warning message:
  pg_dump: NOTICE: there are circular foreign-key constraints among these 
  table(s):
  pg_dump:   directory
  pg_dump: You may not be able to restore the dump without using 
  --disable-triggers or temporarily dropping the constraints.
  pg_dump: Consider using a full dump instead of a --data-only dump to avoid 
  this problem.
 
  Of course, this was confusing as hell -- we spent a good bit of time
  trying to figure out why pg_dump wasn't honoring --disable-triggers.
 
  It turns out that it is, it's just that it prints the warning even
  when it's not applicable.
 
 But it's not a warning, it's a NOTICE.

Point taken.  However, it's still a message that pops up and confuses
the user by making him think he did something wrong.

Based on Tom's response, it's probably not worth changing, but it would
be nice if there was a reasonable way to detect when it was necessary
and when it wasn't.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] help with SQL join

2010-02-11 Thread John R Pierce

Neil Stlyz wrote:
Now... here is the problem I am having... the above SQL query is 
retrieving results from one table: sales
I have another table called customers with a couple of fields 
(customerid, and customername are two of the fields).
I want to join on the customerid in both tables to retrieve the 
customername in the query.

So I need the results to look something like this:
 
customerid | customername | TODAYS_USERS | LAST 7 DAYS | LAST 30 DAYS
bigint     | varchar      | bigint       | bigint      | bigint
-----------+--------------+--------------+-------------+-------------
      8699 | Joe Smith    |            1 |           1 |            1
      8700 | Sara Olson   |            1 |          12 |           17
      8701 | Mike Jones   |            3 |           5 |           19
 
Can someone show me how to use a JOIN with the above SQL Statement? I 
need to bring the customername field into the query from the other 
table and I have been having issues writting the query... can this 
even be done?


something like...

SELECT results.customerid, c.customername,
       count(distinct count1) AS "TODAYS_USERS",
       count(distinct count2) AS "LAST 7 DAYS",
       count(distinct count3) AS "LAST 30 DAYS"
   FROM (SELECT distinct
           case when s.modified = '2010-02-11' then s.modelnumber else null end as count1,
           case when s.modified >= '2010-02-04' then s.modelnumber else null end as count2,
           case when s.modified >= '2010-01-11' then s.modelnumber else null end as count3,
           s.customerid
         FROM sales as s WHERE s.modelnumber LIKE 'GH77%') AS results
   JOIN customers as c ON (results.customerid = c.customerid)
   GROUP BY results.customerid, c.customername





Re: [GENERAL] Postgres Triggers issue

2010-02-11 Thread Adrian Klaver

On 02/11/2010 11:08 AM, u235sentinel wrote:

Trigger function for an insert/update trigger should return NEW, not

NULL (OLD - for on delete trigger):

It's an AFTER trigger, so the return value is ignored.


According to the docs:

The return value of a BEFORE or AFTER statement-level trigger or an
AFTER row-level trigger is always ignored; it might as well be null.

http://www.postgresql.org/docs/current/static/plpgsql-trigger.html


Andreas

We found the problem. I did some additional digging and learned the
admin in question was trying to trigger on a schema.table that didn't
exist! Yeah I did slap him around a bit ;-)

remembering the schema part of the name can be important!! ::grinz::

One further question, so we're doing inserts from a remote source (it's
a radware system feeding us data). Why would it stop the system from
inserting data when it's an after statement? I noticed a bunch of
'connection time out' messages in our logs.

It is working so I'm good. Still it is interesting the feed just stopped
when the trigger was enabled.



Well that would depend on any number of factors. Without information on 
how the feed is being done or more detailed logs it is hard to say for 
sure. At a guess though, I would say it is because the 'feed' is being 
done wrapped in a transaction and when the trigger errors it aborts the 
transaction.




Thanks!




--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] error migrating database from 8.4 to 8.3

2010-02-11 Thread Andrej
On 9 February 2010 02:55, Marc Lustig m...@marclustig.com wrote:
 Due to a server issue we needed a reinstallation of Ubuntu along with a 
 downgrade to Ubuntu Hardy.


 So this is what we did:

 - copied all from /var/lib/postgresql/8.4/main/ to the new server 
 /var/lib/postgresql/8.3/main/
 - edited /var/lib/postgresql/8.3/main/postmaster.opts to correct the path

 Now trying to start the server results in

 * Error: The server must be started under the locale : which does not exist 
 any more.

 I googled and found that people ran into this problem due to different 
 architectures (32 vs 64 bit).
In this case, the architecture is definitely the same.

In addition to Filip's note: this copy of datafiles ONLY works with
a) the same architecture
and b) the same postgres major version, i.e. 8.x.y, where 8.x is the
major version.

So 8.4 to 8.3 isn't going to fly. Export/import is the only option.

Cheers,
Andrej

-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm



Re: [GENERAL] error migrating database from 8.4 to 8.3

2010-02-11 Thread Scott Ribe
 Export/import is the only option.

Or upgrade PostgreSQL to 8.4 on the downgraded server.

-- 
Scott Ribe
scott_r...@killerbytes.com
http://www.killerbytes.com/
(303) 722-0567 voice





[GENERAL] Deadlock Detected

2010-02-11 Thread David Kerr
Is there a setting that will give me a more verbose log message when a deadlock 
is detected?

currently, i just get Error: Deadlock Detected but it doesn't tell me the 
tables involved.

Thanks

Dave



Re: [GENERAL] Deadlock Detected

2010-02-11 Thread Vick Khera
On Thu, Feb 11, 2010 at 4:50 PM, David Kerr d...@mr-paradox.net wrote:
 currently, i just get Error: Deadlock Detected but it doesn't tell me the 
 tables involved.


Where do you see this?  The postgres log file surely has more details.
 Also, what version etc. etc.
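For reference, the server log normally carries a DETAIL block alongside the bare ERROR line, and a couple of settings make lock problems easier to diagnose. The GUC names below are real; the values and the process/transaction numbers in the sample message are invented:

```
# postgresql.conf -- suggested settings for lock diagnosis
log_lock_waits = on          # log whenever a lock wait exceeds deadlock_timeout
deadlock_timeout = 1s        # how long to wait before running the deadlock check
log_line_prefix = '%t [%p] ' # timestamp + PID, to match up the DETAIL lines

# Illustrative server-log output for a deadlock:
# ERROR:  deadlock detected
# DETAIL:  Process 9341 waits for ShareLock on transaction 1011; blocked by process 9287.
#          Process 9287 waits for ShareLock on transaction 1010; blocked by process 9341.
# HINT:  See server log for query details.
```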



Re: [GENERAL] Truncate and delete adds wal logs for slave to process.

2010-02-11 Thread Greg Stark
On Thu, Feb 11, 2010 at 5:47 PM, Chris Barnes
compuguruchrisbar...@hotmail.com wrote:

 Because both truncate and delete, I would think that this action would be
 put into the pg_log as a log file that can be rolled back. And, when
 complete, it would be shipped to the standby to be processed?

 To reduce this logging, shipping and processing would it be smarter to have
 the tables dropped and recreated?

Truncate will generate about the same amount of log data as dropping
and creating the table.

Delete will generate quite a bit more, but still much less than 31G.
It will also leave you needing to vacuum and reindex more often.

-- 
greg



Re: [GENERAL] [HACKERS] Bug on pg_lesslog

2010-02-11 Thread Koichi Suzuki
Thank you very much for the advice.   Yes I think it should go to
announce.   I will post a message.
--
Koichi Suzuki



2010/2/12 Karl Denninger k...@denninger.net:
 Joshua D. Drake wrote:

 On Thu, 2010-02-11 at 23:39 +0900, Koichi Suzuki wrote:


 Dear Folks;

 A very serious bug was reported on pg_lesslog.   So far, I found it's
 a bug in pg_compresslog.   Please do not use pg_compresslog and
 pg_decompresslog until improved version is uploaded.

 I strongly advise to take base backup of your database.

 I apologize for inconvenience.   I'll upload the new version ASAP.


 Should this go out on announce?


 I certainly think so.  Anyone who gets caught by surprise on this could
 quite possibly lose all their data!

 I (fortunately) caught it during TESTING of my archives - before I needed
 them.

 -- Karl Denninger





[GENERAL] left join count

2010-02-11 Thread Greenhorn
Hi All,
I'm trying to retrieve the count of notes associated for each
transactions for table energy_transactions.
But I seem to be getting (after the join) the sum of amount x count of
notes.  Can someone enlighten me with this problem?
Here's the query that I am working with.

select
  energy_accounts_id, count(note)
  ,sum(case when t.fin_year = 2010 and t.fin_quarter = 1
  then t.total_amount else 0 end) as amount_current
  ,sum(case when t.fin_year = 2009 and t.fin_quarter = 1
  then t.total_amount else 0 end) as amount_last

from energy_transactions t
 left join energy_notes n on (t.id = n.energy_transactions_id)

group by energy_accounts_id, total_amount

Thanks in advance :)



Re: [GENERAL] left join count

2010-02-11 Thread Richard Huxton

On 11/02/10 22:53, Greenhorn wrote:

But I seem to be getting (after the join) the sum of amount x count of
notes.  Can someone enlighten me with this problem?



select
   energy_accounts_id, count(note)
   ,sum(case when t.fin_year = 2010 and t.fin_quarter = 1
   then t.total_amount else 0 end) as amount_current
   ,sum(case when t.fin_year = 2009 and t.fin_quarter = 1
   then t.total_amount else 0 end) as amount_last
from energy_transactions t
  left join energy_notes n on (t.id = n.energy_transactions_id)
group by energy_accounts_id, total_amount


If you were to eliminate the group by and aggregates you would see one 
row for each match either side of the join. So - if t.id=123 had three 
notes then it would be repeated three times, with the details of each 
note. As a result, so is t.total_amount repeated three times. When you 
sum(t.total_amount) you will get three times the value you expected.


How to solve this? Split the two parts of the query and join their 
results. Something like:


SELECT
  t.energy_accounts_id,
  sum(coalesce(nc.note_count,0)) as note_count,
  sum(...) as amount_current,
  sum(...) as amount_last
FROM
  energy_transactions t
LEFT JOIN (
  SELECT energy_transactions_id AS id, count(*) AS note_count
  FROM energy_notes
  GROUP BY energy_transactions_id
) AS nc
ON t.id = nc.id
GROUP BY ...


The idea is that the subquery contains only one row for each id on the 
other side of the join.
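The effect is easy to reproduce with throwaway data (table names taken from the thread, values invented):

```sql
CREATE TEMP TABLE energy_transactions (id int, energy_accounts_id int, total_amount numeric);
CREATE TEMP TABLE energy_notes (energy_transactions_id int, note text);
INSERT INTO energy_transactions VALUES (123, 1, 100);
INSERT INTO energy_notes VALUES (123, 'a'), (123, 'b'), (123, 'c');

-- Naive join: total_amount is repeated once per note, so the sum is 300
SELECT t.energy_accounts_id, count(n.note) AS notes, sum(t.total_amount) AS amount
FROM energy_transactions t
LEFT JOIN energy_notes n ON t.id = n.energy_transactions_id
GROUP BY t.energy_accounts_id;

-- Pre-aggregating the notes leaves one row per transaction: the sum is 100
SELECT t.energy_accounts_id,
       sum(coalesce(nc.note_count, 0)) AS notes,
       sum(t.total_amount) AS amount
FROM energy_transactions t
LEFT JOIN (SELECT energy_transactions_id AS id, count(*) AS note_count
           FROM energy_notes
           GROUP BY energy_transactions_id) AS nc ON t.id = nc.id
GROUP BY t.energy_accounts_id;
```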


HTH
--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Truncate and delete adds wal logs for slave to process.

2010-02-11 Thread Alvaro Herrera
Chris Barnes wrote:

 Because both truncate and delete, I would think that this action would
 be put into the pg_log as a log file that can be rolled back. And,
 when complete, it would be shipped to the standby to be processed?

You would be wrong -- truncate does not log the full data, only the fact
that a truncate happened.  In order to be able to roll it back, the
underlying file is kept around until transaction commit.
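A quick way to see this (table name invented): the truncation is transactional, but rolling it back just keeps the old relation file rather than replaying row-by-row undo:

```sql
BEGIN;
TRUNCATE big_table;              -- WAL records only the fact of the truncate
SELECT count(*) FROM big_table;  -- 0 inside the transaction
ROLLBACK;                        -- the pre-truncate file is simply retained
SELECT count(*) FROM big_table;  -- original row count again
```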

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



[GENERAL] Alter index .. set tablespace uses ACCESS EXCLUSIVE lock?

2010-02-11 Thread Francisco Reyes
The alter index page does not show the lock mode, but it seems it is an 
ACCESS EXCLUSIVE.


Wouldn't an EXCLUSIVE lock be more appropriate and remove the index from 
planner consideration?




Re: [GENERAL] Deadlock Detected

2010-02-11 Thread David Kerr
On Thu, Feb 11, 2010 at 05:01:37PM -0500, Vick Khera wrote:
- On Thu, Feb 11, 2010 at 4:50 PM, David Kerr d...@mr-paradox.net wrote:
-  currently, i just get Error: Deadlock Detected but it doesn't tell me the 
tables involved.
- 
- 
- Where do you see this?  The postgres log file surely has more details.
-  Also, what version etc. etc.

you're right, my mistake. Plenty of info in the log (where I was looking, but 
not closely enough).

thanks

Dave



Re: [GENERAL] left join count

2010-02-11 Thread Greenhorn
On 12 February 2010 10:28, Richard Huxton d...@archonet.com wrote:
 On 11/02/10 22:53, Greenhorn wrote:

 But I seem to be getting (after the join) the sum of amount x count of
 notes.  Can someone enlighten me with this problem?

 select
   energy_accounts_id, count(note)
   ,sum(case when t.fin_year = 2010 and t.fin_quarter = 1
       then t.total_amount else 0 end) as amount_current
   ,sum(case when t.fin_year = 2009 and t.fin_quarter = 1
       then t.total_amount else 0 end) as amount_last
 from energy_transactions t
  left join energy_notes n on (t.id = n.energy_transactions_id)
 group by energy_accounts_id, total_amount

 If you were to eliminate the group by and aggregates you would see one row
 for each match either side of the join. So - if t.id=123 had three notes
 then it would be repeated three times, with the details of each note. As a
 result, so is t.total_amount repeated three times. When you
 sum(t.total_amount) you will get three times the value you expected.

 How to solve this? Split the two parts of the query and join their results.
 Something like:

 SELECT
  t.energy_accounts_id,
  sum(coalesce(nc.note_count,0)) as note_count,
  sum(...) as amount_current,
  sum(...) as amount_last
 FROM
  energy_transactions t
 LEFT JOIN (
  SELECT energy_transactions_id AS id, count(*) AS note_count
  FROM energy_notes
  GROUP BY energy_transactions_id
 ) AS nc
 ON t.id = nc.id
 GROUP BY ...


 The idea is that the subquery contains only one row for each id on the other
 side of the join.

 HTH
 --
  Richard Huxton
  Archonet Ltd


Thank you :)
Solved!



Re: [GENERAL] Postgres Triggers issue

2010-02-11 Thread u235sentinel

Adrian Klaver wrote:



Well that would depend on any number of factors. Without information 
on how the feed is being done or more detailed logs it is hard to say 
for sure. At a guess though, I would say it is because the 'feed' is 
being done wrapped in a transaction and when the trigger errors it 
aborts the transaction.




From my perspective, I only see inserts when I select * from 
pg_stat_activity.  I'm told it's a jdbc connection (don't know much 
about java myself) but it has been interesting to see that it's working 
now.  Still I did find it odd that the inserts stopped when the badly 
written trigger was there.


I appreciate the help :D



[GENERAL] Postgres standard versus Postgres Plus Advanced Server

2010-02-11 Thread Robert James
Hi.  I'm setting up a new workstation (Win7 64bit Quad Core 4GB) with
Postgres, for development work, and trying to pick which version I
should install.  Most of the time, Postgres is dormant - I'm not using
it all - but when I do use it, the load can be high, and I want
maximum performance.

Is there any reason to use standard Postgres over Postgres Plus
Advanced Server? My understanding is Plus adds some various tools
like standard database connectors, and that Advanced Server adds
Oracle compatibility (which I won't use) and performance improvements
(which are always welcome).  Why not get those performance
improvements? Will they cause Postgres to use more resources when
dormant? Why not just go with Plus Advanced?

Thanks!



Re: [GENERAL] Postgres Triggers issue

2010-02-11 Thread u235sentinel

Adrian Klaver wrote:

On Thursday 11 February 2010 1:57:39 am Albe Laurenz wrote:
  

u235sentinel wrote:


I have a strange problem we noticed the other day with
triggers.  We're
running 8.3.3 on Solaris 10 (intel) and have a feed that comes in
regularly to populate a table we're working on.  The feed works just
fine inserting rows however the following trigger stops the feed until
we remove the trigger.  Any thoughts on what I'm doing wrong here?

Thanks!

---

CREATE OR REPLACE FUNCTION r.m_t()
RETURNS trigger AS
$BODY$
BEGIN
 INSERT INTO temp_m_t VALUES (NEW.*,1+1);
RETURN NULL;
END;
$BODY$
LANGUAGE 'plpgsql';


CREATE TRIGGER tafter
AFTER INSERT OR UPDATE
ON r.m_a
FOR EACH ROW
EXECUTE PROCEDURE r.m_t();
  

What do you mean stops the feed?

Can you describe the behaviour in database terms?
What exactly happens, and how does it differ from what you expect?
Are there error messages? If yes, could you quote them?

Yours,
Laurenz Albe



In addition to the above I am not quite sure about this:

INSERT INTO temp_m_t VALUES (NEW.*,1+1)

Are you trying to have an incrementing number for the last value? As it stands 
you are always going to get 2 inserted into that field. 

  
Yes this was intentional for testing purposes.  We were trying to see if 
we can do it and it worked.  Now we can get into the really fun stuff :-)


Thanks to all for their help!




Re: [GENERAL] like any in reverse?

2010-02-11 Thread Tom Lane
Gauthier, Dave dave.gauth...@intel.com writes:
 I want to find all records where any element of lst like 'j%'.
 This does not work...
 select * from foo where 'j%' like any(lst);

 Intuitively, you'd think
 select * from foo where any(lst) like 'j%';
 ... but that's a syntax error.

Yeah, the ANY has to be on the right-hand side of the operator.
What you can do for this is build yourself a reverse like
operator, ie flip the left and right arguments within the function.
I'm pretty sure there are worked-out examples in the archives
if that's not enough of a hint for you.

regards, tom lane
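A worked-out version of that hint might look like this (the function and operator names are invented; any free operator name will do). The function just flips the arguments so the pattern can sit on the left of ANY:

```sql
CREATE FUNCTION reverse_like(pattern text, val text) RETURNS boolean AS
$$ SELECT $2 LIKE $1 $$ LANGUAGE sql IMMUTABLE STRICT;

CREATE OPERATOR ^~ (PROCEDURE = reverse_like, LEFTARG = text, RIGHTARG = text);

-- Pattern on the left, array elements on the right:
SELECT * FROM foo WHERE 'j%' ^~ ANY (lst);  -- true when any element LIKE 'j%'
```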



[GENERAL] like any in reverse?

2010-02-11 Thread Gauthier, Dave
Hi:

create table foo (lst text[]);
insert into foo (lst) values (array['jack','fred','jen','sue']);

I want to find all records where any element of lst like 'j%'.

This does not work...

select * from foo where 'j%' like any(lst);

Intuitively, you'd think

select * from foo where any(lst) like 'j%';

... but that's a syntax error.


Can this be done?

Thanks in Advance !