Re: [GENERAL] Postgresql 9.1 logging

2012-01-05 Thread Birta Levente

On 04/01/2012 16:15, Andreas Kretschmer wrote:

Birta Levente <blevi.li...@gmail.com> wrote:


Hi all

I use postgresql 9.1.2 on centos 6.2 and I want to use pgfouine, but
characters like #011 and #015 appear in my log file and pgfouine can't
handle them.
Can I configure the server or rsyslog to log without these characters, or
do I need to filter them separately?

thanks
Levi

the log section from my configuration file:

Set lc_messages = 'C' in your postgresql.conf and restart/reload the server.


Andreas
Thanks for the reply, but nothing changed. When logging to stderr these 
characters do not appear (even with lc_messages='en_US.utf8'), but pgfouine 
recommends using syslog.
It's simple enough to clean them up with sed and that works fine; I am just 
curious to know whether it's possible to avoid them in the first place.
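For reference, here is roughly what that sed step looks like (untested sketch;
#011 is syslog's escape for a tab and #015 for a carriage return, and the
pgfouine invocation is only illustrative):

sed -e 's/#011/\t/g' -e 's/#015//g' postgresql.log > postgresql.clean.log
pgfouine.php -file postgresql.clean.log > report.html

If I recall the directive correctly, rsyslog can also be told not to escape
control characters at all via $EscapeControlCharactersOnReceive off.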


Thanks anyway

Levi






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help... lost database after upgrade from 9.0 to 9.1

2012-01-05 Thread Radosław Smogura

On Wed, 4 Jan 2012 15:50:25 +0100, Bruno Boettcher wrote:

Hello!

just made a stupid move... upgraded a working system without
checking if the backup was ok

so i end up with a debian system upgraded to 9.1 without
converting the database, and a scrambled backup which is totally
unusable

i tried to start the old tree with
pg_ctlcluster 9.0 main start
Error: could not exec   start -D /var/lib/postgresql/9.0/main -l
/var/log/postgresql/postgresql-9.0-main.log -s -o  -c
config_file=/etc/postgresql/9.0/main/postgresql.conf :


so i tried to copy the old 9.0 tree to a machine with a still working 
9.0

postgres, but it stops with
Starting PostgreSQL 9.0 database server: mainError: could not exec
/usr/lib/postgresql/9.0/bin/pg_ctl /usr/lib/postgresql/9.0/bin/pg_ctl
start -D /var/lib/postgresql/9.0/main -l
/var/log/postgresql/postgresql-9.0-main.log -s -o -c
config_file=/etc/postgresql/9.0/main/postgresql.conf : ... failed!
 failed!


 so what can i do to extract the data of that tree and feed it into 
the

 9.1 tree?

 thanks in advance!
--
ciao bboett
==
bbo...@adlp.org
http://inforezo.u-strasbg.fr/~bboett/
===
If you still have your data directory, copy it first (just for backup) and 
install the previous version of PostgreSQL. You can do that via:

* apt (I use Gentoo myself)
* some precompiled packages
* compiling a compatible previous version from source

In the last two cases, and probably in the first too, you may need to start 
the PG server manually, invoking it with the appropriate command-line 
parameters (mainly the cluster directory, -D).


Personally I was in such a situation and did exactly the above, plus I 
changed the port number in the configuration to a different one, launched 
PostgreSQL, took a backup by connecting to the server on the new port, and 
then imported that backup into the new version.
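Roughly, that procedure looks like this (untested sketch; paths and port are
assumptions based on a Debian-style layout):

cp -a /var/lib/postgresql/9.0/main /var/lib/postgresql/9.0/main.bak  # safety copy
/usr/lib/postgresql/9.0/bin/pg_ctl start -D /var/lib/postgresql/9.0/main -o "-p 5433"
pg_dumpall -p 5433 > old_cluster.sql          # dump from the old 9.0 server on the spare port
psql -p 5432 -d postgres -f old_cluster.sql   # load into the running 9.1 cluster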


Regards,
Radek
http://softperience.eu

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help... lost database after upgrade from 9.0 to 9.1

2012-01-05 Thread Bruno Boettcher
On Wed, Jan 04, 2012 at 10:06:53AM -0800, Adrian Klaver wrote:
Hello!

 So when you are running pg_ctlcluster 9.0 main start what user are you 
 running 
 as?
tried as root...
 
 Have you tried to directly start the 9.0 cluster as the postgres user?:
just tried, same error

postgres@agenda:~$ pg_ctlcluster 9.0 main startError: could not exec
start -D /var/lib/postgresql/9.0/main -l
/var/log/postgresql/postgresql-9.0-main.log -s -o  -c
config_file=/etc/postgresql/9.0/main/postgresql.conf : 

 usr/lib/postgresql/9.0/bin/pg_ctl /usr/lib/postgresql/9.0/bin/pg_ctl start  
 -D\ 
 /var/lib/postgresql/9.0/main\
  -l /var/log/postgresql/postgresql-9.0-main.log -s -o -c\
 config_file=/etc/postgresql/9.0/main/postgresql.conf
 
 
  
   Get the 9.0 server running.
ok, got it, from your lines i saw that the binaries of the server were
removed
so i copied them over from the other server, and got the server running!
pfo.. thanks a lot!

any suggestion how to keep informed about dying disks? (as you might
have guessed, i am only a dev playing sys-admin...)

-- 
ciao bboett
==
bbo...@adlp.org
http://inforezo.u-strasbg.fr/~bboett/
===

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help... lost database after upgrade from 9.0 to 9.1

2012-01-05 Thread Adrian Klaver
On Wednesday, January 04, 2012 11:42:01 pm Bruno Boettcher wrote:
 On Wed, Jan 04, 2012 at 10:06:53AM -0800, Adrian Klaver wrote:
 Hello!
 

 
 ok, got it, from your lines i saw that the binaries of the server were
 removed
 so i copied them over from the other server, and got the server running!
 pfo.. thanks a lot!

Glad to hear:)

 
 any suggestion how to keep informed about dying disks? (as you might
 have guessed, i am only a dev playing sys-admin...)

http://sourceforge.net/apps/trac/smartmontools/wiki
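
For example (a quick sketch; assumes smartmontools is installed and the disk
is /dev/sda):

smartctl -H /dev/sda      # one-line SMART health verdict
smartctl -a /dev/sda      # full attribute and error-log dump

# /etc/smartd.conf: monitor all disks and mail warnings to root
DEVICESCAN -a -m root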

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Radial searches of cartesian points?

2012-01-05 Thread thatsanicehatyouhave
Hi,

I have a data set of several hundred thousand points. Each point is saved as a 
three dimensional coordinate, i.e. (x, y, z). What I'd like to do is given a 
point in that space, get a list of all of the points in the table within some 
radius.

I'm familiar with the q3c package that does this for points that lie on a 
sphere, but is there something comparable for radial searches on 3D cartesian 
points? Speed is definitely an issue given the number of points I have.

Thanks for any suggestions!

Cheers,
Demitri


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Radial searches of cartesian points?

2012-01-05 Thread Merlin Moncure
On Thu, Jan 5, 2012 at 11:01 AM,  thatsanicehatyouh...@mac.com wrote:
 Hi,

 I have a data set of several hundred thousand points. Each point is saved as 
 a three dimensional coordinate, i.e. (x, y, z). What I'd like to do is given 
 a point in that space, get a list of all of the points in the table within 
 some radius.

 I'm familiar with the q3c package that does this for points that lie on a 
 sphere, but is there something comparable for radial searches on 3D cartesian 
 points? Speed is definitely an issue given the number of points I have.

 Thanks for any suggestions!

see:
http://www.postgresql.org/docs/9.1/interactive/cube.html

and pay special attention to the gist indexing portions.  cube only
indexes box operations, but you can cull down to the sphere by applying
the 3d distance formula to points between the inner and outer bounding cubes.
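
For example, a rough sketch of that approach (table and column names are made
up; radius 10 around the origin):

CREATE EXTENSION cube;

CREATE TABLE points (id serial PRIMARY KEY, c cube);  -- c holds (x, y, z)
CREATE INDEX points_c_gist ON points USING gist (c);

SELECT id
FROM points
WHERE c <@ cube(ARRAY[-10.0, -10.0, -10.0], ARRAY[10.0, 10.0, 10.0])  -- indexable box test
  AND cube_distance(c, cube(ARRAY[0.0, 0.0, 0.0])) <= 10.0;           -- trim the corners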

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] handling out of memory conditions when fetching row descriptions

2012-01-05 Thread 'Isidor Zeuner'

'Isidor Zeuner' postgre...@quidecco.de writes:
 using the latest git source code, I found that libpq will let the
 connection stall when getRowDescriptions breaks on an out of memory
 condition. I think this should better be handled differently to allow
 application code to handle such situations gracefully.

The basic assumption in there is that if we wait and retry, eventually
there will be enough memory.


I think the greatest problem with that approach is that there is no
(at least no documented) way for the application to find out that it
should be releasing memory.


 I agree that that's not ideal, since the
application may not be releasing memory elsewhere.  But what you propose
doesn't seem like an improvement: you're converting a maybe-failure into
a guaranteed-failure, and one that's much more difficult to recover from
than an ordinary query error.



I think it was an improvement considering that it puts the application
code back in control. Given that the application has some way to
handle connection and query errors, it can do something reasonable
about the situation. Before, the application had no way to find out
there is a (maybe-)failure at all.


Also, this patch breaks async operation, in which a failure return from
getRowDescriptions normally means that we have to wait for more data
to arrive.  The test would really need to be inserted someplace else.

In any case, getRowDescriptions is really an improbable place for an
out-of-memory to occur: it would be much more likely to happen while
absorbing the body of a large query result.


My assumption was that there is not much logic to handle such
situations because it is improbable.

I am currently using PostgreSQL under memory-constrained conditions,
so I may come back with more such cases if they surface.


 There already is some logic
in getAnotherTuple for dealing with that case, which I suggest is a
better model for what to do than break the connection.  But probably
making things noticeably better here would require going through all
the code to check for other out-of-memory cases, and developing some
more uniform method of representing an already-known-failed query
result.  (For instance, it looks like getAnotherTuple might not work
very well if it fails to get memory for one tuple and then succeeds
on later ones.  We probably ought to have some explicit state that
says we are absorbing the remaining data traffic for a query result
that we already ran out of memory for.)



I like this approach. I changed the out-of-memory handling to switch
to a PGASYNC_MEMORY_FULL state, which will skip all messages until the
command is complete. Patch is attached.

Best regards,

Isidor Zeuner

diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index fb4033d..3ca2e6b 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -157,10 +157,24 @@ pqParseInput3(PGconn *conn)
 		}
 		else if (conn->asyncStatus != PGASYNC_BUSY)
 		{
+			if (conn->asyncStatus == PGASYNC_MEMORY_FULL)
+			{
+				if (id == 'C')
+				{
+					pqClearAsyncResult(conn);
+					conn->result = PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR);
+					printfPQExpBuffer(&conn->errorMessage,
+							  libpq_gettext("out of memory during parsing\n"));
+					pqSaveErrorResult(conn);
+					conn->asyncStatus = PGASYNC_READY;
+				}
+				conn->inCursor += msgLength;
+			}
 			/* If not IDLE state, just wait ... */
-			if (conn->asyncStatus != PGASYNC_IDLE)
+			else if (conn->asyncStatus != PGASYNC_IDLE)
+			{
 				return;
-
+			}
 			/*
 			 * Unexpected message in IDLE state; need to recover somehow.
 			 * ERROR messages are displayed using the notice processor;
@@ -170,7 +184,7 @@ pqParseInput3(PGconn *conn)
 			 * it is about to close the connection, so we don't want to just
 			 * discard it...)
 			 */
-			if (id == 'E')
+			else if (id == 'E')
 			{
 				if (pqGetErrorNotice3(conn, false /* treat as notice */ ))
 					return;
@@ -268,9 +282,14 @@ pqParseInput3(PGconn *conn)
 	if (conn->result == NULL ||
 		conn->queryclass == PGQUERY_DESCRIBE)
 	{
+		int const before = conn->inCursor;
 		/* First 'T' in a query sequence */
 		if (getRowDescriptions(conn))
-			return;
+		{
+			conn->asyncStatus = PGASYNC_MEMORY_FULL;
+			conn->inCursor = before + msgLength;
+			break;
+		}
 
 		/*
 		 * If we're doing a Describe, we're ready to pass the
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 31b517c..333fc7b 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -219,7 +219,8 @@ typedef enum
 	PGASYNC_READY,				/* result ready for PQgetResult */
 	PGASYNC_COPY_IN,			/* Copy In data transfer in progress */
 	PGASYNC_COPY_OUT,			/* Copy Out data transfer in progress */
-	PGASYNC_COPY_BOTH			/* Copy In/Out data transfer in progress */
+	PGASYNC_COPY_BOTH,			/* Copy In/Out data transfer in progress */
+	PGASYNC_MEMORY_FULL			/* discarding data for an out-of-memory result */

Re: [GENERAL] Need Help : PostgreSQL Installation on Windows 7 64 bit

2012-01-05 Thread Shawn Eckley
Wendi/Craig

I have seen an installation issue very similar to this. It has been happening 
on Windows 7 x86 systems. We are using Postgres as the DB for our application 
and I have incorporated the Postgres installer into our installer; we also use 
BitRock. We are using Postgres 8.4.4. It had been installing perfectly fine, but 
just recently things started going wrong on a few systems. After some research 
I found that the Postgres installer was throwing an error that it can't find 
the postgresql.conf file. When I looked, I found the data folder empty. 
I also discovered that the postgres user was never created. The one thing I 
have verified is that it has something to do with the fact that the Windows 
account I was running the install from has a space in it, e.g. MSI Test. 
This is still happening with the most recent installer, Postgres 9.1.2.1.

We have been installing this on both Windows 7 x86 and x64 systems: Home, Pro 
and Ultimate. The Professional version is in a network setting connected to an 
internal network.


Shawn M Eckley
Software Engineer
Stonewedge Corporation
240 Andover st.
Wilmington, MA 01887
978-203-0642 Ext. 113
seck...@stonewedge.netmailto:seck...@stonewedge.net



This electronic message is intended only for the use of the individual or 
entity named above and may contain information which is privileged and/or 
confidential. If you are not the intended recipient, be aware that any 
disclosure, copying, distribution, dissemination or use of the contents of this 
message is prohibited. If you have received this message in error, please 
notify the sender immediately.


[GENERAL] JOIN column maximum

2012-01-05 Thread Lee Hachadoorian
How is the number of columns in a join determined? When I combine somewhere
around 90 tables in a JOIN, the query returns:

ERROR: joins can have at most 32767 columns
SQL state: 54000

I'm sure most people will say "Why the hell are you joining 90 tables?"
I've asked this list before for advice on how to work with the
approximately 23,000 column American Community Survey dataset, and based on
previous responses I am trying to combine 117 sequences (basically
vertical partitions of the dataset) into one table using array columns. Of
course, I can build this up by joining a few tables at a time, so the
question is mostly curiosity, but I haven't been able to find this
documented anywhere. Moreover, the 32767 limit doesn't map to any
immediately intuitive transformation of 90, like squaring (which is much
too low) or factorial (which is much too high).

Any insight?

Regards,
--Lee

-- 
Lee Hachadoorian
PhD, Earth & Environmental Sciences (Geography)
Research Associate, CUNY Center for Urban Research
http://freecity.commons.gc.cuny.edu/


Re: [GENERAL] JOIN column maximum

2012-01-05 Thread Tom Lane
Lee Hachadoorian lee.hachadoor...@gmail.com writes:
 How is the number of columns in a join determined? When I combine somewhere
 around 90 tables in a JOIN, the query returns:

 ERROR: joins can have at most 32767 columns

It's the sum of the number of columns in the base tables.

 I'm sure most people will say Why the hell are you joining 90 tables.

Not only that, but why are you working with over-300-column tables?
Seems like your schema design needs rethinking.

 I've asked this list before for advice on how to work with the
 approximately 23,000 column American Community Survey dataset,

Are there really 23000 populated values in each row?  I hesitate to
suggest an EAV approach, but it kinda seems like you need to go in that
direction.  You're never going to get decent performance out of a schema
that requires 100-way joins, even if you avoid bumping up against hard
limits.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] JOIN column maximum

2012-01-05 Thread Lee Hachadoorian

On 01/05/2012 06:18 PM, Tom Lane wrote:



ERROR: joins can have at most 32767 columns

It's the sum of the number of columns in the base tables.

That makes sense. I totally misunderstood the message to be referring to 
the number of joined columns rather than table columns.



I've asked this list before for advice on how to work with the
approximately 23,000 column American Community Survey dataset,

Are there really 23000 populated values in each row?  I hesitate to
suggest an EAV approach, but it kinda seems like you need to go in that
direction.  You're never going to get decent performance out of a schema
that requires 100-way joins, even if you avoid bumping up against hard
limits.
Many of the smaller geographies, e.g. census tracts, do in fact have 
data for the vast majority of the columns. I am trying to combine it all 
into one table to avoid the slowness of multiple JOINs (even though in 
practice I'm never joining all the tables at once). EAV sounds correct 
in terms of normalization, but isn't it usually better performance-wise 
to store write-once/read-many data in a denormalized (i.e. flattened) 
fashion? One of these days I'll have to try to benchmark some different 
approaches, but for now I'm planning on using array columns, with each 
"sequence" (in the Census sense, not the Postgres sense) of 200+ 
variables in its own array rather than its own table.
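
A rough sketch of what I have in mind (hypothetical names; one array column
per Census sequence instead of one table per sequence):

CREATE TABLE acs_by_tract (
    geoid   text PRIMARY KEY,
    seq0001 double precision[],  -- ~200 estimates from sequence 1
    seq0002 double precision[]   -- ~200 estimates from sequence 2
    -- ... one array column per sequence
);

-- pull one variable (position 5 in sequence 1) with no join at all
SELECT geoid, seq0001[5] FROM acs_by_tract;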


--Lee

--
Lee Hachadoorian
PhD, Earth & Environmental Sciences (Geography)
Research Associate, CUNY Center for Urban Research
http://freecity.commons.gc.cuny.edu


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] JOIN column maximum

2012-01-05 Thread Scott Marlowe
On Thu, Jan 5, 2012 at 6:10 PM, Lee Hachadoorian
lee.hachadoor...@gmail.com wrote:


 Many of the smaller geographies, e.g. census tracts, do in fact have data
 for the vast majority of the columns. I am trying to combine it all into one
 table to avoid the slowness of multiple JOINs (even though in practice I'm
 never joining all the tables at once). EAV sounds correct in terms of
 normalization, but isn't it usually better performance-wise to store
 write-once/read-many data in a denormalized (i.e. flattened) fashion? One of
 these days I'll have to try to benchmark some different approaches, but for
 now planning on using array columns, with each sequence (in the Census
 sense, not the Postgres sense) of 200+ variables in its own array rather
 than its own table.

Are you using arrays or hstore?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] JOIN column maximum

2012-01-05 Thread Darren Duncan

Lee Hachadoorian wrote:

On 01/05/2012 06:18 PM, Tom Lane wrote:

Are there really 23000 populated values in each row?  I hesitate to
suggest an EAV approach, but it kinda seems like you need to go in that
direction.  You're never going to get decent performance out of a schema
that requires 100-way joins, even if you avoid bumping up against hard
limits.
Many of the smaller geographies, e.g. census tracts, do in fact have 
data for the vast majority of the columns. I am trying to combine it all 
into one table to avoid the slowness of multiple JOINs (even though in 
practice I'm never joining all the tables at once). EAV sounds correct 
in terms of normalization, but isn't it usually better performance-wise 
to store write-once/read-many data in a denormalized (i.e. flattened) 
fashion? One of these days I'll have to try to benchmark some different 
approaches, but for now planning on using array columns, with each 
sequence (in the Census sense, not the Postgres sense) of 200+ 
variables in its own array rather than its own table.


EAV is not necessarily more correct than what you're doing.

The most correct solution is one where your database schema defines, and the 
DBMS enforces, all of the constraints or business rules on your data, so that 
you can not put something in the database that violates the business rules.


Traditional EAV, if you're talking about the common binary table of 
unconstrained field-name,field-value pairs, is not an improvement.


A more correct solution is to use different columns for things with different 
business rules or data types.  If the DBMS can't handle this then that is 
grounds for improving the DBMS.


There's no reason that joins *have* to be slow, and in some DBMS designs you can 
join in linear time; it's all about how you implement.


This all being said, 23K values per row just sounds wrong, and I can't imagine 
any census forms having that many details.


Do you, by chance, have multiple values of the same type that are in different 
fields, e.g. telephone_1, telephone_2 or child_1, child_2, etc.?  You should take 
any of those and collect them into array-typed fields, or separate tables with 
just telephone or child columns.  Or do you, say, have a set of coordinates in 
separate fields?  Or you may have other kinds of redundancy within single rows 
that are best normalized into separate rows.
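
For example (hypothetical tables, just to illustrate the two options):

-- a child table instead of telephone_1, telephone_2, ... columns
CREATE TABLE person_telephone (
    person_id integer NOT NULL REFERENCES person (id),
    telephone text NOT NULL
);

-- or keep them in the same row as an array-typed field
ALTER TABLE person ADD COLUMN telephones text[];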


With 23K values, these probably have many mutual associations, and you could 
split that table into a bunch of other ones where columns that relate more 
closely together are collected.


What I said in the last couple of paragraphs is probably the best thing to fix 
first, so that you end up with both a better design and one that performs well 
on the DBMS you have.


-- Darren Duncan

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Running multiple versions

2012-01-05 Thread Nishad Prakash


I'd like to keep my current installation (8.3.3) alive and running while 
installing and running 9.1.2 on the same server.  Can I do this using 
only the existing postgres superuser account?  I'd want to create two 
different initdb locations, and run the versions on different ports, of 
course, but it seems like the superuser's LD_LIBRARY_PATH would be an 
issue.  Whichever .../lib dir it points to would be the only effective 
one, and the other version's programs wouldn't work.  Is there a way 
around this?  Or am I thinking about it all wrong?


nishad




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Radial searches of cartesian points?

2012-01-05 Thread Demitri Muna
Hi,

On Jan 5, 2012, at 12:54 PM, Merlin Moncure wrote:

 see:
 http://www.postgresql.org/docs/9.1/interactive/cube.html
 
 and pay special attention to gist indexing portions.  cube only
 indexes box operations, but you can cull the sphere using 3d distance
 formula for points between inner and outer bounding cube.

Thanks for the pointer, Merlin. I had been looking at that, and the bounding 
boxes idea is good. I'm a little concerned about the very large number of 
trigonometric calculations this will lead to. For a single, occasional search 
this would not be an issue, but I'm going to be performing this search 
thousands of times.

Is anyone aware of a datatype or third-party implementation that will index in 
three dimensions radially, or what it would take to accomplish this?

Cheers,
Demitri

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running multiple versions

2012-01-05 Thread Scott Marlowe
On Thu, Jan 5, 2012 at 9:48 PM, Nishad Prakash praka...@uci.edu wrote:

 I'd like to keep my current installation (8.3.3) alive and running while
 installing and running 9.1.2 on the same server.  Can I do this using only
 the existing postgres superuser account?  I'd want to create two different
 initdb locations, and run the versions on different ports, of course, but it
 seems like the superuser's LD_LIBRARY_PATH would be an issue.  Whichever
 .../lib dir it points to, would be the only effective one, and the other
 version's programs wouldn't work.  Is there a way around this?  Or am I
 thinking about it all wrong?

So, assuming all this is done in regular userland, built from source
etc., you'll need to do a couple of things.  Each version needs to be
built with a different --prefix.  I prefer something like
--prefix=/home/myusername/pg83 and --prefix=/home/myusername/pg91 and
so on.  This will put the bin, lib etc. directories in your home dir.  Then,
in order to work in one setup or the other, you'll need to
set LD_LIBRARY_PATH and PGDATA accordingly for each instance.  It's
often easiest to just have a simple bash script per instance that sets
those, so you can switch from one to the other by sourcing the right
script.  So what might be in one would be something like:

# pg8.3.x env file:
export PGDATA=/home/myuserdir/pg83/data
export PATH=/usr/bin:/home/myuserdir/pg83/bin
export LD_LIBRARY_PATH=/home/myuserdir/pg83/lib

# pg9.1.x env file:
export PGDATA=/home/myuserdir/pg91/data
export PATH=/usr/bin:/home/myuserdir/pg91/bin
export LD_LIBRARY_PATH=/home/myuserdir/pg91/lib

So you'd source one file or the other to work with that database.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] function return update count

2012-01-05 Thread Kevin Duffy
Hello:

I am trying to get a function to return the count of the rows updated within
the function.
As in the following, I want the number of rows updated to be returned.

This is a simple update; other update statements that I need to write will
be more complicated.

CREATE OR REPLACE FUNCTION est_idio_return_stats_update()
  RETURNS integer AS

'
update est_idiosyncratic_return_stats set delta_avg_60 = avg_60 -
period_61_return,  delta_avg_last_24 = avg_last_24 - period_61_return,
delta_avg_last_18 = avg_last_18 - period_61_return,
 delta_avg_last_12 = avg_last_12 - period_61_return,
delta_avg_last_6 = avg_last_06 - period_61_return ;
'

  LANGUAGE SQL ;

The above returns the following:
ERROR:  return type mismatch in function declared to return integer
DETAIL:  Function's final statement must be SELECT or INSERT/UPDATE/DELETE
RETURNING.
CONTEXT:  SQL function est_idio_return_stats_update

** Error **

ERROR: return type mismatch in function declared to return integer
SQL state: 42P13
Detail: Function's final statement must be SELECT or INSERT/UPDATE/DELETE
RETURNING.
Context: SQL function est_idio_return_stats_update


Thanks for your kind assistance.


KD


Re: [GENERAL] Vacuum and Large Objects

2012-01-05 Thread Stefan Keller
Hi Igor
2011/12/16 Igor Neyman iney...@perceptron.com wrote:  But I think,
your problem is right here:

  running VACUUM FULL  pg_largeobject

 If you are running VACUUM FULL ... on the table, you should follow it with 
 the REINDEX TABLE ..., at least on PG versions prior to 9.0.

I'm pretty sure that VACUUM FULL builds new indexes. That's at least
how I understand the docs, especially the first tip here:
http://www.postgresql.org/docs/9.0/static/routine-vacuuming.html

Yours, Stefan


2011/12/16 Igor Neyman iney...@perceptron.com:
 From: Simon Windsor [mailto:simon.wind...@cornfield.me.uk]
 Sent: Wednesday, December 14, 2011 3:02 PM
 To: pgsql-general@postgresql.org
 Subject: Vacuum and Large Objects

 Hi

 I am having problems recovering storage from a Postgres 9.0.5 database that is 
 used to hold large XML blocks for a week, before they are archived off line.

 The main tables are partitioned in daily partitions, and these are easy to 
 manage, however the DB keeps growing despite using Vacuum (daily at 0700) and 
 autovacuum (this does not seem to run, although the process is running). The 
 system is insert only, and partitions are dropped when over 7 days of age.

 I believe the issue lies with pg_largeobject, it is split between 88 files of 
 approx. 1G each.

 The Postgres settings are default, EXCEPT

 grep ^[a-z] postgresql.conf
 listen_addresses = '*'  # what IP address(es) to listen on;
 port = 5432 # (change requires restart)
 max_connections = 1000  # (change requires restart)
 shared_buffers = 256MB  # min 128kB
 work_mem = 4MB      # min 64kB
 maintenance_work_mem = 256MB    # min 1MB
 vacuum_cost_delay = 20ms    # 0-100 milliseconds
 checkpoint_segments = 32    # in logfile segments, min 1, 16MB 
 each
 checkpoint_completion_target = 0.9  # checkpoint target duration, 0.0 - 
 1.0
 checkpoint_warning = 60s    # 0 disables
 archive_mode = off  # allows archiving to be done
 constraint_exclusion = partition    # on, off, or partition
 log_destination = 'stderr'      # Valid values are combinations of
 logging_collector = on  # Enable capturing of stderr and csvlog
 silent_mode = on    # Run server silently.
 log_checkpoints = on
 log_line_prefix = '%t %d %u '   # special values:
 log_statement = 'none'  # none, ddl, mod, all
 track_activities = on
 track_counts = on
 autovacuum = on # Enable autovacuum subprocess?  'on'
 log_autovacuum_min_duration = 250   # -1 disables, 0 logs all actions and
 autovacuum_max_workers = 3  # max number of autovacuum 
 subprocesses
 autovacuum_naptime = 3min   # time between autovacuum runs
 autovacuum_vacuum_threshold = 500   # min number of row updates before
 autovacuum_analyze_threshold = 100  # min number of row updates before
 autovacuum_vacuum_scale_factor = 0.1    # fraction of table size before vacuum
 autovacuum_analyze_scale_factor = 0.05  # fraction of table size before 
 analyze
 autovacuum_vacuum_cost_delay = 5ms  # default vacuum cost delay for
 autovacuum_vacuum_cost_limit = 200  # default vacuum cost limit for
 statement_timeout = 0   # in milliseconds, 0 is disabled
 datestyle = 'iso, dmy'
 lc_messages = 'en_GB.UTF-8' # locale for system error 
 message
 lc_monetary = 'en_GB.UTF-8' # locale for monetary 
 formatting
 lc_numeric = 'en_GB.UTF-8'  # locale for number formatting
 lc_time = 'en_GB.UTF-8' # locale for time formatting
 default_text_search_config = 'pg_catalog.english'

 Besides running VACUUM FULL  pg_largeobject;, is there a way I can get 
 autovacuum to start and clear this up?

 All the best

 Simon

 Simon Windsor
 Eml: simon.wind...@cornfield.org.uk
 Tel: 01454 617689
 Mob: 07590 324560

 There is nothing in the world that some man cannot make a little worse and 
 sell a little cheaper, and he who considers price only is that man's lawful 
 prey.




 I might be a bit late in this discussion.
 But I think, your problem is right here:

  running VACUUM FULL  pg_largeobject

 If you are running VACUUM FULL ... on the table, you should follow it with 
 the REINDEX TABLE ..., at least on PG versions prior to 9.0.

 Regards,
 Igor Neyman

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] function return update count

2012-01-05 Thread Misa Simic
You could try:

1) return UPDATE table

OR

2) use a plpgsql function instead of SQL:
UPDATE table ...;
GET DIAGNOSTICS variable = ROW_COUNT;
RETURN variable;
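
For option 2, an untested sketch based on your original function:

CREATE OR REPLACE FUNCTION est_idio_return_stats_update()
  RETURNS integer AS
$$
DECLARE
  updated_rows integer;
BEGIN
  UPDATE est_idiosyncratic_return_stats
     SET delta_avg_60      = avg_60      - period_61_return,
         delta_avg_last_24 = avg_last_24 - period_61_return,
         delta_avg_last_18 = avg_last_18 - period_61_return,
         delta_avg_last_12 = avg_last_12 - period_61_return,
         delta_avg_last_6  = avg_last_06 - period_61_return;
  GET DIAGNOSTICS updated_rows = ROW_COUNT;
  RETURN updated_rows;
END;
$$ LANGUAGE plpgsql;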

Kind regards,
Misa
Sent from my Windows Phone
--
From: Kevin Duffy
Sent: 06/01/2012 06:21
To: pgsql-general@postgresql.org
Subject: [GENERAL] function return update count

Hello:

I am try to get a function to return the count of the rows updated within
the function.
As in the following, I wan the number of rows updated to be returned.

This is a simple update, other update statements that I need to write will
be complicated.

CREATE OR REPLACE FUNCTION est_idio_return_stats_update()
  RETURNS integer AS

'
update est_idiosyncratic_return_stats set delta_avg_60 = avg_60 -
period_61_return,  delta_avg_last_24 = avg_last_24 - period_61_return,
delta_avg_last_18 = avg_last_18 - period_61_return,
 delta_avg_last_12 = avg_last_12 - period_61_return,
delta_avg_last_6 = avg_last_06 - period_61_return ;
'

  LANGUAGE SQL ;

The above returns the following:
ERROR:  return type mismatch in function declared to return integer
DETAIL:  Function's final statement must be SELECT or INSERT/UPDATE/DELETE
RETURNING.
CONTEXT:  SQL function est_idio_return_stats_update

** Error **

ERROR: return type mismatch in function declared to return integer
SQL state: 42P13
Detail: Function's final statement must be SELECT or INSERT/UPDATE/DELETE
RETURNING.
Context: SQL function est_idio_return_stats_update


Thanks for your kind assistance.


KD


Re: [GENERAL] function return update count

2012-01-05 Thread Misa Simic
Sorry,

Option 1) is wrong answer...  :)

Option 2 should work


Sent from my Windows Phone
--
From: Misa Simic
Sent: 06/01/2012 08:34
To: Kevin Duffy; pgsql-general@postgresql.org
Subject: RE: [GENERAL] function return update count

You could try:

1) return UPDATE table

OR

2) use plpsql function instead of SQL
UPDATE table
GET DIAGNOSTICS variable = ROW_COUNT
RETURN variable

Kind regards,
Misa
Sent from my Windows Phone
--
From: Kevin Duffy
Sent: 06/01/2012 06:21
To: pgsql-general@postgresql.org
Subject: [GENERAL] function return update count

Hello:

I am try to get a function to return the count of the rows updated within
the function.
As in the following, I wan the number of rows updated to be returned.

This is a simple update, other update statements that I need to write will
be complicated.

CREATE OR REPLACE FUNCTION est_idio_return_stats_update()
  RETURNS integer AS

'
update est_idiosyncratic_return_stats set delta_avg_60 = avg_60 -
period_61_return,  delta_avg_last_24 = avg_last_24 - period_61_return,
delta_avg_last_18 = avg_last_18 - period_61_return,
 delta_avg_last_12 = avg_last_12 - period_61_return,
delta_avg_last_6 = avg_last_06 - period_61_return ;
'

  LANGUAGE SQL ;

The above returns the following:
ERROR:  return type mismatch in function declared to return integer
DETAIL:  Function's final statement must be SELECT or INSERT/UPDATE/DELETE
RETURNING.
CONTEXT:  SQL function est_idio_return_stats_update

** Error **

ERROR: return type mismatch in function declared to return integer
SQL state: 42P13
Detail: Function's final statement must be SELECT or INSERT/UPDATE/DELETE
RETURNING.
Context: SQL function est_idio_return_stats_update


Thanks for your kind assistance.


KD


Re: [GENERAL] Vacuum and Large Objects

2012-01-05 Thread Guillaume Lelarge
On Fri, 2012-01-06 at 07:12 +0100, Stefan Keller wrote:
 Hi Igor
 2011/12/16 Igor Neyman iney...@perceptron.com wrote:  But I think,
 your problem is right here:
 
   running VACUUM FULL  pg_largeobject
 
  If you are running VACUUM FULL ... on the table, you should follow it 
  with the REINDEX TABLE ..., at least on PG versions prior to 9.0.
 
 I'm pretty sure that VACUUM FULL builds new indexes. That's at least
 of how I understand the docs, especially the first tip here
 http://www.postgresql.org/docs/9.0/static/routine-vacuuming.html
 

Before 9.0, VACUUM FULL required a REINDEX afterwards if you want to
keep decent performances.

With 9.0, it is no longer required because the new VACUUM FULL doesn't
bloat the index anymore.

So, in a sense, you were both right :) The documentation you're
referring to is the 9.0 manual. And Igor specified that one needs to
REINDEX after VACUUM FULL for any release prior to 9.0. Both are right.
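
In other words, roughly:

-- 8.4 and earlier: VACUUM FULL compacts the table but tends to bloat its indexes
VACUUM FULL pg_largeobject;
REINDEX TABLE pg_largeobject;

-- 9.0 and later: VACUUM FULL rewrites the table and its indexes, no REINDEX needed
VACUUM FULL pg_largeobject;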


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
PostgreSQL Sessions #3: http://www.postgresql-sessions.org


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running multiple versions

2012-01-05 Thread prakashn

 So assuming all this is done in regular userland. built from source
 etc, you'll need to do a couple things.  Each version needs to be
 built with a different --prefix.  I prefer something like
 --prefix=/home/myusername/pg83 and --prefix=/home/myusername/pg91 and
 so on.  This will put the bin, lib etc stuff in your home dir.  Then
 in order to do work in one setup or the other, make a you'll need to
 set LD_LIBRARY_PATH and PGDATA accordingly for each instance.

Thanks for your reply, Scott.  Your suggestion sounds like it would be
fine, but I have a few more questions now.  The docs at
http://www.postgresql.org/docs/9.1/interactive/upgrading.html recommend
using the new version's pg_dumpall to back up the existing cluster for
reload.  In light of your reply, it seems you pretty much *have* to change
the pg superuser's LD_LIBRARY_PATH first.  If that's the case, it seems
the docs should mention that you need to do this, as it's somewhat
non-obvious.

Also, I have root access, so if there's a better solution outside of
regular user land, I'd like to know it.

Finally, the installation docs
(http://www.postgresql.org/docs/9.1/interactive/install-procedure.html)
mention relocatable installs and --disable-rpath.  Would *that* be a way
to configure the new version so that calling its .../bin/pg_dumpall (and
other things in bin) would just magically use the right .../lib/
directory?
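
(For what it's worth, it looks like a one-off override would avoid touching the
account-wide environment; something along these lines, untested, reusing the
prefixes from your example:

LD_LIBRARY_PATH=/home/myusername/pg91/lib \
  /home/myusername/pg91/bin/pg_dumpall -p 5432 > cluster_dump.sql
)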

Thanks,

nishad



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general