Re: [GENERAL] I guess I'm missing something here WRT FOUND

2010-11-08 Thread Alban Hertroys
On 9 Nov 2010, at 5:11, Ralph Smith wrote:

> How is "COLLEEN" not there and there at the same time?

Not really sure what your point is (don't have time to look closely), but...

>   IF LENGTH(Word)>0 THEN
> Word2=substring(Word from 1 for 50) ;
>  -- PERFORM SELECT COUNT(*) FROM zbus_tokens WHERE token = Word2 ;
>  -- IF FOUND THEN
>   PERFORM RNotice1(1,''value'',Word2) ;-- line 29
>   INSERT INTO zbus_tokens (token) values(Word2); 
>   J=J+1 ;
>   IF J % 100 = 0 THEN
> PERFORM Rnotice2(1,''Row'',I,''Inserts'',J) ;
>   END IF ;

> ELSE
>   K=K+1 ;
>   PERFORM RNotice2(1,''did not'',I,''K'',K) ;   -- line 37
>  -- END IF ;

You just connected this ELSE block to the IF statement it was nested inside. 
You probably need to comment out the rest of this ELSE block as well.
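For what it's worth, a quick (and untested) sketch of how that inner test
could look so this ELSE stays attached to the length check -- assuming the
intent is to skip the INSERT when the token already exists. Note that
PERFORM takes the query *without* the SELECT keyword, and it sets FOUND:

  IF LENGTH(Word) > 0 THEN
    Word2 := substring(Word from 1 for 50);
    PERFORM 1 FROM zbus_tokens WHERE token = Word2;  -- sets FOUND
    IF NOT FOUND THEN
      INSERT INTO zbus_tokens (token) VALUES (Word2);
      J := J + 1;
    END IF;
  ELSE
    K := K + 1;  -- this ELSE still belongs to IF LENGTH(Word) > 0
  END IF;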

>   END IF ;

> Again, 7.4 BITES!

Well, 8 is better, but 7.4 was pretty ok. I think you're blaming your own error 
on the database here ;)

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Porting from MS Access 2007 to PostgreSQL

2010-11-08 Thread Arnaud Lesauvage

Hi Victor

On 9/11/2010 5:22, Victor Hooi wrote:

Has anybody had any experience doing a similar port (Access 2007 to
Postgres) recently? What tools did you use, and were there any gotchas you
hit? Or just any general advice at all here?


We recently migrated from MSAccess 2000 to PostgreSQL.
We used this great script : 
http://www.rot13.org/~dpavlin/projects/sql/exportSQL3.txt

(I just realized it is mentioned in your second link.)
We had to make some changes to the script. I think it was written for an
old PostgreSQL version, so some syntax did not come out right.

But the script is simple and works well.
It won't transfer constraints, I think, but it will create all the DDL
queries and load all your data into PostgreSQL.

Name conversion works fine too.



*5. Postgres Specific Features*

The issue is we still need to maintain compatibility, where we can, with the
Access frontend.



I can't really give you any advice about this, but we kept using Access
as our frontend and everything runs fine.
You'll just have to make sure that you have read about the 'boolean <->
integer' problem. This article is a nice start, I think:

http://www.postgresonline.com/journal/archives/24-Using-MS-Access-with-PostgreSQL.html
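Just to illustrate the sort of thing that article covers (the table and
column names below are made up, not from our schema): Access Yes/No fields
come through as 0 / -1 integers, so one conservative mapping is to constrain
the integer column to those two values and expose a boolean alongside it for
the Postgres side:

  ALTER TABLE attendee
      ADD CONSTRAINT chk_is_paid CHECK (is_paid IN (0, -1));

  CREATE VIEW attendee_v AS
      SELECT a.*, (a.is_paid <> 0) AS is_paid_bool
      FROM attendee a;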


Good luck !
Arnaud

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why facebook used mysql ?

2010-11-08 Thread Scott Marlowe
On Mon, Nov 8, 2010 at 10:47 PM, Sandeep Srinivasa  wrote:

> I did actually try to search for topics on multiple cores vs MySQL, but I
> wasn't able to find anything of much use. Elsewhere (on Hacker News for
> example), I have indeed come across statements that PG scales better on
> multiple cores, which are usually offset by claims that MySQL is better.
> Google isn't of much use for this either - while MySQL has several resources
> talking about benchmarks/tuning on multi-core servers
> (e.g. http://dimitrik.free.fr/blog/archives/2010/09/mysql-performance-55-notes.html),
> I can't find any such serious discussion on PostgreSQL

Part of that is that 48-core machines with memory busses fast enough to
feed those cores are only now coming out in affordable packages ($10k or
so for a machine with a handful of drives), so they're only just getting
tested.  I have 8-core and 12-core older-gen AMDs with DDR667 and DDR800
memory, and neither scales past 8 cores, but that limitation is due more
to the slower HT bus on the older AMDs.  With the much faster HT busses
on the 6xxx series Magny Cours CPUs they scale right out to 40+ cores or
so, and give great numbers.  The taper as you go past 48 processes isn't
too bad.  With proper pooling to keep the number of active connections at
or below, say, 50, it should run well under a pretty huge load.  And in
everyday operation they are always responsive, even when things aren't
going quite right otherwise.
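As a concrete (if hypothetical) sketch of that kind of cap, a pgbouncer
setup in transaction-pooling mode would look something like this -- names
and paths are illustrative only:

  [databases]
  mydb = host=127.0.0.1 port=5432 dbname=mydb

  [pgbouncer]
  listen_addr = *
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = transaction
  ; roughly the cap on simultaneously active server connections per db/user
  default_pool_size = 50
  ; extra clients queue here instead of piling onto postgres
  max_client_conn = 2000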

> However, what I did find
> (http://www.pgcon.org/2008/schedule/events/72.en.html) was titled "Problems
> with PostgreSQL on Multi-core Systems with Multi-Terabyte Data"
>  (interestingly, published by the Postgresql Performance Team @ Sun)

We're not a company selling a product, we're enthusiasts racing our
databases on the weekends, so to speak, and if someone has ideas on
what's slow and how to make it faster we talk about it.  :)  That
paper wasn't saying that PostgreSQL is problematic at that scale so
much as addressing the problems that arise when you get there, and
ways to improve performance going forward.

> Ergo, my question still stands - maybe my google-fu was bad... which is why
> I am asking for help.

To know if either is a good choice, you really need to say what you're
planning on doing.  If you're building a petabyte-sized data warehouse,
look at what Yahoo did with a custom hacked version of pgsql.  If
you're gonna build another facebook, look at what they did.  They're
two very different applications of a "database".

So, your question needs more substance.  What do you want to do with your db?
-- 
To understand recursion, one must first understand recursion.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why facebook used mysql ?

2010-11-08 Thread Sandeep Srinivasa
On Tue, Nov 9, 2010 at 10:31 AM, Richard Broersma <
richard.broer...@gmail.com> wrote:

> The following link contains hundreds of comments that you may be
> interested in, some that address issues that are much more interesting
> and well established:
>
>
> http://search.postgresql.org/search?q=mysql+performance&m=1&l=NULL&d=365&s=r&p=1
>

I did actually try to search for topics on multiple cores vs MySQL, but I
wasn't able to find anything of much use. Elsewhere (on Hacker News for
example), I have indeed come across statements that PG scales better on
multiple cores, which are usually offset by claims that MySQL is better.

Google isn't of much use for this either - while MySQL has several resources
talking about benchmarks/tuning on multi-core servers (e.g.
http://dimitrik.free.fr/blog/archives/2010/09/mysql-performance-55-notes.html),
I can't find any such serious discussion on PostgreSQL

However, what I did find (
http://www.pgcon.org/2008/schedule/events/72.en.html) was titled "*Problems
with PostgreSQL on Multi-core Systems with Multi-Terabyte Data*"
 (interestingly, published by the Postgresql Performance Team @ Sun)

Ergo, my question still stands - maybe my google-fu was bad... which is why
I am asking for help.

regards
Sandeep


Re: [GENERAL] Why facebook used mysql ?

2010-11-08 Thread Richard Broersma
On Mon, Nov 8, 2010 at 8:24 PM, Sandeep Srinivasa  wrote:
> I wonder if anyone can comment on this - especially the part that PG doesn't
> scale as well as MySQL on multiple cores?

Sorry Sandeep, there may be some that love to re-re-re-hash these
subjects.  I myself am losing interest.

The following link contains hundreds of comments that you may be
interested in, some that address issues that are much more interesting
and well established:

http://search.postgresql.org/search?q=mysql+performance&m=1&l=NULL&d=365&s=r&p=1



-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] One-click Installers for 9.1 Alpha 2 (ETA)

2010-11-08 Thread Richard Broersma
On Mon, Nov 8, 2010 at 6:21 PM, Dave Page  wrote:

> To make matters
> worse, the pgAdmin build has changed somewhat on Windows and requires
> some effort to update the installers to work again.

Totally understandable.  I thank you for all of your effort with the
one-click installers!


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Porting from MS Access 2007 to PostgreSQL

2010-11-08 Thread Shoaib Mir
On Tue, Nov 9, 2010 at 3:22 PM, Victor Hooi  wrote:

>
> *4. MS Access to Postgres*
>
>
>
Hmm, have you tried Kettle (Pentaho)? http://kettle.pentaho.com/


> Any particularly good books here that you'd recommend?
>


http://www.2ndquadrant.com/books/

-- 
Shoaib Mir
http://shoaibmir.wordpress.com/


[GENERAL] Why facebook used mysql ?

2010-11-08 Thread Sandeep Srinivasa
There was an interesting post today on highscalability -
http://highscalability.com/blog/2010/11/4/facebook-at-13-million-queries-per-second-recommends-minimiz.html

The discussion/comments touched upon why mysql is a better idea for Facebook
than Postgres. Here's an interesting one


> One is that PG doesn't scale as well on multiple cores as MySQL nowadays.

Another is in fundamental differences of storage architecture - all
> MySQL/InnoDB data is either a clustered primary key, or secondary key with
> PK pointers - logical relationships between entries allow to have index-only
> scans, which are a must for web-facing databases (good response times, no
> variance).

One more reason is that in heavily indexed databases vacuum will have to do
> full index passes, rather than working with LRU.

As for sharding, etc - there's no way to scale vertically infinitely - so
> the "stupid people shard" point is very very moot.

It is much cheaper to go the commodity hardware path.


or

In general Postgresql is faster at complex queries with a lot of joins and
> such, while MySQL is faster at simple queries such as primary key look up.


I wonder if anyone can comment on this - especially the part that PG doesn't
scale as well as MySQL on multiple cores?

regards
Sandeep


[GENERAL] Porting from MS Access 2007 to PostgreSQL

2010-11-08 Thread Victor Hooi
Hi,

Disclaimer: I'm not a DBA, nor am I a DB guy, so please excuse any ignorance
in what follows.

*1. Background*

We have a MS Access 2003 database that we are using to manage registration
and workshop/accommodation allocation for a conference. The database isn't
particularly complicated (around 20 tables or so), nor is the dataset large
(largest table has around 13,000 records, most of the others have around
5000 or so records.)

The structure is a bit convoluted though (mostly for historical reasons),
and most of the queries we use are quite join-heavy. Some of these seem to
take longer than you'd expect them to, for such a small dataset.

The database design is a bit quirky - there's heavy use of varchars for many
things, stacks of NULLs everywhere, and not really much use of
validation/constraints.

*2. MS Access to MySQL *

Recently, this was ported over from a pure-Access database to a Access
front-end over a MySQL backend on somebody's desktop, mostly to provide
multi-user capabilities. I've been told automated tools were used for this,
so I assume we weren't using too many MySQL-specific features.

*3. MySQL to Postgres*

I recently looked at moving this over to PostgreSQL on a proper server.
Postgres was chosen mainly for convenience, since we already have a Postgres
instance set up there (used for some Django projects).

I tried a MySQL to PostgreSQL conversion using EnterpriseDB's Migration
Studio, but hit an issue with two of the tables complaining about CLOBs...

*4. MS Access to Postgres*

Anyhow, somebody else suggested it might be better to just go straight from
the original MS Access database to PostgreSQL.

My first question is, what is the current recommended procedure for this?

I saw this page from 2001:

http://wiki.postgresql.org/wiki/Microsoft_Access_to_PostgreSQL_Conversion

and the tool referenced there appears to lead to a 404 page.

I also saw the tools referenced there:

http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#Microsoft_Access

and most of them appear quite dated, from the MS Access 97 era.

Has anybody had any experience doing a similar port (Access 2007 to
Postgres) recently? What tools did you use, and were there any gotchas you
hit? Or just any general advice at all here?

*5. Postgres Specific Features*

Once the port is done, I assume there'll probably be some work involved to
clean it up and leverage some of Postgres's features. In particular, I'm
hoping to at least get some decent data validation/constraints in.
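As a made-up example of the kind of thing I mean (hypothetical table and
column names, not our actual schema):

  ALTER TABLE registration
      ALTER COLUMN email SET NOT NULL,
      ADD CONSTRAINT chk_email CHECK (position('@' in email) > 1),
      ADD CONSTRAINT fk_workshop FOREIGN KEY (workshop_id)
          REFERENCES workshop (id);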

The issue is we still need to maintain compatibility, where we can, with the
Access frontend.

That, and hopefully clean up some of the queries a bit and try to figure
out why some forms in Access are taking so long to load.

Any particularly good books here that you'd recommend? I saw some
Postgres-specific books on Amazon, but none seem to have particularly good
reviews (or were rather simplistic). Recommendations?

Cheers,
Victor


[GENERAL] I guess I'm missing something here WRT FOUND

2010-11-08 Thread Ralph Smith

How is "COLLEEN" not there and there at the same time?
-
NOTICE:  did not = 11  K = 42
CONTEXT:  PL/pgSQL function "get_word" line 37 at perform
NOTICE:  value = COLLEEN
CONTEXT:  PL/pgSQL function "get_word" line 29 at perform

ERROR:  duplicate key violates unique constraint "uniq_tokens"
CONTEXT:  PL/pgSQL function "get_word" line 30 at SQL statement

#
/*
Generate a list of up to 7 tokens from the business table's 
conformedname field.

Strip off leading and trailing commas and quotes, etc.
Results are inserted into table zbus_tokens, not sorted.
*/

CREATE OR REPLACE FUNCTION get_word() RETURNS VOID AS '

DECLARE business business%ROWTYPE ;
   bname    varchar(100) ; --business.conformedname%TYPE ;
   Word     varchar(100) ;
   Word2    varchar(100) ;
   Wcount   INTEGER ;
   I        BIGINT DEFAULT 0 ;
   J        BIGINT DEFAULT 0 ;
   K        BIGINT DEFAULT 0 ;
   IsThere  INT ;

BEGIN

 FOR business IN SELECT * FROM business limit 500 LOOP
   bname=business.conformedname ;
   I=I+1 ;
   
   FOR Wcount IN 1..7  LOOP

 Word=split_part(bname,'' '',Wcount) ;
 Word2=ltrim(Word,''!?.%()+$*/0123456789'') ;
 Word=rtrim(Word2,''!?.&()+$*/0123456789'') ;
 Word2=rtrim(ltrim(Word,'',''),'','') ;
 Word=rtrim(ltrim(Word2,''"''),''"'') ;
 
 IF LENGTH(Word)>0 THEN

   Word2=substring(Word from 1 for 50) ;
-- PERFORM SELECT COUNT(*) FROM zbus_tokens WHERE token = Word2 ;
-- IF FOUND THEN
 PERFORM RNotice1(1,''value'',Word2) ;-- line 29
 INSERT INTO zbus_tokens (token) values(Word2);
 J=J+1 ;
 IF J % 100 = 0 THEN
   PERFORM Rnotice2(1,''Row'',I,''Inserts'',J) ;
 END IF ;
   ELSE
 K=K+1 ;
 PERFORM RNotice2(1,''did not'',I,''K'',K) ;   -- line 37
-- END IF ;
 END IF ;
 
   END LOOP ;


 END LOOP ;

 RETURN  ;

END ; ' LANGUAGE plpgsql;
-- ==
SELECT get_word ();
SELECT token, count(token) from zbus_tokens group by 1 order by 2 desc ;
SELECT count(*) from zbus_tokens where token='COLLEEN';

drop function get_word() ;
truncate zbus_tokens ;
drop table zbus_tokens;
create table zbus_tokens (id bigserial, token varchar(50), CONSTRAINT 
uniq_tokens UNIQUE (token)) ;

===
"DOCTOR FINN'S CARD COMPANY"
"SPECIALTY MAINTENANCE"
"RIVERS LANDING RESTAURANT"
"SEATTLE FUSION FC"
"PROFESSIONAL PRACTICE ENVIRONMENTS INC"
"CELEBRATE YOURSELF"
"NEW ACTIVITEA BEVERAGE CO"
"KARY ADAM HORWITZ"
"JOHN CASTRO "MAGICIAN""
"RELIABLE AUTO RENTAL & PARKING"
"COLLEEN CASEY, LMP"
"COLLEEN CASEY, LMP"

THANKS!
Again, 7.4 BITES!

--

Ralph
_



Re: [GENERAL] Removing pgsql_tmp files

2010-11-08 Thread Tom Lane
Alvaro Herrera  writes:
> Excerpts from Tom Lane's message of lun nov 08 22:29:28 -0300 2010:
>> I think we need to re-order the operations there to ensure that the
>> unlink will still happen if the ereport gets interrupted.

> Would it work to put the removal inside a PG_CATCH block?

Well, that still begs the question of what to do exactly.  After some
thought I believe the attached is the best fix.

regards, tom lane

diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index d9ab5e1ea2452131c2778acca6ad913ad4b333af..fd5ec7805fdcaedf73c3fa6aaa1a35970cf8e6db 100644
*** a/src/backend/storage/file/fd.c
--- b/src/backend/storage/file/fd.c
*** void
*** 1032,1038 
  FileClose(File file)
  {
  	Vfd		   *vfdP;
- 	struct stat filestats;
  
  	Assert(FileIsValid(file));
  
--- 1032,1037 
*** FileClose(File file)
*** 1055,1069 
  	}
  
  	/*
! 	 * Delete the file if it was temporary
  	 */
  	if (vfdP->fdstate & FD_TEMPORARY)
  	{
! 		/* reset flag so that die() interrupt won't cause problems */
  		vfdP->fdstate &= ~FD_TEMPORARY;
  		if (log_temp_files >= 0)
  		{
! 			if (stat(vfdP->fileName, &filestats) == 0)
  			{
  if ((filestats.st_size / 1024) >= log_temp_files)
  	ereport(LOG,
--- 1054,1089 
  	}
  
  	/*
! 	 * Delete the file if it was temporary, and make a log entry if wanted
  	 */
  	if (vfdP->fdstate & FD_TEMPORARY)
  	{
! 		/*
! 		 * If we get an error, as could happen within the ereport/elog calls,
! 		 * we'll come right back here during transaction abort.  Reset the
! 		 * flag to ensure that we can't get into an infinite loop.  This code
! 		 * is arranged to ensure that the worst-case consequence is failing
! 		 * to emit log message(s), not failing to attempt the unlink.
! 		 */
  		vfdP->fdstate &= ~FD_TEMPORARY;
+ 
  		if (log_temp_files >= 0)
  		{
! 			struct stat filestats;
! 			int		stat_errno;
! 
! 			/* first try the stat() */
! 			if (stat(vfdP->fileName, &filestats))
! stat_errno = errno;
! 			else
! stat_errno = 0;
! 
! 			/* in any case do the unlink */
! 			if (unlink(vfdP->fileName))
! elog(LOG, "could not unlink file \"%s\": %m", vfdP->fileName);
! 
! 			/* and last report the stat results */
! 			if (stat_errno == 0)
  			{
  if ((filestats.st_size / 1024) >= log_temp_files)
  	ereport(LOG,
*** FileClose(File file)
*** 1072,1081 
  	(unsigned long) filestats.st_size)));
  			}
  			else
  elog(LOG, "could not stat file \"%s\": %m", vfdP->fileName);
  		}
- 		if (unlink(vfdP->fileName))
- 			elog(LOG, "could not unlink file \"%s\": %m", vfdP->fileName);
  	}
  
  	/* Unregister it from the resource owner */
--- 1092,1108 
  	(unsigned long) filestats.st_size)));
  			}
  			else
+ 			{
+ errno = stat_errno;
  elog(LOG, "could not stat file \"%s\": %m", vfdP->fileName);
+ 			}
+ 		}
+ 		else
+ 		{
+ 			/* easy case, just do the unlink */
+ 			if (unlink(vfdP->fileName))
+ elog(LOG, "could not unlink file \"%s\": %m", vfdP->fileName);
  		}
  	}
  
  	/* Unregister it from the resource owner */

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] One-click Installers for 9.1 Alpha 2 (ETA)

2010-11-08 Thread Dave Page
On Mon, Nov 8, 2010 at 3:06 PM, Richard Broersma
 wrote:
> I'm interested in playing with some of the features in the Alpha 2.
> Is there an ETA for the release for the one-click installer?

I'm not sure if I'll get time to build them this time. Unfortunately
the release of the tarballs coincided with PGWest, so I couldn't do
them then and now have to catch up on a bunch of work. To make matters
worse, the pgAdmin build has changed somewhat on Windows and requires
some effort to update the installers to work again.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Removing pgsql_tmp files

2010-11-08 Thread Alvaro Herrera
Excerpts from Tom Lane's message of lun nov 08 22:29:28 -0300 2010:

> Hmm.  If you look at FileClose() in fd.c, you'll discover that that
> "temporary file" log message is emitted immediately before unlink'ing
> the file.  It looks pretty safe ... but, scratching around, I notice
> that there's a CHECK_FOR_INTERRUPTS at the end of ereport().  So a
> cancel that was caught by that exact CHECK_FOR_INTERRUPTS call could
> provoke this symptom.  The window for this is larger than it might seem
> since the CHECK_FOR_INTERRUPTS could be responding to an interrupt that
> came in sometime before that.
> 
> I think we need to re-order the operations there to ensure that the
> unlink will still happen if the ereport gets interrupted.

Would it work to put the removal inside a PG_CATCH block?

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Linux x Windows LOCALE/ENCODING compatibility

2010-11-08 Thread Carlos Henrique Reimer
Hi,

I'm currently in the process of moving the data from the Windows server to
the new Linux box but facing some problems with the encoding.

Additional configuration information: Windows is running PG 8.3 and the new
Linux box is PG 8.4.

Windows dump command:
pg_dump -U postgres -Fc -v -f "f:\backup

Linux restore command:
pg_restore -v -n brasil -d mapas /backup


pg_restore error:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3574; 0 40805 TABLE DATA
cidade
 postgres
pg_restore: [archiver (db)] COPY failed: ERROR:  character 0x81 of encoding
"WIN1252" has no equivalent in "UTF8"
CONTEXT:  COPY cidade, line 6

I also tried to dump using pg_dump -E UTF8 but then I got:
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  character 0x81 of encoding
"WIN1252" has no equivalent in "UTF8"
pg_dump: The command was: COPY brasil.cidade (gid, "municpio", "municpi0",
uf, longitude, latitude, the_geom) TO stdout;
pg_dump: *** aborted because of error

How can I fix this error?

Thank you!


2010/11/1 Filip Rembiałkowski 

> 2010/11/1 Carlos Henrique Reimer 
> >
> > Hi,
> >
> > I currently have my PostgreSQL server running in a windows box and now
> we're migrating it to a Linux operational system.
> >
> > Current windows configuration:
> > pg_controldata shows the LC_COLLATE and LC_CTYPE are
> Portuguese_Brasil.1252
> > psql \l command shows we have databases with encoding WIN1252 and UTF8
> >
> > New Linux box:
> > Which --locale and --encoding options should I use for the INITDB command
> to have a compatible locale and encoding cluster?
> > Which --encoding option should I use in the CREATEDB command to have a
> compatible encoding database?
>
> from your description I assume you have 8.3 on Windows,
> but you did not tell us which PostgreSQL version you are planning to
> deploy on Linux. I'd recommend 8.4.
>
> main difference is that on Windows, UTF-8 encoding can be used with
> any locale. on Unix, not.
>
> generally, it is safe to do ``initdb --locale=pt_BR.UTF-8'' and create
> fresh databases with default settings.
>
> then, use Linux pg_dump and psql to clone Windows databases. like this:
> pg_dump -h windowsbox -p 5432 -U postgres DBNAME | psql DBNAME
>
> UTF-8 is compatible with all client encodings.
>
> you will get compatible behaviour for your existing apps with
> SET client_encoding TO 'whatever'; /* this can be entered as pre-SQL
> on every client connection */
>
> see http://www.postgresql.org/docs/8.4/static/multibyte.html,
> 22.2.3. Automatic Character Set Conversion Between Server and Client
>
>
> HTH
>
> --
> Filip Rembiałkowski
> JID,mailto:filip.rembialkow...@gmail.com
> http://filip.rembialkowski.net/
>



-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br


Re: [GENERAL] Removing pgsql_tmp files

2010-11-08 Thread Tom Lane
Michael Glaesemann  writes:
> On Nov 8, 2010, at 16:03 , Tom Lane wrote:
>> That's very peculiar.  Do you keep query logs?  It would be useful to
>> try to correlate the temp files' PIDs and timestamps with the specific
>> queries that must have created them.

> We don't log all of them, but I checked those we did. It looks like it's 
> happening when queries are timing out. I'm seeing this pattern pretty 
> consistently:

> temporary file + query
> canceling statement due to statement timeout 
> second temp file

> Here's a sample:

> pid | 877
> sess_id | 4ccf7257.36d
> sess_line   | 16
> filename| pgsql_tmp877.0
> accessed_at | 2010-09-15 12:14:45-04
> modified_at | 2010-11-01 22:37:00-04
> logged_at   | 2010-11-01 22:37:01.412-04
> error   | LOG
> sql_state   | 0
> message | temporary file: path 
> "pg_tblspc/16384/pgsql_tmp/pgsql_tmp877.0", size 87184416

Oh, so you've got log_temp_files enabled?

Hmm.  If you look at FileClose() in fd.c, you'll discover that that
"temporary file" log message is emitted immediately before unlink'ing
the file.  It looks pretty safe ... but, scratching around, I notice
that there's a CHECK_FOR_INTERRUPTS at the end of ereport().  So a
cancel that was caught by that exact CHECK_FOR_INTERRUPTS call could
provoke this symptom.  The window for this is larger than it might seem
since the CHECK_FOR_INTERRUPTS could be responding to an interrupt that
came in sometime before that.

I think we need to re-order the operations there to ensure that the
unlink will still happen if the ereport gets interrupted.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] It is possible to update more than 1 table in the same update statement?

2010-11-08 Thread Scott Marlowe
On Mon, Nov 8, 2010 at 5:39 PM, Andre Lopes  wrote:
> Hi,
>
> Thanks for the reply.
>
> I have written the transaction, but I have some doubts... If, in this example,
> the UPDATE executes successfully but the function does not, what
> happens? Does the UPDATE automatically roll back?

Yes, transactions (without savepoints) are all or nothing.
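A quick sketch with made-up tables, in case it helps:

  BEGIN;
  UPDATE t1 SET x = 1 WHERE id = 38;   -- step 1
  SAVEPOINT s1;
  UPDATE t2 SET y = 2 WHERE id = 38;   -- step 2: suppose this one errors out
  ROLLBACK TO s1;                      -- undoes only step 2
  COMMIT;                              -- step 1 still commits

Without the SAVEPOINT / ROLLBACK TO, any error puts the whole transaction
into an aborted state and the final COMMIT effectively becomes a rollback,
so the UPDATE is undone as well.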

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] It is possible to update more than 1 table in the same update statement?

2010-11-08 Thread Andre Lopes
Hi,

Thanks for the reply.

I have written the transaction, but I have some doubts... If, in this
example, the UPDATE executes successfully but the function does not, what
happens? Does the UPDATE automatically roll back?

Example:

[code]
Begin;
update aae_anuncios
set
n_anunciante = 'teste',
email = 'te...@email.com',
telefone_privado = '123456789',
dat_nasc = '1980-01-01',
n_anuncio = 'teste nome',
telefone_anuncio = '234567890',
id_genero = 'femi',
id_cidade = '1452003',
id_nacionalidade = 'BRAS',
id_orientacao = 'h'
where id_anuncio_externo = '38';

select apr_update_hist_2_1('aae_hist_anuncios',
'id_anuncio_externo',
'38', /* id_anuncio_externo */
'2010-08-31', /* data_inicio */
'2010-12-29', /* data_fim */
'AA' /* Motivo: Aprovação Anúncio */
);
commit;
[/code]

Best Regards,


On Sun, Nov 7, 2010 at 3:41 PM, Scott Ribe wrote:

> On Nov 7, 2010, at 8:37 AM, Andre Lopes wrote:
>
> > The only way I can guarantee a transaction is in a Function or there are
> other ways?
>
>  >
>
> --
> Scott Ribe
> scott_r...@elevated-dev.com
> http://www.elevated-dev.com/
> (303) 722-0567 voice
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Full Vacuum/Reindex vs autovacuum

2010-11-08 Thread Scott Marlowe
On Mon, Nov 8, 2010 at 4:41 PM, Jason Long
 wrote:
> On Mon, 2010-11-08 at 16:23 -0700, Scott Marlowe wrote:
>> On Mon, Nov 8, 2010 at 3:42 PM, Jason Long  wrote:
>> > On Mon, 2010-11-08 at 14:58 -0700, Scott Marlowe wrote:
>> >> On Mon, Nov 8, 2010 at 11:50 AM, Jason Long  
>> >> wrote:
>> >> > I currently have Postgres 9.0 install after an upgrade.  My database is
>> >> > relatively small, but complex.  The dump is about 90MB.
>> >> >
>> >> > Every night when there is no activity I do a full vacuum, a reindex,
>> >>
>> >> One question, why?
>> >>
>> >> > and then dump a nightly backup.
>> >>
>> >> Good idea.
>> >>
>> >> > Is this optimal with regards to performance?  autovacuum is set to the
>> >> > default.
>> >>
>> >> that depends very much on your answer to the question of why are you
>> >> doing it and what you're trying to gain / work around with vacuum full
>> >> / reindex every night.
>> >>
>> >
>> > I have been doing this for several years.  Since my database is small
>> > and it takes little time to do a full vacuum.  I am doing the reindex
>> > because I thought that was recommended after a full vacuum.
>>
>> Definitely reindex after a full vacuum on previous versions (i.e.
>> before 9.0) I think with 9.0 vacuum full is like a cluster without any
>> reordering, so it likely doesn't need reindexing, but I've not played
>> with 9.0 much yet.
>>
>> > As the data has grown the system is slowing down.  Right now I am
>> > looking at ways to improve performance without getting into the queries
>> > themselves because I am swamped with new development.
>>
>> OK, so it's a standard maintenance procedure you've been doing for a
>> while.  That doesn't really explain why you started doing it, but I
>> can guess that you had some bloat issues way back when and vacuum full
>> fixed them, so doing it got kind of enshrined in the nightly
>> maintenance.
> Exactly.
>>
>> > Is doing the full vacuum and reindex hurting or helping anything?
>>
>> It might help a small amount if you've got regular usage patterns.  If
>> you routinely update whole tables over and over then it might be
>> helping.
> I rarely update whole tables.
>>
>> > Any other quick fixes that I can try?
>>
>> Increasing work_mem, shared_buffers, changing random_page_cost and /
>> or seq_page_cost.
> I did up those at one point, but saw little improvement.  I will
> reinvestigate.
>>
>> Log long running queries and run explain analyze on any that show up very 
>> often.
>>
>> But for real performance, you do often have to "get into the queries"
>> because an inefficient query may be something you can cut down to
>> a small fraction of the run time with a simple change, and often that change is
>> impossible to make by tuning the db, only the query can be tuned.  It
>> might be something simple like you need to cast a type to match some
>> other type.  Hard to say without looking.
>>
>> When a 90Meg database is slow, it's almost always poorly written /
>> non-optimized queries at the heart of it.
>
> I have no doubt that poorly written and non-optimized queries are at the
> heart of it.  Stupid developer I'll have to fire that lazy bastard... Oh
> wait that's me. ;)

Yeah, I've got one of those bastards where I work too.  :)

> I am going to start using auto_explain and logging long running queries.
> Also time to learn how to read query plans.  So far I have gotten by by
> throwing faster hardware at the problem.

Faster hardware, sadly only gets you so far.

For help with explain, start here:
http://explain.depesz.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full Vacuum/Reindex vs autovacuum

2010-11-08 Thread Jason Long
On Mon, 2010-11-08 at 16:23 -0700, Scott Marlowe wrote:
> On Mon, Nov 8, 2010 at 3:42 PM, Jason Long  wrote:
> > On Mon, 2010-11-08 at 14:58 -0700, Scott Marlowe wrote:
> >> On Mon, Nov 8, 2010 at 11:50 AM, Jason Long  wrote:
> >> > I currently have Postgres 9.0 install after an upgrade.  My database is
> >> > relatively small, but complex.  The dump is about 90MB.
> >> >
> >> > Every night when there is no activity I do a full vacuum, a reindex,
> >>
> >> One question, why?
> >>
> >> > and then dump a nightly backup.
> >>
> >> Good idea.
> >>
> >> > Is this optimal with regards to performance?  autovacuum is set to the
> >> > default.
> >>
> >> that depends very much on your answer to the question of why are you
> >> doing it and what you're trying to gain / work around with vacuum full
> >> / reindex every night.
> >>
> >
> > I have been doing this for several years.  Since my database is small
> > and it takes little time to do a full vacuum.  I am doing the reindex
> > because I thought that was recommended after a full vacuum.
> 
> Definitely reindex after a full vacuum on previous versions (i.e.
> before 9.0) I think with 9.0 vacuum full is like a cluster without any
> reordering, so it likely doesn't need reindexing, but I've not played
> with 9.0 much yet.
> 
> > As the data has grown the system is slowing down.  Right now I am
> > looking at ways to improve performance without getting into the queries
> > themselves because I am swamped with new development.
> 
> OK, so it's a standard maintenance procedure you've been doing for a
> while.  That doesn't really explain why you started doing it, but I
> can guess that you had some bloat issues way back when and vacuum full
> fixed them, so doing it got kind of enshrined in the nightly
> maintenance.
Exactly.
> 
> > Is doing the full vacuum and reindex hurting or helping anything?
> 
> It might help a small amount if you've got regular usage patterns.  If
> you routinely update whole tables over and over then it might be
> helping.
I rarely update whole tables.
> 
> > Any other quick fixes that I can try?
> 
> Increasing work_mem, shared_buffers, changing random_page_cost and /
> or seq_page_cost.
I did up those at one point, but saw little improvement.  I will
reinvestigate.
> 
> Log long running queries and run explain analyze on any that show up very 
> often.
> 
> But for real performance, you do often have to "get into the queries"
> because an inefficient query may be something you can cut down to
> a small fraction of the run time with a simple change, and often that change is
> impossible to make by tuning the db, only the query can be tuned.  It
> might be something simple like you need to cast a type to match some
> other type.  Hard to say without looking.
> 
> When a 90Meg database is slow, it's almost always poorly written /
> non-optimized queries at the heart of it.

I have no doubt that poorly written and non-optimized queries are at the
heart of it.  Stupid developer I'll have to fire that lazy bastard... Oh
wait that's me. ;)

I am going to start using auto_explain and logging long running queries.
Also time to learn how to read query plans.  So far I have gotten by by
throwing faster hardware at the problem.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full Vacuum/Reindex vs autovacuum

2010-11-08 Thread Scott Marlowe
On Mon, Nov 8, 2010 at 3:42 PM, Jason Long  wrote:
> On Mon, 2010-11-08 at 14:58 -0700, Scott Marlowe wrote:
>> On Mon, Nov 8, 2010 at 11:50 AM, Jason Long  wrote:
>> > I currently have Postgres 9.0 install after an upgrade.  My database is
>> > relatively small, but complex.  The dump is about 90MB.
>> >
>> > Every night when there is no activity I do a full vacuum, a reindex,
>>
>> One question, why?
>>
>> > and then dump a nightly backup.
>>
>> Good idea.
>>
>> > Is this optimal with regards to performance?  autovacuum is set to the
>> > default.
>>
>> that depends very much on your answer to the question of why are you
>> doing it and what you're trying to gain / work around with vacuum full
>> / reindex every night.
>>
>
> I have been doing this for several years.  Since my database is small
> and it takes little time to do a full vacuum.  I am doing the reindex
> because I thought that was recommended after a full vacuum.

Definitely reindex after a full vacuum on previous versions (i.e.
before 9.0).  I think with 9.0, vacuum full is like a cluster without any
reordering, so it likely doesn't need reindexing, but I've not played
with 9.0 much yet.

> As the data has grown the system is slowing down.  Right now I am
> looking at ways to improve performance without getting into the queries
> themselves because I am swamped with new development.

OK, so it's a standard maintenance procedure you've been doing for a
while.  That doesn't really explain why you started doing it, but I
can guess that you had some bloat issues way back when and vacuum full
fixed them, so doing it got kind of enshrined in the nightly
maintenance.

> Is doing the full vacuum and reindex hurting or helping anything?

It might help a small amount if you've got regular usage patterns.  If
you routinely update whole tables over and over then it might be
helping.

> Any other quick fixes that I can try?

Increasing work_mem, shared_buffers, changing random_page_cost and /
or seq_page_cost.

Log long running queries and run explain analyze on any that show up very often.
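As a very rough sketch only -- the right values depend entirely on your RAM
and workload, so treat these as illustrative rather than prescriptive -- the
knobs I mean look like this in postgresql.conf:

  shared_buffers = 1GB                 # ~25% of RAM is a common starting point
  work_mem = 16MB                      # per sort/hash node, per backend
  random_page_cost = 2.0               # lower if the working set is mostly cached
  log_min_duration_statement = 250     # log statements slower than 250 ms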

But for real performance, you do often have to "get into the queries",
because an inefficient query may be something you can cut down to a
small fraction of the run time with a simple change, and often that change
is impossible to make by tuning the db; only the query can be tuned.  It
might be something simple, like needing to cast a type to match some
other type.  Hard to say without looking.

When a 90Meg database is slow, it's almost always poorly written /
non-optimized queries at the heart of it.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full Vacuum/Reindex vs autovacuum

2010-11-08 Thread Jason Long
On Mon, 2010-11-08 at 14:58 -0700, Scott Marlowe wrote:
> On Mon, Nov 8, 2010 at 11:50 AM, Jason Long  wrote:
> > I currently have Postgres 9.0 install after an upgrade.  My database is
> > relatively small, but complex.  The dump is about 90MB.
> >
> > Every night when there is no activity I do a full vacuum, a reindex,
> 
> One question, why?
> 
> > and then dump a nightly backup.
> 
> Good idea.
> 
> > Is this optimal with regards to performance?  autovacuum is set to the
> > default.
> 
> that depends very much on your answer to the question of why are you
> doing it and what you're trying to gain / work around with vacuum full
> / reindex every night.
> 

Sorry, I am not trying to bump this; I meant to send this to the list as well.

I have been doing this for several years.  Since my database is small,
it takes little time to do a full vacuum.  I am doing the reindex
because I thought that was recommended after a full vacuum.

As the data has grown the system is slowing down.  Right now I am
looking at ways to improve performance without getting into the queries
themselves because I am swamped with new development.

Is doing the full vacuum and reindex hurting or helping anything?

Any other quick fixes that I can try?

-- 
Thank you for your time,

Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.octgsoftware.com
HJBug Founder and President
http://www.hjbug.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Solaris install - "cannot compute sizeof (off_t)" error - readline issue?

2010-11-08 Thread gabrielle
> First and foremost, I would highly recommend that you use the Sun
> compiler to build it.

...

> Try:
>    CC=/your/path/to/suncc CFLAGS="-I/your/non-standard/include
> -L/your/non-standard/lib -R/your/non-standard/lib ..." \
>    ./configure ...

This did the trick!  Thank you *very* much.

(Sorry for the delayed reply - I had to get the compiler installed on
this machine [it's not mine] and then I was at PgWest all last week.)

gabrielle

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Removing pgsql_tmp files

2010-11-08 Thread Michael Glaesemann

On Nov 8, 2010, at 16:03 , Tom Lane wrote:

> Michael Glaesemann  writes:
>> We've got over 250GB of files in a pgsql_tmp directory, some with 
>> modification timestamps going back to August 2010 when the server was last 
>> restarted.
> 
> That's very peculiar.  Do you keep query logs?  It would be useful to
> try to correlate the temp files' PIDs and timestamps with the specific
> queries that must have created them.


We don't log all of them, but I checked those we did. It looks like it's 
happening when queries are timing out. I'm seeing this pattern pretty 
consistently:

temporary file + query
canceling statement due to statement timeout 
second temp file

Here's a sample:

pid | 877
sess_id | 4ccf7257.36d
sess_line   | 16
filename| pgsql_tmp877.0
accessed_at | 2010-09-15 12:14:45-04
modified_at | 2010-11-01 22:37:00-04
logged_at   | 2010-11-01 22:37:01.412-04
error   | LOG
sql_state   | 0
message | temporary file: path "pg_tblspc/16384/pgsql_tmp/pgsql_tmp877.0", 
size 87184416

pid | 877
sess_id | 4ccf7257.36d
sess_line   | 17
filename| pgsql_tmp877.0
accessed_at | 2010-09-15 12:14:45-04
modified_at | 2010-11-01 22:37:00-04
logged_at   | 2010-11-01 22:37:01.412-04
error   | ERROR
sql_state   | 57014
message | canceling statement due to statement timeout

pid | 877
sess_id | 4ccf7257.36d
sess_line   | 18
filename| pgsql_tmp877.0
accessed_at | 2010-09-15 12:14:45-04
modified_at | 2010-11-01 22:37:00-04
logged_at   | 2010-11-01 22:37:01.434-04
error   | LOG
sql_state   | 0
message | temporary file: path "pg_tblspc/16384/pgsql_tmp/pgsql_tmp877.1", 
size 5480448
query   | 

It looks like the pgsql_tmp877.1 file is cleaned up as it doesn't appear in the 
pgsql_tmp directory.

> Personally, I'd not risk trying to match on PID; it should be sufficient
> to delete anything with a timestamp older than the oldest active
> backend.  (Unless you've got some really long-lived sessions in
> there...)

That's easily-enough determined from pg_stat_activity.
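Something along these lines should do it (the path and timestamp here are
just illustrative for our layout):

  # oldest running backend
  psql -Atc "SELECT min(backend_start) FROM pg_stat_activity" postgres
  # mark that timestamp on a scratch file and list any older temp files
  touch -t 201011012237 /tmp/oldest_backend
  find /var/lib/pgsql/data/pg_tblspc/16384/pgsql_tmp -type f \
       ! -newer /tmp/oldest_backend -print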

> What PG version is this?

select version();
  version   
  
---
 PostgreSQL 8.4.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 
20080704 (Red Hat 4.1.2-48), 64-bit
(1 row)

Michael Glaesemann
michael.glaesem...@myyearbook.com




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] One-click Installers for 9.1 Alpha 2 (ETA)

2010-11-08 Thread Richard Broersma
I'm interested in playing with some of the features in the Alpha 2.
Is there an ETA for the release for the one-click installer?

-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full Vacuum/Reindex vs autovacuum

2010-11-08 Thread Jason Long
On Mon, 2010-11-08 at 13:28 -0800, John R Pierce wrote:
> On 11/08/10 10:50 AM, Jason Long wrote:
> > I currently have Postgres 9.0 install after an upgrade.  My database is
> > relatively small, but complex.  The dump is about 90MB.
> >
> > Every night when there is no activity I do a full vacuum, a reindex, and
> > then dump a nightly backup.
> >
> > Is this optimal with regards to performance?  autovacuum is set to the
> > default.
> 
> 
> if you have frequently updated tables that are accessed mostly from 
> their primary key, it may pay to CLUSTER those tables on said index 
> rather than doing the full vacuum.
> 
> VACUUM FULL is usually not recommended, btw.
> 
> Also, if you have tables that get lots of updates that only affect data 
> and not indexed columns, setting a FILL FACTOR of, say, 70 or 80 (its in 
> %) might help with performance by better facilitating HOT updates (HOT 
> is a internal feature added to pg 8.3 to speed up these sorts of updates)
> 
> 
> 

Thanks for the tip on CLUSTER.

My application has a couple hundred tables that all have an int8 for
their primary key.  They are joined heavily on their primary key from
views and dynamically generated SQL.  I am going to look into
clustering the most frequently updated tables.  Thanks for the tip.

Currently my performance problems are with reads that display data.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full Vacuum/Reindex vs autovacuum

2010-11-08 Thread Jason Long
On Mon, 2010-11-08 at 13:28 -0800, John R Pierce wrote:
> On 11/08/10 10:50 AM, Jason Long wrote:
> > I currently have Postgres 9.0 install after an upgrade.  My database is
> > relatively small, but complex.  The dump is about 90MB.
> >
> > Every night when there is no activity I do a full vacuum, a reindex, and
> > then dump a nightly backup.
> >
> > Is this optimal with regards to performance?  autovacuum is set to the
> > default.
> 
> 
> if you have frequently updated tables that are accessed mostly from 
> their primary key, it may pay to CLUSTER those tables on said index 
> rather than doing the full vacuum.
> 
> VACUUM FULL is usually not recommended, btw.
> 
> Also, if you have tables that get lots of updates that only affect data 
> and not indexed columns, setting a FILL FACTOR of, say, 70 or 80 (its in 
> %) might help with performance by better facilitating HOT updates (HOT 
> is a internal feature added to pg 8.3 to speed up these sorts of updates)
> 
> 
> 

Just so I understand, why is full vacuum not recommended?



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full Vacuum/Reindex vs autovacuum

2010-11-08 Thread Scott Marlowe
On Mon, Nov 8, 2010 at 11:50 AM, Jason Long  wrote:
> I currently have Postgres 9.0 install after an upgrade.  My database is
> relatively small, but complex.  The dump is about 90MB.
>
> Every night when there is no activity I do a full vacuum, a reindex,

One question, why?

> and then dump a nightly backup.

Good idea.

> Is this optimal with regards to performance?  autovacuum is set to the
> default.

that depends very much on your answer to the question of why are you
doing it and what you're trying to gain / work around with vacuum full
/ reindex every night.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] finding the other statement causing a sharelock

2010-11-08 Thread Ivan Sergio Borgonovo
On Mon, 8 Nov 2010 14:22:16 -0700
Scott Marlowe  wrote:

> Don't know how much it helps here, but this page:
> http://wiki.postgresql.org/wiki/Lock_Monitoring
> is priceless when you're having issues midday with a lock that
> won't go away.

I was thinking of reinventing the wheel and writing something similar.

But I was already wondering: how am I supposed to "intercept" a lock
that is caused by a long transaction that I know about and a process
that happens at some unknown time?

I have a strong suspect... and I'd like to exit early from a
function if that process is running, but I'm not really sure how to add a
semaphore...
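Something like an advisory lock is what I have in mind, I guess -- an
untested sketch, with an arbitrary key, just to show the shape of it:

  CREATE OR REPLACE FUNCTION guarded_job() RETURNS void AS $$
  BEGIN
      IF NOT pg_try_advisory_lock(12345) THEN   -- arbitrary app-level key
          RAISE NOTICE 'another run holds the lock, exiting early';
          RETURN;
      END IF;
      -- ... do the real work here ...
      PERFORM pg_advisory_unlock(12345);
  END;
  $$ LANGUAGE plpgsql;

But I'm not sure that's the right tool here.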

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full Vacuum/Reindex vs autovacuum

2010-11-08 Thread John R Pierce

On 11/08/10 10:50 AM, Jason Long wrote:

I currently have Postgres 9.0 install after an upgrade.  My database is
relatively small, but complex.  The dump is about 90MB.

Every night when there is no activity I do a full vacuum, a reindex, and
then dump a nightly backup.

Is this optimal with regards to performance?  autovacuum is set to the
default.



if you have frequently updated tables that are accessed mostly from 
their primary key, it may pay to CLUSTER those tables on said index 
rather than doing the full vacuum.


VACUUM FULL is usually not recommended, btw.

Also, if you have tables that get lots of updates that only affect data
and not indexed columns, setting a FILLFACTOR of, say, 70 or 80 (it's in
%) might help with performance by better facilitating HOT updates (HOT
is an internal feature added in pg 8.3 to speed up these sorts of updates).
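Roughly, with made-up table and index names:

  CLUSTER orders USING orders_pkey;          -- 8.3+ syntax, instead of VACUUM FULL

  ALTER TABLE orders SET (fillfactor = 80);  -- leave ~20% free space per page
  CLUSTER orders USING orders_pkey;          -- rewrite so existing pages pick it up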




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] temporary table as a subset of an existing table and indexes

2010-11-08 Thread Merlin Moncure
On Mon, Nov 8, 2010 at 12:15 PM, Matthieu Huin  wrote:
> Greetings all,
>
> I am trying to optimize SELECT queries on a large table (10M rows and more)
> by using temporary tables that are subsets of my main table, thus narrowing
> the search space to a more manageable size.
> Is it possible to transfer indices (or at least use the information from
> existing indices) from the big table to its subset in a reasonable amount of
> time ?

Are you sure that the benefit of creating scratch tables is worth the
overhead?   Can you give explain/analyze of the query you are trying
to optimize?

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] finding the other statement causing a sharelock

2010-11-08 Thread Scott Marlowe
On Mon, Nov 8, 2010 at 2:18 PM, Ivan Sergio Borgonovo
 wrote:
> On Mon, 08 Nov 2010 15:45:12 -0500
> Tom Lane  wrote:
>
>> Ivan Sergio Borgonovo  writes:
>> > I get
>> > DETAIL:  Process 24749 waits for ShareLock on transaction
>> > 113443492; blocked by process 25199. Process 25199 waits for
>> > ShareLock on transaction 113442820; blocked by process 24749.
>>
>> > I would like to know both statements that caused the sharelock
>> > problem.
>
>> Recent versions of PG record both (or all) statements involved in a
>> deadlock in the postmaster log.
>

Don't know how much it helps here, but this page:
http://wiki.postgresql.org/wiki/Lock_Monitoring
is priceless when you're having issues midday with a lock that won't go away.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] finding the other statement causing a sharelock

2010-11-08 Thread Ivan Sergio Borgonovo
On Mon, 08 Nov 2010 15:45:12 -0500
Tom Lane  wrote:

> Ivan Sergio Borgonovo  writes:
> > I get
> > DETAIL:  Process 24749 waits for ShareLock on transaction
> > 113443492; blocked by process 25199. Process 25199 waits for
> > ShareLock on transaction 113442820; blocked by process 24749.
> 
> > I would like to know both statements that caused the sharelock
> > problem.

> Recent versions of PG record both (or all) statements involved in a
> deadlock in the postmaster log.

What about not so recent 8.3.9?
thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Removing pgsql_tmp files

2010-11-08 Thread Tom Lane
Michael Glaesemann  writes:
> We've got over 250GB of files in a pgsql_tmp directory, some with 
> modification timestamps going back to August 2010 when the server was last 
> restarted.

That's very peculiar.  Do you keep query logs?  It would be useful to
try to correlate the temp files' PIDs and timestamps with the specific
queries that must have created them.

> Does this query look reasonable? What other things should I take into account 
> before I start deleting files from the file system? Why might these files not 
> be cleaned up on their own?

Personally, I'd not risk trying to match on PID; it should be sufficient
to delete anything with a timestamp older than the oldest active
backend.  (Unless you've got some really long-lived sessions in
there...)

What PG version is this?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] temporary table as a subset of an existing table and indexes

2010-11-08 Thread Matthieu Huin

Greetings all,

I am trying to optimize SELECT queries on a large table (10M rows and 
more) by using temporary tables that are subsets of my main table, thus 
narrowing the search space to a more manageable size.
Is it possible to transfer indices (or at least use the information from 
existing indices) from the big table to its subset in a reasonable 
amount of time ?


When I try :

CREATE TEMPORARY TABLE tmp AS
SELECT * FROM big_table WHERE condition;

The table creation is fast (a few seconds) as there are indices on the 
big table that are optimized for condition, but then indexing the data 
is rather costly (the new table would have around 100k rows) and takes a 
few minutes to complete. This is not acceptable as the whole process 
aims at reducing the query time.


I get even worse results with the following transaction :

CREATE TEMPORARY TABLE tmp ( LIKE big_table INCLUDING INDEXES );
INSERT INTO tmp SELECT * FROM big_table WHERE condition;

Also, partitioning my big table from the very beginning is not an
option, as it doesn't guarantee index key uniqueness (according to
http://www.postgresql.org/docs/current/interactive/ddl-partitioning.html).


Any suggestions on this ?

Kind regards,

Matthieu Huin

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Full Vacuum/Reindex vs autovacuum

2010-11-08 Thread Jason Long
I currently have Postgres 9.0 installed after an upgrade.  My database is
relatively small, but complex.  The dump is about 90MB.

Every night when there is no activity I do a full vacuum, a reindex, and
then dump a nightly backup.

Is this optimal with regards to performance?  autovacuum is set to the
default.

-- 
Thank you for your time,

Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.octgsoftware.com
HJBug Founder and President
http://www.hjbug.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Jakub Ouhrabka

Replying to my own mail. Maybe we've found the root cause:

In one database there was a table with 200k records where each record
contained a 15kB bytea field. Auto-ANALYZE was running on that table
continuously (with statistics target 500). When we avoided the
auto-ANALYZE via "UPDATE table SET bytea_column = null; CLUSTER table;",
the problem with ERROR: out of memory went away.
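(For the record, a less invasive variant we are considering -- assuming a
per-column statistics target of 0 really does make ANALYZE skip the column
on 8.2 -- is to leave the data alone and just exclude that column from
statistics collection; "big_table" stands for the real table name:

   ALTER TABLE big_table ALTER COLUMN bytea_column SET STATISTICS 0;
   -- 0 disables statistics collection for this column only
)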


Could it be that the failed connections were issued by autovacuum?

Thanks,

Kuba

Dne 8.11.2010 19:19, Jakub Ouhrabka napsal(a):

Hi,

we have several instances of following error in server log:

2010-11-08 18:44:18 CET 5177 1 @ ERROR: out of memory
2010-11-08 18:44:18 CET 5177 2 @ DETAIL: Failed on request of size 16384.

It's always the first log message from the backend. We're trying to
trace it down. Whether it's always connection attempt to the same
database or not - I don't know at the moment.

Sometimes the error message is preceded by memory stats which are below
in the email.

Other relevant data:
Linux, PostgreSQL 8.2.10
RAM 28GB

max_connections = 2048

shared_buffers = 2048MB

temp_buffers = 32MB
max_prepared_transactions = 0

max_fsm_pages = 1000
max_fsm_relations = 10


There are circa 1200 concurrent database connections (active backends). I
know it's too much; we're trying to reduce the number, but it's not that
easy because of the large number of databases and heavy use of listen/notify,
so a connection pooler doesn't help...

What can cause this error? What parameter should be raised?

Thanks,

Kuba

Messages preceding the ERROR: out of memory message

TopMemoryContext: 581920 total in 32 blocks; 13760 free (15 chunks);
568160 used
TopTransactionContext: 85077936 total in 20 blocks; 4523352 free (9
chunks); 80554584 used
Analyze Index: 1358288 total in 3 blocks; 7408 free (0 chunks); 1350880
used
ExecutorState: 8192 total in 1 blocks; 4928 free (0 chunks); 3264 used
ExprContext: 13664019952 total in 564677 blocks; 968292944 free (117296
chunks); 12695727008 used
Analyze Column: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks);
12688 used
Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512
used
Autovacuum context: 57344 total in 3 blocks; 35624 free (13 chunks);
21720 used
CFuncHash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
MbProcContext: 17408 total in 2 blocks; 17184 free (8 chunks); 224 used
smgr relation table: 24576 total in 2 blocks; 5648 free (4 chunks);
18928 used
TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks);
32 used
Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Relcache by OID: 24576 total in 2 blocks; 14912 free (3 chunks); 9664 used
CacheMemoryContext: 800552 total in 19 blocks; 162536 free (2 chunks);
638016 used
pg_toast_1152341368_index: 2048 total in 1 blocks; 680 free (0 chunks);
1368 used
index_name1: 2048 total in 1 blocks; 512 free (0 chunks); 1536 used
index_name2: 2048 total in 1 blocks; 528 free (0 chunks); 1520 used
...
Per-database table: 253952 total in 5 blocks; 120064 free (21 chunks);
133888 used
Per-database table: 253952 total in 5 blocks; 138544 free (22 chunks);
115408 used
Per-database table: 253952 total in 5 blocks; 138544 free (22 chunks);
115408 used
Per-database table: 57344 total in 3 blocks; 34352 free (11 chunks);
22992 used
...
Databases hash: 122880 total in 4 blocks; 66912 free (13 chunks); 55968
used
MdSmgr: 8192 total in 1 blocks; 1664 free (0 chunks); 6528 used
LOCALLOCK hash: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
Rendezvous variable hash: 8192 total in 1 blocks; 1680 free (0 chunks);
6512 used
PLpgSQL function cache: 8192 total in 1 blocks; 3744 free (0 chunks);
4448 used
Timezones: 53584 total in 2 blocks; 3744 free (0 chunks); 49840 used
Postmaster: 253952 total in 5 blocks; 65400 free (8 chunks); 188552 used
ErrorContext: 8192 total in 1 blocks; 8160 free (3 chunks); 32 used


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Tom Lane
Jakub Ouhrabka  writes:
>>> They clearly were: notice the reference to "Autovacuum context" in the
>>> memory map.  I think you are right to suspect that auto-analyze was
>>> getting blown out by the wide bytea columns.  Did you have any
>>> expression indexes involving those columns?

> Yes, there are two unique btree indexes:

> (col1, col2, col3, md5(array_to_string(col_bytea_arr, ''::text)) where 
> col4 is not null

> (col1, col2, col3, md5(array_to_string(col_bytea_arr, ''::text)) where 
> col4 is null

> What is the cause of the out of memory then?

Hmm.  I suspect that evaluation of that md5() call is resulting in
memory leakage, but not sure why or where exactly.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] finding the other statement causing a sharelock

2010-11-08 Thread Tom Lane
Ivan Sergio Borgonovo  writes:
> I get
> DETAIL:  Process 24749 waits for ShareLock on transaction 113443492;
> blocked by process 25199. Process 25199 waits for ShareLock on
> transaction 113442820; blocked by process 24749.

> I would like to know both statements that caused the sharelock
> problem.

Recent versions of PG record both (or all) statements involved in a
deadlock in the postmaster log.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Removing pgsql_tmp files

2010-11-08 Thread Michael Glaesemann
We've got over 250GB of files in a pgsql_tmp directory, some with modification 
timestamps going back to August 2010 when the server was last restarted.

select pg_postmaster_start_time();
 pg_postmaster_start_time
-------------------------------
2010-08-08 22:53:31.999804-04
(1 row)

I'm not sure why the temp files aren't being cleaned up. I can confirm we 
haven't had a backend crash in at least a week (from inspection of the log 
files). The oldest backend goes back to the end of October:

production=# select current_timestamp, min(backend_start) from pg_stat_activity;
              now              |              min              
-------------------------------+-------------------------------
2010-11-08 15:23:25.331311-05 | 2010-10-27 05:51:02.707688-04
(1 row)

To confirm which files are no longer being used, I've come up with the 
following query (where 16384 is the tablespace in question):

SELECT filename, pid, (pg_stat_file(dir || '/' || filename)).modification AS 
modified_at
FROM (SELECT CAST('pg_tblspc/16384/pgsql_tmp' AS TEXT) as dir, filename, 
CAST(regexp_replace(filename, $r$^pgsql_tmp(\d+)\..*$$r$, 
$rr$\1$rr$, 'g') AS INT) as pid 
FROM pg_ls_dir('pg_tblspc/16384/pgsql_tmp') AS the (filename)) AS 
temp_files (dir, filename, pid)
LEFT JOIN pg_stat_activity ON procpid = pid
WHERE procpid IS NULL
ORDER BY modified_at;

     filename      |  pid  |      modified_at      
-------------------+-------+------------------------
 pgsql_tmp29522.1  | 29522 | 2010-08-09 00:26:42-04
 pgsql_tmp31962.0  | 31962 | 2010-08-09 00:45:41-04
 pgsql_tmp29258.0  | 29258 | 2010-08-09 00:46:01-04
 pgsql_tmp1478.0   |  1478 | 2010-08-09 00:46:16-04
 pgsql_tmp1482.2   |  1482 | 2010-08-09 00:46:18-04
 pgsql_tmp29267.1  | 29267 | 2010-08-09 01:02:34-04
...

 pgsql_tmp21928.0  | 21928 | 2010-11-08 00:32:48-05
 pgsql_tmp20825.0  | 20825 | 2010-11-08 02:17:51-05
 pgsql_tmp878.0|   878 | 2010-11-08 02:25:23-05
 pgsql_tmp8064.0   |  8064 | 2010-11-08 03:47:26-05
 pgsql_tmp31645.0  | 31645 | 2010-11-08 07:09:40-05
 pgsql_tmp25245.0  | 25245 | 2010-11-08 09:33:24-05
 pgsql_tmp2302.3   |  2302 | 2010-11-08 09:59:45-05
 pgsql_tmp17961.0  | 17961 | 2010-11-08 11:29:12-05
(2685 rows)

Does this query look reasonable? What other things should I take into account 
before I start deleting files from the file system? Why might these files not 
be cleaned up on their own?

Cheers,
Michael Glaesemann
michael.glaesem...@myyearbook.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Jakub Ouhrabka

> They clearly were: notice the reference to "Autovacuum context" in the
> memory map.  I think you are right to suspect that auto-analyze was
> getting blown out by the wide bytea columns.  Did you have any
> expression indexes involving those columns?

Yes, there are two unique btree indexes:

(col1, col2, col3, md5(array_to_string(col_bytea_arr, ''::text)) where 
col4 is not null


(col1, col2, col3, md5(array_to_string(col_bytea_arr, ''::text)) where 
col4 is null


What is the cause of the out of memory then?

Thank you,

Kuba

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Tom Lane
Jakub Ouhrabka  writes:
> Could it be that the failed connections were issued by autovacuum?

They clearly were: notice the reference to "Autovacuum context" in the
memory map.  I think you are right to suspect that auto-analyze was
getting blown out by the wide bytea columns.  Did you have any
expression indexes involving those columns?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] finding the other statement causing a sharelock

2010-11-08 Thread Ivan Sergio Borgonovo
I get

DETAIL:  Process 24749 waits for ShareLock on transaction 113443492;
blocked by process 25199. Process 25199 waits for ShareLock on
transaction 113442820; blocked by process 24749.

I would like to know both statements that caused the sharelock
problem.

This is a long running transaction. I know one of the statement. I'd
like to know the other.

How?

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full Vacuum/Reindex vs autovacuum

2010-11-08 Thread Leif Biberg Kristensen
On Monday 8. November 2010 20.06.13 Jason Long wrote:
> I currently have Postgres 9.0 installed after an upgrade.  My database is
> relatively small, but complex.  The dump is about 90MB.
> 
> Every night when there is no activity I do a full vacuum, a reindex, and
> then dump a nightly backup.
> 
> Is this optimal with regards to performance?  autovacuum is set to the
> default.

I've got a database in about the same size range (65 MB on disk, 5 MB dump.tgz) 
and I never bother with either full vacuum or reindexing. I run the default 
autovacuum, and if the db becomes bloated for some reason, I just do a 
dump/drop/reload cycle. It's done in a few seconds.
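
(Roughly, that cycle is just the following; the database name and paths are 
hypothetical, and dropdb requires that nobody is still connected:)

pg_dump -Fc mydb > /backup/mydb-$(date +%Y%m%d).dump   # nightly backup
dropdb mydb                                            # only when it gets bloated
createdb mydb
pg_restore -d mydb /backup/mydb-20101108.dump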

regards,
Leif B. Kristensen
http://solumslekt.org/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Pavel Stehule
2010/11/8 Jakub Ouhrabka :
> Replying to my own mail. Maybe we've found the root cause:
>
> In one database there was a table with 200k records where each record
> contained 15kB bytea field. Auto-ANALYZE was running on that table
> continuously (with statistics target 500). When we avoid the auto-ANALYZE
> via UPDATE table set bytea_column = null; CLUSTER table; the problem with
> ERROR: out of memory went away.
>
> Could it be that the failed connections were issued by autovacuum?
>

I don't think so. Probably it uses a different plan with different memory
requests. This is a relatively typical situation when statistics are out of
date, combined with HASH JOIN or HASH AGG. These two operations can get
unlimited memory. Send the plans of your queries for both cases.

Regards

Pavel Stehule

> Thanks,
>
> Kuba
>
> On 8.11.2010 19:19, Jakub Ouhrabka wrote:
>>
>> Hi,
>>
>> we have several instances of following error in server log:
>>
>> 2010-11-08 18:44:18 CET 5177 1 @ ERROR: out of memory
>> 2010-11-08 18:44:18 CET 5177 2 @ DETAIL: Failed on request of size 16384.
>>
>> It's always the first log message from the backend. We're trying to
>> trace it down. Whether it's always connection attempt to the same
>> database or not - I don't know at the moment.
>>
>> Sometimes the error message is preceded by memory stats which are below
>> in the email.
>>
>> Other relevant data:
>> Linux, PostgreSQL 8.2.10
>> RAM 28GB
>>
>> max_connections = 2048
>>
>> shared_buffers = 2048MB
>>
>> temp_buffers = 32MB
>> max_prepared_transactions = 0
>>
>> max_fsm_pages = 1000
>> max_fsm_relations = 10
>>
>>
>> There are about 1200 concurrent database connections (active backends). I
>> know it's too much, we're trying to reduce the number but it's not that
>> easy because of large number of databases and heavy use of listen/notify
>> so connection pooler doesn't help...
>>
>> What can cause this error? What parameter should be raised?
>>
>> Thanks,
>>
>> Kuba
>>
>> Messages preceding ERROR: out of memory message
>>
>> TopMemoryContext: 581920 total in 32 blocks; 13760 free (15 chunks);
>> 568160 used
>> TopTransactionContext: 85077936 total in 20 blocks; 4523352 free (9
>> chunks); 80554584 used
>> Analyze Index: 1358288 total in 3 blocks; 7408 free (0 chunks); 1350880
>> used
>> ExecutorState: 8192 total in 1 blocks; 4928 free (0 chunks); 3264 used
>> ExprContext: 13664019952 total in 564677 blocks; 968292944 free (117296
>> chunks); 12695727008 used
>> Analyze Column: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
>> Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks);
>> 12688 used
>> Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512
>> used
>> Autovacuum context: 57344 total in 3 blocks; 35624 free (13 chunks);
>> 21720 used
>> CFuncHash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
>> MbProcContext: 17408 total in 2 blocks; 17184 free (8 chunks); 224 used
>> smgr relation table: 24576 total in 2 blocks; 5648 free (4 chunks);
>> 18928 used
>> TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks);
>> 32 used
>> Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
>> PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used
>> Relcache by OID: 24576 total in 2 blocks; 14912 free (3 chunks); 9664 used
>> CacheMemoryContext: 800552 total in 19 blocks; 162536 free (2 chunks);
>> 638016 used
>> pg_toast_1152341368_index: 2048 total in 1 blocks; 680 free (0 chunks);
>> 1368 used
>> index_name1: 2048 total in 1 blocks; 512 free (0 chunks); 1536 used
>> index_name2: 2048 total in 1 blocks; 528 free (0 chunks); 1520 used
>> ...
>> Per-database table: 253952 total in 5 blocks; 120064 free (21 chunks);
>> 133888 used
>> Per-database table: 253952 total in 5 blocks; 138544 free (22 chunks);
>> 115408 used
>> Per-database table: 253952 total in 5 blocks; 138544 free (22 chunks);
>> 115408 used
>> Per-database table: 57344 total in 3 blocks; 34352 free (11 chunks);
>> 22992 used
>> ...
>> Databases hash: 122880 total in 4 blocks; 66912 free (13 chunks); 55968
>> used
>> MdSmgr: 8192 total in 1 blocks; 1664 free (0 chunks); 6528 used
>> LOCALLOCK hash: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
>> Rendezvous variable hash: 8192 total in 1 blocks; 1680 free (0 chunks);
>> 6512 used
>> PLpgSQL function cache: 8192 total in 1 blocks; 3744 free (0 chunks);
>> 4448 used
>> Timezones: 53584 total in 2 blocks; 3744 free (0 chunks); 49840 used
>> Postmaster: 253952 total in 5 blocks; 65400 free (8 chunks); 188552 used
>> ErrorContext: 8192 total in 1 blocks; 8160 free (3 chunks); 32 used
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] need help with Triggers

2010-11-08 Thread akp geek
I have listed the functions, triggers, tables and view for your reference.
Thanks for helping me out.

Regards

CREATE OR REPLACE FUNCTION fnc_loadDenormdata()
  RETURNS trigger AS
$BODY$
DECLARE
  v_transactionid numeric;
  v_startdate text;
  v_enddate text;
  v_statuscode character varying(10);
  v_enddate_ts timestamp without time zone;
  v_canceldate_ts timestamp without time zone;
  v_firstname character varying(100);
  v_lastname character varying(100);
  v_phone character varying(20);
  v_fax character varying(20);
  v_usercomments character varying;
BEGIN
  if(TG_OP='INSERT') THEN

v_transactionid=NEW.transactionid;
select transactionid,
startdate,enddate,statuscode,enddate_ts,canceldate_ts,firstname,lastname,phone,fax
into
v_transactionid,v_startdate,v_enddate,v_statuscode,v_enddate_ts,v_canceldate_ts,v_firstname,v_lastname,v_phone,v_fax,v_usercomments
from v_search where transactionid=v_transactionid ;
insert into t_search values(
v_transactionid,v_startdate,v_enddate,v_statuscode,v_enddate_ts,v_canceldate_ts,v_firstname,v_lastname,v_phone,v_fax);
 return NEW;
elsif(TG_OP='UPDATE') then
v_transactionid=OLD.transactionid;
select transactionid,
startdate,enddate,statuscode,enddate_ts,canceldate_ts,firstname,lastname,phone,fax
into
v_transactionid,v_startdate,v_enddate,v_statuscode,v_enddate_ts,v_canceldate_ts,v_firstname,v_lastname,v_phone,v_fax,v_usercomments
from v_search where transactionid=v_transactionid ;
update t_search set
issuedate=v_issuedate,startdate=v_startdate,enddate=v_enddate,statuscode=v_statuscode,enddate_ts=v_enddate_ts,canceldate_ts=v_canceldate_ts,
firstname=v_firstname,lastname=v_lastname,phone=v_phone,fax=v_fax,comments=v_usercomments
where transactionid=v_transactionid  ;
return OLD;
END IF;
EXCEPTION
when others then
insert into tb_DEBUG
values
(nextval('seq_errorid'),current_timestamp,'fnc_fnc_loadDenormdata',SQLSTATE||':
 '||SQLERRM);
raise exception 'fnc_loadDenormdata Failed: %-%', SQLSTATE, SQLERRM;

END;$BODY$
  LANGUAGE 'plpgsql' VOLATILE
  COST 1;

---  these are triggers

CREATE TRIGGER trig_loadDenormdata
  AFTER INSERT OR UPDATE
  ON t_items
  FOR EACH ROW
  EXECUTE PROCEDURE fnc_loadDenormdata();

  CREATE TRIGGER trig_loadDenormdata
  AFTER INSERT OR UPDATE
  ON t_comments
  FOR EACH ROW
  EXECUTE PROCEDURE fnc_loadDenormdata();

-



CREATE TABLE t_comments
(
  transactionid numeric(9),
  usercomments character varying,
  publiccomments character varying,

)
WITH (
  OIDS=FALSE
)



CREATE TABLE t_items
(
  transactionid numeric,
  startdate timestamp without time zone,
  statuscode character varying,
  enddate timestamp without time zone,
  canceldate timestamp without time zone,
  fax character varying(20),
  phone character varying(20),
  userid numeric,
 )
WITH (
  OIDS=FALSE
)

create view v_search as
SELECT itm.transactionid, to_char(itm.issuedate::timestamp with time zone,
'MM/DD/ HH24MI'::text) AS issuedate,
to_char(itm.startdate::timestamp with time zone, 'MM/DD/ HH24MI'::text)
AS startdate,
to_char(itm.enddate::timestamp with time zone, 'MM/DD/ HH24MI'::text) AS
enddate, itm.statuscode, itm.enddate AS enddate_ts, itm.canceldate AS
canceldate_ts, usr.firstname, usr.lastname, itm.phone, itm.fax,
com.usercomments,
   itm.lastupdatedate AS last_update_timestamp, btrim(
FROM t_items itm, t_comments com, t_user usr
  WHERE  itm.transactionid = com.transactionid AND itm.userid = usr.userid ;

--


On Mon, Nov 8, 2010 at 1:54 PM, hubert depesz lubaczewski  wrote:

> On Mon, Nov 08, 2010 at 01:45:49PM -0500, akp geek wrote:
> > Hi All -
> >
> >   Can you please share your thoughts and help me ?
> >
> >  1.  I have 4 ( T1, T2 , T3, T4 ) tables where I have the data
> from
> > a transactional system
> >
> >  2.  I have created one more table D1  to denormalize the data
> from
> > the 4 tables ( T1, T2 , T3, T4  )
> >
> >  3. I have created function that returns trigger (TGR_1) .
> >
> >  4. I have create one trigger ( TGR_1) after insert or update on
> T1
> > & T2.
> >
> >  5. Now when I insert data from the application, the save
> function
> > will write data to T1 & T2.
> >
> >  6. The problem is the table D1 gets 2 rows, One with correct
> data
> > and the other is Blank row.  I am not able to understand why I am getting
> a
> > blank row.
> >
> > Any thoughts on this? Appreciate your help
>
> well - without any kind of code to look at it's impossible to guess what
> might be wrong.
>
> please provide function source, so we can see if there is a problem.
>
> Best regards,
>
> depesz
>
> --
> Linkedin: http://www.linkedin.com/in/depesz  /  blog:
> http://www.depesz.com/
> jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl /
> gg:6749007
>


Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Chris Barnes


> Date: Mon, 8 Nov 2010 20:05:23 +0100
> From: k...@comgate.cz
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] ERROR: Out of memory - when connecting to database
> 
> Replying to my own mail. Maybe we've found the root cause:
> 
> In one database there was a table with 200k records where each record 
> contained 15kB bytea field. Auto-ANALYZE was running on that table 
> continuously (with statistics target 500). When we avoid the 
> auto-ANALYZE via UPDATE table set bytea_column = null; CLUSTER table; 
> the problem with ERROR: out of memory went away.


Run pgtune against your configuration and it will tell you what is recommended.



Do you really have 2048 connections to the box?

If yes, maybe you need to run pgbouncer as a concentrator, with 2048 client
connections into pgbouncer and 100 connections to postgres? That will reduce
your resource usage significantly.
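
(A minimal pgbouncer.ini sketch of that layout; the database name, addresses
and file paths are hypothetical, and session pooling is used here because
transaction pooling would break LISTEN/NOTIFY:)

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session           ; transaction pooling would break LISTEN/NOTIFY
max_client_conn = 2048        ; applications connect here
default_pool_size = 100       ; actual connections held open to postgres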

Chris
  

Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Jakub Ouhrabka

what's the work_mem?


64MB


that's *way* too much with 24GB of ram and>  1k connections. please
lower it to 32MB or even less.


Thanks for your reply. You are generally right. But in our case most of 
the backends are only waiting for notify so not consuming any work_mem. 
The server is not swapping. On the other hand some queries profit from 
it. So we think it's fine as is.
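
(A middle ground, sketched here with a made-up value, is to keep the
server-wide setting low and let the few queries that benefit raise it per
transaction:)

BEGIN;
SET LOCAL work_mem = '256MB';  -- applies only inside this transaction
-- run the sort/hash-heavy query here
COMMIT;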


Regards,

Kuba


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread hubert depesz lubaczewski
On Mon, Nov 08, 2010 at 08:04:32PM +0100, Jakub Ouhrabka wrote:
> > is it 32bit or 64bit machine?
> 
> 64bit
> 
> > what's the work_mem?
> 
> 64MB

that's *way* too much with 24GB of ram and > 1k connections. please
lower it to 32MB or even less.

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Jakub Ouhrabka

> is it 32bit or 64bit machine?

64bit

> what's the work_mem?

64MB

Kuba

On 8.11.2010 19:52, hubert depesz lubaczewski wrote:

On Mon, Nov 08, 2010 at 07:19:43PM +0100, Jakub Ouhrabka wrote:

Hi,

we have several instances of following error in server log:

2010-11-08 18:44:18 CET 5177 1 @  ERROR:  out of memory
2010-11-08 18:44:18 CET 5177 2 @  DETAIL:  Failed on request of size 16384.

It's always the first log message from the backend. We're trying to
trace it down. Whether it's always connection attempt to the same
database or not - I don't know at the moment.

Sometimes the error message is preceded by memory stats which are
below in the email.

Other relevant data:
Linux, PostgreSQL 8.2.10
RAM 28GB

max_connections = 2048

shared_buffers = 2048MB

temp_buffers = 32MB
max_prepared_transactions = 0

max_fsm_pages = 1000
max_fsm_relations = 10


is it 32bit or 64bit machine?

what's the work_mem?

Best regards,

depesz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Failed archive_command copy - number of attempts configurable?

2010-11-08 Thread dan.m.harris

Hi all,

I'm doing some testing of Postgres 9.0 archiving and streaming replication
between a couple of Solaris 10 servers. Recently I was trying to test how
well the standby server catches up after an outage, and a question arose.

It seems that if the standby is uncontactable by the primary when it is
attempting WAL archiving, the primary will attempt the copy three times,
then log that the log file could not be archived, as there were too many
failures. See:

ssh: connect to host 172.18.131.212 port 22: Connection timed out^M
lost connection
LOG:  archive command failed with exit code 1
DETAIL:  The failed archive command was: scp
pg_xlog/00010006
postg...@172.18.131.212:/postgres/postgres/9.0-pgdg/primary_archive
ssh: connect to host 172.18.131.212 port 22: Connection timed out^M
lost connection
LOG:  archive command failed with exit code 1
DETAIL:  The failed archive command was: scp
pg_xlog/00010006
postg...@172.18.131.212:/postgres/postgres/9.0-pgdg/primary_archive
ssh: connect to host 172.18.131.212 port 22: Connection timed out^M
lost connection
LOG:  archive command failed with exit code 1
DETAIL:  The failed archive command was: scp
pg_xlog/00010006
postg...@172.18.131.212:/postgres/postgres/9.0-pgdg/primary_archive
WARNING:  transaction log file "00010006" could not be
archived: too many failures


But then the primary retries this another 49 times! So 150 attempts in all.

What I need to know is whether these numbers are configurable? Can they be
timed? How long before the primary stops retrying altogether?

Any help appreciated. Thanks!
Dan
-- 
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Failed-archive-command-copy-number-of-attempts-configurable-tp3255563p3255563.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] need help with Triggers

2010-11-08 Thread hubert depesz lubaczewski
On Mon, Nov 08, 2010 at 01:45:49PM -0500, akp geek wrote:
> Hi All -
> 
>   Can you please share your thoughts and help me ?
> 
>  1.  I have 4 ( T1, T2 , T3, T4 ) tables where I have the data from
> a transactional system
> 
>  2.  I have created one more table D1  to denormalize the data from
> the 4 tables ( T1, T2 , T3, T4  )
> 
>  3. I have created function that returns trigger (TGR_1) .
> 
>  4. I have create one trigger ( TGR_1) after insert or update on T1
> & T2.
> 
>  5. Now when I insert data from the application, the save function
> will write data to T1 & T2.
> 
>  6. The problem is the table D1 gets 2 rows, One with correct data
> and the other is Blank row.  I am not able to understand why I am getting a
> blank row.
> 
> Any thoughts on this? Appreciate your help

well - without any kind of code to look at it's impossible to guess what
might be wrong.

please provide function source, so we can see if there is a problem.

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread hubert depesz lubaczewski
On Mon, Nov 08, 2010 at 07:19:43PM +0100, Jakub Ouhrabka wrote:
> Hi,
> 
> we have several instances of following error in server log:
> 
> 2010-11-08 18:44:18 CET 5177 1 @  ERROR:  out of memory
> 2010-11-08 18:44:18 CET 5177 2 @  DETAIL:  Failed on request of size 16384.
> 
> It's always the first log message from the backend. We're trying to
> trace it down. Whether it's always connection attempt to the same
> database or not - I don't know at the moment.
> 
> Sometimes the error message is preceded by memory stats which are
> below in the email.
> 
> Other relevant data:
> Linux, PostgreSQL 8.2.10
> RAM 28GB
> 
> max_connections = 2048
> 
> shared_buffers = 2048MB
> 
> temp_buffers = 32MB
> max_prepared_transactions = 0
> 
> max_fsm_pages = 1000
> max_fsm_relations = 10

is it 32bit or 64bit machine?

what's the work_mem?

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] need help with Triggers

2010-11-08 Thread akp geek
Hi All -

  Can you please share your thoughts and help me ?

 1.  I have 4 ( T1, T2 , T3, T4 ) tables where I have the data from
a transactional system

 2.  I have created one more table D1  to denormalize the data from
the 4 tables ( T1, T2 , T3, T4  )

 3. I have created a function that returns a trigger (TGR_1).

 4. I have created one trigger (TGR_1) after insert or update on T1
& T2.

 5. Now when I insert data from the application, the save function
will write data to T1 & T2.

 6. The problem is that the table D1 gets 2 rows, one with correct data
and the other a blank row.  I am not able to understand why I am getting a
blank row.

Any thoughts on this? Appreciate your help

Regards


[GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Jakub Ouhrabka

Hi,

we have several instances of following error in server log:

2010-11-08 18:44:18 CET 5177 1 @  ERROR:  out of memory
2010-11-08 18:44:18 CET 5177 2 @  DETAIL:  Failed on request of size 16384.

It's always the first log message from the backend. We're trying to 
trace it down. Whether it's always connection attempt to the same 
database or not - I don't know at the moment.


Sometimes the error message is preceded by memory stats which are below 
in the email.


Other relevant data:
Linux, PostgreSQL 8.2.10
RAM 28GB

max_connections = 2048

shared_buffers = 2048MB

temp_buffers = 32MB
max_prepared_transactions = 0

max_fsm_pages = 1000
max_fsm_relations = 10


There are about 1200 concurrent database connections (active backends). I 
know it's too much, we're trying to reduce the number but it's not that 
easy because of large number of databases and heavy use of listen/notify 
so connection pooler doesn't help...


What can cause this error? What parameter should be raised?

Thanks,

Kuba

Messages preceding ERROR: out of memory message

TopMemoryContext: 581920 total in 32 blocks; 13760 free (15 chunks); 
568160 used
TopTransactionContext: 85077936 total in 20 blocks; 4523352 free (9 
chunks); 80554584 used

Analyze Index: 1358288 total in 3 blocks; 7408 free (0 chunks); 1350880 used
ExecutorState: 8192 total in 1 blocks; 4928 free (0 chunks); 3264 used
ExprContext: 13664019952 total in 564677 blocks; 968292944 free (117296 
chunks); 12695727008 used

Analyze Column: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks); 
12688 used
Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 
used
Autovacuum context: 57344 total in 3 blocks; 35624 free (13 chunks); 
21720 used

CFuncHash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
MbProcContext: 17408 total in 2 blocks; 17184 free (8 chunks); 224 used
smgr relation table: 24576 total in 2 blocks; 5648 free (4 chunks); 
18928 used
TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 
32 used

Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Relcache by OID: 24576 total in 2 blocks; 14912 free (3 chunks); 9664 used
CacheMemoryContext: 800552 total in 19 blocks; 162536 free (2 chunks); 
638016 used
pg_toast_1152341368_index: 2048 total in 1 blocks; 680 free (0 chunks); 
1368 used

index_name1: 2048 total in 1 blocks; 512 free (0 chunks); 1536 used
index_name2: 2048 total in 1 blocks; 528 free (0 chunks); 1520 used
...
Per-database table: 253952 total in 5 blocks; 120064 free (21 chunks); 
133888 used
Per-database table: 253952 total in 5 blocks; 138544 free (22 chunks); 
115408 used
Per-database table: 253952 total in 5 blocks; 138544 free (22 chunks); 
115408 used
Per-database table: 57344 total in 3 blocks; 34352 free (11 chunks); 
22992 used

...
Databases hash: 122880 total in 4 blocks; 66912 free (13 chunks); 55968 used
MdSmgr: 8192 total in 1 blocks; 1664 free (0 chunks); 6528 used
LOCALLOCK hash: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
Rendezvous variable hash: 8192 total in 1 blocks; 1680 free (0 chunks); 
6512 used
PLpgSQL function cache: 8192 total in 1 blocks; 3744 free (0 chunks); 
4448 used

Timezones: 53584 total in 2 blocks; 3744 free (0 chunks); 49840 used
Postmaster: 253952 total in 5 blocks; 65400 free (8 chunks); 188552 used
ErrorContext: 8192 total in 1 blocks; 8160 free (3 chunks); 32 used

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Syntax of: alter table ... add constraint ...

2010-11-08 Thread Tom Lane
Alban Hertroys  writes:
> On 8 Nov 2010, at 16:18, Alexander Farber wrote:
>> alter table pref_users add constraint pref_users_medals_check check
>> (medals >= 0);
>> 
>> has worked!

> To clarify a bit on this; if you add a constraint, you specify its name and 
> what type of constraint it is, before specifying the actual constraint 
> expression.
> Hence the need to add 'check' (the constraint type) between 
> 'pref_users_medals_check' (the name) and '(medals >= 0)' (the expression).

One other thing that's maybe worth remembering about the syntax of
constraint clauses: the word CONSTRAINT is really used to introduce a
constraint name.  If you want to create a constraint with no preselected
name, you leave off both the name and the word CONSTRAINT --- but you
still need the word(s) specifying the constraint type, such as CHECK or
FOREIGN KEY.

So either of these syntaxes is legal:

alter table pref_users add constraint pref_users_medals_check check (medals >= 
0);
alter table pref_users add check (medals >= 0);

In the latter case the constraint will be created with some
system-selected name.  (In fact, it looks like pref_users_medals_check
is exactly the name you'd get by default, if there were no such
constraint name already in use.)

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Syntax of: alter table ... add constraint ...

2010-11-08 Thread Alban Hertroys
On 8 Nov 2010, at 16:18, Alexander Farber wrote:

> Thank you,
> 
> alter table pref_users add constraint pref_users_medals_check check
> (medals >= 0);
> 
> has worked!


To clarify a bit on this; if you add a constraint, you specify its name and 
what type of constraint it is, before specifying the actual constraint 
expression.
Hence the need to add 'check' (the constraint type) between 
'pref_users_medals_check' (the name) and '(medals >= 0)' (the expression).

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql scalability issue

2010-11-08 Thread John R Pierce

On 11/08/10 7:33 AM, umut orhan wrote:

Hi all,

I've collected some interesting results during my experiments which I 
couldn't figure out the reason behind them and need your assistance.


I'm running PostgreSQL 9.0 on a quad-core machine having two level 
on-chip cache hierarchy. PostgreSQL has a large and warmed-up buffer
cache thus, no disk I/O is observed during experiments (i.e. for each 
query buffer cache hit rate is 100%). I'm pinning each query/process 
to an individual core. Queries are simple read-only queries (only 
selects). Nested loop (without materialize) is used for the join operator.





did pinning the processes to CPU cores make any measurable difference ?



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql scalability issue

2010-11-08 Thread Scott Marlowe
On Mon, Nov 8, 2010 at 8:33 AM, umut orhan  wrote:
> Hi all,
> I've collected some interesting results during my experiments which I
> couldn't figure out the reason behind them and need your assistance.
> I'm running PostgreSQL 9.0 on a quad-core machine having two level on-chip
> cache hierarchy. PostgreSQL has a large and warmed-up buffer
> cache thus, no disk I/O is observed during experiments (i.e. for each query
> buffer cache hit rate is 100%). I'm pinning each query/process to an
> individual core. Queries are simple read-only queries (only selects). Nested
> loop (without materialize) is used for the join operator.
> When I pin a single query to an individual core, its execution time is
> observed as 111 seconds. This result is my base case. Then, I fire two
> instances of the same query concurrently and pin them to two different cores
> separately. However, each execution time becomes 132 seconds in this case.
> In a similar trend, execution times are increasing for three instances (164
> seconds) and four instances (201 seconds) cases too. What I was expecting is
> a linear improvement in throughput (at least). I tried several different
> queries and got the same trend at each time.
> I wonder why execution times of individual queries are increasing when I
> increase the number of their instances.
> Btw, I don't think on-chip cache hit/miss rates make a difference since L2
> cache misses are decreasing as expected. I'm not an expert in PostgreSQL
> internals. Maybe there is a lock-contention (spinlocks?) occurring even if
> the queries are read-only. Anyways, all ideas are welcome.

My guess would be it's memory contention.  What architecture is your
quad core cpu?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] postgresql scalability issue

2010-11-08 Thread umut orhan
Hi all,


I've collected some interesting results during my experiments; I couldn't 
figure out the reason behind them and need your assistance.

I'm running PostgreSQL 9.0 on a quad-core machine with a two-level on-chip 
cache hierarchy. PostgreSQL has a large and warmed-up buffer cache, so no 
disk I/O is observed during experiments (i.e. for each query the buffer cache 
hit rate is 100%). I'm pinning each query/process to an individual core. 
Queries are simple read-only queries (only selects). Nested loop (without 
materialize) is used for the join operator.

When I pin a single query to an individual core, its execution time is observed 
as 111 seconds. This result is my base case. Then, I fire two instances of the 
same query concurrently and pin them to two different cores separately. However, 
each execution time becomes 132 seconds in this case. In a similar trend, 
execution times increase for the three-instance (164 seconds) and four-instance 
(201 seconds) cases too. What I was expecting is a linear improvement in 
throughput (at least). I tried several different queries and got the same 
trend each time.

I wonder why execution times of individual queries are increasing when I 
increase the number of their instances.

Btw, I don't think on-chip cache hit/miss rates make a difference since L2 
cache misses are decreasing as expected. I'm not an expert in PostgreSQL 
internals. Maybe there is lock contention (spinlocks?) occurring even if the 
queries are read-only. Anyway, all ideas are welcome.

Thanks in advance,
Regards,
Umut



  

Re: [GENERAL] Syntax of: alter table ... add constraint ...

2010-11-08 Thread Alexander Farber
Oh right, I meant phpPgAdmin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Syntax of: alter table ... add constraint ...

2010-11-08 Thread Guillaume Lelarge
On 08/11/2010 16:18, Alexander Farber wrote:
> Thank you,
> 
> alter table pref_users add constraint pref_users_medals_check check
> (medals >= 0);
> 
> has worked!
> 
> I do not use pgAdmin, because I see in the logs of my 2 web server,
> that attackers look for it all the time. But I'll install it at my
> development VM at home now.
> 

I don't really see what an attacker can look for wrt pgAdmin. Be careful
that there is pgAdmin and phpPgAdmin and those are two different things
(even if they are both admin tools for PostgreSQL).


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Syntax of: alter table ... add constraint ...

2010-11-08 Thread Alexander Farber
Thank you,

alter table pref_users add constraint pref_users_medals_check check
(medals >= 0);

has worked!

I do not use pgAdmin, because I see in the logs of my 2 web server,
that attackers look for it all the time. But I'll install it at my
development VM at home now.

Regards
Alex

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Syntax of: alter table ... add constraint ...

2010-11-08 Thread Raymond O'Donnell

On 08/11/2010 14:50, Alexander Farber wrote:


alter table pref_users add constraint pref_users_medals_check (medals>= 0);
ERROR:  syntax error at or near "("
LINE 1: ...pref_users add constraint pref_users_medals_check (medals>=...
  ^
and many combinations of quotes and "check" inbetween,
but can't find the correct syntax


Working from memory, I think you need the word "check" before the 
opening parenthesis.


Do you use PgAdmin? - You can go through the motions of creating the 
constraint there, and see what SQL it generates.


Ray.


--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Syntax of: alter table ... add constraint ...

2010-11-08 Thread Thomas Kellerer

Alexander Farber, 08.11.2010 15:50:
 

And then I realized that I actually want

 medals smallint default 0 check (medals>= 0)

So I've dropped the old constraint with

 alter table pref_users drop constraint "pref_users_medals_check";

but how can I add the new contraint please? I'm trying:

alter table pref_users add constraint pref_users_medals_check (medals>= 0);
ERROR:  syntax error at or near "("
LINE 1: ...pref_users add constraint pref_users_medals_check (medals>=...
  ^
and many combinations of quotes and "check" inbetween,
but can't find the correct syntax


That should work:

alter table pref_users add constraint pref_users_medals_check check (medals >= 0);

Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] migrate from 8.1 to 9.0

2010-11-08 Thread Cédric Villemain
2010/11/8 Vick Khera :
> On Mon, Nov 8, 2010 at 5:23 AM, Thom Brown  wrote:
>> Implicit casting might bite you since that was removed in 8.3.
>>
>
> Also if you use bytea fields to store binary data, the encoding format
> on return of the data is different.  Make sure your client library
> handles that for you (or explicitly code for it).
>
> These are the two major issues we had to address between our
> 8.1->8.3->9.0 update over the last few  years.

It is also possible to hit issues with plpgsql code quality, since newer
versions are stricter about what they accept.

>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Syntax of: alter table ... add constraint ...

2010-11-08 Thread Alexander Farber
Hello,

I'm having this table filled with data:

 \d pref_users;
Table "public.pref_users"
   Column   |Type |   Modifiers
+-+---
 id | character varying(32)   | not null
 first_name | character varying(32)   |
 last_name  | character varying(32)   |
 female | boolean |
 avatar | character varying(128)  |
 city   | character varying(32)   |
 lat| real|
 lng| real|
 last_login | timestamp without time zone | default now()
 last_ip| inet|
 medals | smallint|
Indexes:
"pref_users_pkey" PRIMARY KEY, btree (id)
Check constraints:
"pref_users_lat_check" CHECK ((-90)::double precision <= lat AND
lat <= 90::double precision)
"pref_users_lng_check" CHECK ((-90)::double precision <= lng AND
lng <= 90::double precision)
"pref_users_medals_check" CHECK (medals > 0)
Referenced by:
TABLE "pref_luck" CONSTRAINT "pref_luck_id_fkey" FOREIGN KEY (id)
REFERENCES pref_users(id)
TABLE "pref_misere" CONSTRAINT "pref_misere_id_fkey" FOREIGN KEY
(id) REFERENCES pref_users(id)
TABLE "pref_money" CONSTRAINT "pref_money_id_fkey" FOREIGN KEY
(id) REFERENCES pref_users(id)
TABLE "pref_pass" CONSTRAINT "pref_pass_id_fkey" FOREIGN KEY (id)
REFERENCES pref_users(id)
TABLE "pref_rate" CONSTRAINT "pref_rate_obj_fkey" FOREIGN KEY
(obj) REFERENCES pref_users(id)
TABLE "pref_rate" CONSTRAINT "pref_rate_subj_fkey" FOREIGN KEY
(subj) REFERENCES pref_users(id)

And then I realized that I actually want

medals smallint default 0 check (medals >= 0)

So I've dropped the old constraint with

alter table pref_users drop constraint "pref_users_medals_check";

but how can I add the new contraint please? I'm trying:

alter table pref_users add constraint pref_users_medals_check (medals >= 0);
ERROR:  syntax error at or near "("
LINE 1: ...pref_users add constraint pref_users_medals_check (medals >=...
 ^
and many combinations of quotes and "check" inbetween,
but can't find the correct syntax

Regards
Alex

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] migrate from 8.1 to 9.0

2010-11-08 Thread Vick Khera
On Mon, Nov 8, 2010 at 5:23 AM, Thom Brown  wrote:
> Implicit casting might bite you since that was removed in 8.3.
>

Also if you use bytea fields to store binary data, the encoding format
on return of the data is different.  Make sure your client library
handles that for you (or explicitly code for it).

These are the two major issues we had to address between our
8.1->8.3->9.0 update over the last few  years.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need magic to clean strings from unconvertible UTF8

2010-11-08 Thread Dimitri Fontaine
Andreas  writes:
> I can find the problematic rows.
> How could I delete every char in a string that can't be converted to
> WIN1252?

  http://tapoueh.org/articles/blog/_Getting_out_of_SQL_ASCII,_part_1.html
  http://tapoueh.org/articles/blog/_Getting_out_of_SQL_ASCII,_part_2.html

That's using a hand-crafted translate expression; you could also use
the recode library, which does a pretty good job. Maybe the easiest way
here would be a plpythonu procedure that uses librecode?

  http://packages.debian.org/sid/python-bibtex

Well or the same in plperl… or even easier, process the source files
before importing them?
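
(A minimal plpgsql sketch of the same idea, assuming PostgreSQL 8.3 or later
for convert_to(); it is slow because it tests one character at a time, and the
function and table/column names below are made up:)

CREATE OR REPLACE FUNCTION strip_non_win1252(txt text) RETURNS text AS $$
DECLARE
    result text := '';
    ch text;
BEGIN
    FOR i IN 1 .. coalesce(length(txt), 0) LOOP
        ch := substr(txt, i, 1);
        BEGIN
            PERFORM convert_to(ch, 'WIN1252');  -- raises if ch has no WIN1252 mapping
            result := result || ch;
        EXCEPTION WHEN untranslatable_character OR character_not_in_repertoire THEN
            NULL;  -- drop the offending character
        END;
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- e.g.: UPDATE some_table SET some_col = strip_non_win1252(some_col);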

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] migrate from 8.1 to 9.0

2010-11-08 Thread Cédric Villemain
2010/11/8 AI Rumman :
> I am going to migrate my production DB from postgresql 8.1 to 9.0.1.
> Can anyone please tell me the important things I have to look out for in this
> migration.
> Thanking you all.

You MUST read the Release Notes for each major version in between to see
what changed and what may impact your application.



-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] migrate from 8.1 to 9.0

2010-11-08 Thread Thom Brown
On 8 November 2010 10:08, AI Rumman  wrote:

> I am going to migrate my production DB from postgresql 8.1 to 9.0.1.
> Can anyone please tell me the important things I have to look out for in this
> migration.
> Thanking you all.
>

Implicit casting might bite you since that was removed in 8.3.

Take a careful look at the migration notes for each release in between to see
how they affect you:
http://www.postgresql.org/docs/9.0/static/release-8-2.html#AEN106653
http://www.postgresql.org/docs/9.0/static/release-8-3.html#AEN104194
http://www.postgresql.org/docs/9.0/static/release-8-4.html#AEN101283
http://www.postgresql.org/docs/9.0/static/release-9-0.html#AEN98988

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935


[GENERAL] migrate from 8.1 to 9.0

2010-11-08 Thread AI Rumman
I am going to migrate my production DB from postgresql 8.1 to 9.0.1.
Can anyone please tell me the important things I have to look out for in this
migration.
Thanking you all.