Re: [GENERAL] exception

2008-11-12 Thread Klint Gore

hendra kusuma wrote:

but I would really like to ask if there is a way to catch any exception
instead of defining which exception we would like to catch -
something like ELSE or similar :)
  


From the manual

"The special condition name OTHERS matches every error type except
QUERY_CANCELED."
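
For example (a sketch, untested - SQLERRM and SQLSTATE are available
inside PL/pgSQL exception handlers from 8.1 on):

EXCEPTION
  WHEN OTHERS THEN
    RAISE NOTICE 'error: % (SQLSTATE %)', SQLERRM, SQLSTATE;
    RETURN 0;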


klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789  
Fax: 02 6773 3266

EMail: [EMAIL PROTECTED]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] exception

2008-11-12 Thread hendra kusuma
Dear all,

I have a little question here

I try to catch an exception as follows

CREATE OR REPLACE FUNCTION gruptambah(pnama character varying)
  RETURNS integer AS
$BODY$
declare
  ret integer;
begin
  insert into grup (nama_grup, mulai_dibuat) values (pNama, now());
  select last_value into ret from grup_id_seq;
  return ret;
EXCEPTION
  WHEN unique_violation THEN
RAISE NOTICE 'error';
return 0;
end;
$BODY$
  LANGUAGE 'plpgsql';
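
A side note on the function above: reading last_value directly from
grup_id_seq is race-prone if two sessions insert at the same time. Two
safer variants (a sketch; the serial column name "id" is an assumption):

  select currval('grup_id_seq') into ret;  -- per-session, concurrency-safe

or, on PostgreSQL 8.2 and later:

  insert into grup (nama_grup, mulai_dibuat)
  values (pNama, now())
  returning id into ret;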

it works perfectly
but I would really like to ask if there is a way to catch any exception
instead of defining which exception we would like to catch -
something like ELSE or similar :)

Regards
Hendra



Re: [GENERAL] Stored function - send array as parameter to stored function

2008-11-12 Thread brian

Yes, this one got me, also. Strangely, you need to do:

select iterate('{1,2}');

Also, you have a typo in your function: arry_upper(a,1)
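
For completeness, both array input forms work once the typo is fixed to
array_upper (a sketch):

select iterate('{1,2}');      -- array literal written as a string
select iterate(ARRAY[1,2]);   -- ARRAY constructor syntax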

b



hendra kusuma wrote:

Hi,
I'm new here
I've been doing a little self-learning about postgresql
and find it very interesting

I've been trying to create a stored procedure that receives an array as
a parameter, and I found a good, working example on the web

the code looks like this

create or replace function iterate(a int[])
returns void as $$
begin
 for i in array_lower(a,1)..arry_upper(a,1) loop
   raise notice '%', a[i];
 end loop;
end;
$$ language plpgsql strict;

my problem is, how should I call this function?
I tried
select iterate(1,2)
select iterate([1,2])
select iterate({1,2})
none of them works

How do I actually call this function?

Thank you
Hendra





Re: [GENERAL] Upgrading side by side in Gentoo

2008-11-12 Thread Greg Smith

On Wed, 12 Nov 2008, Andrus wrote:

How do I install the latest version in Gentoo side by side, so that the old
version is not touched?


You need to get one of the experimental builds that include slotted 
support.  Take a look at 
http://overlays.gentoo.org/proj/postgresql/wiki/ExperimentalFAQ and the 
page linked to by the blog article you mentioned at 
http://dev-zero.ch/blog/archives/6-PostgreSQL-Status.html


This looks like it's still in the early stages of release, so you might 
have to get some suggestions from the Gentoo package maintainers if you 
run into problems.  The fact that this problem has been lingering around 
for over four years (http://bugs.gentoo.org/show_bug.cgi?id=42894) and 
it's not completely cleaned up gives you an idea how well the PostgreSQL 
packages are (not) maintained on Gentoo.  You really should consider just 
installing your own PostgreSQL from source rather than fooling with the 
official packages.  I would wager it will take you less time to figure out
how to do that than to fight these experimental packages into submission.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



[GENERAL] Stored function - send array as parameter to stored function

2008-11-12 Thread hendra kusuma
Hi,
I'm new here
I've been doing a little self-learning about postgresql
and find it very interesting

I've been trying to create a stored procedure that receives an array as
a parameter, and I found a good, working example on the web

the code looks like this

create or replace function iterate(a int[])
returns void as $$
begin
 for i in array_lower(a,1)..arry_upper(a,1) loop
   raise notice '%', a[i];
 end loop;
end;
$$ language plpgsql strict;

my problem is, how should I call this function?
I tried
select iterate(1,2)
select iterate([1,2])
select iterate({1,2})
none of them works

How do I actually call this function?

Thank you
Hendra



[GENERAL] can someone help me to make a sql more pretty and more concise?

2008-11-12 Thread Yi Zhao
I want to select some columns (a, b) from the table with a specified
condition, so I can do it like this:
select a, b from mytable where id = (select id from temptable where
tname = 'df' ) and stype = 'def' and range = 'afk'

but I want the result to contain an extra sum(c) and a count value,
so I use the SQL below:
select a, b,
(select count(1) from mytable where id = 
  ( 
select id from temptable where tname = 'df'
  ) and stype = 'def' and range = 'afk'  
),  
(select sum(c) from mytable where id =
  ( 
select id from temptable where tname = 'df'
  ) and stype = 'def' and range = 'afk'
)   
  from mytable where id = ( 
select id from temptable where tname = 'df'
  ) and stype = 'def' and range = 'afk';

Can someone help me make the SQL statement above prettier and more
concise?
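
One way to avoid repeating the subquery three times is to compute the
aggregates once in a derived table (a sketch, untested, using the same
names as above):

SELECT m.a, m.b, agg.cnt, agg.sum_c
FROM mytable m,
     (SELECT count(*) AS cnt, sum(c) AS sum_c
        FROM mytable
       WHERE id = (SELECT id FROM temptable WHERE tname = 'df')
         AND stype = 'def' AND range = 'afk') AS agg
WHERE m.id = (SELECT id FROM temptable WHERE tname = 'df')
  AND m.stype = 'def' AND m.range = 'afk';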




Re: [GENERAL] Upgrading side by side in Gentoo

2008-11-12 Thread Guy Rouillier

Andrus wrote:

I have separate production server running

"PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC 
i686-pc-linux-gnu-gcc (GCC) 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0, 
pie-8.7.9)"


This is a bad idea.  I run Gentoo at home on a workstation, and I like 
the approach it uses.  But it is really targeted to the individual user 
who likes to be on the bleeding edge.  *Everything* on Gentoo is 
compiled from source; that's the whole point of the distribution.  This 
is not what you want in a production platform in a business environment.


You should be looking at a distribution that has a support package (even 
if you elect not to use it), a minimum number of years that each version 
will be supported, and a predictable maintenance schedule.  You have 
many to choose from: Redhat and its free derivatives like CentOS, Ubuntu 
with its supported releases, etc.


To answer your question directly, you won't find a prepackaged solution 
to running simultaneous version of PG (or any other software package) on 
Gentoo.  That's not how Gentoo is designed to be used.  Having said 
that, I remember reading about slots, which may allow what you are 
trying to do.  But I've never investigated.


--
Guy Rouillier



Re: [GENERAL] sort_mem param of postgresql.conf

2008-11-12 Thread Greg Smith

On Wed, 12 Nov 2008, Grzegorz Jaśkiewicz wrote:


that's on 8.3; by default there was no sort_mem. I added it, changed it
to 512MB, and all of a sudden everything started to fly - wherever a
query required huge amounts of memory. The queries are reports, so they
basically join a few tables and dump all of that content - so it takes a
bit of memory to do it ;)


Be advised that the work_mem setting (and its deprecated alias sort_mem)
is applied on a per-client basis.  So if you have a bunch of people running
reports with that setting, you might discover your server running out of 
memory; that's a really high setting.  Generally, if it's only a report or 
two that need a lot more working memory for sorts, you can do this at the 
beginning of them instead:


set work_mem='512MB';

This will set the value only for that session.

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD


Re: [GENERAL] simple COPY FROM issue

2008-11-12 Thread brian
The file must be on the server. IIRC, with COPY FROM (as opposed to COPY 
TO) the path can be relative to the server process dir but it's probably 
a good idea to always use an absolute path.


If you wish to copy from the client machine you can use \copy within psql.
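
A sketch of the client-side form (a psql meta-command; the path is
relative to the client, and forward slashes work on Windows):

\copy imagineoptions from 'C:/Options20081112.csv' csv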

b

Kevin Duffy wrote:

Hello:

 


A follow up question that may clear this all up:

 


Is the 'filename' relative to the server machine or the client where
pgAdmin is running?

 

 


Kevin Duffy

WR Capital Management

40 Signal Rd

Stamford, CT 


203-504-6221

 







Re: [GENERAL] simple COPY FROM issue

2008-11-12 Thread brian

> HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.

Try:

COPY imagineoptions FROM E'C:\\Options20081112.csv' ...

or:

COPY imagineoptions FROM 'C:/Options20081112.csv' ...


Kevin Duffy wrote:

Hello All:

 


I am attempting to import data from a CSV file.

 


The command I am using is as follows:

copy imagineoptions   from 'C:\\Options20081112.csv'   DELIMITERS
','  CSV ;

 


I get the following:

WARNING:  nonstandard use of \\ in a string literal

 LINE 1: copy imagineoptions   from 'C:\\Options20081112.csv'
DELIM...

   ^

HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.

 


ERROR: could not open file "C:\Options20081112.csv" for reading: No
such file or directory

SQL state: 58P01 

 


If I run DIR at the command prompt I get:

 


C:\>dir C:\Options20081112.csv

 Volume in drive C has no label.

 Volume Serial Number is 1824-0333

 


 Directory of C:\

 


11/12/2008  04:04 PM 1,300,220 Options20081112.csv

   1 File(s)  1,300,220 bytes

   0 Dir(s)  40,459,689,984 bytes free

 


C:\>

 


I have tried many variations of slashes and back-slashes, but nothing
works.

I am working within pgAdmin under Windows XP.

 


Please help me preserve my sanity.  I do not see what I am doing wrong.

But I am certain it is very simple.

 


Best Regards

 

 


Kevin Duffy

 







[GENERAL] simple COPY FROM issue

2008-11-12 Thread Kevin Duffy
Hello:

 

A follow up question that may clear this all up:

 

Is the 'filename' relative to the server machine or the client where
pgAdmin is running?

 

 

Kevin Duffy

WR Capital Management

40 Signal Rd

Stamford, CT 

203-504-6221

 



[GENERAL] simple COPY FROM issue

2008-11-12 Thread Kevin Duffy
Hello All:

 

I am attempting to import data from a CSV file.

 

The command I am using is as follows:

copy imagineoptions   from 'C:\\Options20081112.csv'   DELIMITERS
','  CSV ;

 

I get the following:

WARNING:  nonstandard use of \\ in a string literal

 LINE 1: copy imagineoptions   from 'C:\\Options20081112.csv'
DELIM...

   ^

HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.

 

ERROR: could not open file "C:\Options20081112.csv" for reading: No
such file or directory

SQL state: 58P01 

 

If I run DIR at the command prompt I get:

 

C:\>dir C:\Options20081112.csv

 Volume in drive C has no label.

 Volume Serial Number is 1824-0333

 

 Directory of C:\

 

11/12/2008  04:04 PM 1,300,220 Options20081112.csv

   1 File(s)  1,300,220 bytes

   0 Dir(s)  40,459,689,984 bytes free

 

C:\>

 

I have tried many variations of slashes and back-slashes, but nothing
works.

I am working within pgAdmin under Windows XP.

 

Please help me preserve my sanity.  I do not see what I am doing wrong.

But I am certain it is very simple.

 

Best Regards

 

 

Kevin Duffy

 



Re: [GENERAL] Using dblink to connect as non-superuser

2008-11-12 Thread Tom Lane
Robert Fitzpatrick <[EMAIL PROTECTED]> writes:
> I try to use dblink_connect while logged into the database as the
> superuser and it returns OK, but if I disconnect and use the same
> command as follows, it give and error, password is required. Even if I
> try to use the superuser login in the connect string below while
> connected as a non-superuser, I get the error...

This probably means that the remote database is not in fact prompting
for a password on this connection.  For security reasons dblink wants
non-superuser callers to supply a password that actually gets used
--- otherwise they are effectively able to masquerade as the local
superuser to the remote database.
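
For example, a connect string that forces a TCP connection, so that
password authentication actually applies (a sketch - assumes pg_hba.conf
uses md5 or password auth for 127.0.0.1):

select dblink_connect('host=127.0.0.1 dbname=egroupware user=myuser password=mypass');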

regards, tom lane



Re: [GENERAL] Using dblink to connect as non-superuser

2008-11-12 Thread Roderick A. Anderson

Robert Fitzpatrick wrote:

I try to use dblink_connect while logged into the database as the
superuser and it returns OK, but if I disconnect and use the same
command as follows, it gives an error: password is required. Even if I
try to use the superuser login in the connect string below while
connected as a non-superuser, I get the error...

select dblink_connect('dbname=egroupware user=myuser password=mypass');

Does this mean I have to be connected to the db as the superuser? I
assume not since reading the docs and having the ability to use user and
password in my connection string.


Did you install it as postgres and not follow up with a GRANT to PUBLIC?

Anyway, a very old memory and a quick look at dblink.sql indicate
that may be needed.



Rod
--

I can try the same command as above
with only the dbname while logged in as the superuser, and it works.
Everything works fine as the superuser, but not as any non-superuser. I
am trying to develop a PL/pgSQL trigger on this 8.3 server, but the
connections to the db will come from a non-superuser via a web site
connection. What am I doing wrong?

My non-superuser owns both DBs. The error hints that the target
server's authentication method must be changed. I have tried setting
localhost to trust in pg_hba.conf; still no help.






[GENERAL] Using dblink to connect as non-superuser

2008-11-12 Thread Robert Fitzpatrick
I try to use dblink_connect while logged into the database as the
superuser and it returns OK, but if I disconnect and use the same
command as follows, it gives an error: password is required. Even if I
try to use the superuser login in the connect string below while
connected as a non-superuser, I get the error...

select dblink_connect('dbname=egroupware user=myuser password=mypass');

Does this mean I have to be connected to the db as the superuser? I
assume not, since the docs say I can use user and password in my
connection string. I can try the same command as above with only the
dbname while logged in as the superuser, and it works. Everything works
fine as the superuser, but not as any non-superuser. I am trying to
develop a PL/pgSQL trigger on this 8.3 server, but the connections to
the db will come from a non-superuser via a web site connection. What am
I doing wrong?

My non-superuser owns both DBs. The error hints that the target
server's authentication method must be changed. I have tried setting
localhost to trust in pg_hba.conf; still no help.

-- 
Robert




Re: [GENERAL] still gin index creation takes forever

2008-11-12 Thread Tom Lane
Teodor Sigaev <[EMAIL PROTECTED]> writes:
>> I'm not following.  Rightmost page of what --- it can't be the whole
>> index, can it, or the case would hardly ever apply?

> GIN's index contains btree over keys (entry tree) and for each key it 
> contains list of ItemPointers (posting list) or btree over ItemPointers 
> (posting tree or data tree) depending on its quantity. Bulk insertion 
> process collects into memory keys and sorted arrays of ItemPointers, and 
> then for each keys, it tries to insert every ItemPointer from array into 
> corresponding data tree one by one. But if the smallest ItemPointer in 
> array is greater than the biggest stored one then algorithm will insert 
> the whole array on rightmost page in data tree.

> So, in that case process can insert about 1000 ItemPointers per one data 
> tree lookup, in opposite case it does 1000 lookups in data tree.

I see.  So this could explain Ivan's issue if his table contains large
numbers of repeated GIN keys.  Ivan, is that what your data looks like?

regards, tom lane



Res: [ADMIN] [GENERAL] MAX_CONNECTIONS ??

2008-11-12 Thread paulo matadr
Is only kernel.shmmax important in this case?
As for kernel.shmmax, it was configured, but not by me.




From: Richard Huxton <[EMAIL PROTECTED]>
To: paulo matadr <[EMAIL PROTECTED]>
Cc: GENERAL ; admin <[EMAIL PROTECTED]>
Sent: Wednesday, 12 November 2008 14:41:06
Subject: Re: [ADMIN] [GENERAL] MAX_CONNECTIONS ??

paulo matadr wrote:
> I need help altering max_connections in my database; this parameter
> is at the default, MAX_CONNECTIONS=100.
> I want to change it to MAX_CONNECTIONS=300. About the parameters below,
> do I need to change anything?
> 
> 1 -sysctl.conf
> 
> kernel.shmmax = 68719476736
> kernel.shmall = 4294967296

Does your shmmax have one digit too many? 68GB?

> # - Memory -
> 
> 2-Postgresql.conf
> shared_buffers = 2048MB  
> temp_buffers = 12MB
> work_mem = 12MB
> maintenance_work_mem = 128MB  
> max_fsm_pages =   400  
> max_fsm_relations = 5000

Should be OK, depending on your work-load.

One question - what do you plan on doing with 300 active connections?
Unless you've got more than 100 processors in this machine you might
want to consider connection-pooling.

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-admin mailing list ([EMAIL PROTECTED])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin




Re: [GENERAL] Upgrading side by side in Gentoo

2008-11-12 Thread Adrian Klaver

- "Adriana Alfonzo" <[EMAIL PROTECTED]> wrote:

> *can you teach me?? I don't know how to unsubscribe!!! please..*
> 
> Raymond O'Donnell escribió:
> > On 12/11/2008 17:37, Adriana Alfonzo wrote:
> >
> >> *no more mesages please*
> >>
> >
> > You have subscribed to a mailing list, or someone has done it on
> your
> > behalf.
> >
> > If you don't want any posts from the list, please unsubscribe.
> >
> > Ray.
> >
> > --
> > Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
> > [EMAIL PROTECTED]
> > Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
> > --
> >
> >
> 
> Legal Notice: This message may contain information of interest only to
> CVG Venalum. Its copying, distribution, or use is permitted only by
> authorized persons. If you received this mail in error, please destroy
> it. E-mail messages may be altered; accordingly, CVG Venalum is not
> responsible for errors that may affect the original message.
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
  ^^^

Also at bottom of this message.


Adrian Klaver
[EMAIL PROTECTED]


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table bloat and vacuum

2008-11-12 Thread Jack Orenstein

Scott Marlowe wrote:

On Wed, Nov 12, 2008 at 8:13 AM, Jack Orenstein <[EMAIL PROTECTED]> wrote:

My application is running on 7.4. We have one huge table that drives

SNIP

We're in the process of upgrading to 8.3.4, so I'd appreciate any
thoughts on whether and how this behavior will change with the newer
release.


You will not believe how much faster 8.3 is, and how much easier
maintenance is.  You'll be like a kid in a candy store for months
looking at and using all the new features in it.  The improvements are
enormous.  The biggest difference for you is that 8.3 can throttle
vacuums (sleeping a configurable number of milliseconds between pages),
autovacuum can run three or more workers, and the autovacuum daemon is
on by default.  For the most part, your vacuuming issues will no longer
exist.


Our 7.4 vacuuming strategy has gotten pretty complex:

- Entire database once a week.

- (Eventually) biggest table in database: Vacuumed/analyzed every 1000 updates 
until there are 10,000 rows, to ensure that the optimizer does the right
thing (discussed recently on this mailing list).


- Medium-sized table containing single-row concurrency hotspots. Usually less 
than 1M rows: vacuumed every 2000 updates.


- Single-row tables - these are permanent hotspots, updated in every 
transaction: vacuumed every 2000 updates.


Can you comment on how I'll be able to simplify this vacuum schedule by relying 
on autovacuum? Can you point me at a document describing how autovacuum decides 
when to vacuum a table?


I've also had some trouble figuring out which VACUUMs should ANALYZE. 
Originally, I had every vacuum also run analyze (except for the tiny-table 
vacuums). But I ran into the "tuple concurrently updated" problem (see 
http://archives.postgresql.org/pgsql-sql/2005-05/msg00148.php), so I've had to 
back off from that. Are concurrent analyzes OK in 8.3?


Jack




Re: [GENERAL] Upgrading side by side in Gentoo

2008-11-12 Thread Adriana Alfonzo

*can you teach me?? I don't know how to unsubscribe!!! please..*

Raymond O'Donnell escribió:

On 12/11/2008 17:37, Adriana Alfonzo wrote:


*no more mesages please*



You have subscribed to a mailing list, or someone has done it on your
behalf.

If you don't want any posts from the list, please unsubscribe.

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--








Re: [GENERAL] how to "group" several records with same timestamp into one line?

2008-11-12 Thread Sam Mason
On Thu, Nov 13, 2008 at 06:56:43AM +1300, Brent Wood wrote:
> You need to use a self relation, not a group by, as no data are
> being aggregated into a new single value, which is what the group by
> achieves.

It's perfectly possible to use a GROUP BY clause; all rows from one time
period want to be accumulated into a single row.  To get somewhat close
to Brent's query, the OP could do something like:

  SELECT create_on,
array_accum(CASE channel when 'channel1' THEN data END) AS data1,
array_accum(CASE channel when 'channel1' THEN unit END) AS unit1,
array_accum(CASE channel when 'channel2' THEN data END) AS data2,
array_accum(CASE channel when 'channel2' THEN unit END) AS unit2,
array_accum(CASE channel when 'channel3' THEN data END) AS data3,
array_accum(CASE channel when 'channel3' THEN unit END) AS unit3,
array_accum(CASE channel when 'channel4' THEN data END) AS data4,
array_accum(CASE channel when 'channel4' THEN unit END) AS unit4
  FROM record_data
  GROUP BY create_on;

If the number of channels were unknown, a possibility would be:

  SELECT create_on, array_accum(channel||' '||data||' '||unit)
  FROM record_data
  GROUP BY create_on;

If this is being used for things outside PG, turning the resulting
arrays into text can make things easier; array_to_string() is good for
this.  More docs are in:

  http://www.postgresql.org/docs/current/static/functions-aggregate.html
  http://www.postgresql.org/docs/current/static/functions-array.html

If you've got a unique constraint on (create_on,channel) then you
could replace the array_accum() aggregate with MIN.  I've also just
realized that PG doesn't come with array_accum by default, you can find
a definition of it here:

  http://www.postgresql.org/docs/current/static/xaggr.html
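
For convenience, the array_accum definition from that page:

CREATE AGGREGATE array_accum (anyelement)
(
    sfunc = array_append,
    stype = anyarray,
    initcond = '{}'
);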


  Sam



Re: [GENERAL] missing FROM-clause entry for table

2008-11-12 Thread Erwin Moller

Raymond O'Donnell schreef:

On 12/11/2008 18:08, Erwin Moller wrote:
  

LEFT OUTER JOIN tblcategorypropertylang AS CPL ON
((CLP.languageid=DRV1.languageid) AND
(CPL.categorypropertyid=DRV1.categorypropertyid));

ERROR:  missing FROM-clause entry for table "clp"



You've a typo in the query - the table alias is CPL, not CLP.
  

Oh my
/me slaps head against the wall.
And I checked that query 2 times before posting here

Maybe it is time to call it a day. ;-)

Thanks Ray! (And Adrian)

Regards,
Erwin Moller

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--


  





Re: [GENERAL] missing FROM-clause entry for table

2008-11-12 Thread Adrian Klaver


- "Erwin Moller" <[EMAIL PROTECTED]> wrote:

> Hi group,
> 
> I get a mysterious errormessage while executing the following query:
> (I left the table definitions out, since I think they don't matter. If
> I 
> am wrong I'll be happy to post them.)
> 
> SELECT DRV1.languageid, DRV1.categorypropertyid, 
> CPL.categorypropertylangname
> FROM (SELECT L.languageid, CP.categorypropertyid FROM tbllanguage AS
> L, 
> tblcategoryproperty AS CP) AS DRV1
> LEFT OUTER JOIN tblcategorypropertylang AS CPL ON 
> ((CLP.languageid=DRV1.languageid) AND 
^^^ instead of CPL

> (CPL.categorypropertyid=DRV1.categorypropertyid));
> 
> ERROR:  missing FROM-clause entry for table "clp"
> 
> While this one is working just fine:
> 
> SELECT DRV1.languageid, DRV1.categorypropertyid,
> categorypropertylangname
> FROM (SELECT L.languageid, CP.categorypropertyid FROM tbllanguage
> AS 
> L, tblcategoryproperty AS CP) AS DRV1
> LEFT OUTER JOIN tblcategorypropertylang ON 
> ((tblcategorypropertylang.languageid=DRV1.languageid) AND 
> (tblcategorypropertylang.categorypropertyid=DRV1.categorypropertyid));
> 
> The only difference is that I didn't use an alias CPL for 
> tblcategorypropertylang.
> (The above queries are dynamically created by PHP based on a lot of 
> other, not related, things, and I would like to use the first way 
> because that makes sense in my current project.)
> 
> I Googled a little for the error message, but to no avail.
> The second query is also good enough for me, but I am curious what I
> am 
> missing here.
> 
> What is causing this?
> 
> Thanks for your time.
> 
> Regards,
> Erwin Moller
> 

Adrian Klaver
[EMAIL PROTECTED]



Re: [GENERAL] missing FROM-clause entry for table

2008-11-12 Thread Raymond O'Donnell
On 12/11/2008 18:08, Erwin Moller wrote:
> LEFT OUTER JOIN tblcategorypropertylang AS CPL ON
> ((CLP.languageid=DRV1.languageid) AND
> (CPL.categorypropertyid=DRV1.categorypropertyid));
> 
> ERROR:  missing FROM-clause entry for table "clp"

You've a typo in the query - the table alias is CPL, not CLP.

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



[GENERAL] missing FROM-clause entry for table

2008-11-12 Thread Erwin Moller

Hi group,

I get a mysterious errormessage while executing the following query:
(I left the table definitions out, since I think they don't matter. If I
am wrong I'll be happy to post them.)


SELECT DRV1.languageid, DRV1.categorypropertyid, 
CPL.categorypropertylangname
FROM (SELECT L.languageid, CP.categorypropertyid FROM tbllanguage AS L, 
tblcategoryproperty AS CP) AS DRV1
LEFT OUTER JOIN tblcategorypropertylang AS CPL ON 
((CLP.languageid=DRV1.languageid) AND 
(CPL.categorypropertyid=DRV1.categorypropertyid));


ERROR:  missing FROM-clause entry for table "clp"

While this one is working just fine:

SELECT DRV1.languageid, DRV1.categorypropertyid, categorypropertylangname
   FROM (SELECT L.languageid, CP.categorypropertyid FROM tbllanguage AS 
L, tblcategoryproperty AS CP) AS DRV1
   LEFT OUTER JOIN tblcategorypropertylang ON 
((tblcategorypropertylang.languageid=DRV1.languageid) AND 
(tblcategorypropertylang.categorypropertyid=DRV1.categorypropertyid));


The only difference is that I didn't use an alias CPL for 
tblcategorypropertylang.
(The above queries are dynamically created by PHP based on a lot of 
other, not related, things, and I would like to use the first way 
because that makes sense in my current project.)


I Googled a little for the error message, but to no avail.
The second query is also good enough for me, but I am curious what I am 
missing here.


What is causing this?

Thanks for your time.

Regards,
Erwin Moller



Re: [GENERAL] Upgrading side by side in Gentoo

2008-11-12 Thread Adriana Alfonzo

*no more mesages please*

Andrus escribió:

I have separate production server running

"PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC
i686-pc-linux-gnu-gcc (GCC) 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,
pie-8.7.9)"

and want to upgrade this to something newer.
I can allow some hours of downtime for upgrade at night.

I have found Gentoo upgrade instructions here
http://hasno.info/2008/7/31/upgrading-to-postgresql-8-3-on-gentoo

I have no idea how PostgreSql is installed in this server, maybe it is
compiled from source.
So I'm afraid that using those instructions may corrupt the server so
it will no longer come up in the morning, and I won't be able to fix it due
to the lack of allowed downtime and knowledge.

So I'm planning  the following strategy:

1. Create backup copy in this 8.1.4
2. Stop postmaster
3. Install latest version of PostgreSql to separate directory
4. Restore to it from backup
5. If something fails, stop new and re-start old 8.1.4 postmaster.

The difficult point for me is (3).
How do I install the latest version on Gentoo side by side so that the old
version is not touched?
In Windows it is easy: I can select another directory in the installer.
How do I do the same on this Gentoo box? I also need adminpack to read
log files from pgAdmin.

Andrus.



Aviso Legal  Este mensaje puede contener informacion de interes solo para CVG 
Venalum. Solo esta permitida su copia, distribucion o uso a personas 
autorizadas. Si recibio este corre por error, por favor destruyalo. 
Eventualmentew los correos electonicos pueden ser alterados. Al respecto, CVG 
Venalum no se hace responsable por los errores que pudieran afectar al mensaje 
original.




[GENERAL] Upgrading side by side in Gentoo

2008-11-12 Thread Andrus

I have separate production server running

"PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC 
i686-pc-linux-gnu-gcc (GCC) 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0, 
pie-8.7.9)"


and want to upgrade this to something newer.
I can allow some hours of downtime for upgrade at night.

I have found Gentoo upgrade instructions here
http://hasno.info/2008/7/31/upgrading-to-postgresql-8-3-on-gentoo

I have no idea how PostgreSql is installed in this server, maybe it is 
compiled from source.
So I'm afraid that using those instructions may corrupt the server so it 
will no longer come up in the morning, and I won't be able to fix it due to 
the lack of allowed downtime and knowledge.


So I'm planning  the following strategy:

1. Create backup copy in this 8.1.4
2. Stop postmaster
3. Install latest version of PostgreSql to separate directory
4. Restore to it from backup
5. If something fails, stop new and re-start old 8.1.4 postmaster.

The difficult point for me is (3).
How do I install the latest version on Gentoo side by side so that the old 
version is not touched?
In Windows it is easy: I can select another directory in the installer. How 
do I do the same on this Gentoo box? I also need adminpack to read log files 
from pgAdmin.


Andrus. 





Re: [GENERAL] how to "group" several records with same timestamp into one line?

2008-11-12 Thread Brent Wood
You need a self-join, not a group by, as no data are being aggregated into a 
new single value, which is what group by achieves.

This joins the table to itself, so that its columns can be replicated. The key 
is that the where clause on each alias selects just one channel, so each alias 
acts like a similar table that contains different rows.

Because you used char() instead of varchar() for channel, your string 
'channel1' may be padded with spaces to the specified length. In that case the 
where clauses below can use "like '%channel1%'" instead of "= 'channel1'", or 
you can strip the spaces before the comparison, e.g. 
"trim(a.channel) = 'channel1'". I hope this makes sense.

eg: select a.data,
           a.unit,
           b.data,
           b.unit,
           c.data,
           c.unit,
           d.data,
           d.unit,
           a.create_on
      from record_data a,
           record_data b,
           record_data c,
           record_data d
     where a.channel = 'channel1'
       and b.channel = 'channel2'
       and c.channel = 'channel3'
       and d.channel = 'channel4'
       and b.create_on = a.create_on
       and c.create_on = a.create_on
       and d.create_on = a.create_on;

Thus table a comprises all records from channel1, etc... and they are joined on 
a common timestamp.

NOTE: if any of the channels are null for any given timestamp, you will get no 
record for that timestamp using this syntax, even if other channels had data, 
because the query uses an inner join. If this is a problem then you'll need to 
reword the query as an outer join.
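If nulls are expected, the outer-join variant can be sketched like this (same 
record_data table as below; a sketch, untested):

```sql
-- anchor on the distinct timestamps, then LEFT JOIN each channel so a
-- timestamp still appears even when one channel has no reading for it
select a.data, a.unit, b.data, b.unit,
       c.data, c.unit, d.data, d.unit, t.create_on
  from (select distinct create_on from record_data) t
  left join record_data a
         on a.create_on = t.create_on and trim(a.channel) = 'channel1'
  left join record_data b
         on b.create_on = t.create_on and trim(b.channel) = 'channel2'
  left join record_data c
         on c.create_on = t.create_on and trim(c.channel) = 'channel3'
  left join record_data d
         on d.create_on = t.create_on and trim(d.channel) = 'channel4';
```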

HTH,

   Brent Wood


Brent Wood
DBA/GIS consultant
NIWA, Wellington
New Zealand
>>> "zxo102 ouyang" <[EMAIL PROTECTED]> 11/13/08 3:15 AM >>>
 Hi everyone,
 My data with the same timestamp "2008-11-12 12:12:12" in postgresql are as
follows:

rowid  data  unit  channel   create_on
---------------------------------------------------
1      1.5   MPa   channel1  2008-11-12 12:12:12
2      2.5   M3    channel2  2008-11-12 12:12:12
3      3.5   M3    channel3  2008-11-12 12:12:12
4      4.5   t     channel4  2008-11-12 12:12:12

I would like to "group" them into one line with SQL like

   1.5 MPa   2.5 M3   3.5 M3   4.5 t   2008-11-12 12:12:12


The table is defined as

CREATE TABLE record_data
(
  rowid serial NOT NULL,
  data double precision,
  unit character(10),
  channel character(20),
  create_on timestamp
);

Is that possible?   Thanks for your help in advance.

Ouyang

NIWA is the trading name of the National Institute of Water & Atmospheric 
Research Ltd.



Re: [GENERAL] MAX_CONNECTIONS ??

2008-11-12 Thread Richard Huxton
paulo matadr wrote:
> I need help to alter max_connections em my database, this parameter
> stay in defalt MAX_CONNECTIONS=100
> I want to change for MAX_CONNECTIONS=300. about parameters below,
> need to change anything?
> 
> 1 -sysctl.conf
> 
> kernel.shmmax = 68719476736
> kernel.shmall = 4294967296

Does your shmmax have one digit too many? 68GB?

> # - Memory -
> 
> 2-Postgresql.conf
> shared_buffers = 2048MB  
> temp_buffers = 12MB
> work_mem = 12MB 
> maintenance_work_mem = 128MB  
> max_fsm_pages =   400   
> max_fsm_relations = 5000

Should be OK, depending on your work-load.

One question - what do you plan on doing with 300 active connections?
Unless you've got more than 100 processors in this machine you might
want to consider connection-pooling.

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Upgrade in Place

2008-11-12 Thread Richard Huxton
Quick tip: don't start a new thread by replying to a previous message,
it can mean your message gets missed.

Naomi Walker wrote:
> We have a very large new customer that is considering using Postgres for 
> their RDBMS.  The size
> of the database will be in the several-terabyte range, making 
> unloading/reloading a time-consuming
> and disk-eating process.  This customer will be running on a Windows 
> platform. 

For such a large database, I would give serious thought to not running
on Windows. PostgreSQL has a much longer history on the unix side of the
fence, and a larger *nix community of users and developers. If you're
worried about support, I'm sure Red Hat, Novell or Sun would be more
than happy with the money you'd have spent on Windows licencing. There
are also several well established PostgreSQL consulting companies who
can support both the database and operating system.

> I've been looking at the upgrade_in_place docs in pgfoundry, but it 
> seems it is in the "trying to be
> revived" stage.
> 
> Does anyone have newer information about the move towards upgrades-in-place?

Work is ongoing for 8.4 (to allow you to upgrade in-place to 8.5). You
have pretty much full access to all development discussions by checking
the pgsql-hackers mailing list (there's also a developers side to the
website). Search for "WIP" on the hackers list. Bear in mind that
there's no guarantee that *any* feature not yet complete will make it
into 8.4 (although there are a lot of big users who really want this
feature, so it's a strong candidate).

HTH

-- 
  Richard Huxton
  Archonet Ltd



[GENERAL] Upgrade in Place

2008-11-12 Thread Naomi Walker
We have a very large new customer that is considering using Postgres for 
their RDBMS.  The size
of the database will be in the several-terabyte range, making 
unloading/reloading a time-consuming
and disk-eating process.  This customer will be running on a Windows 
platform. 

I've been looking at the upgrade_in_place docs in pgfoundry, but it 
seems it is in the "trying to be
revived" stage.

Does anyone have newer information about the move towards upgrades-in-place?

Thanks,
Naomi

-- 

Naomi Walker  Chief Information Officer
Eldorado Computing, Inc   [EMAIL PROTECTED]
  ---An EDS Company   602-604-3100

Hard things are put in our way, not to stop us, but to call out our
courage and strength. -- Anonymous





Re: [GENERAL] Post to another db using pl/pgsql

2008-11-12 Thread Steve Atkins


On Nov 12, 2008, at 8:55 AM, Robert Fitzpatrick wrote:

> I've worked with pgsql for a while, but never needed to post from a
> database trigger to another db. Is this possible? And if so, can someone
> offer a pointer to the docs on how to refer to other db's in my script,
> etc?


Look for dblink (if it's another postgresql database) or dbi-link (if
it isn't).

dblink is documented in one of the appendices of the postgresql
manual.
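For the same-server case, a minimal sketch (the connection string and target 
table here are invented for illustration; requires the dblink contrib module 
to be installed in the calling database):

```sql
-- open a named connection to the other database on this server
SELECT dblink_connect('other', 'dbname=otherdb');
-- run a statement there, e.g. from inside a trigger function
SELECT dblink_exec('other',
    'INSERT INTO some_log (note) VALUES (''written from a trigger'')');
SELECT dblink_disconnect('other');
```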

Cheers,
  Steve




Re: [GENERAL] still gin index creation takes forever

2008-11-12 Thread Teodor Sigaev
>> GIN's build algorithm could use bulk insert of ItemPointers if and only if they
>> should be inserted on rightmost page (exact piece of code - dataPlaceToPage() in
>> gindatapage.c, lines 407-427)
>
> I'm not following.  Rightmost page of what --- it can't be the whole
> index, can it, or the case would hardly ever apply?


GIN's index contains a btree over keys (the entry tree) and, for each key, a 
list of ItemPointers (posting list) or a btree over ItemPointers (posting 
tree or data tree), depending on their quantity. The bulk-insertion process 
collects keys and sorted arrays of ItemPointers into memory, and then, for 
each key, tries to insert every ItemPointer from the array into the 
corresponding data tree one by one. But if the smallest ItemPointer in the 
array is greater than the biggest stored one, the algorithm will insert the 
whole array on the rightmost page of the data tree.

So, in that case the process can insert about 1000 ItemPointers per 
data-tree lookup; in the opposite case it does 1000 lookups in the data tree.




Re: [GENERAL] Table bloat and vacuum

2008-11-12 Thread Alvaro Herrera
Adriana Alfonzo wrote:
> Please, I don't want to receive more mails

Please visit this page:
http://www.postgresql.org/mailpref/pgsql-general

and unsubscribe yourself from the list.

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Post to another db using pl/pgsql

2008-11-12 Thread Robert Fitzpatrick
On Wed, 2008-11-12 at 18:02 +0100, [EMAIL PROTECTED] wrote:
> > I've worked with pgsql for a while, but never needed to post from a
> > database trigger to another db. Is this possible? And if so, can someone
> > offer a pointer to the docs on how to refer to other db's in my script,
> > etc?
> 
> What do you mean by 'other db'? Does that mean other PostgreSQL database,
> or a completely different db (say MySQL for example)?

Sorry, yes, I'm wanting to post to another pgsql db on the same server.

-- 
Robert




Re: [GENERAL] Post to another db using pl/pgsql

2008-11-12 Thread tv
> I've worked with pgsql for a while, but never needed to post from a
> database trigger to another db. Is this possible? And if so, can someone
> offer a pointer to the docs on how to refer to other db's in my script,
> etc?

What do you mean by 'other db'? Does that mean other PostgreSQL database,
or a completely different db (say MySQL for example)?

There is a dblink contrib package - I've never worked with it, but I guess
you can use it to post to a different PostgreSQL db. AFAIK posting to a
MySQL from pl/pgsql is not possible, but you can use pl/perl with proper
packages for example.

regards
Tomas







Re: [GENERAL] sequence not restarting in a for loop (actually semi-random numbers)

2008-11-12 Thread Ivan Sergio Borgonovo
On Wed, 12 Nov 2008 11:17:03 -0500
Tom Lane <[EMAIL PROTECTED]> wrote:

> Ivan Sergio Borgonovo <[EMAIL PROTECTED]> writes:
> >   insert into mytop (id,n) select id, nextval('tmp_seq')
> > from biglist join mylist on biglist.id=mylist
> > order by biglist.something limit 3;
> 
> I suspect you are expecting the nextval()s to be done after the
> LIMIT, but actually they'll most likely be done before it, ie
> you're getting a lot of unintended nextvals happening.  Try doing
> the ORDER BY/LIMIT in a sub-select.

Thanks. That did it.

I think I've seen similar replies before... but I couldn't relate
them to my problem.

I'd do like Bart Simpson: order by, limit, nextval, sequence, order
by, limit, nextval, sequence...
so that this answer will be well indexed ;)

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




[GENERAL] Post to another db using pl/pgsql

2008-11-12 Thread Robert Fitzpatrick
I've worked with pgsql for a while, but never needed to post from a
database trigger to another db. Is this possible? And if so, can someone
offer a pointer to the docs on how to refer to other db's in my script,
etc?

-- 
Robert




Re: [GENERAL] sequence not restarting in a for loop (actually semi-random numbers)

2008-11-12 Thread Tom Lane
Ivan Sergio Borgonovo <[EMAIL PROTECTED]> writes:
>   insert into mytop (id,n) select id, nextval('tmp_seq')
> from biglist join mylist on biglist.id=mylist.id
> order by biglist.something limit 3;

I suspect you are expecting the nextval()s to be done after the LIMIT,
but actually they'll most likely be done before it, ie you're getting
a lot of unintended nextvals happening.  Try doing the ORDER BY/LIMIT
in a sub-select.
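Applied to the function from the original post, that rewrite looks roughly 
like this (a sketch, assuming the join was meant to be on mylist.id):

```sql
INSERT INTO mytop (id, n)
SELECT sub.id, nextval('tmp_seq')
  FROM (SELECT biglist.id
          FROM biglist
          JOIN mylist ON biglist.id = mylist.id
         ORDER BY biglist.something
         LIMIT 3) AS sub;
-- nextval() now fires only for the three rows that survive the LIMIT
```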

regards, tom lane



Re: [GENERAL] merge 2 dumps

2008-11-12 Thread Alan Hodgson
On Tuesday 11 November 2008, Joao Ferreira gmail 
<[EMAIL PROTECTED]> wrote:
> could I just get the "COPY TO" sections from the files and load them one
> after the other ?
>
> I never tried this before...

You might have drop foreign keys before doing so and recreate them after - 
the dumps aren't careful to maintain ordering.

-- 
Alan



[GENERAL] MAX_CONNECTIONS ??

2008-11-12 Thread paulo matadr
I need help to alter max_connections in my database; this parameter is at the 
default MAX_CONNECTIONS=100,
and I want to change it to MAX_CONNECTIONS=300.
About the parameters below, do I need to change anything?

1 -sysctl.conf

kernel.shmmax = 68719476736
kernel.shmall = 4294967296
# - Memory -

2-Postgresql.conf
shared_buffers = 2048MB  
temp_buffers = 12MB
work_mem = 12MB 
maintenance_work_mem = 128MB  
max_fsm_pages =   400   
max_fsm_relations = 5000
  
Thanks for help




[GENERAL] sequence not restarting in a for loop (actually semi-random numbers)

2008-11-12 Thread Ivan Sergio Borgonovo
I've a function that does something like:

create temp sequence tmp_seq;
for _row in (select id from mylist where Weight<_Weight)
loop
  alter sequence tmp_seq restart with 1;
-- perform setval('tmp_seq',1,false);
  insert into mytop (id,n) select id, nextval('tmp_seq')
from biglist join mylist on biglist.id=mylist.id
order by biglist.something limit 3;
end loop;
drop sequence tmp_seq;

I'd expect something like

1,1
1,2
1,3
127,1
127,2
127,3

but I get something like

1,5
1,6
1,7
127,55
127,56
127,57
152,6
152,7
152,8

what should I do to obtain the result I was expecting?


-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] Very slow queries w/ NOT IN preparation (seems like a bug, test case)

2008-11-12 Thread Richard Huxton
Sergey Konoplev wrote:
>>> What configure options did you use, what locale/encoding are you using,
>>> what nondefault settings have you got in postgresql.conf?
>>>
>>>regards, tom lane
>> You are right. I've found the odd thing (that completely drives me
>> mad) in postgresql.conf.
>>
>> You are able to reproduce slow-not-in queries by switching
>> constraint_exclusion to on in your postgresql.conf and running my test
>> (which is attached to the first message).
> 
> One more thing:
> 
> If you do
> 
> EXPLAIN SELECT 1 FROM table1 WHERE table1_id NOT IN (SELECT column1
> FROM (VALUES (123),(456),(789),... a lot of IDs here...)_);
> 
> it works as fast as with constraint_exclusion turned to off.

Good detective work sir! I can reproduce the problem here with
constraint_exclusion = on.

Presumably what it's doing is looking to see if the target table has any
relevant CHECK constraints for each of the 2000 values provided. It
won't do so for the second example because it's not smart enough to look
into the results of another select clause.

Hmm - a little bit of grepping...
 backend/optimizer/util/plancat.c
 566:relation_excluded_by_constraints(PlannerInfo *root
called from
 backend/optimizer/path/allpaths.c

You could presumably cache the results of the exclusion test, but that's
only going to be relevant where you have the same value more than once.
You could try to be smarter and evaluate all values in one go I suppose,
or limit how many you'll test against. I'm over my head here though -
you'll have to see what Tom says.

The good news is that you can just issue "SET constraint_exclusion = off"
before individual queries as a temporary workaround.

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Table bloat and vacuum

2008-11-12 Thread Adriana Alfonzo

Please, I don't want to receive more mails

Thanks

Scott Marlowe wrote:
> On Wed, Nov 12, 2008 at 8:13 AM, Jack Orenstein <[EMAIL PROTECTED]> wrote:
>> My application is running on 7.4. We have one huge table that drives
> SNIP
>> We're in the process of upgrading to 8.3.4, so I'd appreciate any
>> thoughts on whether and how this behavior will change with the newer
>> release.
>
> You will not believe how much faster 8.3 is, and how much easier
> maintenance is.  You'll be like a kid in a candy store for months
> looking at and using all the new features in it.  The improvements are
> enormous.  Biggest difference for you is that 8.3 can do vacuums in a
> background method (it sleeps x milliseconds between pages), can run 3
> or more threads, and autovacuum daemon is on by default.  For the most
> part, your vacuuming issues will no longer exist.








Re: [GENERAL] Table bloat and vacuum

2008-11-12 Thread Scott Marlowe
On Wed, Nov 12, 2008 at 8:13 AM, Jack Orenstein <[EMAIL PROTECTED]> wrote:
> My application is running on 7.4. We have one huge table that drives
SNIP
> We're in the process of upgrading to 8.3.4, so I'd appreciate any
> thoughts on whether and how this behavior will change with the newer
> release.

You will not believe how much faster 8.3 is, and how much easier
maintenance is.  You'll be like a kid in a candy store for months
looking at and using all the new features in it.  The improvements are
enormous.  Biggest difference for you is that 8.3 can do vacuums in a
background method (it sleeps x milliseconds between pages), can run 3
or more threads, and autovacuum daemon is on by default.  For the most
part, your vacuuming issues will no longer exist.



[GENERAL] Suboptimal execution plan for simple query

2008-11-12 Thread Markus Wollny
Hi!

We've got a table with the following definition:

CREATE TABLE image_relation
(
  id integer,
  article_id integer NOT NULL,
  entity_id integer NOT NULL,
  image_id integer NOT NULL,
  subline text,
  "position" integer,
  article_headline text,
  entity_name text,
  entity_type_id integer,
  entity_source text,
  default_pic character varying(3) NOT NULL,
  last_updated timestamp without time zone NOT NULL DEFAULT now(),
  CONSTRAINT pkey_image_relation PRIMARY KEY (article_id, entity_id, image_id)
)
WITH (OIDS=FALSE);

There are simple btree indexes on article_id, default_pic, entity_id, id, 
image_id, last_updated and position. The table has about 723,000 rows, stats 
say table size is 135MB, toast tables are 184MB and index size was at a 
whopping 727MB - so I thought I might do some additional maintenance. After 
reindexing, I got index size down to 131MB. This however did not affect the 
planner choices in any way, as they and the resulting execution times stayed 
the same before and after table maintenance (reindex and subsequent vacuum 
analyze). Our PostgreSQL version is 8.2.4 (I am going to move on to the latest 
and greatest 8.3 in about two weeks).

Now I've got this simple query

SELECT  image_id
FROM image_relation
WHERE entity_id = 69560::integer
ORDER BY last_updated DESC
LIMIT 1;

which currently runs for something around 600ms. Here's the explain analyze 
output:

"Limit  (cost=0.00..144.78 rows=1 width=12) (actual time=599.745..599.747 
rows=1 loops=1)"
"  ->  Index Scan Backward using idx_image_relation_last_updated on 
image_relation  (cost=0.00..39525.70 rows=273 width=12) (actual 
time=599.741..599.741 rows=1 loops=1)"
"Filter: (entity_id = 69560)"
"Total runtime: 599.825 ms"

SELECT  image_id
FROM image_relation
WHERE entity_id = 69560::integer;

only returns three rows. So I wonder why the planner chooses to use the index 
on last_updated instead of the index on entity_id; I found out that I can get 
it to reconsider and make a wiser choice by adding some seemingly superfluous 
statement to the WHERE clause (notice the AND... bit):

SELECT  image_id
FROM image_relation
WHERE entity_id = 69560
AND entity_id = entity_id
ORDER BY last_updated DESC
LIMIT 1

"Limit  (cost=881.82..881.82 rows=1 width=12) (actual time=0.097..0.099 rows=1 
loops=1)"
"  ->  Sort  (cost=881.82..881.82 rows=1 width=12) (actual time=0.094..0.094 
rows=1 loops=1)"
"Sort Key: last_updated"
"->  Index Scan using idx_image_relation_entity_id on image_relation  
(cost=0.00..881.81 rows=1 width=12) (actual time=0.063..0.075 rows=3 loops=1)"
"  Index Cond: (entity_id = 69560)"
"  Filter: (entity_id = entity_id)"
"Total runtime: 0.128 ms"

That's much more like it. The table is being vacuumed on a regular basis by 
both a nightly cron and the autovacuum daemon. 

Any ideas on what's going wrong here?
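One possibility worth testing (an assumption, not something established in 
this message): a composite index covering both the filter column and the sort 
key would let the planner answer the query with a single backward index scan, 
no sort and no extra WHERE trickery:

```sql
-- hypothetical: find the newest row for one entity_id in one index descent
CREATE INDEX idx_image_relation_entity_updated
    ON image_relation (entity_id, last_updated);
```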

Kind regards

   Markus







Re: [GENERAL] Very slow queries w/ NOT IN preparation (seems like a bug, test case)

2008-11-12 Thread Sergey Konoplev
>> What configure options did you use, what locale/encoding are you using,
>> what nondefault settings have you got in postgresql.conf?
>>
>>regards, tom lane
>
> You are right. I've found the odd thing (that completely drives me
> mad) in postgresql.conf.
>
> You are able to reproduce slow-not-in queries by switching
> constraint_exclusion to on in your postgresql.conf and running my test
> (which is attached to the first message).
>

One more thing:

If you do

EXPLAIN SELECT 1 FROM table1 WHERE table1_id NOT IN (SELECT column1
FROM (VALUES (123),(456),(789),... a lot of IDs here...)_);

it works as fast as with constraint_exclusion turned to off.

-- 
Regards,
Sergey Konoplev
--
PostgreSQL articles in English & Russian
http://gray-hemp.blogspot.com/search/label/postgresql/



Re: [GENERAL] Table bloat and vacuum

2008-11-12 Thread Alvaro Herrera
Jack Orenstein wrote:

> 1) Why does the tiny table bloat during a vacuum? Is it because the
> scan of the huge table is run as a transaction, forcing maintenance of
> dead versions of the tiny table's one row?

Yes.

> 2) Why does the bloat resolve itself? We're not doing any full
> vacuums.

Probably the one live tuple bounces to the first page at some point and
then the rest of the pages are truncated by vacuum.

> We're in the process of upgrading to 8.3.4, so I'd appreciate any
> thoughts on whether and how this behavior will change with the newer
> release.

In 8.3, vacuuming the big table will not delay dead tuple removal of the
small table.

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



[GENERAL] Table bloat and vacuum

2008-11-12 Thread Jack Orenstein

My application is running on 7.4. We have one huge table that drives
our application, and also a tiny (single-row) table used to maintain
some aggregate information. Each transaction inserts or updates 1-2
rows in the huge table, and updates the tiny table.

We vacuum the entire database once a week, and the tiny table every
2000 transactions.

I'm trying to understand some odd behavior observed regarding the tiny
table: The tiny table's disk file is usually 8K or 16K.  During the
weekly vacuum, the tiny table bloats. It's still one row, but the size
of the file grows. I've seen it get as high as 1M. But then after the
vacuum, it returns to its normal size.

1) Why does the tiny table bloat during a vacuum? Is it because the
scan of the huge table is run as a transaction, forcing maintenance of
dead versions of the tiny table's one row?

2) Why does the bloat resolve itself? We're not doing any full
vacuums.

We're in the process of upgrading to 8.3.4, so I'd appreciate any
thoughts on whether and how this behavior will change with the newer
release.

Jack Orenstein



[GENERAL] Using refcursors in application code (php, python, whatever...)

2008-11-12 Thread Webb Sprague
Hi all,

Does anyone have any advice on using application code with a
refcursor? This is a follow-up to my "is there a safe-ish way to
execute arbitrary sql" question. Now that I have a way to execute my arbitrary
sql, I need to generate an html table with the arbitrary result inside
some PHP (or whatever), so I need to loop over the results of the
cursor (or do some weird return setof magic which I just figured out).

Sorry to ask here, but I think it is more likely that the readers will
even know what a refcursor is than on a massive php email list.

Here is what I have working in psql:

select query_table_data('select * from mkn_data.datatable_00013', 'boobear');

 query_table_data
--
 boobear
(1 row)

mkn=# FETCH FORWARD ALL FROM boobear;
 col1  | col2
---+--
 bob   |   34
 alice |   32
(2 rows)

Now I need something that says

$curs.fetchall()
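Whatever the client language, the key constraint is that a refcursor (unless 
declared WITH HOLD) only lives inside the transaction that opened it, so the 
function call and the fetch must share one transaction. In plain SQL terms, 
the client-side loop amounts to (a sketch reusing the session shown above):

```sql
BEGIN;
SELECT query_table_data('select * from mkn_data.datatable_00013', 'boobear');
FETCH FORWARD ALL FROM boobear;  -- the client-side loop iterates these rows
COMMIT;
```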



Re: [GENERAL] sort_mem param of postgresql.conf

2008-11-12 Thread Grzegorz Jaśkiewicz
On Wed, Nov 12, 2008 at 3:01 PM, Alvaro Herrera
<[EMAIL PROTECTED]>wrote:

> sort_mem is just a convenience alias for work_mem.

point taken


GJ


Re: [GENERAL] sort_mem param of postgresql.conf

2008-11-12 Thread Alvaro Herrera
Grzegorz Jaśkiewicz escribió:
> that's on 8.3; by default there was no sort_mem. I added it, changed it
> to 512MB, and all of a sudden everything started to fly

sort_mem is just a convenience alias for work_mem.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] sort_mem param of postgresql.conf

2008-11-12 Thread Grzegorz Jaśkiewicz
that's on 8.3; by default there was no sort_mem. I added it, changed it
to 512MB, and all of a sudden everything started to fly, wherever it
required huge amounts of memory for queries. The queries are reports, so
they basically join a few tables and dump all of that content, so it takes a
bit of memory to do it ;)

thanks anyway.


Re: [GENERAL] still gin index creation takes forever

2008-11-12 Thread Tom Lane
Teodor Sigaev <[EMAIL PROTECTED]> writes:
> GIN's build algorithm can use bulk insertion of ItemPointers if and only if
> they are to be inserted on the rightmost page (the exact piece of code is
> dataPlaceToPage() in gindatapage.c, lines 407-427)

I'm not following.  Rightmost page of what --- it can't be the whole
index, can it, or the case would hardly ever apply?

> Is there any way to force the table's scan from the beginning?

We could extend IndexBuildHeapScan's API to support that, but I'm
not quite convinced that this is the issue.

regards, tom lane



Re: [GENERAL] sort_mem param of postgresql.conf

2008-11-12 Thread Scott Marlowe
On Wed, Nov 12, 2008 at 7:36 AM, Grzegorz Jaśkiewicz <[EMAIL PROTECTED]> wrote:
> Hey folks,
>
> Anyone know why the "sort_mem" parameter isn't in postgresql.conf by
> default? I had no idea it existed before, and we had some huge queries
> running 1-2 minutes here, doing all their sorts on disk; adding sort_mem =
> 512MB to the config shortened that to 20s. These are run only occasionally,
> so the memory won't always be used, but it helps in these few cases.

sort_mem is right there, if you're running an older version of pgsql.
I think it turned into work_mem around 8.0 or 8.1.

Are you trying to use a postgresql.conf from an older version of pgsql
with a newer version of pgsql?
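
Since such reports run only occasionally, one alternative to a global 512MB
setting is raising the limit per session; a sketch, using the modern
parameter name:

```sql
-- affects only this session; note work_mem is a per-sort-operation limit,
-- so a single query can use it several times over
SET work_mem = '512MB';
-- ... run the occasional large report here ...
RESET work_mem;  -- back to the postgresql.conf default
```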



Re: [GENERAL] Very slow queries w/ NOT IN preparation (seems like a bug, test case)

2008-11-12 Thread Sergey Konoplev
> What configure options did you use, what locale/encoding are you using,
> what nondefault settings have you got in postgresql.conf?
>
>regards, tom lane

You are right. I've found the odd thing (that completely drives me
mad) in postgresql.conf.

You can reproduce the slow NOT IN queries by switching
constraint_exclusion to on in your postgresql.conf and running my test
(which is attached to the first message).

Looking forward to hearing from you.

-- 
Regards,
Sergey Konoplev
--
PostgreSQL articles in english & russian
http://gray-hemp.blogspot.com/search/label/postgresql/



[GENERAL] sort_mem param of postgresql.conf

2008-11-12 Thread Grzegorz Jaśkiewicz
Hey folks,

Anyone know why the "sort_mem" parameter isn't in postgresql.conf by default?
I had no idea it existed before, and we had some huge queries running 1-2
minutes here, doing all their sorts on disk; adding sort_mem = 512MB to the
config shortened that to 20s. These are run only occasionally, so the memory
won't always be used, but it helps in these few cases.

-- 
GJ


Re: [GENERAL] still gin index creation takes forever

2008-11-12 Thread Teodor Sigaev

Any suggestion about how to track down the problem?


What you are describing sounds rather like a use-of-uninitialized-memory
problem, wherein the behavior depends on what happened to be in that
memory previously.  If so, using a debug/cassert-enabled build of
Postgres might help to make the behavior more reproducible.


It seems to me that a possible reason for this behavior could be the order of
the table's scan. GIN's build algorithm prefers a scan from the beginning to
the end of the table, but in 8.3 that's not always true: the scan may begin
from the middle or the end of the table depending on the sequence scan's
history.


GIN's build algorithm can use bulk insertion of ItemPointers if and only if
they are to be inserted on the rightmost page (the exact piece of code is
dataPlaceToPage() in gindatapage.c, lines 407-427).


Is there any way to force the table's scan from the beginning?
--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



Re: [GENERAL] how to "group" several records with same timestamp into one line?

2008-11-12 Thread Craig Ringer
zxo102 ouyang wrote:

> I would like to "group" them into one line with SQL like
>  
>   1.5 MPa   2.5 M3   3.5 M3   4.5 t   2008-11-12 12:12:12

Look up the "GROUP BY" clause.

http://www.postgresql.org/docs/current/static/sql-select.html

Note that with timestamps you may have to truncate/round them to the
desired precision, since current versions of PostgreSQL store timestamps
in a floating point format and two timestamps you'd expect to be equal
may not end up being seen as exactly equal.

--
Craig Ringer
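
Using the column names from the original post, a sketch combining
date_trunc() with conditional aggregation to collapse the four channel rows
into one line per timestamp (the output column names are made up here):

```sql
-- CASE without ELSE yields NULL for non-matching rows, and max() ignores
-- NULLs, so each output column picks up the one reading for its channel
SELECT date_trunc('second', create_on) AS ts,
       max(CASE WHEN channel = 'channel1' THEN data END) AS ch1_data,
       max(CASE WHEN channel = 'channel2' THEN data END) AS ch2_data,
       max(CASE WHEN channel = 'channel3' THEN data END) AS ch3_data,
       max(CASE WHEN channel = 'channel4' THEN data END) AS ch4_data
FROM record_data
GROUP BY date_trunc('second', create_on);
```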



Re: [GENERAL] how to group several records with same timestamp into one line?

2008-11-12 Thread A. Kretschmer
On Wed, 12 Nov 2008 at 18:57:42 +0800, zxo102 ouyang wrote the following:
> Hi everyone,
>  My data with same timestamp "2008-11-12 12:12:12" in postgresql are as
> follows
>  
> rowid  data  unit  channel   create_on
> ----------------------------------------------------
> 1      1.5   MPa   channel1  2008-11-12 12:12:12
> 2      2.5   M3    channel2  2008-11-12 12:12:12
> 3      3.5   M3    channel3  2008-11-12 12:12:12
> 4      4.5   t     channel4  2008-11-12 12:12:12
> ----------------------------------------------------
>  
> I would like to "group" them into one line with SQL like
>  
>   1.5 MPa   2.5 M3   3.5 M3   4.5 t   2008-11-12 12:12:12

Maybe you can create a new aggregate, for instance like there:

http://www.zigo.dhs.org/postgresql/#comma_aggregate



Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



[GENERAL] how to "group" several records with same timestamp into one line?

2008-11-12 Thread zxo102 ouyang
 Hi everyone,
 My data with same timestamp "2008-11-12 12:12:12" in postgresql are as
follows

rowid  data  unit  channel   create_on
----------------------------------------------------
1      1.5   MPa   channel1  2008-11-12 12:12:12
2      2.5   M3    channel2  2008-11-12 12:12:12
3      3.5   M3    channel3  2008-11-12 12:12:12
4      4.5   t     channel4  2008-11-12 12:12:12
----------------------------------------------------

I would like to "group" them into one line with SQL like

   1.5 MPa   2.5 M3   3.5 M3   4.5 t   2008-11-12 12:12:12


The table is defined as

CREATE TABLE record_data
(
  rowid serial NOT NULL,
  data double precision,
  unit character(10),
  channel character(20),
  create_on timestamp
)

Is that possible?   Thanks for your help in advance.

Ouyang


[GENERAL] how to group several records with same timestamp into one line?

2008-11-12 Thread zxo102 ouyang
Hi everyone,
 My data with same timestamp "2008-11-12 12:12:12" in postgresql are as
follows

rowid  data  unit  channel   create_on
----------------------------------------------------
1      1.5   MPa   channel1  2008-11-12 12:12:12
2      2.5   M3    channel2  2008-11-12 12:12:12
3      3.5   M3    channel3  2008-11-12 12:12:12
4      4.5   t     channel4  2008-11-12 12:12:12
----------------------------------------------------

I would like to "group" them into one line with SQL like

   1.5 MPa   2.5 M3   3.5 M3   4.5 t   2008-11-12 12:12:12


The table is defined as

CREATE TABLE record_data
(
  rowid serial NOT NULL,
  data double precision,
  unit character(10),
  channel character(20),
  create_on timestamp
)

Is that possible?   Thanks for your help in advance.

Ouyang


Re: [GENERAL] How to define automatic filter condition?

2008-11-12 Thread Csaba Együd


> Sure, no problem.
>
> Please avoid top-posting.
>
> Regards, Andreas

Great, thanks.

-- Csaba





Re: [GENERAL] SHMMAX and shared_bufffers

2008-11-12 Thread Thom Brown
Thanks guys.

I think I've got a clearer idea of what's going on here.  Basically I
need to take more into account than just the shared buffers setting,
and knowing that, I can leave extra head-room for other options which
will likely take up shared memory.

Cheers

Thom

On Wed, Nov 12, 2008 at 1:18 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Tue, 11 Nov 2008, Thom Brown wrote:
>
>> I've noticed that if I set my SHMMAX to 256 * 1024 * 1024 (268435456)
>> and my shared_buffers value in postgresql.conf to 256MB the server
>> fails to start.  I managed to find a tipping point: 249MB seems to be
>> too much and 248MB seems to be okay.
>
> Buffers are allocated in 8K blocks.  When you use '256MB', that turns into
> 32768 of those.  Each buffer has a header and some other overhead such that
> they actually take up 8400 bytes[1], which totals 1.025X the amount that
> actually goes into the buffer pool.  There's another MB or two that goes
> into other overhead, that part depends on things like max_connections.  That
> overhead computation isn't even exact in the source code![2]
>
> At shared_buffers=248MB, just the buffer pool overhead pushes the required
> amount to allocate up to 254.3MB; that just slips by with the rest of your
> settings.  At 249MB, the buffer pool needs 255.3MB.  With the other overhead
> added in, that just exceeds the 256MB SHMMAX and fails.
>
> 1:
> http://www.postgresql.org/docs/current/static/kernel-resources.html#SHARED-MEMORY-PARAMETERS
> 2:
> http://doxygen.postgresql.org/ipci_8c.html#5371eff96f6dea948021ddfc9f0f5a38
>
> --
> * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
>



Re: [GENERAL] How to define automatic filter condition?

2008-11-12 Thread A. Kretschmer
On Wed, 12 Nov 2008 at 11:10:41 +0100, Csaba Együd wrote the following:
> Andreas,
> thank you for your reply. Yes, I know that there is such a method, but I
> read somewhere that it is a rather limited approach. BTW, I am going to dig
> into this more deeply.
> 
> A possible solution has come to my mind: does it carry any risk to use a
> read-only view for selects while inserts and updates are performed on the
> base table? I mean that I would select data from the view and show that
> automatically filtered row set to the user, but when she/he inserts or
> updates a row the generated query would operate on the real (base) table.

Sure, no problem.


Please avoid top-posting.


Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] How to define automatic filter condition?

2008-11-12 Thread Csaba Együd

Andreas,
thank you for your reply. Yes, I know that there is such a method, but I read
somewhere that it is a rather limited approach. BTW, I am going to dig into
this more deeply.


A possible solution has come to my mind: does it carry any risk to use a
read-only view for selects while inserts and updates are performed on the base
table? I mean that I would select data from the view and show that
automatically filtered row set to the user, but when she/he inserts or updates
a row the generated query would operate on the real (base) table.


thx,

-- Csaba

""A. Kretschmer"" <[EMAIL PROTECTED]> a következoket írta 
üzenetében news:[EMAIL PROTECTED]

On Wed, 12 Nov 2008 at 8:08:08 +0100, Csaba Együd wrote the following:

Hi All,

--PG8.3
--Windows 2k3 SBS

I would like to apply an automatic filter condition to a table. I create a
TEMP table at the beginning of the session to store a value to build up a
filter condition, and I would like to apply this condition to every select
statement on that table during the current session.
I know that I could use views, but views are read-only and I also need
Update and Insert.


You can define RULEs on insert and update on a view to do real
insert/update on the base-table.


Andreas
--
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net





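
The RULE approach Andreas describes can be sketched roughly as follows; all
object names here are hypothetical, and the session_filter TEMP table stands
in for whatever value the session stores:

```sql
-- the view applies the automatic filter from the per-session TEMP table
CREATE VIEW items_v AS
  SELECT i.*
  FROM items i, session_filter f
  WHERE i.region = f.region;

-- redirect writes against the view to the base table
CREATE RULE items_v_ins AS ON INSERT TO items_v DO INSTEAD
  INSERT INTO items (id, region, payload)
  VALUES (NEW.id, NEW.region, NEW.payload);

CREATE RULE items_v_upd AS ON UPDATE TO items_v DO INSTEAD
  UPDATE items SET payload = NEW.payload WHERE id = OLD.id;
```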


Re: [GENERAL] Avoiding seq scan over 3.2 millions rows

2008-11-12 Thread tv
> "Andrus" <[EMAIL PROTECTED]> writes:
>> explain analyze SELECT sum(xxx)
>>FROM dok JOIN rid USING (dokumnr)
>>WHERE  dok.kuupaev BETWEEN '2008-04-01' AND '2008-04-30'
>
>> "Hash Join  (cost=29584.84..308259.32 rows=142686 width=0) (actual
>> time=68510.748..96932.174 rows=117883 loops=1)"
>> "  Hash Cond: ("outer".dokumnr = "inner".dokumnr)"
>> "  ->  Seq Scan on rid  (cost=0.00..195599.68 rows=3217368 width=4)
>> (actual
>> time=17.130..56572.857 rows=3247363 loops=1)"
>> "  ->  Hash  (cost=29243.76..29243.76 rows=53231 width=4) (actual
>> time=15878.782..15878.782 rows=44685 loops=1)"
>> "->  Index Scan using dok_kuupaev_idx on dok
>> (cost=0.00..29243.76
>> rows=53231 width=4) (actual time=47.102..15651.059 rows=44685 loops=1)"
>> "  Index Cond: ((kuupaev >= '2008-04-01'::date) AND (kuupaev
>> <=
>> '2008-04-30'::date))"
>> "Total runtime: 97364.282 ms"
>
>> Query performs seq scan over 3.2 million of rows.
>
> There isn't anything particularly wrong with that plan.  The alternative
> that you seem to be wishing for would involve ~5 index probes into
> "rid", which is hardly going to be free.
>
> You could try reducing random_page_cost to push the planner in the
> direction of preferring the indexscan solution, but whether this is
> actually better in your situation remains to be seen.

Or just use "enable_seqscan = off" - that should penalize the seq scan and
use an index scan instead. You can then execute the EXPLAIN ANALYZE again
and you'll see how fast the index scan is in this case.

You may try some "standard" optimization - I'd try clustering the 'rid'
table according to the "dokumnr" column, but I doubt it will outperform
the seq scan in this case. If the rows are 'wide' you may try to 'compact'
the table so that it's possible to read it faster during the seq scan. Try
something like

CREATE TABLE rid2 AS SELECT dokumnr, xxx FROM rid;

regards
Tomas
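
The enable_seqscan experiment Tomas suggests, using the query from upthread;
the setting only disfavors sequential scans for the current session and
should be reset afterwards rather than left on:

```sql
SET enable_seqscan = off;   -- heavily penalizes seq scans in this session
EXPLAIN ANALYZE
SELECT sum(xxx)
FROM dok JOIN rid USING (dokumnr)
WHERE dok.kuupaev BETWEEN '2008-04-01' AND '2008-04-30';
RESET enable_seqscan;
```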




[GENERAL] [Windows 2008 / 8.3] service account creation failure *SOLVED*

2008-11-12 Thread Lars Andersen
Just wanted to let you know that when the service account creation fails 
in 8.3 and you *do* run the installer with administrator privileges, it 
may be that you chose a password that was "too simple".


The installer of 8.3 simply outputs "The service user account 'postgres' 
could not be created."


The installer of 8.2.5, on the other hand, using the same password, 
outputs something along the lines of "Service account could not be 
created - Password is too weak!".


After I used Roboform to generate a more complex password and used that 
for 8.2.5, it installed fine.


Then I uninstalled 8.2.5 and launched the installer for 8.3 again, this 
time using the same complex password I had successfully used to get 
8.2.5 running - and voila, the install goes just fine.


Really too bad that the new installer does not specify why the creation 
failed!


Hope this helps

--Lars



Re: [GENERAL] still gin index creation takes forever

2008-11-12 Thread Ivan Sergio Borgonovo
On Tue, 11 Nov 2008 22:02:17 -0500
Tom Lane <[EMAIL PROTECTED]> wrote:

> Ivan Sergio Borgonovo <[EMAIL PROTECTED]> writes:
> > Any suggestion about how to track down the problem?
> 
> What you are describing sounds rather like a
> use-of-uninitialized-memory problem, wherein the behavior depends
> on what happened to be in that memory previously.  If so, using a
> debug/cassert-enabled build of Postgres might help to make the
> behavior more reproducible.
> 
> (Of course, if the result is that it's reproducibly fast, this
> doesn't get us much closer to solving the problem :-(.  But it
> seems worth trying.)

There is no such beast for Debian etch/sid.

Fortunately the re-indexing will happen very seldom, and I can just
split the two parts so that I'll do my superstitious rituals before
re-indexing.
But it's like living with a ghost at home and at this moment it is
out of my reach compiling postgres.

I'm surprised I'm the only one experiencing this problem, and I think
I'm using a quite popular set of packages: etch + postgresql
backport. So I'm wondering if postgresql really deserves the blame or
it's something else.

But I can't think of any "strange" behaviour on my part that could
justify what's happening.
There are times (seldom, actually) when the index gets created in
around 6 minutes, and times it takes forever even when the box is not
under load. Re-indexing with gist always succeeds in around 2 min.
I even stopped the server and reloaded everything from backup.
During restore, index creation happens in a reasonable time.
The restore didn't report any error, but the behaviour is still there.

So maybe this stuff is triggered by some combination of the
postgresql configuration.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it

