[GENERAL] Automatic install on windows

2007-06-25 Thread Jason Long
I have a client that wants a disaster recovery plan put into place.  What is
the easiest way to do a hands-free install of PostgreSQL on a Windows box?

Thank you for your time,

Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.supernovasoftware.com
HJBUG Founder and President
http://www.hjbug.com

 



Re: [GENERAL] dynamic table/col names in plpgsql

2007-06-25 Thread gary jefferson

ugh, I'm going down a rathole with this...  the dynamic part of the
query is the table name, and therefore, if I want to select into a row
variable, that variable's declaration needs to be dynamic, too.  That
seems kind of crazy, and I see no way to do that anyway.  Maybe I'm
going about this all wrong.

So, what is the right approach?  The problem I'm trying to solve is
that I want to key off one of the parameters passed into my function,
and then operate on either table_a or table_b, depending on that
parameter (do some select queries, possibly followed by an update
query).  The column names are the same for both tables, so all other
parts of the query should remain relatively static.  It's just the
table name I need to be dynamic with.

Thanks,
Gary


On 6/25/07, gary jefferson [EMAIL PROTECTED] wrote:

Thanks Michael and Steve.

Related question: If I previously had a

IF EXISTS (select ...)

statement, and the 'select ...' part now needs to be run with EXECUTE,
how do I check for existence?  It looks like I need to do an 'INTO'
with a row variable?  I can't seem to find an example snippet of how
to accomplish this in the pg docs.  Where should I be looking?

Thanks again!
Gary
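
A minimal sketch of the pattern, with hypothetical table and column names
(EXECUTE ... INTO is available in 8.1 and later; quote_ident guards the
identifier):

  CREATE OR REPLACE FUNCTION check_row(which text) RETURNS boolean AS $$
  DECLARE
    tbl text;
    found_row boolean;
  BEGIN
    -- pick the target table based on the parameter
    IF which = 'a' THEN
      tbl := 'table_a';
    ELSE
      tbl := 'table_b';
    END IF;

    -- the dynamic replacement for IF EXISTS (select ...)
    EXECUTE 'SELECT EXISTS (SELECT 1 FROM ' || quote_ident(tbl)
         || ' WHERE some_col IS NOT NULL)'
       INTO found_row;

    RETURN found_row;
  END;
  $$ LANGUAGE plpgsql;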


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] [SQL] another simple SQL question

2007-06-25 Thread Jean-David Beyer
Joshua wrote:
 Ok here is another simple question from a novice
 
 Here is what my table looks like
 
 firstname lastname fullname
 --   --   ---
  smith, john
  green, susan
  white, jeff
 
 
 How can I break the fullname field into firstname lastname fields so it
 looks like the following:
 
 firstname  lastname  fullname
 - -   -
 john smith smith, john
 susan   green green, susan
 jeff   white white, jeff
 
 Please let me know. Sorry for such simple novice questions, I appreciate
 your support.
 
How I would do it would be to write a trivial application program to do it.

I spent a long time working on databases of telephone directory information,
and we needed to look up people by name, by address, by town, etc.

It turned out that the best way to handle name-lookup fields was to leave
the first, middle, and last names in one field. A big problem is
multicultural naming: some people (e.g., Chinese) tend to give the family
name first, followed by the given name, while others (e.g., English) tend to
give given names first, followed by the family name. Telephone operating
companies do not get these things correct, so it is better just to keep the
names together.

The relational database management system we used, which I originally wrote
for UNIX, allowed partial matching on fields, and I even put a SOUNDEX
scheme on the name fields.

Thus, a query like 'ristorante italiano' would locate 'Moms Pizza Italian
Restaurant' without doing a sequential search of the entire database.
Similarly, 'T S Eliot' would locate 'Eliot Thomas Stearns'.

Problems remain; for example, 'Doug Mc Ilroy' would not find 'M Douglas
McIlroy' the way we built the program described below.

You might look here:

http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=810466&isnumber=16537

for one way to do this. It explains briefly how to make a suitable index for it.
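
For the simple 'last, first' layout in the original question, a minimal
sketch using split_part (hypothetical table name; assumes every fullname
contains a comma followed by a space):

  UPDATE people
     SET lastname  = split_part(fullname, ', ', 1),
         firstname = split_part(fullname, ', ', 2);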

-- 
  .~.  Jean-David Beyer  Registered Linux User 85642.
  /V\  PGP-Key: 9A2FC99A Registered Machine   241939.
 /( )\ Shrewsbury, New Jerseyhttp://counter.li.org
 ^^-^^ 12:25:02 up 4 days, 20:00, 3 users, load average: 4.25, 4.14, 4.12

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] permission denied for schema

2007-06-25 Thread Manuel Sugawara
Tom Lane [EMAIL PROTECTED] writes:

 jef peeraer [EMAIL PROTECTED] writes:
 Tom Lane schreef:
 That's a foreign-key check, which is supposed to be done as the owner of
 the table.  You did not show us who owns table clienten, but I think
 that role must be missing the intended(?) membership in deterp_group.

 registratie=# \z deterp.clienten;
   Access privileges for database registratie
   Schema |   Name   | Type  |Access privileges
 +--+---+--
   deterp | clienten | table | {jpe=arwdxt/jpe,deterp_group=arwdxt/jpe}

 Hmm.  This doesn't actually say so, but I think that jpe must be the
 owner of clienten, and he's also the owner of the schema.  Which makes
 the failure really darn odd.

 I tried to reproduce the situation on 8.1.9 and couldn't see any
 problem.  I do not see anything in the 8.1 CVS history that looks
 related, but it's slightly possible this is a bug that's fixed between
 8.1.3 and 8.1.9 (and even if that is not the case, you really really
 ought to update anyway).

Just for the record, I've seen the same problem in my database (a
superuser trying to INSERT leads to the same security exception), not
sure which release but the version was 8.1. Haven't seen the problem
for some time now, currently running 8.1.8.

Regards,
Manuel.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] minimum function

2007-06-25 Thread Gunther Mayer

Tom Lane wrote:

Gunther Mayer [EMAIL PROTECTED] writes:
  

SELECT minimum(5,6) = 5



You're looking for the least/greatest functions (in 8.1 and up IIRC).

regards, tom lane
  
Awesome, that's exactly what I was looking for. My pl/pgsql minimum() 
hack is gonna go in the dumpster now ;-)


Thanks so much guys.

Gunther

P.S.: Can't believe I didn't spot that in the documentation, that's what 
happens in the rare cases of using the wrong search terms...
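
For the archives, the two built-ins Tom refers to (available in 8.1 and up):

  SELECT least(5, 6);     -- 5
  SELECT greatest(5, 6);  -- 6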


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] what is the normal number of postgres.exe?

2007-06-25 Thread Wm.A.Stafford
We are testing an application received from a collaborator that uses the 
latest version of postgresql.  After a few minutes of use we often see a 
dozen or more postgres.exe listed by the windows XP task manager.  This 
is at a time when we are not doing anything with the application so 
these processes are not associated with our current use of the 
application. These processes are using zero cpu and between 2,500k and 
3,500k of memory.  Occasionally we also see one or two postgres.exe that 
are using about 50% of cpu. 

On another machine that is running postgresql as a service we usually 
see three or four postgres.exe.  Any ideas about what could be causing the 
large number of postgres.exe or the cpu-hogging postgres.exe?


Thanks for any ideas or leads.
-=bill

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] what is the normal number of postgres.exe?

2007-06-25 Thread Bill Moran
In response to Wm.A.Stafford [EMAIL PROTECTED]:

 We are testing an application received from a collaborator that uses the 
 latest version of postgresql.  After a few minutes of use we often see a 
 dozen or more postgres.exe listed by the windows XP task manager.  This 
 is at a time when we are not doing anything with the application so 
 these processes are not associated with our current use of the 
 application. These processes are using zero cpu and between 2,500k and 
 3,500k of memory.  Occasionally we also see one or two postgres.exe that 
 are using about 50% of cpu. 
 
 On another machine that is running postgresql as a service we usually 
 see three or four postgres.exe.  Any ideas about what could be causing the 
 large number of postgres.exe or the cpu-hogging postgres.exe?

Each connection to the database spawns a new process.  On a POSIX system,
you can use netstat to see what each process is connected to.  I'm sure
there's some equivalent on Windows.  Connections that are idle still
require a process running to handle them.  If they're using CPU, then
they are executing queries or doing other work.

There are also a few processes that are always running depending on your
configuration: the background writer, and possibly the autovacuum process.
These will use CPU if they're actually doing work.
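
From inside the database, a quick way to map those processes to activity
is the pg_stat_activity view (a sketch; the current_query column is only
populated when command-string statistics collection is enabled):

  SELECT procpid, datname, usename, current_query
    FROM pg_stat_activity;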

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] yet another simple SQL question

2007-06-25 Thread Joshua

Ok,

You guys must be getting sick of these newbie questions, but I can't 
resist since I am learning a lot from these email lists and getting 
results quick! Thanks to everyone for their contributions.


Here is my question

I have a column that looks like this

firstname
-
John B
Mark A
Jennifer D

Basically I have the first name followed by a middle initial. Is there a 
quick command I can run to strip the middle initial? Basically, I just 
need to delete the middle initial so the column would then look like the 
following:


firstname
---
John
Mark
Jennifer

Thanks again for all of your help today. Everything you guys have been 
sending has produced successful results.


Thanks.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Move a database from one server to other

2007-06-25 Thread Francisco Reyes

Nicholas Barr writes:



It is much much safer to use pg_dump/pg_restore and know that your data is
consistent.
http://www.postgresql.org/docs/8.2/static/backup-dump.html


In particular look at the section 
23.1.2. Using pg_dumpall


That will copy your entire data set including users.
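
A typical round trip, per the docs (host names here are placeholders):

  pg_dumpall -h oldserver > db.out
  psql -h newserver -f db.out postgres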

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Automatic install on windows

2007-06-25 Thread Magnus Hagander
Jason Long wrote:
 I have a client that wants a disaster recovery plan put into place. 
 What is the easiest way to do a hands-free install of PostgreSQL on a
 Windows box?

http://pginstaller.projects.postgresql.org/silent.html


//Magnus

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Standby servers and incrementally updated backups

2007-06-25 Thread Erik Jones
It is my understanding that once a standby server has reached the  
point where it is often waiting for wal files to replay, it is pretty  
much caught up to the primary server, with the differences being in  
whatever wal files are currently in queue to be archived by the  
primary.  If I'm correct, then for large databases wherein it can  
take hours to take a base backup, is there anything to be gained by  
using incrementally updated backups?


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] yet another simple SQL question

2007-06-25 Thread Erik Jones

On Jun 25, 2007, at 12:44 PM, Joshua wrote:


Ok,

You guys must be getting sick of these newbie questions, but I  
can't resist since I am learning a lot from these email lists and  
getting results quick! Thanks to everyone for their contributions.


Here is my question

I have a column that looks like this

firstname
-
John B
Mark A
Jennifer D

Basically I have the first name followed by a middle initial. Is  
there a quick command I can run to strip the middle initial?  
Basically, I just need to delete the middle initial so the column  
would then look like the following:


firstname
---
John
Mark
Jennifer

Thanks again for all of your help today. Everything you guys have  
been sending has produced successful results.




Try something along the lines of:


SELECT substring(firstname from '^(\w*)\W') from table_name;
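
To rewrite the column in place, you could wrap the same idea in an UPDATE;
a sketch with a hypothetical table name (the character-class form
sidesteps backslash-escaping issues, and the WHERE clause leaves rows
without a trailing initial alone):

  UPDATE table_name
     SET firstname = substring(firstname from '^([A-Za-z]+) ')
   WHERE firstname ~ '^[A-Za-z]+ [A-Za-z]$';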


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Move a database from one server to other

2007-06-25 Thread Michael Glaesemann


On Jun 25, 2007, at 10:44 , Nicholas Barr wrote:

Only copy the data directory if both servers are offline and not  
running

and if both servers use the same version of postgres.


... and have the same architecture. As you note, copying the data  
directory is *not* the recommended way to do this. Use pg_dump[all]  
or something like Slony.


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] acampada

2007-06-25 Thread Adrián Ribao Martínez
 Sorry, I have no idea what happened!!
Sorry!


El Lunes, 25 de Junio de 2007, Andrej Ricnik-Bay escribió:
 On 6/26/07, Adrián Ribao Martínez [EMAIL PROTECTED] wrote:
  I have already published the information on the page and opened a thread
  in the forum to organize it. I think it would be best if you went through
  and answered the questions raised there. This is the address:
  http://foro.universoheroes.com/showthread.php?tid=1105&pid=21813#pid21813

  Regards!

 Ok, so you top-post, and somehow get Spanish onto an
 English list ... what was that all about?


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] I want to apologize

2007-06-25 Thread Adrián Ribao Martínez
Please, I'm sorry, I have sent some emails to the list that shouldn't go 
there.
I have realized just now that there is a bug with KMail, and every time I 
reply to a message, the email address is [EMAIL PROTECTED]
I have no idea why.
Sorry again.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Standby servers and incrementally updated backups

2007-06-25 Thread Simon Riggs
On Mon, 2007-06-25 at 13:42 -0500, Erik Jones wrote:

 It is my understanding that once a standby server has reached the  
 point where it is often waiting for wal files to replay, it is pretty  
 much caught up to the primary server, with the differences being in  
 whatever wal files are currently in queue to be archived by the  
 primary. 

Yes. You can tell by using 

  select pg_xlogfile_name(pg_current_xlog_location());

to see what the current file on the Primary is.

  If I'm correct, then for large databases wherein it can  
 take hours to take a base backup, is there anything to be gained by  
 using incrementally updated backups?

If you are certain there are parts of the database not touched at all
between backups. The only real way to be sure is to take file level
checksums, or you can trust file dates. Many backup solutions can do
this for you.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] A problem in inheritance

2007-06-25 Thread Talha Khan

to impact that update you'd have to have a rule on
account_login. No rule on a child table will matter.


Well, I had the same perception. In order to double-check this I created
an ON INSERT DO NOTHING rule on the child table and did an insertion into the
master table, i.e. account_login, but this time the ON INSERT DO NOTHING rule
did take effect and no insert was done to the child table...

Note: I did not have any ON INSERT DO NOTHING rule on the master table.

Regards,

Talha Amjad


On 6/23/07, Jim Nasby [EMAIL PROTECTED] wrote:


On Jun 20, 2007, at 12:53 PM, Talha Khan wrote:
 THE ACTUAL TEST:

   DROP RULE account_login_no_update ON account_login;


   UPDATE account_login set originating_source = 'xxx';

   Now the update should not affect the child table but it does,
 evident
   from the output of the following query:

That's because you dropped the rule that would have affected that
query. Remember that rules effectively work on the query strings
themselves, so to impact that update you'd have to have a rule on
account_login. No rule on a child table will matter.
--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Standby servers and incrementally updated backups

2007-06-25 Thread Erik Jones


On Jun 25, 2007, at 3:40 PM, Simon Riggs wrote:


On Mon, 2007-06-25 at 13:42 -0500, Erik Jones wrote:


It is my understanding that once a standby server has reached the
point where it is often waiting for wal files to replay, it is pretty
much caught up to the primary server, with the differences being in
whatever wal files are currently in queue to be archived by the
primary.


Yes. You can tell by using

  select pg_xlogfile_name(pg_current_xlog_location());

to see what the current file on the Primary is.


Thanks for the tip.




 If I'm correct, then for large databases wherein it can
take hours to take a base backup, is there anything to be gained by
using incrementally updated backups?


If you are certain there are parts of the database not touched at all
between backups. The only real way to be sure is to take file level
checksums, or you can trust file dates. Many backup solutions can do
this for you.


Wait, um, what?  I'm still not clear on why you would want to run a  
backup of an already caught up standby server.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] VACUUM ANALYZE extremely slow

2007-06-25 Thread Sergei Shelukhin
errr... work_mem is 256MB of course, and 5m for the EXPLAIN ANALYZE costs.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Dynamic Log tigger (plpgsql)

2007-06-25 Thread Noah Heusser



How can I do OLD.columnName != NEW.columnName if I don't know what the
columnNames are at Compile Time?
I have the columnName in a variable.


Are you trying to do this from a plpgsql function? If so then I think
you should try to do this from a C function.

With C functions you will get more control over the new and old
versions of the tuple since you get their pointers via
TriggerData->tg_trigtuple (old tuple) and TriggerData->tg_newtuple
(new tuple).

I think this would work, and if there is no other possibility I will do 
that. But as I saw, superuser rights are necessary if you want to 
add a C function.

If possible I want to do it as a regular DB owner.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] commit transaction failed

2007-06-25 Thread nasim . sindri
I am having a problem while calling a procedure in PostgreSQL 8.2
from an ADO connection. It executes fine for some time, but after 5-10 calls
it gives the error 'StartTransaction failed' or 'CommitTransaction failed'.

CREATE OR REPLACE FUNCTION sp_getnewfiles(IN strserverid character
varying, IN nmaxcount integer, OUT stroutrecno character varying) AS
$BODY$
DECLARE

cur RECORD;
i integer;
BEGIN
i:=0;


LOCK TABLE inputtable IN ROW EXCLUSIVE MODE NOWAIT;
FOR cur IN select recno from InputTable where FileState=0  order by
recno limit nMaxCount for update
LOOP
if i=0 then
strOutRecNo:='recno=';
else
strOutRecNo:=strOutRecNo || ' or recno=';
end if;

strOutRecNo:=strOutRecNo||cur.recno;
update inputtable set filestate=1,serverid=strServerID where
recno=cur.recno;
i:=i+1;
END LOOP;

EXCEPTION
WHEN no_data_found THEN
--DO NOTHING
NULL;
WHEN OTHERS THEN
--rollback;
RAISE EXCEPTION 'some error';



END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE STRICT;





the calling code is


CADOXSet rset(pCnn);
ostringstream ost;
ost << "select * from sp_getnewfiles('" << GetServerID()
    << "'," << nMaxCount << ")";
rset.Open(ost.str());
ASSERT(rset.IsEOF()==false);
strRecNo=rset.GetFieldValue(0);



the error I encountered after some number of calls is either -

Error message: Unspecified error
Engine Used: PgOleDb
Error type : StartTransaction failed

or

Error message: Unspecified error
Engine Used: PgOleDb
Error type : commitTransaction failed


Thanks in advance:
Nasim


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Rule vs Trigger

2007-06-25 Thread Alfred Zhao
Suppose I want to update a statistics table S after an insertion into an
activity table A. In order to update S automatically, I can either create a
rule on A or create an after trigger on A. What are the performance
implications of using a rule vs. a trigger? Thanks in advance.

Alfred 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] VACUUM ANALYZE extremely slow

2007-06-25 Thread Sergei Shelukhin
Hi. Sorry for being a bit emotional, I was pretty constructive in my
earlier posts (the earlier, the more constructive if you care to
search) but I am progressively getting pissed off :(

Thanks for the initial tip, running ANALYZE w/o vacuum is faster. Are
frequent vacuums even necessary if there are virtually no deletes in
the system and updates are much less frequent compared to inserts?

Now for the answers.
* What other non-default configuration settings do you have?
I played with shared_buffers, setting them between 16k and 32k; ~24k
seems to be the best but the difference is minimal. The work_mem
setting is 256kb, and I increased effective_cache_size to ~700MB (~35% of
RAM) based on a tip from some article.
max_fsm_* is increased too, to accommodate VACUUM ANALYZE runs.

* What version of PostgreSQL are you using and on what OS?
8.1 and Debian

What kind of disks and controllers do you have?
Umm... Unfortunately I do not know, and do not have access to check :)
The development server probably has a fast consumer-grade HDD, not a
RAID, I suppose.

* My "system has to go down" remark was probably a bit exaggerated;
the system is noticeably slower when the vacuum is running, and that is
basically without workload; it will probably become unusable under
load.

* Specific queries that were slow: there are generally no specific
queries, everything runs slowly, mostly due to estimates being grossly
inaccurate (like an 800k estimated cost for a complex query from EXPLAIN
versus an actual cost of 5 from EXPLAIN ANALYZE).

I was not able to find any special bottlenecks in the queries. I come
from an MSSQL (and a bit of MySQL) background, and the only thing I
noticed is that there are a lot of seqscans in places where
indexes should be used (IMO); when I turn seqscans off, indexes are
ridiculously slow (I have already posted about it), presumably due to
the random access used to build a big result set; however, the result set
is not in fact big in real queries, and postgres supposedly doesn't do
some magic that MSSQL does (join order or something) that makes the
join that should utilize the index narrower and thus faster.

There are also some weird development-stage-specific bottlenecks like
adding a column to 34m row table and updating it to default value (I
wasn't able to wait till the end of that query, it probably runs for
days); I think I understand why it is slow based on forum posts, but
again, MSSQL and MySQL do it better.

Also there's a general impression of everything being slow even after
MySQL on the same server; starting from seqscans to aggregate
operations; I won't be able to supply concrete evidence tho (not yet
anyway), it's just the gut feeling.




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] pg_dump to include multiple tables

2007-06-25 Thread cha

Hi All,

I want to dump the database by using the pg_dump command, but the problem is
that the version at the server is 7.4, which doesn't support the provision
for excluding tables as in version 8.2.

There are 500+ tables in the database, of which 15-20 are of huge size. I
want to exclude them. Is there any way to do it?

I know that I have to use include instead of excluding the tables. Do I have
to include each and every table manually, or is there a way to do that?

Do i have to write the script for this? if yes, how should I proceed?

Please help me out with this issue.

Thanks in Advance.


Cheers,
Cha
-- 
View this message in context: 
http://www.nabble.com/pg_dump-to-include-multiple-tables-tf3944396.html#a11188783
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] fsync error when restoring from archived xlogs

2007-06-25 Thread Tom Lanyon

On 20/06/2007, at 6:48 AM, Simon Riggs wrote:

This problem is fixed in 8.1.9. Seems it was inadvertently introduced in
8.1.7 (Jan 2007) and was not reported as a bug until after 8.1.8 was
out, in early April.

The 8.1.9 release notes make no mention of the bug fix, which isn't very
helpful, I'll agree.

It's a duplicate bug. Please check next time before you report.



Simon,

Thanks for the reply. Good to know that I wasn't just doing something  
stupid. :)


To be fair, I did search for an answer before posting to the list.  
However, I must've been searching for the wrong terms as my search  
didn't turn up anything on the -bugs list!


Regards,
Tom



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Standby servers and incrementally updated backups

2007-06-25 Thread Simon Riggs
On Mon, 2007-06-25 at 16:00 -0500, Erik Jones wrote:
 On Jun 25, 2007, at 3:40 PM, Simon Riggs wrote:
   If I'm correct, then for large databases wherein it can
  take hours to take a base backup, is there anything to be gained by
  using incrementally updated backups?
 
  If you are certain there are parts of the database not touched at all
  between backups. The only real way to be sure is to take file level
  checksums, or you can trust file dates. Many backup solutions can do
  this for you.
 
 Wait, um, what?  I'm still not clear on why you would want to run a  
 backup of an already caught up standby server.

Sorry, misread your question.

While you are running a warm standby config, you will still want to take
regular backups for recoverability and DR. These are additional backups,
i.e. they are not required to maintain the warm standby.

You can backup the Primary, or you can backup the Standby, so most
people will choose to backup the Standby to reduce the overhead on the
Primary.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] unexpected shutdown

2007-06-25 Thread Marco Colombo

[EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] writes:

My database has shutdown several times in the last couple days.  I have
no
idea why.  I am running centos and I have not rebooted the server or
made
any configuration changes.

So in particular, you didn't disable memory overcommit?


LOG:  server process (PID 501) was terminated by signal 9

If you didn't issue a manual kill -9, then this is almost certainly a
trace of the kernel OOM killer at work.  Google for OOM kill to learn
more, or see memory overcommit in the PG docs.

Memory overcommit is evil on a server.

regards, tom lane




You guys were right
:Jun 17 11:04:57 kernel: Out of Memory: Killed process 24928 (postmaster).

I did not disable memory overcommit.  I guess this is something I will
have to do.  I have actually never seen this before or heard of memory
overcommit.  I am surprised a setting like this comes enabled by default. 
I read a bit about it and it seems to make sense to disable it, but from

practical experience do you know of any negative side effects?


The consensus on using overcommit_memory = 2 is far from general.

Your problem is a java application with memory issues, so I think you 
should address that directly first. Either run it elsewhere (and turn 
the host running PG into a dedicated one), or fix its memory leaks, or use 
resource limits provided by the OS to limit the java app.


Linux kernel people aren't totally clueless about VM. If they chose to 
keep overcommitting and the OOM killer enabled by default, there are reasons.


With overcommitting on, you save a lot of swap space from being 
allocated, leaving it for stuff that is actually used and not just 
potentially used. The overall system throughput is thus higher.


When it comes to an OOM situation, with overcommitting off things aren't 
much better. First, OOM happens much earlier than with overcommitting on. 
This usually isn't perceived as a big advantage, since in 95% of the cases 
the OOM is caused by one runaway process, so sooner or later it will 
cause OOM either way. But in a correctly administered server, with OS 
limits configured, a single runaway process doesn't cause OOM. OOM may 
still happen under excessive load, and I'd rather see my system handle 
some high load spikes than go into an OOM situation. So lowering the 
threshold of what 'excessive load' is isn't necessarily a good idea.


And OK, let's say you've hit OOM anyway. There's no win-win solution. 
Having PG processes SIGKILL'd is quite bad. But sitting in front of a 
keyboard watching your system die w/o being able to login (OOM, so fork 
fails) isn't much better. You may be able to do something (sysrq, maybe) 
but the chances you manage to run a proper shutdown are quite thin, in 
the general case. So you have to choose between the risk of PG being 
SIGKILL'd (but the OOM _may_ pick the right process instead) and the 
risk of being forced into hitting the 'reset' button. Either way, your 
precious data isn't happy at all.


So the bottom line is: avoid OOM by properly configuring OS resource 
limits. If you don't, then overcommit_memory = 2 is _definitely_ better. 
If you do, it's a hard call. If you think about it, the funny thing is 
that the more experienced the sysadm you're talking to is, the less 
experience he has about handling OOM situations. By definition. :)
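
For reference, the knob discussed above: on Linux, overcommit is disabled
with a sysctl (the ratio value here is just a common starting point):

  # /etc/sysctl.conf
  vm.overcommit_memory = 2   # commit limit = swap + overcommit_ratio% of RAM
  vm.overcommit_ratio = 80

  # apply without rebooting
  sysctl -p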


.TM.
--
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/ [EMAIL PROTECTED]


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Duplicate records returned

2007-06-25 Thread daniel65456
I'm getting duplicate rows returned.  I don't know, and can't find
out, how to construct the SQL to return what I want.  I have an old
version of postgres which I cannot upgrade.  I'm not even sure how to
get the version.  Does this help?

$ grep -i version /pi/bin/setup
wtversion='2.0'
export wtversion

My SQL knowledge is quite limited so I've searched all over the net
and read doco but I can't figure this out.

I want to find the total number of hours worked on all projects with
projects.parent='Projects', projects.pct<100, restrictions.hidden=5
(not hidden)

I keep getting duplicate records and I think it's because I'm
joining a table on 2 other tables.  I've tried INTERSECT but I keep
getting parse errors and can't work out why that is.  I've tried
creating a temp table but still have to do the twin joins!!  I've also
tried creating a view but without success.

Here's an example of correct data, a single user and a single project

select username, sum(hours)
from timerecs
where project like 'Testing'
and username = 'long'
group by username
;

username |sum
-+---
long |127

but there will be many projects to process and to select the
applicable projects requires test on 2 other tables, projects and
restrictions

$the_sql  = " SELECT projectname, username, sum(hours)";
$the_sql .= " FROM timerecs";
$the_sql .= " WHERE projectname = projects.projectname";
$the_sql .= " AND projectname = restrictions.projectname";
$the_sql .= " AND projects.parent = 'Projects'";
$the_sql .= " AND projects.pct < 100";
$the_sql .= " AND restrictions.hidden = 5";
$the_sql .= " AND projectname = 'Testing'"; # just for testing
$the_sql .= " AND username = 'long'";   # just for testing
$the_sql .= " AND projectname = projects.projectname";
$the_sql .= " GROUP BY projectname, username";
$the_sql .= " ORDER BY projectname, username";
$the_sql .= ";";

produces

1 Testing|long|254

How do I get the right list of projectname from timerecs by joining
with the projects and restrictions tables?
I've tried SELECT DISTINCT projectname but it makes no difference.  If I
take the 'restrictions' join out it's fine.  I've also tried prefacing
all column names with table names without any change.
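
One way to keep the sums from being multiplied by the extra tables is to
test them with EXISTS instead of joining them in. A sketch against the
tables above (untested, and assuming the old version supports EXISTS
subqueries):

  SELECT t.projectname, t.username, sum(t.hours)
    FROM timerecs t
   WHERE EXISTS (SELECT 1 FROM projects p
                  WHERE p.projectname = t.projectname
                    AND p.parent = 'Projects' AND p.pct < 100)
     AND EXISTS (SELECT 1 FROM restrictions r
                  WHERE r.projectname = t.projectname
                    AND r.hidden = 5)
   GROUP BY t.projectname, t.username
   ORDER BY t.projectname, t.username;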

Thanks


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Error or bug?

2007-06-25 Thread Vincenzo Romano
Hi all.

Under PGSQL v8.2.4, in a PL/pgSQL function, whenever I try to run this
statement:

INSERT INTO table1 SELECT  var1.*;

I get this message:

ERROR:  record type has not been registered
CONTEXT:  SQL statement INSERT INTO table1 select  $1 .*

Of course var1 has been declared as table1 type.

I suspect this could be a bug. But cannot be sure.
Any hint?

-- 
Vincenzo Romano -= NotOrAnd.IT Information Technologies =-
tel +39 0823 454163 | cel +39 339 8083886 | fax +39 02 700506964
Smooth seas never make experienced sailormen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] postgresql varchar[] data type equivalent in Oracle

2007-06-25 Thread Murali Doss
Dear All,

 

I would like to know the Oracle equivalent of the PostgreSQL varchar[ ]
data type.

 

Regards

Murali Doss T.S.



Re: [GENERAL] problem importing data with psql

2007-06-25 Thread cha

Hi All,

I was able to resolve the problem by setting PGPASSWORD=password in my
environment variables and passing the -q (quiet) option in my ant script.

Cheers,
Cha


cha wrote:
 
 Hi All,
 
 I am using the following command to import tables into my database. I have a
 list of tables which I need to import.
 
 So have made ant script and batch file to accomplish the purpose.
 
 Ant script contains all the configurations with following command :
 
 <arg line="/c psql -h ${db.host} -p ${db.port} ${db.name} -U ${db.user} -a
 -v -d -f ${data.dir}/${db.name}/${table.name}-data.sql" />
 
 To call this ant file am passing the following command in my batch file as
 :
 
 call ant -f impdb.xml -Dtable.name=abc
 call ant -f impdb.xml -Dtable.name=bcd
 
 But I am not able to import the tables with the batch file, though I am able
 to run the same command from the prompt successfully.
 
 I have to import 400+ tables , so need to automate the job rather than
 manually.
 
 So can anyone throw light on this issue? Where am I wrong? What needs to be
 done?
 
 Awaiting for the reply.
 
 Cheers,
 Cha
 

-- 
View this message in context: 
http://www.nabble.com/problem-importing-data-with-psql-tf3956973.html#a11235322
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] dynamic table/col names in plpgsql

2007-06-25 Thread gary jefferson

Is there a way to use a variable as the name of a table or column in plpgsql?

This might be a simple question, but I can't find the answer in the
docs.  http://www.postgresql.org/docs/8.2/interactive/plpgsql-declarations.html
hints that there are data types that correspond to table/column, but I
don't see them listed here
http://www.postgresql.org/docs/8.2/interactive/datatype.html ...

Also, I assume that once I figure out the right datatypes, I can
assign to them from functions like substring()?

Thanks,
Gary

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Optimizer problem in 8.1.6

2007-06-25 Thread Fernando Schapachnik
In an earlier message, Michael Glaesemann wrote:
 
 On Jun 22, 2007, at 10:16 , Fernando Schapachnik wrote:
 
 EXPLAIN SELECT DISTINCT p.id
 
 Can you provide EXPLAIN ANALYZE? I suspect that when you rewrote the  
 query it changed how the planner took into account the statistics. If  
 your statistics are off, perhaps this changes how the planner  
 rewrites the query.

Sure. The DB is VACUUM'ed daily, and the users database only receives 
a few updates per day.

This is from the rewrote one:

  
-------------------------------------------------------------------------------
 Unique  (cost=18.65..2838.38 rows=268 width=4) (actual time=0.265..1503.554 rows=209 loops=1)
   ->  Nested Loop  (cost=18.65..2529.51 rows=123548 width=4) (actual time=0.257..1127.666 rows=101992 loops=1)
         ->  Index Scan using partes_tecnicos_pkey on partes_tecnicos p  (cost=0.00..39.89 rows=268 width=4) (actual time=0.025..2.115 rows=209 loops=1)
               Filter: ((id_cola_por_ambito = 1) AND (id_situacion > 6))
         ->  Materialize  (cost=18.65..23.26 rows=461 width=0) (actual time=0.005..1.817 rows=488 loops=209)
               ->  Nested Loop  (cost=0.00..18.19 rows=461 width=0) (actual time=0.209..5.670 rows=488 loops=1)
                     ->  Index Scan using active_users on users u  (cost=0.00..5.97 rows=1 width=0) (actual time=0.141..0.147 rows=1 loops=1)
                           Index Cond: (login = 'xxx'::text)
                           Filter: (active AND ((field1 IS NULL) OR (NOT field1)))
                     ->  Seq Scan on rel_usr_sector_parte_tecnico r  (cost=0.00..7.61 rows=461 width=0) (actual time=0.053..1.995 rows=488 loops=1)
 Total runtime: 1504.500 ms
(11 rows)


The original one is taking a *lot* of time (more than an hour by now).

Thanks!

Fernando.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] problem importing data with psql

2007-06-25 Thread cha

Hi All,

I am using the following command to import tables into my database. I have a
list of tables which I need to import.

So have made ant script and batch file to accomplish the purpose.

Ant script contains all the configurations with following command :

<arg line="/c psql -h ${db.host} -p ${db.port} ${db.name} -U ${db.user} -a
-v -d -f ${data.dir}/${db.name}/${table.name}-data.sql" />

To call this ant file am passing the following command in my batch file as :

call ant -f impdb.xml -Dtable.name=abc
call ant -f impdb.xml -Dtable.name=bcd

But I am not able to import the tables with the batch file, though I am able
to run the same command from the prompt successfully.

I have to import 400+ tables , so need to automate the job rather than
manually.

So can anyone throw light on this issue? Where am I wrong? What needs to be
done?

Awaiting for the reply.

Cheers,
Cha
-- 
View this message in context: 
http://www.nabble.com/problem-importing-data-with-psql-tf3956973.html#a11227923
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Need help with generic query

2007-06-25 Thread David Abrahams
Background: I have a number of schemas all of which contain a ticket
table having the same columns.  The goal of the function xticket1
below is to collect all ticket rows satisfying some condition from all
those schemas, and additionally label each one by adding a new column
containing the name of the schema it belongs to.

  -- Create a temporary table with the right layout
  -- for our function's return type (know a better way?)
  CREATE TEMP TABLE tix ( LIKE master.ticket );
  ALTER TABLE tix ADD COLUMN schema_name text;

  CREATE OR REPLACE FUNCTION xticket1(condition TEXT)
 RETURNS SETOF tix
 AS
  $BODY$
  DECLARE
 scm RECORD;
 result RECORD;
  BEGIN
 FOR scm IN SELECT schema_name FROM public.instance_defs LOOP
   FOR result IN EXECUTE 'SELECT '
   || scm.schema_name || '.ticket.*,'
   || ' ''' || scm.schema_name || ''' AS schema_name'
   || ' FROM ' || scm.schema_name || '.ticket'
   || ' WHERE ' || condition
   LOOP
 RETURN NEXT result;
   END LOOP;
 END LOOP;
 RETURN;
  END;
  $BODY$
 LANGUAGE plpgsql;


The problem is, psql is complaining:

  ERROR:  wrong record type supplied in RETURN NEXT

I don't know why, and I don't know how to get psql to give me useful
debugging info that would help me discover why.  Can someone help?

Thanks


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] commit failed when calling postgresql procedure

2007-06-25 Thread nasim . sindri
PLATFORM: VC++ 7, ADO, WINDOWS 2003, POSTGRES 8.2

I am having a problem while calling a procedure in PostgreSQL 8.2
from an ADO connection. It executes fine for some time, but after 5-10 calls
it gives the error either 'StartTransaction failed' or 'commit failed'.

CREATE OR REPLACE FUNCTION sp_getnewfiles(IN strserverid character
varying, IN nmaxcount integer, OUT stroutrecno character varying) AS
$BODY$
DECLARE

cur RECORD;
i integer;
BEGIN
i:=0;


LOCK TABLE inputtable IN ROW EXCLUSIVE MODE NOWAIT;
FOR cur IN select recno from InputTable where FileState=0  order by
recno limit nMaxCount for update
LOOP
if i=0 then
strOutRecNo:='recno=';
else
strOutRecNo:=strOutRecNo || ' or recno=';
end if;

strOutRecNo:=strOutRecNo||cur.recno;
update inputtable set filestate=1,serverid=strServerID where
recno=cur.recno;
i:=i+1;
END LOOP;

EXCEPTION
WHEN no_data_found THEN
--DO NOTHING
NULL;
WHEN OTHERS THEN
--rollback;
RAISE EXCEPTION 'some error';



END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE STRICT;





the calling code is


CADORecordset rset(pCnn);
rset.Open("select * from sp_getnewfiles('server',10)");
strRecNo=rset.GetFieldValue(0);



the error I encountered after some number of calls is either -

Error message: Unspecified error
Engine Used: PgOleDb
Error type : StartTransaction failed

or

Error message: Unspecified error
Engine Used: PgOleDb
Error type : commit failed


Thanks in advance:
Nasim


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] LC_CTYPE and matching accented chars

2007-06-25 Thread Martin Langhoff
Trying to understand how I can get Pg 8.2 to match 'martín' when I
search for 'martin', and going through the documentation, especially

  http://www.postgresql.org/docs/8.2/static/locale.html
  http://www.postgresql.org/docs/8.1/static/functions-matching.html

Here is a transcript of my commandline - I suspect I am quite close, but
something is missing. Hints _really_ appreciated!

  # do I have a Spanish locale on this box?
  locale -a
  # no I don't - let's make one
  sudo locale-gen es_AR.utf8
  # check that I have it
  locale -a
  ...
  sudo pg_createcluster --locale=es_AR.utf8 8.2 arg
  sudo pg_ctlcluster 8.2 arg start
  sudo -u postgres createdb --cluster 8.2/arg testing
  sudo -u postgres psql --cluster 8.2/arg testing
  # in psql...
  =# create table test (value text);
  CREATE TABLE
  =#  insert into test values ('martín');
  INSERT 0 1

  # this is apparently the right way to
  # select base character based on the equivalence class
  # as defined in the LC_CTYPE
  =# select * from test where value ~ 'mart[=i=]n';
   value
  ---
  (0 rows)
  # grumble - double check
  =# SHOW LC_CTYPE;
   lc_ctype
  
  es_AR.utf8

Any hints in general?

 - How can I read the LC_CTYPE definition? IOW, locale-gen has created a
binary blob in /usr/lib/locale/es_AR.utf8/LC_CTYPE, but I'm sure there is
a human-readable definition somewhere...

 - Is that the correct syntax?

 - Pg bug? I doubt many people are counting on this behaviour... as it
seems to be widely announced to be brittle and slow.

cheers,


m
-- 
---
Martin @ Catalyst .Net .NZ  Ltd, PO Box 11-053, Manners St,  Wellington
WEB: http://catalyst.net.nz/   PHYS: Level 2, 150-154 Willis St
OFFICE: +64(4)916-7224  UK: 0845 868 5733 ext 7224  MOB: +64(21)364-017
  Make things as simple as possible, but no simpler - Einstein
---

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Standby servers and incrementally updated backups

2007-06-25 Thread Erik Jones


On Jun 25, 2007, at 4:40 PM, Simon Riggs wrote:


On Mon, 2007-06-25 at 16:00 -0500, Erik Jones wrote:

On Jun 25, 2007, at 3:40 PM, Simon Riggs wrote:

 If I'm correct, then for large databases wherein it can
take hours to take a base backup, is there anything to be gained by
using incrementally updated backups?


If you are certain there are parts of the database not touched at  
all

between backups. The only real way to be sure is to take file level
checksums, or you can trust file dates. Many backup solutions can do
this for you.


Wait, um, what?  I'm still not clear on why you would want to run a
backup of an already caught up standby server.


Sorry, misread your question.

While you are running a warm standby config, you will still want to  
take
regular backups for recoverability and DR. These are additional  
backups,

i.e they are not required to maintain the warm standby.

You can backup the Primary, or you can backup the Standby, so most
people will choose to backup the Standby to reduce the overhead on the
Primary.


Ok, yeah, that's what I was thinking, and it is where we are headed in
the next month or so here at work: we already have a standby running,
and we will be adding a second standby server that we will use for
snapshot backups (packaged with the pertinent wal files...), as well as
periodically bringing the second standby up to run dumps from, just to
cover all of our bases, and also to be able to take our main primary
server down for maintenance and still have both a production and a
standby running.  I guess I was really just wanting to make sure I
wasn't missing some other big usage for incremental backups from the
standby.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] CHECK

2007-06-25 Thread scuffio
I would like to know how I can enforce that the length of a character
field being inserted is of a certain fixed size.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Throwing exceptions

2007-06-25 Thread Germán Hüttemann Arza
Hi,

I need a way to throw a message in a function when an exception occurs, but I 
don't want to write the same message again and again in every place I need to 
throw it. So, is there a way to handle this situation in a more general 
manner?

Thanks in advance,

-- 
Germán Hüttemann Arza
CNC - Centro Nacional de Computación
UNA - Universidad Nacional de Asunción
Campus Universitario, San Lorenzo - Paraguay
http://www.cnc.una.py - Tel.: +595 21 585550

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Rule vs Trigger

2007-06-25 Thread Richard Broersma Jr
--- Alfred Zhao [EMAIL PROTECTED] wrote:
 Suppose I want to update a statistics table S after an insertion into an
 activity table A. In order to update S automatically, I can either create a
 rule on A or create an after trigger on A. What's the performance
 implication on using rule vs trigger? Thanks in advance.

My understanding is that Triggers offer better performance than rules do.  

However, one important advantage of triggers over rules is that rules are not 
able to correctly handle DML statements that affect more than one record in a 
single statement.
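
A minimal sketch of the trigger approach (hypothetical activity table a and
statistics table s):

  CREATE OR REPLACE FUNCTION bump_stats() RETURNS trigger AS $$
  BEGIN
    -- keep a running count of rows inserted into a
    UPDATE s SET insert_count = insert_count + 1;
    RETURN NULL;  -- the return value is ignored for AFTER triggers
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER a_after_insert
    AFTER INSERT ON a
    FOR EACH ROW EXECUTE PROCEDURE bump_stats();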

Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] CHECK

2007-06-25 Thread Richard Broersma Jr

--- scuffio [EMAIL PROTECTED] wrote:

 I would want to know as I make to control that the length of the field
 character insert is of one sure dimension

A check constraint can certainly do this.  If you want to generalize it, you 
can roll your check constraint into a domain (a user-defined column type).
http://www.postgresql.org/docs/8.2/interactive/ddl-constraints.html
http://www.postgresql.org/docs/8.2/interactive/sql-createdomain.html
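
A minimal sketch of both forms (names are placeholders):

  -- inline check constraint
  CREATE TABLE t1 (code varchar CHECK (char_length(code) = 8));

  -- the same rule packaged as a reusable domain
  CREATE DOMAIN code8 AS varchar CHECK (char_length(VALUE) = 8);
  CREATE TABLE t2 (code code8);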

Regards,
Richard Broersma Jr.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Error or bug?

2007-06-25 Thread Tom Lane
Vincenzo Romano [EMAIL PROTECTED] writes:
 Under PGSQL v8.2.4, in a PL/pgSQL function, whenever I try to run this
 statement:
 INSERT INTO table1 SELECT  var1.*;
 I get this message:
 ERROR:  record type has not been registered

Works for me:

$ psql ...
Welcome to psql 8.2.4, the PostgreSQL interactive terminal.
...
regression=# create table table1(f1 int, f2 text);
CREATE TABLE
regression=# create or replace function foo() returns void as $$
declare var1 table1;
begin
  select * into var1 from table1;
  insert into table1 select var1.*;
end$$ language plpgsql;
CREATE FUNCTION
regression=# insert into table1 values(1,'one');
INSERT 0 1
regression=# select foo();
 foo 
-
 
(1 row)

regression=# select * from table1;
 f1 | f2  
+-
  1 | one
  1 | one
(2 rows)

If that's not what you did, then let's see exactly what you did do.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Throwing exceptions

2007-06-25 Thread Michael Glaesemann


On Jun 21, 2007, at 10:30 , Germán Hüttemann Arza wrote:

I need a way to throw a message in a function when an exception occurs,
but I don't want to write the same message again and again in every
place I need to throw it. So, is there a way to handle this situation
in a more general manner?


The message is just a string. Assign the message to a variable and
use the variable in place of the message. Note that the format part of
RAISE must be a string literal, so pass the variable through a '%'
placeholder. For example, in PL/pgSQL:

k_error_message := 'Boom! ' || v_foo.id;
RAISE EXCEPTION '%', k_error_message;

Hope that helps.

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] pg_dump to include multiple tables

2007-06-25 Thread Michael Glaesemann


On Jun 19, 2007, at 0:40 , cha wrote:

I want to dump the database by using the pg_dump command, but the problem
is that the version at the server is 7.4, which doesn't support the
provision for excluding tables as in version 8.2.


The v8.2 pg_dump will be able to dump data from the v7.4 server, so  
use a newer pg_dump. Note that you may have difficulty loading the  
dump file created by the newer version into an older server. But  
hopefully you're upgrading, as 7.4 is getting a little long in the  
tooth.
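
A sketch of the exclusion with an 8.2 client pointed at the old server
(table names are placeholders; -T, which excludes the named table, is an
8.2 pg_dump option):

  pg_dump -h oldserver -T bigtable1 -T bigtable2 mydb > mydb.sql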


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_restore out of memory

2007-06-25 Thread Francisco Reyes
Re-did the machine that was running out of memory and installed FreeBSD 6.2 
AMD64.


The rows that used to fail now load.

Therefore, the problem is only with the i386 version.
Should I report this as a bug or is this normal and expected?


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] VACUUM ANALYZE extremely slow

2007-06-25 Thread Alvaro Herrera
Sergei Shelukhin wrote:

 * What other non-default configuration settings do you have?
 I played w/shared buffers, setting them between 16k and 32k,~ 24k
 seems to be the best but the difference is minimal. The work_mem
 setting is 256kb, and I increased effective cache size to ~700Mb (~35%
 ram) based on a tip from some article.
 max_fsm_* is increased too to accomodate vacuum analyze runs.

work_mem = 256 MB is probably too much; you might be filling your RAM
with it, causing the system to swap.  Try decreasing it.  This would be
particularly noticeable under load.  Keep in mind that work_mem is per
sort/hash and per backend, i.e. if you have a query which does 3 sorts
and is being executed by 5 processes in parallel, you will have 3 * 5 *
256 MB = 3840 MB of RAM in active use.

 * My system has to go down remark was probably a bit exaggerated;
 the system is noticeably slower when the vacuum is running and that is
 basically without workload; it will probably become unusable under
 load.

You can set the vacuum_cost_* settings in order to reduce the impact of
vacuum in the available I/O.
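
For example (a sketch; the right numbers depend on your hardware):

  SET vacuum_cost_delay = 20;   -- ms to sleep each time the cost limit is hit
  SET vacuum_cost_limit = 200;  -- the default; lower it to throttle harder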


 * Specific queries that were slow: there are generally no specific
 queries, everything runs slowly, mostly due to estimates being grossly
 inaccurate (like 800k cost for a complex query based on Explain and
 then 5, actual cost based on e.analyze)

You might want to try increasing statistic targets for the problematic
columns.  It would be useful to choose one of these and send them along
for dissection here on the lists, to investigate the actual problems.
Misestimations are a problem at times, but most of the time there are
workarounds.

-- 
Alvaro Herrerahttp://www.advogato.org/person/alvherre
Oh, great altar of passive entertainment, bestow upon me thy discordant images
at such speed as to render linear thought impossible (Calvin a la TV)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] pg_restore out of memory

2007-06-25 Thread Vivek Khera


On Jun 25, 2007, at 9:33 PM, Francisco Reyes wrote:


Therefore, the problem is only with the i386 version.
Should I report this as a bug or is this normal and expected?



I wouldn't call it a bug to need more resources than you've got  
available :-)  Obviously the limits on the i386 version of FreeBSD  
are less than those on the amd64 version, but I've not got any big  
i386 machines (i.e., more than 1GB of RAM) to compare.  All my big  
boxes are amd64.



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pg_restore out of memory

2007-06-25 Thread Francisco Reyes

Vivek Khera writes:

i wouldn't call it a bug to need more resources than you've got  
available :-)


Hm... now I am really confused.
The same settings on AMD64 work. So how are more resources available when 
I have the same amount of memory and the same settings?


I even tried a machine with less memory in AMD64, and the records that 
failed in i386 loaded fine. Even with default settings in postgresql.conf 
the problem records load, whereas they fail in i386.


I think it must be a variable that is likely 4 bytes in i386 and 8 bytes in 
64 bits. 




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] pg_restore out of memory

2007-06-25 Thread Vivek Khera


On Jun 25, 2007, at 10:32 PM, Francisco Reyes wrote:


Hm... now I am really confused.
The same settings on AMD64 work. So how are more resources  
available when I have the same amount of memory and the same  
settings?


you set your kern.maxdsiz to the same as on i386?   On even my smallest  
amd64 boxes I see a data size limit of 32GB per process.



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] pg_restore out of memory

2007-06-25 Thread Francisco Reyes

Vivek Khera writes:


you set your maxdsize to the same as on i386?


On the AMD64 I have /boot/loader.conf as
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256


In i386 I believe I had set
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
kern.maxdsiz=1600MB
kern.dfldsiz=1600MB
kern.maxssiz=512MB 



On another machine, my laptop, I did not even increase anything: I used the
defaults straight from postgresql.conf and the large columns loaded fine.

All 4 problem rows were above 80MB.


Limits compared.
AMD64
limit
cputime      unlimited
filesize     unlimited
datasize     33554432 kbytes
stacksize    524288 kbytes
coredumpsize unlimited
memoryuse    unlimited
vmemoryuse   unlimited
descriptors  11095
memorylocked unlimited
maxproc      5547
sbsize       unlimited



Limit in i386 was
Report from limit:
cputime      unlimited
filesize     unlimited
datasize     2097152 kbytes
stacksize    131072 kbytes
coredumpsize unlimited
memoryuse    unlimited
vmemoryuse   unlimited
descriptors  11095
memorylocked unlimited
maxproc      5547
sbsize       unlimited

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] data partitions across different nodes

2007-06-25 Thread 金星星

Are there any solutions based on PostgreSQL that can support
distributing partitions (horizontal fragmentation) across different
nodes?  It doesn't need to support distributed transactions, since data
inconsistency is not a critical problem in my situation.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] data partitions across different nodes

2007-06-25 Thread Mikko Partio

On 6/26/07, 金星星 [EMAIL PROTECTED] wrote:


Are there any solutions based on PostgreSQL that can support
distributing partitions (horizontal fragmentation) across different
nodes?  It doesn't need to support distributed transactions, since data
inconsistency is not a critical problem in my situation.




pgpool-II might be what you're looking for.

http://pgpool.projects.postgresql.org/pgpool-II/en/

Regards

MP


[GENERAL] varchar(n) VS text

2007-06-25 Thread Pierre Thibaudeau

Having read 
http://www.postgresql.org/docs/8.2/interactive/datatype-character.html
I am puzzling over this issue:

1) Is there ever ANY reason to prefer varchar(n) to text as a column type?

2) For instance, if I know that a character-type column will never
contain more than 300 characters, would I be better to define it as
varchar(300) or as text?

3) What if, in practice that same column usually contains strings of
no more than 10 characters (although I can't guarantee that it
wouldn't explode up to 300 in the one-in-a-million case)?

4) On the other hand, what if a column systematically contains strings
of 5 or fewer characters?  Is it better to define it as varchar(5)
or as text?


From my reading of the datatype documentation, the ONLY reason I can
think of for using varchar(n) would be in order to add an extra
data-type constraint to the column.
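
For instance, the following two definitions should behave almost
identically, the main difference being the error raised on overlong input
(a sketch with hypothetical tables):

CREATE TABLE example_v (note varchar(300));
CREATE TABLE example_t (note text CHECK (char_length(note) <= 300));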

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] varchar(n) VS text

2007-06-25 Thread Ben

On Mon, 25 Jun 2007, Pierre Thibaudeau wrote:


From my reading of the datatype documentation, the ONLY reason I can
think of for using varchar(n) would be in order to add an extra
data-type constraint to the column.


That's my understanding as well. I can think of a few reasons to use 
char(n) over text if you know exactly how many characters you will always 
have, but char isn't what you asked about.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] varchar(n) VS text

2007-06-25 Thread Tom Lane
Pierre Thibaudeau [EMAIL PROTECTED] writes:
 I am puzzling over this issue:

 1) Is there ever ANY reason to prefer varchar(n) to text as a column type?

In words of one syllable: no.

Not unless you have an application requirement for a specific maximum
length limit (eg, your client code will crash if fed a string longer
than 256 bytes, or there's a genuine data-validity constraint that you
can enforce this way).

Or if you want to have schema-level portability to some other DB that
understands varchar(N) but not text.  (varchar(N) is SQL-standard,
while text isn't, so I'm sure there are some such out there.)

 From my reading of the datatype documentation, the ONLY reason I can
 think of for using varchar(n) would be in order to add an extra
 data-type constraint to the column.

That is *exactly* what it does.  No more and no less.  There's no
performance advantage, in fact you can expect to lose a few cycles
to the constraint check.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] alter table type from double precision to real

2007-06-25 Thread Michael Fuhr
On Mon, Jun 25, 2007 at 12:35:11AM -0400, Tom Lane wrote:
 Michael Fuhr [EMAIL PROTECTED] writes:
  On Mon, Jun 25, 2007 at 09:51:30AM +0900, [EMAIL PROTECTED] wrote:
  It seems that real takes 8 byte storage sizes.
 
  Real is 4 bytes but other columns' alignment requirements might
  result in no space being saved.
 
 Even with no other columns involved, if you're on a machine with
 MAXALIGN = 8 (which includes all 64-bit platforms as well as some
 that aren't), the row width won't shrink.

I see table sizes shrink on 64-bit sparc and x86 architectures, as
in the following example that results in adjacent 4-byte columns.
Or am I misinterpreting what's happening?

test=# create table test (col1 double precision, col2 integer);
CREATE TABLE
test=# insert into test select 1.0, 1 from generate_series(1, 10000);
INSERT 0 10000
test=# select pg_relation_size('test');
 pg_relation_size 
------------------
           524288
(1 row)

test=# alter table test alter col1 type real;
ALTER TABLE
test=# select pg_relation_size('test');
 pg_relation_size 
------------------
           450560
(1 row)


-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] writing debug output in perl

2007-06-25 Thread Albe Laurenz
Harpreet Dhaliwal wrote:
 Can anyone help me out with function(s) to write debug 
 output in a perl function on postgres.

Check out DBI-Link and see how it is done there:
http://pgfoundry.org/projects/dbi-link/
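
In short, PL/Perl exposes elog() for this; a minimal sketch (the function
name and message are hypothetical):

CREATE FUNCTION debug_demo() RETURNS void AS $$
    elog(NOTICE, "reached this point");   # PL/Perl built-in; writes at NOTICE level
$$ LANGUAGE plperl;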

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] alter table type from double precision to real

2007-06-25 Thread ssoo

Actually, the table size did shrink a little,
but I misinterpreted it as not shrinking.
I expected much more shrinking.

Thank you for your concerns.

Michael Fuhr [EMAIL PROTECTED] wrote:

I see table sizes shrink on 64-bit sparc and x86 architectures, as
in the following example that results in adjacent 4-byte columns.
Or am I misinterpreting what's happening?

test=# create table test (col1 double precision, col2 integer);
CREATE TABLE
test=# insert into test select 1.0, 1 from generate_series(1, 10000);
INSERT 0 10000
test=# select pg_relation_size('test');
 pg_relation_size
------------------
           524288
(1 row)

test=# alter table test alter col1 type real;
ALTER TABLE
test=# select pg_relation_size('test');
 pg_relation_size
------------------
           450560
(1 row)


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] alter table type from double precision to real

2007-06-25 Thread Michael Fuhr
On Mon, Jun 25, 2007 at 12:35:11AM -0400, Tom Lane wrote:
 As for that pg_dump measurement, the text form isn't going to get
 smaller ... 1.2 is the same length as 1.2.

Non-text formats like -Fc should (or might) shrink, right?  They
appear to in the tests I've done.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] alter table type from double precision to real

2007-06-25 Thread Gregory Stark
[EMAIL PROTECTED] writes:

 The real type takes 4 bytes of storage and double precision takes 8 bytes.
 I altered a data type from double precision to real and vacuumed the DB.
 But PostgreSQL's disk usage did not shrink.
 And the pg_dump size remained the same.
 It seems that real takes 8 bytes of storage.

Keep in mind that vacuum doesn't immediately shorten the table when it finds
free space. It just marks the space as free and available for reuse.

This could also be due to alignment restrictions on the other columns or the
row as a whole. If you're curious exactly what's going on and how to optimize
your table layout send your table definition and we can tell you exactly how
it's being laid out and where the extra 4 bytes are going.


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] alter table type from double precision to real

2007-06-25 Thread ssoo

Michael Fuhr [EMAIL PROTECTED] wrote:

Altering a column's type rewrites the table so vacuuming afterward
shouldn't be necessary.


I'm going to alter another table's type from double precision to real.
That table is larger than the free disk space.
Is that possible?
Previously, I saw disk usage increase continuously until the
alter completed.
I'm afraid of a full disk and its aftereffects.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] alter table type from double precision to real

2007-06-25 Thread ssoo

Gregory Stark [EMAIL PROTECTED] wrote:

This could also be due to alignment restrictions on the other columns or the
row as a whole. If you're curious exactly what's going on and how to optimize
your table layout send your table definition and we can tell you exactly how
it's being laid out and where the extra 4 bytes are going.


Here's my table:

create table WaferTestItem (
 WaferID integer NOT NULL REFERENCES Wafer (ID),
 X integer NOT NULL,
 Y integer NOT NULL,
 TestItemID integer NOT NULL REFERENCES TestItem (ID),
 Value double precision NOT NULL,
 PRIMARY KEY (WaferID, X, Y, TestItemID)
);

What happens if the type of Value is altered to real?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] alter table type from double precision to real

2007-06-25 Thread Gregory Stark
[EMAIL PROTECTED] writes:

 Gregory Stark [EMAIL PROTECTED] wrote:
 This could also be due to alignment restrictions on the other columns or the
 row as a whole. If you're curious exactly what's going on and how to optimize
 your table layout send your table definition and we can tell you exactly how
 it's being laid out and where the extra 4 bytes are going.

 Here's my table:

 create table WaferTestItem (
  WaferID integer NOT NULL REFERENCES Wafer (ID),
  X integer NOT NULL,
  Y integer NOT NULL,
  TestItemID integer NOT NULL REFERENCES TestItem (ID),
  Value double precision NOT NULL,
  PRIMARY KEY (WaferID, X, Y, TestItemID)
 );

 What happens if the type of Value is altered to real?

That table is pretty much optimally packed. It's 4+4+4+4+8 or 24 bytes with no
padding between any of the fields. The next record can start immediately after
the end of the previous one because 24 is a multiple of 8. 

If you reduce the last field to 4 bytes then it'll be 4+4+4+4+4 or 20 bytes.
On your 64-bit platform you need 8-byte alignment for the next record so
there'll be 4 bytes of padding before the next record.

I haven't done the arithmetic, but there's a small chance (like 1 in 14, I
think) that this size will leave just enough space at the end of the page for
one extra record of the reduced size. In that case you would expect about 0.7%
space savings (these are really small tuples; one more on a page doesn't
represent much saving).

If you had happened to have one more integer then going to real would save you
a lot of space though. Then it would be the difference between 32 and 24 bytes
(plus record headers). That would be more than 12% saving (on 8.2 -- on 8.3 it
would be over 14%).

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] alter table type from double precision to real

2007-06-25 Thread Robert Treat
On Monday 25 June 2007 03:35, [EMAIL PROTECTED] wrote:
 Michael Fuhr [EMAIL PROTECTED] wrote:
  Altering a column's type rewrites the table so vacuuming afterward
  shouldn't be necessary.

 I'm going to alter another table's type from double precision to real.
 That table is larger than the free disk space.
 Is that possible?
 Previously, I saw disk usage increase continuously until the
 alter completed.
 I'm afraid of a full disk and its aftereffects.

Correct, modifying table data types requires a table re-write, so you're 
likely to run out of space.  You're not too likely to have much trouble by 
trying: when you run out of disk space the transaction will roll back and you 
should be ok.  One method to try to get around this might be to drop and 
then recreate the relevant indexes (thereby freeing up space ahead of time), 
though you'll need to commit between those commands. HTH
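
Roughly like this (a sketch; the index name is hypothetical, and the table
goes unprotected by that index between the two commits):

DROP INDEX wafertestitem_value_idx;      -- its space is reclaimable after commit
ALTER TABLE wafertestitem ALTER COLUMN value TYPE real;
CREATE INDEX wafertestitem_value_idx ON wafertestitem (value);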

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-25 Thread Robert Treat
On Thursday 21 June 2007 04:39, Christan Josefsson wrote:
 Ok.

 Big thanks for the information.

 You mentioned Bizgres; do you have any more information in that direction,
 or do you know whom to contact regarding information on Bizgres bitmap
 indexes?  If there is a bitmap index patch in Bizgres which can be applied
 to the latest stable source of PostgreSQL, then I have a solution until 8.4
 (which, according to your answers, is the assumed release for introducing
 on-disk bitmap indexes).


If you really want to see on-disk bitmaps get in, you might want to study the 
patches and the missing vacuum-related bits and then think about submitting 
an updated version.  My take on the future of that patch is that the original 
developers aren't terribly motivated to finish it, in large part because some 
of the testing people have done vs. 8.3 shows it solves an even smaller 
number of issues than originally hoped.  My .02 anyway. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-25 Thread Chris Browne
[EMAIL PROTECTED] (Christan Josefsson) writes:
 So you indicate that the so-called bitmap index scan, a.k.a.
 in-memory bitmap indexes (right?), already adds such an
 improvement when it comes to optimized response time on large
 query sets (having the characteristics normally used to
 identify cases where bitmap indexes improve performance, like
 low-cardinality keys, large data volumes, etc.), so that the
 on-disk indexes are not really needed, or at least not
 worthwhile implementing?

It looks very much like that may be the case...

Bitmap index scans have a somewhat different set of functionality, but
there is enough overlap that the cases where on-disk bitmap indexes
are useful (and in-memory bitmap scans aren't) look like rare edge
cases.

There may be users that see those rare edge cases all the time;
they'll find on-disk bitmap indexes worth having, and, possibly, worth
implementing.

But to be sure, there used to be a lot of burning interest in
on-disk bitmap indexes, and in-memory bitmap index scans have quenched
many of the flames...
-- 
cbbrowne,@,cbbrowne.com
http://linuxfinances.info/info/advocacy.html
WindowsNT will not accept fecal matter in its diet... it's that simple.

I suppose that is a good ward against cannibalism. -- Nick Manka

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-25 Thread Christan Josefsson

Thanks Robert!

So you indicate that the so-called bitmap index scan, a.k.a. in-memory bitmap
indexes (right?), already adds such an improvement when it comes to
optimized response time on large query sets (having the characteristics
normally used to identify cases where bitmap indexes improve performance,
like low-cardinality keys, large data volumes, etc.), so that the on-disk
indexes are not really needed, or at least not worthwhile implementing?

Regards,
Christian


2007/6/25, Robert Treat [EMAIL PROTECTED]:


On Thursday 21 June 2007 04:39, Christan Josefsson wrote:
 Ok.

 Big thanks for the information.

 You mentioned Bizgres; do you have any more information in that direction,
 or do you know whom to contact regarding information on Bizgres bitmap
 indexes?  If there is a bitmap index patch in Bizgres which can be applied
 to the latest stable source of PostgreSQL, then I have a solution until 8.4
 (which, according to your answers, is the assumed release for introducing
 on-disk bitmap indexes).


If you really want to see on-disk bitmaps get in, you might want to study the
patches and the missing vacuum-related bits and then think about submitting
an updated version.  My take on the future of that patch is that the original
developers aren't terribly motivated to finish it, in large part because some
of the testing people have done vs. 8.3 shows it solves an even smaller
number of issues than originally hoped.  My .02 anyway.

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] alter table type from double precision to real

2007-06-25 Thread Tom Lane
Michael Fuhr [EMAIL PROTECTED] writes:
 On Mon, Jun 25, 2007 at 12:35:11AM -0400, Tom Lane wrote:
 Even with no other columns involved, if you're on a machine with
 MAXALIGN = 8 (which includes all 64-bit platforms as well as some
 that aren't), the row width won't shrink.

 I see table sizes shrink on 64-bit sparc and x86 architectures, as
 in the following example that results in adjacent 4-byte columns.
 Or am I misinterpreting what's happening?

Sorry, I should've clarified that this depends on whether the total row
length is a multiple of 8.  In your example, you have an 8-byte column
followed by a 4-byte column.  MAXALIGN-8 machines will pad the row
length to 16 bytes.  You then altered it to be two 4-byte columns,
requiring no padding to have a row length of 8 bytes.  (Plus overhead
of course, but the overhead is MAXALIGN'd anyway.)

The case I was thinking of was more like this:

regression=# create table test (col1 double precision);
CREATE TABLE
regression=# insert into test select 1.0 from generate_series(1, 10000);
INSERT 0 10000
regression=# select pg_relation_size('test');
 pg_relation_size 
------------------
           368640
(1 row)

regression=# alter table test alter col1 type real;
ALTER TABLE
regression=# select pg_relation_size('test');
 pg_relation_size 
------------------
           368640
(1 row)

The space savings disappear into alignment padding.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] simple SQL question

2007-06-25 Thread Joshua

I have a column with the following values (example below)

5673
4731
4462
5422
756
3060

I want the column to display the numbers as follows:

56.73
47.31
44.62
54.22
7.56
30.60

I have been playing around with string functions but cannot seem to 
figure out a quick solution. Does anyone have any suggestions?


Please let me know.

Thanks.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: {Spam} [GENERAL] simple SQL question

2007-06-25 Thread Dimitri Fontaine
On Monday, 25 June 2007, Joshua wrote:
 I have been playing around with string functions but cannot seem to
 figure out a quick solution. Does anyone have any suggestions?

 Please let me know.

What about:

pgloader=# select a::float / 100 from (values(5673), (4731), (4462), (5422), 
(756), (3060)) as x(a);
 ?column? 
----------
    56.73
    47.31
    44.62
    54.22
     7.56
     30.6
(6 rows)

-- 
dim




Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-25 Thread Tom Lane
Chris Browne [EMAIL PROTECTED] writes:
 But to be sure, there used to be a lot of burning interest in
 on-disk bitmap indexes, and in-memory bitmap index scans have quenched
 many of the flames...

Well, we had in-memory bitmaps already in 8.1, and the bitmap index work
happened since then.

I think the main argument for bitmap indexes is the potential to make
the index smaller.  A btree index requires a minimum of 16 bytes per
entry (20 if MAXALIGN=8), whereas a bitmap index can in principle get
down to a few bits per entry for a high-cardinality column value.
So you could hope for a 10x smaller index and corresponding reduction in
index search time.

The fly in the ointment is that if the column value is so high
cardinality as all that, it's questionable whether you want an index
search at all rather than just seqscanning; and it's definite that
the index access cost will be only a fraction of the heap access cost.
So the prospects for actual net performance gain are a lot less than
the index-size argument makes them look.

There doubtless are gains on some workloads, but how much and on how
wide a range of workloads is still an open question.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] simple SQL question

2007-06-25 Thread Rich Shepard

On Mon, 25 Jun 2007, Joshua wrote:


I have a column with the following values (example below)

5673
4731
4462
5422
756
3060


  Are these numbers that will be used in calculations, or are they strings?


I want the column to display the numbers as follows:

56.73
47.31
44.62
54.22
7.56
30.60


   Numeric(4,2) will do this. Also, whatever UI you are using in your
application can make the translation from display format to storage format.
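
If the stored values really are integers, a cast at query time also gives a
two-decimal display (a sketch; "mytable" and "col" are hypothetical names):

SELECT (col / 100.0)::numeric(10,2) AS amount FROM mytable;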

I have been playing around with string functions but cannot seem to figure 
out a quick solution. Does anyone have any suggestions?


  If they are strings, why not enter and store them with the decimal point?

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.|  Accelerator(TM)
http://www.appl-ecosys.com Voice: 503-667-4517  Fax: 503-667-8863

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] another simple SQL question

2007-06-25 Thread Joshua

Ok here is another simple question from a novice

Here is what my table looks like

firstname   lastname   fullname
---------   --------   ------------
                       smith, john
                       green, susan
                       white, jeff


How can I break the fullname field into firstname lastname fields so it 
looks like the following:


firstname   lastname   fullname
---------   --------   ------------
john        smith      smith, john
susan       green      green, susan
jeff        white      white, jeff

Please let me know. Sorry for such simple novice questions, I appreciate 
your support.


THANKS!

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Move a database from one server to other

2007-06-25 Thread Adrián Ribao Martínez
Hello, I have to move all the data on my current server to the new one.
I have several databases in PostgreSQL, and I'd like to know the best way to 
move them.
I thought that maybe I could copy the whole data directory... but I'm not 
sure, and I don't know where it's stored.
I'd appreciate your help.

Thank you very much.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] another simple SQL question

2007-06-25 Thread Nicholas Barr
 Ok here is another simple question from a novice

 Here is what my table looks like

 firstname   lastname   fullname
 ---------   --------   ------------
                        smith, john
                        green, susan
                        white, jeff


 How can I break the fullname field into firstname lastname fields so it
 looks like the following:

 firstname   lastname   fullname
 ---------   --------   ------------
 john        smith      smith, john
 susan       green      green, susan
 jeff        white      white, jeff

 Please let me know. Sorry for such simple novice questions, I appreciate
 your support.

 THANKS!

 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster



temp=# create table temptable3 (firstname varchar(32), lastname
varchar(32), fullname varchar(32)) without oids;
CREATE TABLE
temp=# insert into temptable3 (fullname) values ('smith, john');
INSERT 0 1
temp=# insert into temptable3 (fullname) values ('spencer, frank');
INSERT 0 1
temp=# select * from temptable3;
 firstname | lastname |    fullname
-----------+----------+----------------
           |          | smith, john
           |          | spencer, frank
(2 rows)

temp=# update temptable3 set firstname=trim(substring(fullname from
position(',' in fullname) + 1)), lastname=trim(substring(fullname from 0
for position(',' in fullname)));
UPDATE 2
temp=# select * from temptable3;
 firstname | lastname |    fullname
-----------+----------+----------------
 john      | smith    | smith, john
 frank     | spencer  | spencer, frank
(2 rows)
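
For what it's worth, split_part() gives an equivalent and arguably more
readable version (same temptable3 as above; this sketch assumes every
fullname contains exactly one comma):

temp=# update temptable3 set lastname=trim(split_part(fullname, ',', 1)),
firstname=trim(split_part(fullname, ',', 2));
UPDATE 2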



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Move a database from one server to other

2007-06-25 Thread Nicholas Barr
 Hello, I have to move all the data on my current server to the new one.
 I have several databases in PostgreSQL, and I'd like to know the best way
 to move them.
 I thought that maybe I could copy the whole data directory... but I'm not
 sure, and I don't know where it's stored.
 I'd appreciate your help.

 Thank you very much.


Only copy the data directory if both servers are offline and not running
and if both servers use the same version of postgres. This method is not
recommended AFAIK.

It is much much safer to use pg_dump/pg_restore and know that your data is
consistent.

http://www.postgresql.org/docs/8.2/static/backup-dump.html
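
For instance, a whole-cluster move could look like this (a sketch; the
hostnames are hypothetical, and it's best to run the dump with the newer
server's client binaries if the versions differ):

$ pg_dumpall -h oldhost > cluster.sql     # roles plus every database, as plain SQL
$ psql -h newhost -f cluster.sql postgres # replay everything into the new server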

Nick



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] dynamic table/col names in plpgsql

2007-06-25 Thread gary jefferson

Thanks Michael and Steve.

Related question: If I previously had a

IF EXISTS (select ...)

statement, and the 'select ...' part now needs to be run with EXECUTE,
how do I check for existence?  It looks like I need to do an 'INTO'
with a row variable?  I can't seem to find an example snippet of how
to accomplish this in the pg docs.  Where should I be looking?

Thanks again!
Gary
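
One possible shape for the existence test (a sketch, untested; it assumes a
plpgsql function in which tablename and some_id are variables already in
scope, and a server of 8.1 or later so that EXECUTE ... INTO is available):

DECLARE
    row_exists boolean;
BEGIN
    EXECUTE 'SELECT EXISTS (SELECT 1 FROM ' || quote_ident(tablename)
         || ' WHERE id = ' || quote_literal(some_id::text) || ')'
       INTO row_exists;
    IF row_exists THEN
        -- ... what used to be the IF EXISTS (select ...) body goes here
    END IF;
END;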


On 6/23/07, Michael Glaesemann [EMAIL PROTECTED] wrote:


On Jun 23, 2007, at 22:47 , gary jefferson wrote:

 Is there a way to use a variable as the name of a table or column
 in plpgsql?

AIUI, you need to use EXECUTE and build the query string yourself.

http://www.postgresql.org/docs/8.2/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN

Take a look at the quote_ident function as well:

http://www.postgresql.org/docs/8.2/interactive/functions-string.html

Hope this helps you get started.

Michael Glaesemann
grzm seespotcode net





---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] acampada

2007-06-25 Thread Adrián Ribao Martínez
I have already published the information on the page and opened a post in 
the forum to organize it.
I think it would be best if you went along answering the questions being 
asked there. This is the address: 
http://foro.universoheroes.com/showthread.php?tid=1105pid=21813#pid21813

Best regards!

On Thursday, 21 June 2007, [EMAIL PROTECTED] wrote:
 thank you very much Adrián, see you soon, best regards
 Guadalupe
 if people get in touch with me, I will keep you informed so you know how
 the campout is going

 Adrián Ribao Martínez [EMAIL PROTECTED] wrote:
  Hi, yes I received it; it's just that I've been a bit busy. Let's see if
  I can put it up on the page and the forum soon, OK?
  Sorry for the delay, best regards.
 
  On Wednesday, 20 June 2007, she wrote:
  Hi Adrián,
  this is Guadalupe, the one organizing the campout for the Héroes concert
  in Zaragoza. I sent you two messages a few days ago, one through this
  channel and another from my personal email account. Did you receive
  them? Will you be able to put the information on the forum? Do you need
  any more details?
  Well, I await your news; thanks in advance.
  best regards
  Guadalupe
 
  Adrián Ribao Martínez [EMAIL PROTECTED] wrote:
   Hi, I read your email by chance, because I look through the 1300 spam
   messages I have once a year, and it turns out yours was in there.
  
   It strikes me as a very interesting idea; if you want I can publish it
   on the page. I don't know whether you are already full or not. Roughly
   how much is the price? How many buses would there be, and how
   frequently would each one run?
  
   Best regards.
  
   On Thursday, 14 June 2007, she wrote:
   Hi, I'm getting in touch with you to see if you can help me. I want to
   give information to all those fans who have their tickets to go to the
   concerts on the 10th and 12th of October in Zaragoza and have been
   left without accommodation, so as to offer them an option. An
   organization is being set up for a campout on those days in the Ribera
   Alta del Ebro district; several villages (all 20 to 30 km from the
   city, with motorway or highway access) will set it up especially for
   these people, providing toilets, a coach to take them down to Zaragoza
   for the concert and bring them back to the campsite, food the next
   day, and security... in short, we think it will be very good, the
   price will be very reasonable, and we already have about 150 people
   signed up. Since everything is already fairly well organized, we think
   it makes sense that anyone who cannot find a place and is interested
   in this option should be able to contact us; we would tell them how to
   make the reservation and give them all the relevant information.
   Thank you; with your experience on the forum, this could be a great
   help. If they wish, they can get in touch via: [EMAIL PROTECTED]; my
   name is Guadalupe, they can write there and we will reply to everyone.
   Many thanks, best regards


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly