[GENERAL] Installation of postgres server-8.4

2010-10-20 Thread sameer malve
Hi sir,

   Can you please guide me through the installation of PostgreSQL server 8.4?


Thanks & Regards,
Sameer M. Malve


Re: [GENERAL] Installation of postgres server-8.4

2010-10-20 Thread Dann Corbit

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of sameer malve
Sent: Wednesday, October 20, 2010 12:14 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Installation of postgres server-8.4

Hi sir,

   Can you please guide me for installation of postgres server-8.4.
>>
If you are on a POSIX platform, get an RPM and install from that.
http://www.if-not-true-then-false.com/2010/howto-install-postgresql-8-4-database-server-on-centos-fedora-red-hat/
or perhaps something like this, if platform allows:
https://help.ubuntu.com/community/PostgreSQL

If you are on a Mac or a Windows platform, use the one click installer.
http://www.enterprisedb.com/learning/pginst_guide.do

None of my business, but if it is a new installation, why not 9.0.1 instead of 
8.4?
<<

Thanks & Regards,
Sameer M. Malve


Re: [GENERAL] Installation of postgres server-8.4

2010-10-20 Thread Sergey Konoplev
Start with this: http://www.postgresql.org/docs/8.4/interactive/admin.html

On 20 October 2010 11:14, sameer malve  wrote:
> Hi sir,
>
>    Can you please guide me for installation of postgres server-8.4.
>
>
> Thanks & Regards,
> Sameer M. Malve
>



-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com /
Linkedin: http://ru.linkedin.com/in/grayhemp /
JID/GTalk: gray...@gmail.com / Skype: gray-hemp / ICQ: 29353802



[GENERAL] postgre on virtual machine

2010-10-20 Thread Georgi Ivanov
Hi,
Is there any specific reason one should or should not run Postgres on a virtual
machine in production?
Is there any special tuning for a virtual environment?
A client of mine is running 8.4 on a virtual machine (VMware), so I'm wondering
if there is anything special I can tune to speed it up a bit.
Also, if I must choose between more RAM and more virtual cores, which should I
choose?


Re: [GENERAL] postgre on virtual machine

2010-10-20 Thread Emanuel Calvo Franco
2010/10/20 Georgi Ivanov :
> Hi,
> Is there any specific reason one should/should not run postgre on virtual
> machine for production ?
> Is there any special tuning for virtual environment ?
> Client of mine is running 8.4 on  virtual machine (VMWare ) so I'm wandering
> if there is anything special I can tune to speed up a bit.
> Also if I must choose between more RAM and more virtual cores what should I
> choose ?
>

It all depends, but in my experience I have seen good performance for
SOME applications in virtual environments. It is all relative to the
transaction volume, load average, database size in GB, and so on.

First try to determine the 'weight' of the tasks this DB will have to handle.

Next, remember to have:
- good storage (amount and speed)
- enough processors to handle all the processes that Postgres launches
- redundancy or good backups.

You can't skimp on either RAM or cores. How the application will use the
database is what determines the exact amount of each.

If you share more details, maybe we can help a little more.



-- 
              Emanuel Calvo Franco
        DBA | www.emanuelcalvofranco.com.ar
Curso a distancia Nivel 1 Admin Postgres:
    http://www.postgresql-es.org/node/525



[GENERAL] Problem with Postgresql 9.0 streaming replication on Solaris 10 (x86)

2010-10-20 Thread dan.m.harris

Hi all,

I'm having a problem getting streaming replication to work between two
servers running Solaris 10 (both x86 platforms).

Both servers have PG 9.0 installed at /postgres/postgres/9.0_pgdg/, with a
database initialised at /postgres/postgres/9.0_pgdg/data/. 

I have followed the instructions at
http://wiki.postgresql.org/wiki/Streaming_Replication to set up my servers,
except that I haven't set up WAL archiving to allow the standby to catch up.
I will in future, but I'm just prototyping for now.

On my standby server (10.254.2.37), my recovery.conf file looks like this:
bash-3.00$ cat data/recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=10.254.2.1 port='

On my primary server (10.254.2.1), my pg_hba.conf file looks like this:
# TYPE DATABASE USER CIDR-ADDRESS METHOD
host replication all 10.254.2.37/32 trust

With my primary DB stopped, I start the DB on my standby, and see this:
bash-3.00$ bin/pg_ctl -D data start
server starting
bash-3.00$ LOG: database system was shut down in recovery at 2010-10-20
12:33:58 GMT
LOG: entering standby mode
WARNING: WAL was generated with wal_level=minimal, data may be missing
HINT: This happens if you temporarily set wal_level=minimal without taking a
new base backup.
LOG: consistent recovery state reached at 0/588DB8
LOG: record with zero length at 0/588DB8
FATAL: could not connect to the primary server: invalid connection option
"replication"

FATAL: could not connect to the primary server: invalid connection option
"replication"

FATAL: could not connect to the primary server: invalid connection option
"replication"

FATAL: could not connect to the primary server: invalid connection option
"replication"
(etc)

Does anyone have any ideas what the problem may be? I suspect I may be
missing a library somewhere - I can't believe that streaming replication
just doesn't work on Solaris 10.

Any help much appreciated.

Thanks,
Dan
-- 
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Problem-with-Postgresql-9-0-streaming-replication-on-Solaris-10-x86-tp3228563p3228563.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
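Separately from the connection error, the WARNING about wal_level=minimal in
the log above is worth chasing as well. A minimal sketch, in psql, of
confirming the settings streaming replication needs on a 9.0 primary; the
comments show what is required, not necessarily what this server currently has:

SHOW wal_level;        -- must be 'archive' or 'hot_standby', not 'minimal'
SHOW max_wal_senders;  -- must be greater than 0
SHOW hot_standby;      -- on the standby, needed only for read-only queries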



[GENERAL] recovery from out of disk space

2010-10-20 Thread Alexander B
I use PostgreSQL 8.3 on Ubuntu.

I realized last Friday that my /opt hard disk was full. The disk filled up after
about 2-4 hours of moderate activity, and it ran out of space for reasons
outside of Postgres.

I'm really at a loss as to what to do. Does anyone have suggestions for what my
next step should be?

More details:

When I run postgres, Postgres seems to come up; however, it is completely
unresponsive to my application. Everything just hangs.

postgres  7364  0.0  1.5 1089056 27164 ?  S    09:35   0:00 /opt/PostgreSQL/8.3/bin/postgres -D /opt/PostgreSQL/8.3/data
postgres  7365  0.0  0.0   10700   984 ?  Ss   09:35   0:00 postgres: logger process
postgres  7367  0.0  0.4 1089572  7244 ?  Ss   09:35   0:00 postgres: writer process
postgres  7368  0.0  0.0 1089572  1056 ?  Ss   09:35   0:00 postgres: wal writer process
postgres  7369  0.0  0.0 1089712  1348 ?  Ss   09:35   0:00 postgres: autovacuum launcher process
postgres  7370  0.0  0.0   10828  1236 ?  Ss   09:35   0:00 postgres: stats collector process


I also tried to run pg_dump to make a backup and that failed with the following 
message:

(reverse-i-search)`./.': cd ../../
r...@domu-12-31-39-0f-44-82:/opt/PostgreSQL/8.3/bin# p -v -c -f 
/opt/backup/bb.txt -U nuxeo nuxeodb
pg_dump: reading schemas
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump:  reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined operator families
pg_dump: reading user-defined conversions
pg_dump: reading user-defined tables
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  permission denied for relation 
totalcount
pg_dump: The command was: LOCK TABLE public.totalcount IN ACCESS SHARE MODE
pg_dump: *** aborted because of error



  

Re: [GENERAL] recovery from out of disk space

2010-10-20 Thread Alexander B
One more note.  I added extra disk space, so I cleared that part up.
/dev/sdg1      103210940   5766776  92201356   6% /opt






From: Alexander B 
To: pgsql-general@postgresql.org
Sent: Wed, October 20, 2010 8:32:22 AM
Subject: [GENERAL] recovery from out of disk space


I use PostGre 8.3 on Ubuntu.

I realized last Friday that my /opt hard disk was full. The disk was full about 
2-4 hours of moderate activity. The disk  ran out of space for reasons outside 
of postgres 

I'm really at a loss of what to do.  Does anyone have suggestions for what my 
next step should be.  T

More details:

When I run postgress...Postgres seems to come up...However, it is completely 
unresponsive to my application.  Everything just hangs.

postgres  7364  0.0  1.5 1089056 27164 ?   S09:35   0:00 
/opt/PostgreSQL/8.3/bin/postgres -D /opt/PostgreSQL/8.3/data
postgres  7365  0.0  0.0  10700   984 ?Ss   09:35   0:00 postgres: 
logger process

postgres  7367  0.0  0.4 1089572 7244 ?Ss   09:35   0:00 postgres: 
writer  process

postgres  7368  0.0  0.0 1089572 1056 ?Ss   09:35   0:00 postgres: wal 
writer process

postgres  7369  0.0  0.0 1089712 1348 ?Ss   09:35   0:00 postgres: 
autovacuum launcher process   

postgres  7370  0.0  0.0  10828  1236  ?Ss   09:35   0:00 postgres: 
stats collector process


I also tried to run pg_dump to make a backup and that failed with the following 
message:

(reverse-i-search)`./.': cd ../../
r...@domu-12-31-39-0f-44-82:/opt/PostgreSQL/8.3/bin# p -v -c -f 
/opt/backup/bb.txt -U nuxeo nuxeodb
pg_dump: reading schemas
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump:  reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined operator families
pg_dump: reading user-defined conversions
pg_dump: reading user-defined tables
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  permission denied for relation 
totalcount
pg_dump: The command was: LOCK TABLE public.totalcount IN ACCESS SHARE MODE
pg_dump: *** aborted because of error


  

Re: [GENERAL] recovery from out of disk space

2010-10-20 Thread Cédric Villemain
2010/10/20 Alexander B :
> I use PostGre 8.3 on Ubuntu.
>
> I realized last Friday that my /opt hard disk was full. The disk was full
> about 2-4 hours of moderate activity. The disk  ran out of space for reasons
> outside of postgres
> I'm really at a loss of what to do.  Does anyone have suggestions for what
> my next step should be.  T
>
> More details:
>
> When I run postgress...Postgres seems to come up...However, it is completely
> unresponsive to my application.  Everything just hangs.
>
> postgres  7364  0.0  1.5 1089056 27164 ?   S    09:35   0:00
> /opt/PostgreSQL/8.3/bin/postgres -D /opt/PostgreSQL/8.3/data
> postgres  7365  0.0  0.0  10700   984 ?    Ss   09:35   0:00 postgres:
> logger process
> postgres  7367  0.0  0.4 1089572 7244 ?    Ss   09:35   0:00 postgres:
> writer process
> postgres  7368  0.0  0.0 1089572 1056 ?    Ss   09:35   0:00 postgres:
> wal writer process
> postgres  7369  0.0  0.0 1089712 1348 ?    Ss   09:35   0:00 postgres:
> autovacuum launcher process
> postgres  7370  0.0  0.0  10828  1236 ?    Ss   09:35   0:00 postgres:
> stats collector process
>
> I also tried to run pg_dump to make a backup and that failed with the
> following message:
>
> (reverse-i-search)`./.': cd ../../
> r...@domu-12-31-39-0f-44-82:/opt/PostgreSQL/8.3/bin# p -v -c -f
> /opt/backup/bb.txt -U nuxeo nuxeodb
> pg_dump: reading schemas
> pg_dump: reading user-defined functions
> pg_dump: reading user-defined types
> pg_dump: reading procedural languages
> pg_dump: reading user-defined aggregate functions
> pg_dump: reading user-defined operators
> pg_dump: reading user-defined operator classes
> pg_dump: reading user-defined text search parsers
> pg_dump: reading user-defined text search templates
> pg_dump: reading user-defined text search dictionaries
> pg_dump: reading user-defined text search configurations
> pg_dump: reading user-defined operator families
> pg_dump: reading user-defined conversions
> pg_dump: reading user-defined tables
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR:  permission denied for relation
> totalcount
> pg_dump: The command was: LOCK TABLE public.totalcount IN ACCESS SHARE MODE
> pg_dump: *** aborted because of error
>
>

Maybe you are not a superuser and do not have access to the table?
Try to dump the DB as the 'postgres' user instead of the nuxeo one.


-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support
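Before switching users, a minimal check of who actually owns the table that
pg_dump failed to lock (run in psql against nuxeodb; the schema and table name
are taken from the error message above):

SELECT c.relname, pg_get_userbyid(c.relowner) AS owner
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relname = 'totalcount';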



[GENERAL] Trying to stop warm standby server service

2010-10-20 Thread Norberto Delle

Hi all

I'm trying to stop the service that controls a warm standby server, but
issuing a 'net stop ' command fails to stop the service; however, the service
appears as stopped in the Services snap-in.
Checking the process tree, I can see that only the pg_ctl.exe process was
removed from the root of the tree, while the other postgres.exe processes
remain running. The postmaster.pid file is also still present in the data
directory.
The only option that worked was to kill the processes and remove
postmaster.pid manually.

Any ideas what's happening?
System: Windows 2008 R2 x64 standard, PostgreSQL 9.0.1 32bits

Norberto



Re: [GENERAL] drop view with out cascading the dependents

2010-10-20 Thread Merlin Moncure
2010/10/19 Grzegorz Jaśkiewicz :
> On Tue, Oct 19, 2010 at 3:12 PM, Tom Lane  wrote:
>> Ravi Katkar  writes:
>>> Is there any feature to drop the view with out cascading the dependents.
>>
>> No.  But why don't you use CREATE OR REPLACE VIEW?
>>
> only caveat is, it won't work if he adds/removes any columns. CREATE
> OR REPLACE VIEW is rather useless in most practical implementations.

In recent versions of postgres (I think 8.4+?) you can add columns to
the view via create/replace (not drop of course).  This greatly
reduces the practical annoyances of dropping view dependencies, at
least for me...

merlin
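A minimal sketch of what that looks like on 8.4+ (the table and view names here
are made up); the restriction is that new columns can only be appended at the
end of the select list, not inserted or dropped:

CREATE TABLE items (id integer, name text, price numeric);
CREATE VIEW item_summary AS SELECT id, name FROM items;

-- Later, append a column without dropping the view or its dependents:
CREATE OR REPLACE VIEW item_summary AS SELECT id, name, price FROM items;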



[GENERAL] truncating or full vacuuming

2010-10-20 Thread Viktor Bojović
Hi,

While creating an index on a billion-row table I cancelled the creation
because the index had already taken ~70GB of space.
When I looked at disk usage I saw that the space is still occupied, yet the
index doesn't exist.
I have started a full vacuum, but it is still running after three days, so I
wanted to ask whether there is some other way to free that space.
Would it be better to truncate the table and insert all the records again
(costs ~1.5 days), or is there some other solution?

Thanx in advance
-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [GENERAL] truncating or full vacuuming

2010-10-20 Thread Tom Lane
=?UTF-8?Q?Viktor_Bojovi=C4=87?=  writes:
> while creating an index on billion records table i have canceled creation
> because index took me ~70GB of space.
> When I looked into disk space i saw that space is still occupied , and the
> index doesn't exist.

hmm ... was your version of "cancel" spelled "kill -9" or something like
that?  If so it's unsurprising that temp files would have been left
behind.  Look in the pgsql_tmp subdirectory.  It's also possible that
the partially-filled index files are still there but aren't linked to
by any live pg_class row.  Check for files that don't match any entry in
the pg_class.relfilenode column.  In both cases you'd have to remove any
such files by hand --- VACUUM is not going to fix this for you.

regards, tom lane
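A minimal sketch of the catalog side of that check (run it in the affected
database, then compare the values against the file names found under its
base/<database oid> directory):

SELECT relname, relfilenode
FROM pg_class
WHERE relfilenode <> 0
ORDER BY relfilenode;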



Re: [GENERAL] truncating or full vacuuming

2010-10-20 Thread Viktor Bojović
On Wed, Oct 20, 2010 at 4:50 PM, Tom Lane  wrote:

> =?UTF-8?Q?Viktor_Bojovi=C4=87?=  writes:
> > while creating an index on billion records table i have canceled creation
> > because index took me ~70GB of space.
> > When I looked into disk space i saw that space is still occupied , and
> the
> > index doesn't exist.
>
> hmm ... was your version of "cancel" spelled "kill -9" or something like
> that?  If so it's unsurprising that temp files would have been left
> behind.  Look in the pgsql_tmp subdirectory.  It's also possible that
> the partially-filled index files are still there but aren't linked to
> by any live pg_class row.  Check for files that don't match any entry in
> the pg_class.relfilenode column.  In both cases you'd have to remove any
> such files by hand --- VACUUM is not going to fix this for you.
>
>regards, tom lane
>

I used Ctrl+C while I was in the console.
I entered the directory you mentioned, but I found no files inside.

postg...@zohar:/srv/postgresql/base$ du -h --max-depth=1
4.2M ./1
4.2M ./11510
4.3M ./11511
1.1G ./1044080708
0 ./pgsql_tmp
453G ./1051277744
454G .





-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me
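Those big numeric directory names under base/ are database OIDs; a minimal way
to map them back to database names before removing anything by hand:

SELECT oid, datname FROM pg_database ORDER BY oid;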


Re: [GENERAL] drop view with out cascading the dependents

2010-10-20 Thread Grzegorz Jaśkiewicz
2010/10/20 Merlin Moncure :
> In recent versions of postgres (I think 8.4+?) you can add columns to
> the view via create/replace (not drop of course).  This greatly
> reduces the practical annoyances of dropping view dependencies, at
> least for me...
OK, we're still on 8.3 here, and management is reluctant to upgrade
(which makes some sense).




-- 
GJ



Re: [GENERAL] recovery from out of disk space

2010-10-20 Thread Merlin Moncure
On Wed, Oct 20, 2010 at 9:32 AM, Alexander B  wrote:
> I use PostGre 8.3 on Ubuntu.
>
> I realized last Friday that my /opt hard disk was full. The disk was full
> about 2-4 hours of moderate activity. The disk  ran out of space for reasons
> outside of postgres
> I'm really at a loss of what to do.  Does anyone have suggestions for what
> my next step should be.  T
>
> More details:
>
> When I run postgress...Postgres seems to come up...However, it is completely
> unresponsive to my application.  Everything just hangs.
>
> postgres  7364  0.0  1.5 1089056 27164 ?   S    09:35   0:00
> /opt/PostgreSQL/8.3/bin/postgres -D /opt/PostgreSQL/8.3/data
> postgres  7365  0.0  0.0  10700   984 ?    Ss   09:35   0:00 postgres:
> logger process
> postgres  7367  0.0  0.4 1089572 7244 ?    Ss   09:35   0:00 postgres:
> writer process
> postgres  7368  0.0  0.0 1089572 1056 ?    Ss   09:35   0:00 postgres:
> wal writer process
> postgres  7369  0.0  0.0 1089712 1348 ?    Ss   09:35   0:00 postgres:
> autovacuum launcher process
> postgres  7370  0.0  0.0  10828  1236 ?    Ss   09:35   0:00 postgres:
> stats collector process
>
> I also tried to run pg_dump to make a backup and that failed with the
> following message:
>
> (reverse-i-search)`./.': cd ../../
> r...@domu-12-31-39-0f-44-82:/opt/PostgreSQL/8.3/bin# p -v -c -f
> /opt/backup/bb.txt -U nuxeo nuxeodb
> pg_dump: reading schemas
> pg_dump: reading user-defined functions
> pg_dump: reading user-defined types
> pg_dump: reading procedural languages
> pg_dump: reading user-defined aggregate functions
> pg_dump: reading user-defined operators
> pg_dump: reading user-defined operator classes
> pg_dump: reading user-defined text search parsers
> pg_dump: reading user-defined text search templates
> pg_dump: reading user-defined text search dictionaries
> pg_dump: reading user-defined text search configurations
> pg_dump: reading user-defined operator families
> pg_dump: reading user-defined conversions
> pg_dump: reading user-defined tables
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR:  permission denied for relation
> totalcount
> pg_dump: The command was: LOCK TABLE public.totalcount IN ACCESS SHARE MODE
> pg_dump: *** aborted because of error

My guess is that while moving the database from one physical volume to
another you did not preserve permissions etc. (typically done via 'cp -a').
If you still have the source database, repeat the copy, making sure to
preserve them. Otherwise it all has to be set manually.

merlin



Re: [GENERAL] [pgeu-general] [ANNOUNCE] PGDay Europe 2010 Registration Open

2010-10-20 Thread Raymond O'Donnell

On 20/10/2010 03:01, JY Wang wrote:

Hi

Please count me out of the list of your subscription.


Hi there,

This is a mailing list, so you unsubscribe yourself - instructions are 
at the bottom of every email sent from the list server.


Ray.

--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple table scans to analyze multiple indexes

2010-10-20 Thread Bruce Momjian
Thom Brown wrote:
> Hi all,
> 
> I might not be understanding this correctly, but does Postgres, when
> VACUUM ANALYZE-ing a table, perform separate scans for each index?
> And if so, is this necessary?  Can't it update indexes parallel?  This
> would be particularly useful when rebuilding all indexes on a table.

Uh, I am unsure of the question.  ANALYZE randomly samples the heap,
plus generates analyze statistics for each expression index.

As far as VACUUM goes, that scans the heap and then scans the indexes based on
the free space it finds; see vacuumlazy.c:


http://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/backend/commands/vacuumlazy.c;h=0ac993f957d6e49c23448dd85f8ed41308dc9c60;hb=HEAD

Are you asking why normal delete doesn't update the index?

This is probably a question for the hackers list.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [GENERAL] Database INNOVATION

2010-10-20 Thread Brent Wood
Gotcha.

Apologies for the digression - off your exact topic but consistent with the
subject :-)

I'm interested in both: PL/R and representational graphics from an analytical
perspective, doing more than just retrieving raw or accumulated data with SQL,
and also from the (mathematical) graph perspective, to support biological
taxonomic trees/hierarchies, which do not easily fit the SQL model, although a
number of kludges to traverse such structures are around.

(I need to look at the Postgres recursive query capability for this sometime -
see the sketch at the end of this message.)

Cheers,

  Brent

Brent Wood
DBA/GIS consultant
NIWA, Wellington
New Zealand
>>> Craig Ringer  10/20/10 6:12 PM >>>
On 10/20/2010 12:35 PM, Brent Wood wrote:
> Have a look at PL/R.
>
> You can embed a command to generate a graphic using R via a user defined
> SQL function,

In this case, when I say "graph" or "tree" I'm referring to the concept
in the graph theory sense, not the "plot" sense. "object graph" not
"image representation of data".

http://en.wikipedia.org/wiki/Graph_(mathematics)
http://en.wikipedia.org/wiki/Graph_theory

Sorry, I didn't even think to clarify my usage.

What I'm talking about is a way to query the database and obtain a
representation of matching tuples where each tuple is represented
exactly once, and referential relationships between tuples are included
in an efficient way.

For a simple tree or forest (ie a directed graph with no cycles) this
could be a XML/JSON/YAML/whatever document that uses nesting to
represent relationships.

For more complex graphs, it'd have to be a list of
XML/JSON/YAML/whatever representations of each tuple or (if Pg supported
it) multiple tabular result sets, one for each tuple type. An edge list
could be included to speed mapping out the inter-object references after
deserialization.

To say this would be nice when dealing with document-in-database storage
and certain types of ORM workload is quite an understatement. Getting
rid of all that horrid "multiply left join, filter and de-duplicate" or
"n+1 select" crap would be quite lovely. Sure, it's often better to use
sane SQL directly, but there are tasks for which ORMs or
document-database mappings are a real time and pain saver - it'd just be
nice to be able to teach the database their language.

Plus, that'd help shut up the "NoSQL" crowd and separate "NoSQL" from
"relaxed or no ACID shareded databases", two different things people
currently confuse.

In any case, thanks for the tip. It's nice to know the PL/R can be used
for such in-database processing when I *do* want to plot data.

--
Craig Ringer

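Regarding the recursive-query aside above, a minimal sketch of walking a
taxonomic hierarchy with WITH RECURSIVE (available from 8.4 on); the taxon
table and the 'Salmonidae' starting point are made-up examples:

CREATE TABLE taxon (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES taxon(id),
    name      text NOT NULL
);

WITH RECURSIVE subtree AS (
    SELECT id, parent_id, name, 1 AS depth
    FROM taxon
    WHERE name = 'Salmonidae'              -- root of the subtree to walk
  UNION ALL
    SELECT t.id, t.parent_id, t.name, s.depth + 1
    FROM taxon t
    JOIN subtree s ON t.parent_id = s.id
)
SELECT * FROM subtree ORDER BY depth, name;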


[GENERAL] unique constraint

2010-10-20 Thread Jamie Kahgee
I have a table of paragraphs for pages that are in a specific order (1st,
2nd, 3rd, etc...).

demo=# \d paragraphs
       Table "toolbox.paragraphs"
 Column |  Type   |      Modifiers
--------+---------+--------------------
 ...
 page   | integer | not null
 pos    | integer | not null default 1
 ...

Is there a good way to ensure the order of these paragraphs can't get all out
of whack? What I mean is: we had a slight hiccup in a query, and when paragraph
positions were moved it sometimes messed up the order of other paragraph
positions. For example, the paragraph positions for a page might end up like
(1st, 2nd, 2nd, 4th, 5th) or some other random list with duplicate positions.

I've fixed the incorrect query, but I would like to know if there is a better
constraint I could use to ensure this can't happen, besides spectacular
bug-free programming.

I tried using a unique constraint on the page/pos columns, but was running
into constraint errors when I did an update to move positions - in a
transaction, there might be two paragraphs at the same position for an instant
while they are getting shuffled around.

To fix this I tried deferring the constraints, but as we're using version
8.2.5, I realized this isn't supported and might not be the best approach?

Any ideas from the community that might be useful?

Thanks,
Jamie K.


Re: [GENERAL] unique constraint

2010-10-20 Thread Michael Glaesemann

On Oct 20, 2010, at 15:58 , Jamie Kahgee wrote:

> I have a table of paragraphs for pages that are in a specific order (1st,
> 2nd, 3rd, etc...).



> I tried using a unique constraint on the page/pos columns, but was running
> into constraint errors when I did an update to move positions - in a
> transaction, there might be two pages at the same position for an instance
> while they are getting shuffled around.
> 
> to fix this I tried deferring the constraints, but as we're using version
> 8.2.5, realized this isn't supported and might not be the best approach?

Upgrade if you can. Otherwise you can do two updates when rearranging 
paragraphs. The first updates them to an "invalid" range which isn't used in 
"correct" data and so won't conflict with other values, and the second moves 
them back to the valid range of positions.
Negative numbers are often used for this middle range.

This is basically a hierarchy problem. Take a gander around the web for nested 
sets for examples.

Michael Glaesemann
grzm seespotcode net
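A minimal sketch of that two-step shuffle, assuming the paragraphs table from
earlier in the thread, a UNIQUE (page, pos) constraint, and a made-up page id
of 42:

BEGIN;
-- Step 1: park the affected rows in a range that real data never uses.
UPDATE paragraphs SET pos = -pos WHERE page = 42;
-- Step 2: write the final positions (here, swapping paragraphs 2 and 3).
UPDATE paragraphs
   SET pos = CASE -pos WHEN 2 THEN 3 WHEN 3 THEN 2 ELSE -pos END
 WHERE page = 42;
COMMIT;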






Re: [GENERAL] unique constraint

2010-10-20 Thread Jeff Davis
On Wed, 2010-10-20 at 16:10 -0400, Michael Glaesemann wrote:
> Upgrade if you can. Otherwise you can do two updates when rearranging 
> paragraphs. The first updates them to an "invalid" range which isn't used in 
> "correct" data and so won't conflict with other values, and the second moves 
> them back to the valid range of positions.
> Negative numbers are often used for this middle range.

The reason for the upgrade suggestion I believe was DEFERRABLE unique
constraints. These allow you to temporarily violate a unique constraint,
as long as it's correct at the end.

See:

  http://www.postgresql.org/docs/9.0/static/sql-createtable.html

and look for "DEFERRABLE".

You can also consider a constraint trigger:

  http://www.postgresql.org/docs/9.0/static/sql-createconstraint.html

Be careful to lock appropriately, however.

Regards,
Jeff Davis
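A minimal sketch of the deferrable approach on 9.0, again assuming the
paragraphs table from this thread and a made-up page id of 42:

ALTER TABLE paragraphs
  ADD CONSTRAINT paragraphs_page_pos_key
  UNIQUE (page, pos) DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS paragraphs_page_pos_key DEFERRED;
-- Positions may collide here; uniqueness is only checked at COMMIT.
UPDATE paragraphs SET pos = pos + 1 WHERE page = 42 AND pos >= 2;
COMMIT;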




Re: [GENERAL] postgre on virtual machine

2010-10-20 Thread John R Pierce

On 10/20/10 3:46 AM, Georgi Ivanov wrote:

Hi,
Is there any specific reason one should/should not run postgre on 
virtual machine for production ?

Is there any special tuning for virtual environment ?
Client of mine is running 8.4 on  virtual machine (VMWare ) so I'm 
wandering if there is anything special I can tune to speed up a bit.
Also if I must choose between more RAM and more virtual cores what 
should I choose ?


The biggest issue tends to be disk IO performance, especially in a
transaction-intensive database. Virtualized storage is just not as
fast as native storage. The big-iron folks deal with this by
dedicating Fibre Channel cards to each virtual machine that needs IO
performance; of course, this gets expensive.

As far as memory vs. cores goes, that depends a lot on the nature of your
access patterns. Larger databases benefit from more memory for caching
& buffers, while higher levels of concurrent connections benefit from
more CPU cores (*and* more memory).






Re: [GENERAL] 9.0 SSL renegotiation failure restoring data

2010-10-20 Thread Bruce Momjian
Andrus Moor wrote:
> > Either (1) get a non-lobotomized SSL library
> I'm using latest official Postgres 9.0 server and pgAdmin client.
> Does one of them contain bug in SSL??

Uh, we don't ship SSL in the server.  We ship code that _uses_ ssl, so I
would look at your operating system to see what version of SSL you have,
and perhaps update that.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [GENERAL] postgre on virtual machine

2010-10-20 Thread Joshua J. Kugler
On Wednesday 20 October 2010, John R Pierce elucidated thus:
> On 10/20/10 3:46 AM, Georgi Ivanov wrote:
> > Hi,
> > Is there any specific reason one should/should not run postgre on
> > virtual machine for production ?
> > Is there any special tuning for virtual environment ?
> > Client of mine is running 8.4 on  virtual machine (VMWare ) so I'm
> > wandering if there is anything special I can tune to speed up a
> > bit. Also if I must choose between more RAM and more virtual cores
> > what should I choose ?
>
> the biggest issue tends to be disk IO performance, especially in a
> transactional intensive database.  virtualized storage is just not as
> fast as native storage.   the big iron folks deal with this by
> dedicating fiberchannel cards to each virtual machine that needs IO
> performance.  of course, this gets expensive.
>
> as far as memory vs cores goes, that depends a lot on the nature of
> your access patterns.   larger databases benefit from more memory for
> caching & buffers, while higher levels of concurrent connections
> benefit from more CPU cores (*and* more memory).

Would using a raw partition in the VM alleviate some of the issues here?

j

-- 
Joshua Kugler
Part-Time System Admin/Programmer
http://www.eeinternet.com - Fairbanks, AK
PGP Key: http://pgp.mit.edu/  ID 0x73B13B6A



[GENERAL] Generate a dynamic sequence within a query

2010-10-20 Thread David Kerr
I know I've seen posts on how to do this, but i can't seem to find them.

I've got a data set

A, B
A, C
A, D
[...]

and so on

and i'd like to be able to wite a query that would result in

1,A,B
2,A,C
3,A,D
[...]

PG version is 8.3.

Any ideas?

Thanks

Dave



Re: [GENERAL] Generate a dynamic sequence within a query

2010-10-20 Thread Raymond O'Donnell

On 20/10/2010 23:22, David Kerr wrote:

I know I've seen posts on how to do this, but i can't seem to find them.

I've got a data set

A, B
A, C
A, D
[...]

and so on

and i'd like to be able to wite a query that would result in

1,A,B
2,A,C
3,A,D
[...]

PG version is 8.3.

Any ideas?


You probably want generate_series():

   http://www.postgresql.org/docs/8.3/static/functions-srf.html

Ray.

--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Generate a dynamic sequence within a query

2010-10-20 Thread David Kerr
On Wed, Oct 20, 2010 at 11:28:18PM +0100, Raymond O'Donnell wrote:
- On 20/10/2010 23:22, David Kerr wrote:
- >I know I've seen posts on how to do this, but i can't seem to find them.
- >
- >I've got a data set
- >
- >A, B
- >A, C
- >A, D
- >[...]
- >
- >and so on
- >
- >and i'd like to be able to wite a query that would result in
- >
- >1,A,B
- >2,A,C
- >3,A,D
- >[...]
- >
- >PG version is 8.3.
- >
- >Any ideas?
- 
- You probably want generate_series():
- 
-http://www.postgresql.org/docs/8.3/static/functions-srf.html
- 
- Ray.

I thought so. What would that look like?

select generate_series(1,select count(*) from table), field1, field2 from table
doesn't work.. 


thanks

Dave



Re: [GENERAL] Generate a dynamic sequence within a query

2010-10-20 Thread DM
select generate_series(1,(select count(*) from tax)), country from tax;

You should use parentheses around the sub-select.

Thanks
Deepak

On Wed, Oct 20, 2010 at 3:30 PM, David Kerr  wrote:

> On Wed, Oct 20, 2010 at 11:28:18PM +0100, Raymond O'Donnell wrote:
> - On 20/10/2010 23:22, David Kerr wrote:
> - >I know I've seen posts on how to do this, but i can't seem to find them.
> - >
> - >I've got a data set
> - >
> - >A, B
> - >A, C
> - >A, D
> - >[...]
> - >
> - >and so on
> - >
> - >and i'd like to be able to wite a query that would result in
> - >
> - >1,A,B
> - >2,A,C
> - >3,A,D
> - >[...]
> - >
> - >PG version is 8.3.
> - >
> - >Any ideas?
> -
> - You probably want generate_series():
> -
> -http://www.postgresql.org/docs/8.3/static/functions-srf.html
> -
> - Ray.
>
> I thought, so. what would that look like?
>
> select generate_series(1,select count(*) from table), field1, field2 from
> table
> doesn't work..
>
>
> thanks
>
> Dave
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Generate a dynamic sequence within a query

2010-10-20 Thread David Kerr
On Wed, Oct 20, 2010 at 03:47:19PM -0700, DM wrote:
- select generate_series(1,(select count(*) from tax)), country from tax;
- 
- you should use braces around the sub select.
- 
- Thanks
- Deepak
- 

Ah, great, thanks!

Dave



- On Wed, Oct 20, 2010 at 3:30 PM, David Kerr  wrote:
- 
- > On Wed, Oct 20, 2010 at 11:28:18PM +0100, Raymond O'Donnell wrote:
- > - On 20/10/2010 23:22, David Kerr wrote:
- > - >I know I've seen posts on how to do this, but i can't seem to find them.
- > - >
- > - >I've got a data set
- > - >
- > - >A, B
- > - >A, C
- > - >A, D
- > - >[...]
- > - >
- > - >and so on
- > - >
- > - >and i'd like to be able to wite a query that would result in
- > - >
- > - >1,A,B
- > - >2,A,C
- > - >3,A,D
- > - >[...]
- > - >
- > - >PG version is 8.3.
- > - >
- > - >Any ideas?
- > -
- > - You probably want generate_series():
- > -
- > -http://www.postgresql.org/docs/8.3/static/functions-srf.html
- > -
- > - Ray.
- >
- > I thought, so. what would that look like?
- >
- > select generate_series(1,select count(*) from table), field1, field2 from
- > table
- > doesn't work..
- >
- >
- > thanks
- >
- > Dave
- >
- > --
- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
- > To make changes to your subscription:
- > http://www.postgresql.org/mailpref/pgsql-general
- >



[GENERAL] Cannot Start Postgres After System Boot

2010-10-20 Thread Rich Shepard

  For reasons I do not understand, the Slackware start-up file for postgres
(/etc/rc.d/rc.postgresql) fails to work properly after I reboot the system.
(Reboots normally occur only after a kernel upgrade or with a hardware
failure that crashes the system.)

  Trying to restart the server manually (su postgres -c 'postgres -D
/var/lib/pgsql/data &') fails regardless of the presence of /tmp/.s.PGSQL.5432
and /var/lib/pgsql/postmaster.pid. Here's what I see:

[rshep...@salmo ~]$ su postgres -c 'postgres -D /var/lib/pgsql/data &'
Password: 
[rshep...@salmo ~]$ LOG:  could not bind IPv4 socket: Address already in use

HINT:  Is another postmaster already running on port 5432? If not, wait a
few seconds and retry.
WARNING:  could not create listen socket for "localhost"
FATAL:  could not create any TCP/IP sockets

  If someone would be kind enough to point out what I'm doing incorrectly
(e.g., removing /tmp/.s.PGSQL.5432 and postmaster.pid when the startup
process complains they're not right) I'll save this information for the next
time. I can also provide the 'start' section of the Slackware init file so I
could learn why it's not working properly.

TIA,

Rich



[GENERAL] Upgrade from 8.3.3

2010-10-20 Thread Rich Shepard

  While I'm re-learning how to properly start postgres after a reboot, I'd
like recommendations on upgrading my current 8.3.3 to a newer version. Since
I'm the only one currently using the system (but I'll be using CMS Made
Simple for my revised web site and that works only with the 8.x releases), I
wonder if I should upgrade to 8.4.5 or 9.0. I suspect the former, but I'd
appreciate thoughts from more knowledgeable folks here.

Rich



Re: [GENERAL] Generate a dynamic sequence within a query

2010-10-20 Thread David Kerr
On Wed, Oct 20, 2010 at 03:47:19PM -0700, DM wrote:
- select generate_series(1,(select count(*) from tax)), country from tax;
- 
- you should use braces around the sub select.
- 
- Thanks
- Deepak

  Table "public.test"
 Column | Type | Modifiers
+--+---
 col1   | character varying(2) |
 col2   | character varying(2) |


select * from test;
 col1 | col2
--+--
 A| A
 A| B
 A| C
 B| A
 B| B
 B| C
(6 rows)

select generate_series(1,(select count(*) from test)), col1, col2 from test;
 generate_series | col1 | col2
-+--+--
   1 | A| A
   2 | A| A
   3 | A| A
   4 | A| A
   5 | A| A
   6 | A| A
   1 | A| B
   2 | A| B
   3 | A| B
   4 | A| B
   5 | A| B
   6 | A| B
   1 | A| C
   2 | A| C
   3 | A| C
   4 | A| C
   5 | A| C
   6 | A| C
   1 | B| A
   2 | B| A
   3 | B| A
   4 | B| A
   5 | B| A
   6 | B| A
   1 | B| B
   2 | B| B
   3 | B| B
   4 | B| B
   5 | B| B
   6 | B| B
   1 | B| C
   2 | B| C
   3 | B| C
   4 | B| C
   5 | B| C
   6 | B| C
(36 rows)

when what i want is:
1  | A| A
2  | A| B
3  | A| C
4  | B| A
5  | B| B
6  | B| C


thanks

Dave

- 
- On Wed, Oct 20, 2010 at 3:30 PM, David Kerr  wrote:
- 
- > On Wed, Oct 20, 2010 at 11:28:18PM +0100, Raymond O'Donnell wrote:
- > - On 20/10/2010 23:22, David Kerr wrote:
- > - >I know I've seen posts on how to do this, but i can't seem to find them.
- > - >
- > - >I've got a data set
- > - >
- > - >A, B
- > - >A, C
- > - >A, D
- > - >[...]
- > - >
- > - >and so on
- > - >
- > - >and i'd like to be able to wite a query that would result in
- > - >
- > - >1,A,B
- > - >2,A,C
- > - >3,A,D
- > - >[...]
- > - >
- > - >PG version is 8.3.
- > - >
- > - >Any ideas?
- > -
- > - You probably want generate_series():
- > -
- > -http://www.postgresql.org/docs/8.3/static/functions-srf.html
- > -
- > - Ray.
- >
- > I thought, so. what would that look like?
- >
- > select generate_series(1,select count(*) from table), field1, field2 from
- > table
- > doesn't work..
- >
- >
- > thanks
- >
- > Dave
- >
- > --
- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
- > To make changes to your subscription:
- > http://www.postgresql.org/mailpref/pgsql-general
- >

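Since 8.3 has no window functions, one way to get a single running number per
row (instead of the 36-row cross product above) is a temporary sequence. A
minimal sketch against the test table shown earlier; the ORDER BY is an
assumption about the desired ordering, and in practice the numbers follow the
subquery's sort order:

CREATE TEMP SEQUENCE rownum_seq;

SELECT nextval('rownum_seq') AS rownum, col1, col2
FROM (SELECT col1, col2 FROM test ORDER BY col1, col2) AS ordered;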


[GENERAL] help with design of the 'COUNT(*) in separate table schema

2010-10-20 Thread Dennis Gearon
Regarding the previously discussed triggers on tables to keep track of count:
http://www.varlena.com/GeneralBits/120.php
http://www.varlena.com/GeneralBits/49.php

CREATE OR REPLACE FUNCTION count_rows()
RETURNS TRIGGER AS
'
   BEGIN
  IF TG_OP = ''INSERT'' THEN
 UPDATE rowcount
SET total_rows = total_rows + 1
WHERE table_name = TG_RELNAME;
  ELSIF TG_OP = ''DELETE'' THEN
 UPDATE rowcount
SET total_rows = total_rows - 1
WHERE table_name = TG_RELNAME;
  END IF;
  RETURN NULL;
   END;
' LANGUAGE plpgsql;


Wouldn't something like this need row-locking (SELECT for UPDATE) in order to 
serialize the execution of all triggers?

Dennis Gearon

Signature Warning

It is always a good idea to learn from your own mistakes. It is usually a 
better idea to learn from others’ mistakes, so you do not have to make them 
yourself. from 'http://blogs.techrepublic.com.com/security/?p=4501&tag=nl.e036'

EARTH has a Right To Life,
  otherwise we all die.
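For context, a minimal sketch of how the rowcount table and the count_rows()
trigger from those articles get attached to a tracked table (the widgets table
name is just an example):

CREATE TABLE rowcount (
    table_name text PRIMARY KEY,
    total_rows bigint NOT NULL
);

-- Seed the starting count and create the trigger together (ideally in one
-- transaction, with the table locked) so no rows are missed in between.
INSERT INTO rowcount (table_name, total_rows)
SELECT 'widgets', count(*) FROM widgets;

CREATE TRIGGER widgets_count_rows
AFTER INSERT OR DELETE ON widgets
FOR EACH ROW EXECUTE PROCEDURE count_rows();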



Re: [GENERAL] Cannot Start Postgres After System Boot

2010-10-20 Thread Andrej
On 21 October 2010 11:53, Rich Shepard  wrote:
>  If someone would be kind enough to point out what I'm doing incorrectly
> (e.g., removing /tmp/.s.PGSQL.5432 and postmaster.pid when the startup
> process complains they're not right) I'll save this information for the next
> time. I can also provide the 'start' section of the Slackware init file so I
> could learn why it's not working properly.

Please do  - provide the section, I mean.


> TIA,
>
> Rich
Cheers,
Andrej


-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.georgedillon.com/web/html_email_is_evil.shtml



Re: [GENERAL] queriing the version of libpq

2010-10-20 Thread Bruce Momjian
A.M. wrote:
> 
> On Oct 7, 2010, at 5:34 AM, Devrim GÜNDÜZ wrote:
> 
> > On Thu, 2010-10-07 at 12:23 +0300, Peter Eisentraut wrote:
> >> This is really something that psycopg2 should work out for you.  I
> >> suggest you take up a discussion on this on their mailing list.
> >
> > ...which is down over the last 3 weeks or so:
> >
> > http://www.initd.org/
> 
> In addition, I posted a patch for 9.0 support which was supposed to be
> rolled in to psycopg2 weeks ago. Now I am stuck pushing my own psycopg2
> egg around. The psycopg2 project is too reliant on one person (who has
> trouble managing his servers)- I wish he would move the project to a
> public project management service.
> 
> I also remember a discussion on the poor state of postgresql drivers
> for python and which driver the PostgreSQL project should endorse- it
> looks like the situation has not improved. Here's the thread:
> http://archives.postgresql.org/pgsql-hackers/2010-02/msg00351.php

The python driver situation did improve, for a while.  :-|

--
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [GENERAL] help with design of the 'COUNT(*) in separate table schema

2010-10-20 Thread Peter Geoghegan
On 20 October 2010 23:52, Dennis Gearon  wrote:
> Regarding the previously discussed triggers on tables to keep track of count:
> http://www.varlena.com/GeneralBits/120.php
> http://www.varlena.com/GeneralBits/49.php
> 
> CREATE OR REPLACE FUNCTION count_rows()
> RETURNS TRIGGER AS
> '
>   BEGIN
>      IF TG_OP = ''INSERT'' THEN
>         UPDATE rowcount
>            SET total_rows = total_rows + 1
>            WHERE table_name = TG_RELNAME;
>      ELSIF TG_OP = ''DELETE'' THEN
>         UPDATE rowcount
>            SET total_rows = total_rows - 1
>            WHERE table_name = TG_RELNAME;
>      END IF;
>      RETURN NULL;
>   END;
> ' LANGUAGE plpgsql;
> 
>
> Wouldn't something like this need row-locking (SELECT for UPDATE) in order to 
> serialize the execution of all triggers?
>

The UPDATE will acquire a row-level lock on the rowcount row for the
TG_RELNAME entry without you doing anything else.

-- 
Regards,
Peter Geoghegan



Re: [GENERAL] Cannot Start Postgres After System Boot

2010-10-20 Thread Rich Shepard

On Thu, 21 Oct 2010, Andrej wrote:


Please do  - provide the section, I mean.


Andrej,

  The entire script is attached. It's only 2588 bytes.

  Also, when there is no postmaster.pid or .s.PGSQL.5432 (and its lock file)
are these recreated automagically when postgres is properly loaded, or do I
need to do something first?

Many thanks,

Rich

#!/bin/bash

# PostgreSQL startup script for Slackware Linux
# Copyright 2007 Adis Nezirovic 
# Licensed under GNU GPL v2

# Do not source this script (since it contains exit() calls)

# Before you can run postgresql you'll need to create the
# database files in /var/lib/pgsql. The following should do
# the trick.
#
#   $ su postgres -c "initdb -D /var/lib/pgsql/data"
#

LOGFILE=/var/log/postgresql
DATADIR=/var/lib/pgsql/data
POSTGRES=/usr/bin/postgres
PIDFILE=postmaster.pid

# Return values (according to LSB):
# 0 - success
# 1 - generic or unspecified error
# 2 - invalid or excess argument(s)
# 3 - unimplemented feature (e.g. "reload")
# 4 - insufficient privilege
# 5 - program is not installed
# 6 - program is not configured
# 7 - program is not running

pg_ctl()
{
CMD="/usr/bin/pg_ctl $@"
su - postgres -c "$CMD"
}

if [ ! -f $POSTGRES ]; then
echo "Could not find 'postgres' binary. Maybe PostgreSQL is not 
installed properly?"
exit 5
fi

case "$1" in

"start")
echo "Starting PostgreSQL"
touch $LOGFILE
chown postgres:wheel $LOGFILE
chmod 0640 $LOGFILE

if [ ! -e $DATADIR/PG_VERSION ]; then
echo "You should initialize the PostgreSQL database at 
location $DATADIR"
exit 6
fi

if pgrep postgres; then

echo "PostgreSQL daemon already running"
if [ ! -f $DATADIR/$PIDFILE ]; then
echo "Warning: Missing pid file 
$DATADIR/$PIDFILE"
fi
exit 1

else # remove old socket, if it exists and no daemon is running.

if [ ! -f $DATADIR/$PIDFILE ]; then
rm -f /tmp/.s.PGSQL.5432
rm -f /tmp/.s.PGSQL.5432.lock
# pg_ctl start -w -l $LOGFILE -D $DATADIR
su postgres -c 'postgres -D /var/lib/pgsql/data &'
exit 0
else
echo "PostgreSQL daemon was not properly shut 
down"
echo "Please remove stale pid file 
$DATADIR/$PIDFILE"
exit 7
fi

fi  
;;

"stop")
echo "Shutting down PostgreSQL..."
pg_ctl stop -l $LOGFILE -D $DATADIR -m smart
;;

"restart")
echo "Restarting PostgreSQL..."
pg_ctl restart -l $LOGFILE -D $DATADIR -m smart
;;

"reload")
echo "Reloading configuration for PostgreSQL..."
pg_ctl reload -l $LOGFILE -D $DATADIR -m smart
;;

"status")
if pgrep postgres; then
echo "PostgreSQL is running"

if [ ! -e $DATADIR/$PIDFILE ]; then
echo "Warning: Missing pid file 
$DATADIR/$PIDFILE"
fi

exit 0
else
echo "PostgreSQL is stopped"

if [ -e $DATADIR/$PIDFILE ]; then
echo "Detected stale pid file $DATADIR/$PIDFILE"
fi

exit 0
fi
;;

*)
echo "Usage: $0 {start|stop|status|restart|reload}"
exit 1
;;
esac



[GENERAL] Composite Index question

2010-10-20 Thread DM
Composite Index question:

I have a composite index on 3 columns of a table; by mistake, the composite
index was created twice on the table.

Will there be any performance issues on this table because of the two
identical composite indexes?

Thanks
Deepak


Re: [GENERAL] Composite Index question

2010-10-20 Thread Rob Sargent
Hm. Run some queries; drop the second version of the index definition;
re-run the same queries; report to the group.  The redundant index isn't
helping, that much is certain.

On 10/20/2010 05:43 PM, DM wrote:
> Composite Index question:
> 
> I have composite index on 3 columns on a table, by mistake the composite
> index was created twice on the table.
> 
> Will there any performance issues on this table because of the 2 same
> composite indexes?
> 
> Thanks
> Deepak



Re: [GENERAL] Composite Index question

2010-10-20 Thread DM
It's a huge table in production; I don't want to take any risk.

I can simulate and test this, but I was checking to see if anyone knows
offhand about this.

On Wed, Oct 20, 2010 at 4:57 PM, Rob Sargent  wrote:

> Hm. Run some queries; drop the second version of the index definition;
> re-run the same queries; report to the group.  The redundant index isn't
> helping, that much is certain.
>
> On 10/20/2010 05:43 PM, DM wrote:
> > Composite Index question:
> >
> > I have composite index on 3 columns on a table, by mistake the composite
> > index was created twice on the table.
> >
> > Will there any performance issues on this table because of the 2 same
> > composite indexes?
> >
> > Thanks
> > Deepak
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Custom cache implemented in a postgresql C function

2010-10-20 Thread Gabi Julien
Hi,

Here is my problem: I have a postgresql C function that looks like this:

Datum filter(PG_FUNCTION_ARGS);

It takes identifiers and queries a bunch of tables and ends up returning true 
or false. So far nothing difficult except that we want better performance. The 
function was already optimized to the best of my abilities and changing the 
structure of the database would not help. However, having a cache would be the 
perfect solution. I could implement this cache outside of postgresql if need be 
but nothing could beat implementing this directly in a postgresql C function.

So this is what I want, a custom cache built into a postgresql C function. 
Since postgresql uses different processes, it would be best to use the shared 
memory. Can this be done safely? At its core, the cache could be considered as 
simple as a map protected by a mutex. With postgresql, I first need to 
initialized some shared memory. This is explained at the end of this link:

http://www.postgresql.org/docs/8.2/static/xfunc-c.html

However, it sounds like I need to reserve the shared memory in advance using:

void RequestAddinShmemSpace(int size)

In my case, I do not know how big my cache will be. I would preferably allocate 
the memory dynamically. Is this possible? In any case, am I trying to reinvent 
the wheel here? Is there already a shared map or a shared hash structure 
available in postgresql?

If shared memory turns out to be too difficult to use, I could create separate
caches for each postgresql process. This would be a waste of space but it might
be better than nothing. In this case, do I need to make my code thread safe? In
other words, does postgresql use more than one thread per process?

Any insights would be more than welcome!
Thank you,
Gabi Julien



Re: [GENERAL] Composite Index question

2010-10-20 Thread Rob Sargent
If you can think of one benefit from having the redundant index then by
all means keep it; it certainly eludes me. Seems to me, removing an
unnecessary index on a huge table can only be a good thing.

On 10/20/2010 06:02 PM, DM wrote:
> Its a huge table in production, i dont want to take any risk.
> 
> I can simulate and test this but i was to checking to see If any one
> knows off hand about this.
> 
> 
> 
> I can simulate it but
> On Wed, Oct 20, 2010 at 4:57 PM, Rob Sargent  > wrote:
> 
> Hm. Run some queries; drop the second version of the index definition;
> re-run the same queries; report to the group.  The redundant index isn't
> helping, that much is certain.
> 
> On 10/20/2010 05:43 PM, DM wrote:
> > Composite Index question:
> >
> > I have composite index on 3 columns on a table, by mistake the
> composite
> > index was created twice on the table.
> >
> > Will there any performance issues on this table because of the 2 same
> > composite indexes?
> >
> > Thanks
> > Deepak
> 
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org
> )
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
> 
> 



Re: [GENERAL] Custom cache implemented in a postgresql C function

2010-10-20 Thread Rob Sargent
Are you sure your cache needs to grow endlessly? Otherwise you could use
RequestAddinShmemSpace and manage your map within that space, perhaps
"overwriting" chunks on an LRU basis or a rollover, i.e. grab it all and
do your own management within that single block of shmem.
Caches are best for things revisited often, so old/unused entries ought to be
expendable with little performance loss, at least compared with the
heavy traffic.

On 10/20/2010 05:44 PM, Gabi Julien wrote:
> Hi,
> 
> Here is my problem: I have a postgresql C function that looks like this:
> 
> Datum filter(PG_FUNCTION_ARGS);
> 
> It takes identifiers and queries a bunch of tables and ends up returning true 
> or false. So far nothing difficult except that we want better performance. 
> The function was already optimized to the best of my abilities and changing 
> the structure of the database would not help. However, having a cache would 
> be the perfect solution. I could implement this cache outside of postgresql 
> if need be but nothing could beat implementing this directly in a postgresql 
> C function.
> 
> So this is what I want, a custom cache built into a postgresql C function. 
> Since postgresql uses different processes, it would be best to use the shared 
> memory. Can this be done safely? At its core, the cache could be considered 
> as simple as a map protected by a mutex. With postgresql, I first need to 
> initialized some shared memory. This is explained at the end of this link:
> 
> http://www.postgresql.org/docs/8.2/static/xfunc-c.html
> 
> However, it sounds like I need to reserve the shared memory in advance using:
> 
> void RequestAddinShmemSpace(int size)
> 
> In my case, I do not know how big my cache will be. I would preferably 
> allocate the memory dynamically. Is this possible? In any case, am I trying 
> to reinvent the wheel here? Is there already a shared map or a shared hash 
> structure available in postgresql?
> 
> If shared memory turns out too difficult to use, I could create separate 
> caches for each postgresql processes. This would be a waste of space but it 
> might be better then nothing. In this case, do I need to make my code thread 
> safe? In other words, is postgresql using more then one thread per processes?
> 
> Any insights would be more then welcome!
> Thank you,
> Gabi Julien
> 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Composite Index question

2010-10-20 Thread DM
I know there is no benefit to having duplicate indexes.
In order for me to make a change in production it requires a lot of approvals
and stuff.

I want to know if there is any major performance drawback to having a
duplicate composite index, so that I can push hard for the change. Let me
know.

Thanks for looking into this.


On Wed, Oct 20, 2010 at 5:10 PM, Rob Sargent  wrote:

> If you can think of one benefit from having the redundant index then by
> all means keep it.  It certainly eludes me.  Seems to me, removing an
> un-necessary index on a huge table can only be a good thing.
>
> On 10/20/2010 06:02 PM, DM wrote:
> > Its a huge table in production, i dont want to take any risk.
> >
> > I can simulate and test this but i was to checking to see If any one
> > knows off hand about this.
> >
> >
> >
> > I can simulate it but
> > On Wed, Oct 20, 2010 at 4:57 PM, Rob Sargent wrote:
> >
> > Hm. Run some queries; drop the second version of the index
> definition;
> > re-run the same queries; report to the group.  The redundant index
> isn't
> > helping, that much is certain.
> >
> > On 10/20/2010 05:43 PM, DM wrote:
> > > Composite Index question:
> > >
> > > I have composite index on 3 columns on a table, by mistake the
> > composite
> > > index was created twice on the table.
> > >
> > > Will there any performance issues on this table because of the 2
> same
> > > composite indexes?
> > >
> > > Thanks
> > > Deepak
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> >
> >
>


Re: [GENERAL] Generate a dynamic sequence within a query

2010-10-20 Thread DM
create temp table dup_test (nm1 varchar(2),nm2 varchar(3));
insert into dup_test values ('A','A'),('A','B'),('A','C'),('B','A'),('B',
'B'),('B','C');

CREATE SEQUENCE
dup_test_seq
  INCREMENT 1
  MINVALUE 1
  MAXVALUE 9223372036854775807
  START 1
  CACHE 1;

alter table dup_test add column dup_id integer;

alter table dup_test alter column dup_id SET DEFAULT
nextval('dup_test_seq'::regclass);

update dup_test set dup_id = nextval('dup_test_seq'::regclass);


select * from dup_test;
 nm1 | nm2 | dup_id
-+-+
 A   | A   |  1
 A   | B   |  2
 A   | C   |  3
 B   | A   |  4
 B   | B   |  5
 B   | C   |  6
(6 rows)

Hope this helps
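
An alternative on 8.3 that avoids altering the table is a temporary sequence
applied in the select itself (a sketch; the numbering simply follows the order
the rows come back in, so force an ORDER BY in the subquery if that matters):

CREATE TEMP SEQUENCE dup_test_rownum;

SELECT nextval('dup_test_rownum') AS dup_id, nm1, nm2
FROM (SELECT nm1, nm2 FROM dup_test ORDER BY nm1, nm2) ordered;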


On Wed, Oct 20, 2010 at 4:07 PM, David Kerr  wrote:

> On Wed, Oct 20, 2010 at 03:47:19PM -0700, DM wrote:
> - select generate_series(1,(select count(*) from tax)), country from tax;
> -
> - you should use braces around the sub select.
> -
> - Thanks
> - Deepak
>
>   Table "public.test"
>  Column | Type | Modifiers
> +--+---
>  col1   | character varying(2) |
>  col2   | character varying(2) |
>
>
> select * from test;
>  col1 | col2
> --+--
>  A| A
>  A| B
>  A| C
>  B| A
>  B| B
>  B| C
> (6 rows)
>
> select generate_series(1,(select count(*) from test)), col1, col2 from
> test;
>  generate_series | col1 | col2
> -+--+--
>   1 | A| A
>   2 | A| A
>   3 | A| A
>   4 | A| A
>   5 | A| A
>   6 | A| A
>   1 | A| B
>   2 | A| B
>   3 | A| B
>   4 | A| B
>   5 | A| B
>   6 | A| B
>   1 | A| C
>   2 | A| C
>   3 | A| C
>   4 | A| C
>   5 | A| C
>   6 | A| C
>   1 | B| A
>   2 | B| A
>   3 | B| A
>   4 | B| A
>   5 | B| A
>   6 | B| A
>   1 | B| B
>   2 | B| B
>   3 | B| B
>   4 | B| B
>   5 | B| B
>   6 | B| B
>   1 | B| C
>   2 | B| C
>   3 | B| C
>   4 | B| C
>   5 | B| C
>   6 | B| C
> (36 rows)
>
> when what i want is:
> 1  | A| A
> 2  | A| B
> 3  | A| C
> 4  | B| A
> 5  | B| B
> 6  | B| C
>
>
> thanks
>
> Dave
>
> -
> - On Wed, Oct 20, 2010 at 3:30 PM, David Kerr  wrote:
> -
> - > On Wed, Oct 20, 2010 at 11:28:18PM +0100, Raymond O'Donnell wrote:
> - > - On 20/10/2010 23:22, David Kerr wrote:
> - > - >I know I've seen posts on how to do this, but i can't seem to find
> them.
> - > - >
> - > - >I've got a data set
> - > - >
> - > - >A, B
> - > - >A, C
> - > - >A, D
> - > - >[...]
> - > - >
> - > - >and so on
> - > - >
> - > - >and i'd like to be able to wite a query that would result in
> - > - >
> - > - >1,A,B
> - > - >2,A,C
> - > - >3,A,D
> - > - >[...]
> - > - >
> - > - >PG version is 8.3.
> - > - >
> - > - >Any ideas?
> - > -
> - > - You probably want generate_series():
> - > -
> - > -http://www.postgresql.org/docs/8.3/static/functions-srf.html
> - > -
> - > - Ray.
> - >
> - > I thought, so. what would that look like?
> - >
> - > select generate_series(1,select count(*) from table), field1, field2
> from
> - > table
> - > doesn't work..
> - >
> - >
> - > thanks
> - >
> - > Dave
> - >
> - > --
> - > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> - > To make changes to your subscription:
> - > http://www.postgresql.org/mailpref/pgsql-general
> - >
>


Re: [GENERAL] Composite Index question

2010-10-20 Thread Rob Sargent
Sorry, I cannot prove or disprove any penalty.  I don't currently have
access to any postgres db of any serious magnitude.

On 10/20/2010 06:23 PM, DM wrote:
> I know there is no benfit of having duplicate indexes.
> Inorder for me to make change on production it requires lot of approvals
> and stuff.
> 
> I wnat to know if there is any major performance drawback for having
> duplicate composite index, so that i can push hard for the change. Let
> me know.
> 
> thanks for your looking into this.
> 
> 
> On Wed, Oct 20, 2010 at 5:10 PM, Rob Sargent  > wrote:
> 
> If you can think of one benefit from having the redundant index then by
> all means keep it.  It certainly eludes me.  Seems to me, removing an
> un-necessary index on a huge table can only be a good thing.
> 
> On 10/20/2010 06:02 PM, DM wrote:
> > Its a huge table in production, i dont want to take any risk.
> >
> > I can simulate and test this but i was to checking to see If any one
> > knows off hand about this.
> >
> >
> >
> > I can simulate it but
> > On Wed, Oct 20, 2010 at 4:57 PM, Rob Sargent wrote:
> >
> > Hm. Run some queries; drop the second version of the index
> definition;
> > re-run the same queries; report to the group.  The redundant
> index isn't
> > helping, that much is certain.
> >
> > On 10/20/2010 05:43 PM, DM wrote:
> > > Composite Index question:
> > >
> > > I have composite index on 3 columns on a table, by mistake the
> > composite
> > > index was created twice on the table.
> > >
> > > Will there any performance issues on this table because of
> the 2 same
> > > composite indexes?
> > >
> > > Thanks
> > > Deepak
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> >
> >
> 
> 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Question on Explain : Index Scan

2010-10-20 Thread DM
Question on Index scan:
--->
test=# \d test_seqindex1
   Table "public.test_seqindex1"
 Column | Type  | Modifiers
+---+---
 sid| character varying(13) | not null
 name   | character varying(80) |
Indexes:
"test_seqindex1_pkey" PRIMARY KEY, btree (sid)

test=# explain select * from test_seqindex1 where sid='AA023';
QUERY
PLAN
---
 Index Scan using test_seqindex1_pkey on test_seqindex1  (cost=0.00..8.27
rows=1 width=28)
   Index Cond: ((sid)::text = 'AA023'::text)


test=# \d test_seqindex2
   Table "public.test_seqindex2"
 Column | Type  | Modifiers
+---+---
 eid| integer   | not null
 sid| character varying(13) |
 ename  | character varying(80) |
Indexes:
"test_seqindex2_pkey" PRIMARY KEY, btree (eid)
"idx_test_seqindex2_sid" btree (sid)

test=# explain select * from test_seqindex2 where sid='AA023';
  QUERY
PLAN
--
 Bitmap Heap Scan on test_seqindex2  (cost=4.95..275.53 rows=73 width=30)
   Recheck Cond: ((sid)::text = 'AA023'::text)
   ->  Bitmap Index Scan on idx_test_seqindex2_sid  (cost=0.00..4.93 rows=73
width=0)
 Index Cond: ((sid)::text = 'AA023'::text)



test=explain select * from test_seqindex1 t1,test_seqindex2 t2 where
t1.sid=t2.sid;
 QUERY
PLAN

 Hash Join  (cost=1231.55..46386.19 rows=920544 width=58)
   Hash Cond: ((t2.sid)::text = (t1.sid)::text)
   ->  Seq Scan on test_seqindex2 t2  (cost=0.00..16225.97 rows=920697
width=30)
   ->  Hash  (cost=581.80..581.80 rows=33580 width=28)
 ->  Seq Scan on test_seqindex1 t1  (cost=0.00..581.80 rows=33580
width=28)
(5 rows)



I was hoping the optimizer would do the join using an index scan.

Could someone please explain why it is doing a sequential scan rather than
an index scan?
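
One way to check whether the planner can use the indexes for this join at all
is to disable sequential scans for a test session and compare the estimated
costs (a diagnostic only, not something to set in production):

SET enable_seqscan = off;
EXPLAIN SELECT * FROM test_seqindex1 t1, test_seqindex2 t2 WHERE t1.sid = t2.sid;
RESET enable_seqscan;

If the seqscan plan still comes out cheaper, the cost estimates themselves are
the likely explanation: with no WHERE clause, every row of both tables has to
be read anyway.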


Thanks
Deepak


[GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Tim Uckun
I have a very simple update query.

update cu
set screenshot_file_name = tu.screenshot_file_name,
screenshot_content_type  = tu.screenshot_content_type,
screenshot_file_size = tu.screenshot_file_size,
screenshot_status  = tu.screenshot_status

from  cu
inner join tu on tu.cu_id = cu.id

The join has five to six thousand rows in it.

The only indexed field in the update is screenshot_status which is an integer.

I changed the checkpoint settings to

checkpoint_segments = 256
checkpoint_completion_target = 0.9

but it still does not help.

I am having similar problems with deletes and inserts. Trying to
delete even a few thousand records takes forever.  The selects seem to
be just fine.

I am running this on my laptop with no other processes hitting the
database. It's an i5 with lots of RAM, a quad-core CPU, and an IDE drive.

Where is the FAST button for postgres updates? What parameter do I
have to set in order to update 6000 records in under an hour?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Composite Index question

2010-10-20 Thread Tom Lane
DM  writes:
> I know there is no benfit of having duplicate indexes.
> Inorder for me to make change on production it requires lot of approvals and
> stuff.

> I wnat to know if there is any major performance drawback for having
> duplicate composite index,

Of course there is: it doubles the index-update overhead every time you
update the table, in return for no benefit whatsoever.  Get rid of the
duplicate index.  (Now, if the table is seldom updated, it might not be
urgent to do so.  But you ought to plan on doing it.)

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Custom cache implemented in a postgresql C function

2010-10-20 Thread Tom Lane
Gabi Julien  writes:
> In my case, I do not know how big my cache will be.

That makes it awfully hard to use shared memory.

> If shared memory turns out too difficult to use, I could create
> separate caches for each postgresql processes.

That's what I'd recommend.  A big advantage of private caches is that
you don't have any need to manage concurrent access, which simplifies
the code and avoids contention.  All the caches that the core Postgres
code maintains are per-backend.

> This would be a waste
> of space but it might be better then nothing. In this case, do I need
> to make my code thread safe? In other words, is postgresql using more
> then one thread per processes?

No.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Generate a dynamic sequence within a query

2010-10-20 Thread Josh Kupershmidt
On Wed, Oct 20, 2010 at 6:22 PM, David Kerr  wrote:
> I know I've seen posts on how to do this, but i can't seem to find them.
>
> I've got a data set
>
> A, B
> A, C
> A, D
> [...]
>
> and so on
>
> and i'd like to be able to wite a query that would result in
>
> 1,A,B
> 2,A,C
> 3,A,D
> [...]
>
> PG version is 8.3.

If you can upgrade to 8.4, you could use the row_number() window
function which is perfectly suited to this task, should be as simple
as:

SELECT row_number() OVER (), * FROM tablename;
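
If a deterministic numbering is wanted, an ORDER BY can go inside the OVER
clause (a sketch using the column names from David's earlier example):

SELECT row_number() OVER (ORDER BY col1, col2) AS seq, col1, col2
FROM test;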

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Gary Chambers
> Where is the FAST button for postgres updates? What parameter do I
> have to set in order to update 6000 records in under an hour?

Which version of Postgres?  Have you investigated more than just two
performance tuning parameters?  Does your MS Access version of the
query run any faster?

http://tinyurl.com/2fsjmv4

-- Gary Chambers

/* Nothing fancy and nothing Microsoft! */

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Composite Index question

2010-10-20 Thread mark

On 10/20/2010 05:43 PM, DM wrote:
> Composite Index question:
> 
> I have composite index on 3 columns on a table, by mistake the composite
> index was created twice on the table.
> 
> Will there any performance issues on this table because of the 2 same
> composite indexes?
> 
> Thanks
> Deepak

Are the indices of the same type (e.g. both BTree) but with different index
names?

Does the second composite index have its columns in the same order as the
first? If the column order differs, you might be getting some benefit for some
queries, depending on their filter clauses; in that case you might consider
augmenting one or both of the indices to better suit your queries.

From my experience, a true duplicate appears to degrade performance because two
indices have to be maintained (not to mention that it is a waste of disk
space). I am hopeful someone will correct me if I am wrong.

Ours were from people explicitly creating indexes on columns that had
indices implicitly created on them when the table was created.  Cleanup was
pretty easy and painless on our production systems. 

It's pretty easy to check for exact duplicates all over a given database as
well as how often each index is being used. 
Check some of the queries here: 
http://www.xzilla.net/blog/2008/Jul/Index-pruning-techniques.html
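
A minimal sketch along those lines, for reference (this only catches indexes on
the same table with an identical column list, and it needs 8.4 for array_agg;
it ignores index type, expression indexes and partial-index predicates):

-- candidate duplicates: same table, same indexed columns
SELECT indrelid::regclass AS table_name,
       array_agg(indexrelid::regclass) AS possible_duplicates
FROM pg_index
GROUP BY indrelid, indkey::text
HAVING count(*) > 1;

-- how often each index has actually been used since stats were last reset
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan;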

..:Mark


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Tim Uckun
On Thu, Oct 21, 2010 at 3:37 PM, Gary Chambers  wrote:
>> Where is the FAST button for postgres updates? What parameter do I
>> have to set in order to update 6000 records in under an hour?
>
> Which version of Postgres?

8.4

  >Have you investigated more than just two
> performance tuning parameters?

I followed the advice I found when searching this mailing list.


>Does your MS Access version of the
> query run any faster?

I don't have access but I suspect it would.  How long should it take
to update three fields in 6000 records in your opinion.


> http://tinyurl.com/2fsjmv4

Ah yes I was expecting something snarky. Thanks.

BTW I have read many of those links and have adjusted some values but
honestly there are so many buttons to push and knobs to dial that it's
hard to know what will fix what.

Also searching the mailing list shows that this is a pretty commonly
asked question but I wasn't able to find an answer other than
increasing the parameters I listed.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Greg Smith

Tim Uckun wrote:

BTW I have read many of those links and have adjusted some values but
honestly there are so many buttons to push and knobs to dial that it's
hard to know what will fix what.
  


Generally update/delete tuning goes like this:

1) Increase checkpoint_segments (>64, increases beyond that can be 
helpful but they eventually level out)

2) Increase shared_buffers (~25% of RAM is normal)
3) Confirm there are no constraints or foreign keys happening at each update
4) Make sure your indexes aren't filled with junk and that VACUUM is 
running effectively.  REINDEX or CLUSTER tables that haven't been well 
maintained in the past.

5) Upgrade to better hardware that has a battery-backed write cache
- or -
Disable synchronous_commit and cheat on individual commits, at the 
expense of potential lost transactions after a crash.


Updating rows in PostgreSQL is one of the most intensive things you do 
to your disks, and it's hard to get a laptop drive to do a very good job 
at that.
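
As a rough illustration only, items 1, 2 and 5 above translate into
postgresql.conf settings along these lines (the values are examples and depend
on the machine; the synchronous_commit line is the cheat mentioned above and
trades crash safety for speed):

checkpoint_segments = 64
checkpoint_completion_target = 0.9
shared_buffers = 1GB        # roughly 25% of a 4GB machine
synchronous_commit = off    # optional: recent commits can be lost after a crash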


--
Greg Smith, 2ndQuadrant US  g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support www.2ndQuadrant.us
PostgreSQL 9.0 High Performance http://www.2ndquadrant.com/books



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_temp implicit search path: functions vs. tables

2010-10-20 Thread Josh Kupershmidt
Hi all,

I notice slightly different handling of the implicit search_path for
temporary tables and temporary functions. Consider:

(with a default search path):

# SHOW search_path;
  search_path

 "$user",public
(1 row)


BEGIN;
  CREATE TABLE pg_temp.bar();

  CREATE FUNCTION pg_temp.foofunc() RETURNS int AS $$
  SELECT 1;
  $$ LANGUAGE SQL;

  SELECT * FROM bar;
  SELECT * FROM foofunc();

COMMIT;

The select from temporary table bar above succeeds, but I get:
ERROR:  function foofunc() does not exist

if I don't schema-qualify the function as pg_temp.foofunc(). So,
pg_temp is being implicitly included in the default search path when
looking for tables, but not for functions. Is there a reason for this
difference?

Josh

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Tim Uckun
>
> 1) Increase checkpoint_segments (>64, increases beyond that can be helpful
> but they eventually level out)

Changed it back to 64 from 256

> 2) Increase shared_buffers (~25% of RAM is normal)

Changed it to one gig (25% of my RAM) obviously this involved changing
the shmmax and shmall settings in the kernel.

> 3) Confirm there are no constraints or foreign keys happening at each update

There are none.

> 4) Make sure your indexes aren't filled with junk and that VACUUM is running
> effectively.  REINDEX or CLUSTER tables that haven't been well maintained in
> the past.

Autovacuum is on. I presume it's doing its job. I didn't re-index or
recluster because it's a dev database and the data rarely changes.

> 5) Upgrade to better hardware that has a battery-backed write cache

Not for my laptop.

> Disable synchronous_commit and cheat on individual commits, at the expense
> of potential lost transactions after a crash.

I will leave this as a last resort.

> Updating rows in PostgreSQL is one of the most intensive things you do to
> your disks, and it's hard to get a laptop drive to do a very good job at
> that.
>


After making the above changes I re-ran the query. It's been running
for five minutes and it's still running.

This is a database with nothing else hitting it.

So obviously something else is out of kilter.

I'll ask the same question I asked Gary.

Say I just apt-get install postgres and do nothing else. One table has
about 500K records. The other has about 5K records. The joins are on
indexed integer fields (one is the primary key). How long should it
take to update five to six thousand records in your experience?   Out
of the box with no tuning. How long should this take on an almost new
laptop, four gigs of RAM, i5 quad core processor?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot Start Postgres After System Boot

2010-10-20 Thread Tom Lane
Rich Shepard  writes:
>The entire script is attached. It's only 2588 bytes.

Personally, I'd drop all the machinations with checking the pidfile or
removing old socket files.  The postmaster is fully capable of doing
those things for itself, and is much less likely to do them mistakenly
than this script is.  In particular, I wonder whether the script's
refusal to start if the pidfile already exists accounts for your
report that it fails to auto-restart after a reboot.

IOW, this:

>   else # remove old socket, if it exists and no daemon is running.
>
>       if [ ! -f $DATADIR/$PIDFILE ]; then
>           rm -f /tmp/.s.PGSQL.5432
>           rm -f /tmp/.s.PGSQL.5432.lock
>           # pg_ctl start -w -l $LOGFILE -D $DATADIR
>           su postgres -c 'postgres -D /var/lib/pgsql/data &'
>           exit 0
>       else
>           echo "PostgreSQL daemon was not properly shut down"
>           echo "Please remove stale pid file $DATADIR/$PIDFILE"
>           exit 7
>       fi
>
>   fi

could be reduced to just:

else
su postgres -c 'postgres -D /var/lib/pgsql/data &'
exit 0
fi

I'd also strongly recommend making that be "su - postgres -c ..."
rather than the way it is now; it's failing to ensure that the
postmaster is started with the postgres account's login settings.

I'm not sure about your comment that manual start attempts fail with
LOG:  could not bind IPv4 socket: Address already in use
It's pretty hard to believe that that could occur on a freshly
booted system unless the TCP port was in fact already in use ---
ie, either there *is* a running postmaster, or something else is
using port 5432.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Tim Uckun
To follow up...

I did a full vacuum analyze on both tables and re-ran the query. Same
story. I ended the query after eight minutes.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Tom Lane
Tim Uckun  writes:
> I have a very simple update query.

> update cu
> set screenshot_file_name = tu.screenshot_file_name,
> screenshot_content_type  = tu.screenshot_content_type,
> screenshot_file_size = tu.screenshot_file_size,
> screenshot_status  = tu.screenshot_status
> from  cu
> inner join tu on tu.cu_id = cu.id

That isn't actually the query you're issuing, because if it were
you would get an error "table name "cu" specified more than once".

I suspect that the query you're actually issuing involves an
unconstrained cartesian product self-join between the target table
and another instance of itself.  Postgres doesn't consider that
the target table should be named again in FROM.  But it's hard to
be sure about that when looking at a redacted query.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot Start Postgres After System Boot

2010-10-20 Thread Andrej
On 21 October 2010 16:50, Tom Lane  wrote:
> could be reduced to just:
>
>                else
>                        su postgres -c 'postgres -D /var/lib/pgsql/data &'
>                        exit 0
>                fi


> I'm not sure about your comment that manual start attempts fail with
>        LOG:  could not bind IPv4 socket: Address already in use
> It's pretty hard to believe that that could occur on a freshly
> booted system unless the TCP port was in fact already in use ---
> ie, either there *is* a running postmaster, or something else is
> using port 5432.

I concur on both counts; I would like to see the output of the
actual script, though, when it refuses to start, and also a
netstat -anp | grep 5432


Cheers,
Andrej

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Tim Uckun
On Thu, Oct 21, 2010 at 5:05 PM, Tom Lane  wrote:
> Tim Uckun  writes:
>> I have a very simple update query.
>
>> update cu
>> set screenshot_file_name = tu.screenshot_file_name,
>>     screenshot_content_type  = tu.screenshot_content_type,
>>     screenshot_file_size = tu.screenshot_file_size,
>>     screenshot_status  = tu.screenshot_status
>> from  cu
>> inner join tu on tu.cu_id = cu.id
>
> That isn't actually the query you're issuing, because if it were
> you would get an error "table name "cu" specified more than once".

Sorry I shortened the names of the tables.

> I suspect that the query you're actually issuing involves an
> unconstrained cartesian product self-join between the target table
> and another instance of itself.  Postgres doesn't consider that
> the target table should be named again in FROM.  But it's hard to
> be sure about that when looking at a redacted query.

I apologize for the redacted query. I was trying to make it easier to follow.

Here is the actual query.

update consolidated_urls
set screenshot_file_name = tu.screenshot_file_name,
   screenshot_content_type  = tu.screenshot_content_type,
   screenshot_file_size = tu.screenshot_file_size,
   screenshot_status  = tu.screenshot_status

from  consolidated_urls cu
inner join trending_urls tu on tu.consolidated_url_id = cu.id

This is a simple inner join.

select count(cu.id)
from  consolidated_urls cu
inner join trending_urls tu on tu.consolidated_url_id = cu.id

yields 3657 records.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] a query on stored procedures/functions in pgsql

2010-10-20 Thread Neil D'Souza
consider the following sql statements:

create table food(
    food_code serial unique,
    food_category varchar(20),
    food_name varchar(20)
    );

insert into food (food_category, food_name) values ('fruit', 'tomato');
insert into food (food_category, food_name) values ('fruit', 'banana');
insert into food (food_category, food_name) values ('fruit', 'apple');

insert into food (food_category, food_name) values ('vegetable', 'cabbage');
insert into food (food_category, food_name) values ('vegetable', 'cauliflower');
insert into food (food_category, food_name) values ('vegetable', 'okra');

insert into food (food_category, food_name) values ('nuts', 'almonds');
insert into food (food_category, food_name) values ('nuts', 'hazelnuts');
insert into food (food_category, food_name) values ('nuts', 'pine-seeds');

I tried the following queries - the output is listed below:
select food_category, food_name, rank as my_rank from
    (
     select food_category, food_name, rank()  over (order by
food_category, food_name) from food
    ) stage1
where rank >= 4 and rank <=8;
output
-
food_category |  food_name  | my_rank
---+-+-
 nuts  | almonds |   4
 nuts  | hazelnuts   |   5
 nuts  | pine-seeds  |   6
 vegetable | cabbage |   7
 vegetable | cauliflower |   8

select food_category, food_name, my_rank from
    (
     select food_category, food_name, rank()  over (order by
food_category, food_name)as  my_rank from food
    ) stage1
where my_rank >= 4 and my_rank <=8;

output
-
 food_category |  food_name  | my_rank
---+-+-
 nuts  | almonds |   4
 nuts  | hazelnuts   |   5
 nuts  | pine-seeds  |   6
 vegetable | cabbage |   7
 vegetable | cauliflower |   8


Consider what happens when I try to make a simple variant of the 2nd
query into a stored procedure

create or replace function food4(p1 int, p2 int)
returns table (
food_code int,
food_category varchar(20),
food_name varchar(20),
my_rank bigint
) as $$
begin
return query
    select stage1.* from
    (
     select food_code, food_category, food_name, rank()  over (order
by food_code) as my_rank from food
    ) stage1;
    --where rank >= 4 and rank <=8;
end
$$ language plpgsql;

nxd=> \i my_rank_sp4.sql
psql:my_rank_sp4.sql:16: ERROR:  syntax error at or near "$4"
LINE 1: ... $1 ,  $2 ,  $3 , rank() over (order by  $1 ) as  $4  from f...
 ^
    The stored procedure does not allow me to rename the variable to
the name I need in the output table.

    I went to the plpgsql documentation in the user manual (Chapter
38, section 38.3.1). There the "extended_sales" example also returns a
table (which is what I need), and its output table has a computed
column called total. The expression "quantity * price" is not renamed
to "total"; it simply sits in the same (2nd) position in the select
query that "total" occupies in the output table. Hence I decided not
to rename the ranked field - the stored procedure is given below.

create or replace function food5(p1 int, p2 int)
returns table (
food_code int,
food_category varchar(20),
food_name varchar(20),
my_rank bigint
) as $$
begin
return query
    select stage1.* from
    (
     select food_code, food_category, food_name, rank()  over (order
by food_code) from food
    ) stage1;
    --where rank >= 4 and rank <=8;
end
$$ language plpgsql;

and this works. However, when I run the function, this is what I get:
nxd=> \i my_rank_sp5.sql
CREATE FUNCTION
nxd=> select * from food5(1,9);
 food_code | food_category | food_name | my_rank
---+---+---+-
   |   |   |   1
   |   |   |   1
   |   |   |   1
   |   |   |   1
   |   |   |   1
   |   |   |   1
   |   |   |   1
   |   |   |   1
   |   |   |   1

The values are blank, as you can see above.
If I run a plain query like this - which is just the text from the stored
procedure, but not embedded in a plpgsql function - the result is fine:
nxd=> select stage1.* from
nxd-> (
nxd(>  select food_code, food_category, food_name, rank()  over (order
by food_code) from food
nxd(> ) stage1;
 food_code | food_category |  food_name  | rank
---+---+-+--
 1 | fruit | tomato  |    1
 2 | fruit | banana  |    2
 3 | fruit | apple   |    3
 4 | vegetable | cabbage |    4
 5 | vegetable | cauliflower |    5
 6 | vegetable 

Re: [GENERAL] Cannot Start Postgres After System Boot

2010-10-20 Thread Reid Thompson

On 10/20/2010 6:53 PM, Rich Shepard wrote:

  For reasons I do not understand, the Slackware start-up file for postgres
(/etc/rc.d/rc.postgresql) fails to work properly after I reboot the system.
(Reboots normally occur only after a kernel upgrade or with a hardware
failure that crashes the system.)

  Trying to restart the system manually (su postgres -c 'postgres -D
/var/lib/pgsql/data &') regardless of the presence of /tmp/.s.PGSQL.5432
and /var/lib/pgsql/postmaster.pid. Here's what I see:

[rshep...@salmo ~]$ su postgres -c 'postgres -D /var/lib/pgsql/data &'
Password: [rshep...@salmo ~]$ LOG:  could not bind IPv4 socket: Address already 
in use
HINT:  Is another postmaster already running on port 5432? If not, wait a
few seconds and retry.
WARNING:  could not create listen socket for "localhost"
FATAL:  could not create any TCP/IP sockets

  If someone would be kind enough to point out what I'm doing incorrectly
(e.g., removing /tmp/.s.PGSQL.5432 and postmaster.pid when the startup
process complains they're not right) I'll save this information for the next
time. I can also provide the 'start' section of the Slackware init file so I
could learn why it's not working properly.

TIA,

Rich


what does
$ netstat -an|grep 5432
return?

what does
$ ps -ef|grep post
return?

The above indicates that the tcp ipv4 socket is already bound by some process

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] a query on stored procedures/functions in pgsql

2010-10-20 Thread Pavel Stehule
Hello



2010/10/21 Neil D'Souza :
> consider the following sql statements:
>
> create table food(
>     food_code serial unique,
>     food_category varchar(20),
>     food_name varchar(20)
>     );
>
> insert into food (food_category, food_name) values ('fruit', 'tomato');
> insert into food (food_category, food_name) values ('fruit', 'banana');
> insert into food (food_category, food_name) values ('fruit', 'apple');
>
> insert into food (food_category, food_name) values ('vegetable', 'cabbage');
> insert into food (food_category, food_name) values ('vegetable', 
> 'cauliflower');
> insert into food (food_category, food_name) values ('vegetable', 'okra');
>
> insert into food (food_category, food_name) values ('nuts', 'almonds');
> insert into food (food_category, food_name) values ('nuts', 'hazelnuts');
> insert into food (food_category, food_name) values ('nuts', 'pine-seeds');
>
> I tried the following queries - the output is listed below:
> select food_category, food_name, rank as my_rank from
>     (
>      select food_category, food_name, rank()  over (order by
> food_category, food_name) from food
>     ) stage1
> where rank >= 4 and rank <=8;
> output
> -
> food_category |  food_name  | my_rank
> ---+-+-
>  nuts  | almonds |   4
>  nuts  | hazelnuts   |   5
>  nuts  | pine-seeds  |   6
>  vegetable | cabbage |   7
>  vegetable | cauliflower |   8
>
> select food_category, food_name, my_rank from
>     (
>      select food_category, food_name, rank()  over (order by
> food_category, food_name)as  my_rank from food
>     ) stage1
> where my_rank >= 4 and my_rank <=8;
>
> output
> -
>  food_category |  food_name  | my_rank
> ---+-+-
>  nuts  | almonds |   4
>  nuts  | hazelnuts   |   5
>  nuts  | pine-seeds  |   6
>  vegetable | cabbage |   7
>  vegetable | cauliflower |   8
>
>
> Consider what happens when I try to make a simple variant of the 2nd
> query into a stored procedure
>
> create or replace function food4(p1 int, p2 int)
> returns table (
> food_code int,
> food_category varchar(20),
> food_name varchar(20),
> my_rank bigint
> ) as $$
> begin
> return query
>     select stage1.* from
>     (
>      select food_code, food_category, food_name, rank()  over (order
> by food_code) as my_rank from food
>     ) stage1;
>     --where rank >= 4 and rank <=8;
> end
> $$ language plpgsql;
>
> nxd=> \i my_rank_sp4.sql
> psql:my_rank_sp4.sql:16: ERROR:  syntax error at or near "$4"
> LINE 1: ... $1 ,  $2 ,  $3 , rank() over (order by  $1 ) as  $4  from f...
>  ^
>     The stored procedure does not allow me to rename the variable to
> the name I need in the output table.
>
>     I went to the plpgsql documentation of the user manual - Chapter
> 38 - section 38.3.1 . There you have the "extended_sales" function
> which also returns a table (what I needed), and there the table has a
> parameter called total which is computed - the multiplication of
> "quantity * price" is not renamed to "total" which is in the output
> table, rather "quantity*price" is in the same position (2nd position)
> in the select query that "total" occupies in the output table. Hence I
> decided not to rename the ranked field - stored procedure query given
> below.
>
> create or replace function food5(p1 int, p2 int)
> returns table (
> food_code int,
> food_category varchar(20),
> food_name varchar(20),
> my_rank bigint
> ) as $$
> begin
> return query
>     select stage1.* from
>     (
>      select food_code, food_category, food_name, rank()  over (order
> by food_code) from food
>     ) stage1;
>     --where rank >= 4 and rank <=8;
> end
> $$ language plpgsql;
>
> and this works - However when I run the function this is what i get
> nxd=> \i my_rank_sp5.sql
> CREATE FUNCTION
> nxd=> select * from food5(1,9);
>  food_code | food_category | food_name | my_rank
> ---+---+---+-
>    |   |   |   1
>    |   |   |   1
>    |   |   |   1
>    |   |   |   1
>    |   |   |   1
>    |   |   |   1
>    |   |   |   1
>    |   |   |   1
>    |   |   |   1
>
> The values are blank as you can see above
> If, I run a plain query like this - which is just text from the stored
> procedure,
> but not embedded in a plpgsql function - the result is fine
> nxd=> select stage1.* from
> nxd-> (
> nxd(>  select food_code, food_category, food_name, rank()  over (order
> by food_code) from food
> nxd(> ) stage1;
>  food_code | food_category |  food_name  | rank
> ---+-

Re: [GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Tom Lane
Tim Uckun  writes:
> Here is the actual query.

> update consolidated_urls
> set screenshot_file_name = tu.screenshot_file_name,
>screenshot_content_type  = tu.screenshot_content_type,
>screenshot_file_size = tu.screenshot_file_size,
>screenshot_status  = tu.screenshot_status
> from  consolidated_urls cu
> inner join trending_urls tu on tu.consolidated_url_id = cu.id

> This is a simple inner join.

No, it isn't.  This is a three-way join between consolidated_urls, cu,
and tu --- the fact that cu is the same underlying table as
consolidated_urls doesn't change that.  And the join is
underconstrained, causing each row of consolidated_urls to be joined
to every row of the cu/tu join.  That's why it's taking such an
unreasonably long time --- you're generating many thousands of redundant
updates to each row of consolidated_urls.  You should just write this as

update consolidated_urls
set screenshot_file_name = tu.screenshot_file_name,
   screenshot_content_type  = tu.screenshot_content_type,
   screenshot_file_size = tu.screenshot_file_size,
   screenshot_status  = tu.screenshot_status
from trending_urls tu where tu.consolidated_url_id = consolidated_urls.id

Postgres is a bit different from some other DBMSes in how it interprets
UPDATE ... FROM syntax.
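
A quick sanity check for this kind of statement is to EXPLAIN it before running
it; an unconstrained join shows up immediately as a huge estimated row count.
For example (a sketch using just one of the columns):

EXPLAIN
UPDATE consolidated_urls
SET screenshot_status = tu.screenshot_status
FROM trending_urls tu
WHERE tu.consolidated_url_id = consolidated_urls.id;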

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Generate a dynamic sequence within a query

2010-10-20 Thread Darren Duncan

Josh Kupershmidt wrote:

On Wed, Oct 20, 2010 at 6:22 PM, David Kerr  wrote:

I know I've seen posts on how to do this, but i can't seem to find them.

I've got a data set

A, B
A, C
A, D
[...]

and so on

and i'd like to be able to wite a query that would result in

1,A,B
2,A,C
3,A,D
[...]

PG version is 8.3.


If you can upgrade to 8.4, you could use the row_number() window
function which is perfectly suited to this task, should be as simple
as:

SELECT row_number() OVER (), * FROM tablename;


Yes indeed.  For a simple increment by one sequence, functions like rank() ... 
see http://www.postgresql.org/docs/9.0/interactive/functions-window.html ... are 
exactly what you want. -- Darren Duncan


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with Postgresql 9.0 streaming replication on Solaris 10 (x86)

2010-10-20 Thread Fujii Masao
On Wed, Oct 20, 2010 at 10:31 PM, dan.m.harris
 wrote:
> LOG: entering standby mode
> WARNING: WAL was generated with wal_level=minimal, data may be missing
> HINT: This happens if you temporarily set wal_level=minimal without taking a
> new base backup.

Did you set wal_level to archive or hot_standby on the master?
If not, you have to do that and restart the master. Then you
need to take a new base backup from the master and restart the
standby from that base backup.
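
As a rough sketch of the pieces involved (names and values here are only
examples):

# master postgresql.conf
wal_level = hot_standby
max_wal_senders = 3

# master pg_hba.conf: allow the standby to connect for replication
host  replication  postgres  192.168.1.10/32  md5

# standby recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=postgres'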

> FATAL: could not connect to the primary server: invalid connection option
> "replication"
> (etc)
>
> Does anyone have any ideas what the problem may be? I suspect I may be
> missing a library somewhere - I can't believe that streaming replication
> just doesn't work on Solaris 10.

I guess that the version of the libpq library you are using is not 9.0.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot Start Postgres After System Boot

2010-10-20 Thread Scott Marlowe
On Wed, Oct 20, 2010 at 4:53 PM, Rich Shepard  wrote:
>  For reasons I do not understand, the Slackware start-up file for postgres
> (/etc/rc.d/rc.postgresql) fails to work properly after I reboot the system.
> (Reboots normally occur only after a kernel upgrade or with a hardware
> failure that crashes the system.)
>
>  Trying to restart the system manually (su postgres -c 'postgres -D
> /var/lib/pgsql/data &') regardless of the presence of /tmp/.s.PGSQL.5432
> and /var/lib/pgsql/postmaster.pid. Here's what I see:
>
> [rshep...@salmo ~]$ su postgres -c 'postgres -D /var/lib/pgsql/data &'
> Password: [rshep...@salmo ~]$ LOG:  could not bind IPv4 socket: Address
> already in use
> HINT:  Is another postmaster already running on port 5432? If not, wait a
> few seconds and retry.
> WARNING:  could not create listen socket for "localhost"
> FATAL:  could not create any TCP/IP sockets

Are you sure postgresql isn't getting started by some other init
script before this one runs?  A warning that a port can't be bound to
usually means just that: something else is on it.  What does lsof tell
you is running on that port?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_temp implicit search path: functions vs. tables

2010-10-20 Thread Tom Lane
Josh Kupershmidt  writes:
> pg_temp is being implicitly included in the default search path when
> looking for tables, but not for functions. Is there a reason for this
> difference?

Yes.  They used to be the same, but awhile back we decided it was a
security hole to look for functions or operators in the implicit temp
schema.  It makes it too easy for someone to substitute a trojan-horse
function that will be picked up in preference to whatever's in the
normal search path.  See CVE-2007-2138.

If you actually do want to define and call temporary functions, you
can include "pg_temp" in the search path explicitly, or perhaps better,
explicitly qualify the intentional calls with pg_temp.
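
For example (a small sketch reusing the foofunc example from the earlier
message):

-- qualify the call explicitly
SELECT pg_temp.foofunc();

-- or opt in by naming pg_temp in the search path for the session
SET search_path = "$user", public, pg_temp;
SELECT foofunc();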

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] a query on stored procedures/functions in pgsql

2010-10-20 Thread Neil D'Souza
>
> You have same plpgsql identifiers as sql identifiers, and because
> plpgsql identifiers has higher priority, your query is broken. For
> simple functions like this don't use a plpgsql language - use sql
> language instead.
>

Thank you for the quick reply. The example I constructed was
specifically for this post. I modified the function as below and it
works fine now. It would be great if the point you mentioned were a
note in the PGSQL documentation (or did I miss it?). In case I didn't
miss it, is there anyone I should write to, to help get such a note added?

create or replace function food6(p1 int, p2 int)
returns table (
p_food_code int,
p_food_category varchar(20),
p_food_name varchar(20),
my_rank bigint
) as $$
begin
return query
select stage1.* from
(
 select food_code, food_category, food_name, rank()  over (order by
food_code) from food
) stage1;
--where rank >= 4 and rank <=8;
end
$$ language plpgsql;

nxd=> select * from food6(1,9);
 p_food_code | p_food_category | p_food_name | my_rank
-+-+-+-
   1 | fruit   | tomato  |   1
   2 | fruit   | banana  |   2
   3 | fruit   | apple   |   3
   4 | vegetable   | cabbage |   4
   5 | vegetable   | cauliflower |   5
   6 | vegetable   | okra|   6
   7 | nuts| almonds |   7
   8 | nuts| hazelnuts   |   8
   9 | nuts| pine-seeds  |   9
(9 rows)
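
For comparison, a plain SQL-language version along the lines Pavel suggested
might look like this (a sketch for 8.4, untested; it uses the positional $1/$2
parameters, so there is no clash with the output column names):

create or replace function food7(p1 int, p2 int)
returns table (
    p_food_code int,
    p_food_category varchar(20),
    p_food_name varchar(20),
    my_rank bigint
) as $$
    select * from
    (
     select food_code, food_category, food_name,
            rank() over (order by food_code) as my_rank
     from food
    ) stage1
    where stage1.my_rank >= $1 and stage1.my_rank <= $2;
$$ language sql;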


Many Thanks once again,
Kind Regards,
Neil

>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] a query on stored procedures/functions in pgsql

2010-10-20 Thread Pavel Stehule
Hello

2010/10/21 Neil D'Souza :
>>
>> You have same plpgsql identifiers as sql identifiers, and because
>> plpgsql identifiers has higher priority, your query is broken. For
>> simple functions like this don't use a plpgsql language - use sql
>> language instead.
>>
>
> Thank you for the quick reply. The example I constructed was
> specifically for this post. I modified the function as below and it
> works fine now. It would be great if the point you mentioned was a
> note in the PGSQL Documentation (or did I miss it). In case I didnt
> miss it, Is there anyone I have to write to, to help get this note in?
>

yes, it's probably undocumented :(  See the unofficial plpgsql
documentation:
http://www.pgsql.cz/index.php/PL/pgSQL_%28en%29#Recommendation_for_design_of_saved_procedures_in_PL.2FpqSQL_language

but it is solved in 9.0, where you will get an adequate error message.

Regards

Pavel Stehule

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Tim Uckun
>
> No, it isn't.  This is a three-way join between consolidated_urls, cu,
> and tu --- the fact that cu is the same underlying table as

cu is an alias for consolidated_urls. tu is an alias for trending_urls.

There are only two tables in the query.

> consolidated_urls doesn't change that.  And the join is
> underconstrained, causing each row of consolidated_urls to be joined
> to every row of the cu/tu join.  That's why it's taking such an
> unreasonably long time --- you're generating many thousands of redundant
> updates to each row of consolidated_urls.  You should just write this as
>

Once again there are only two tables in the query. The join clause is
inner join trending_urls tu on tu.consolidated_url_id = cu.id


> update consolidated_urls
> set screenshot_file_name = tu.screenshot_file_name,
>   screenshot_content_type  = tu.screenshot_content_type,
>   screenshot_file_size = tu.screenshot_file_size,
>   screenshot_status  = tu.screenshot_status
> from trending_urls tu where tu.consolidated_url_id = consolidated_urls.id
>
> Postgres is a bit different from some other DBMSes in how it interprets
> UPDATE ... FROM syntax.
>


I'll try this too.

Anything to make this query complete in a reasonable amount of time.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] a query on stored procedures/functions in pgsql

2010-10-20 Thread Tom Lane
"Neil D'Souza"  writes:
> Thank you for the quick reply. The example I constructed was
> specifically for this post. I modified the function as below and it
> works fine now. It would be great if the point you mentioned was a
> note in the PGSQL Documentation (or did I miss it).

As of 9.0, plpgsql's default behavior is to throw an error when there's
an ambiguity of this sort.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Tim Uckun
One more follow up.

Did a vacuum full on both tables and a re-index on both tables.

Changed the wal_buffers to 16MB  (increased the kernel param as a
result) as per
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

I also turned off fsync but still no joy.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_temp implicit search path: functions vs. tables

2010-10-20 Thread Josh Kupershmidt
On Thu, Oct 21, 2010 at 12:47 AM, Tom Lane  wrote:
> Josh Kupershmidt  writes:
>> pg_temp is being implicitly included in the default search path when
>> looking for tables, but not for functions. Is there a reason for this
>> difference?
>
> Yes.  They used to be the same, but awhile back we decided it was a
> security hole to look for functions or operators in the implicit temp
> schema.  It makes it too easy for someone to substitute a trojan-horse
> function that will be picked up in preference to whatever's in the
> normal search path.  See CVE-2007-2138.
>
> If you actually do want to define and call temporary functions, you
> can include "pg_temp" in the search path explicitly, or perhaps better,
> explicitly qualify the intentional calls with pg_temp.

Thanks, thought it might be something like that.

Josh

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to Rejoin a Node after failover [Postgre SQL replication using Slony-I]

2010-10-20 Thread Vishnu S.
 

Hi,

I am using Slony-I 2.0.4 with PostgreSQL 8.4 on Windows XP. I have
replicated the data from a master (say A) to a slave machine (say B).
When the master machine A fails, the slave machine B acts as the master
and the node corresponding to A is dropped using the DROP NODE command.
That also works fine. My requirement is:

I have to rejoin machine A to the replication set.

Then I have to replicate data from machine B to machine A (i.e. both
machine A and machine B should contain the same data).

Then I have to make machine A the master again and machine B the slave.

Please help me to create scripts for this. I have tried without dropping
the node too, but that is not working either. Please tell me how to
rejoin a dropped node into the replication set after failover.

Thanks & Regards,

Vishnu S


[GENERAL] Re: Updates, deletes and inserts are very slow. What can I do make them bearable?

2010-10-20 Thread Thomas Kellerer

Tim Uckun, 21.10.2010 07:05:

No, it isn't.  This is a three-way join between consolidated_urls, cu,
and tu --- the fact that cu is the same underlying table as


cu is an alias for consolidated_urls. tu is an alias for trending_urls.

There are only two tables in the query.


Yes, but consolidated_urls is there twice, which makes three relations
involved in the update (consolidated_urls, cu and tu).

That's what Tom meant, and that's where your cartesian product comes from.


select count(cu.id)
from  consolidated_urls cu
inner join trending_urls tu on tu.consolidated_url_id = cu.id


That select is not the same as your UPDATE statement.

If your update statement was re-written to a plain SELECT it would be something 
like

select count(consolidated_urls.id)
from  consolidated_urls, consolidated_urls cu
inner join trending_urls tu on tu.consolidated_url_id = cu.id

See the difference?

Regards
Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgre on virtual machine

2010-10-20 Thread Craig Ringer
On 21/10/10 05:22, Joshua J. Kugler wrote:
> On Wednesday 20 October 2010, John R Pierce elucidated thus:
>> On 10/20/10 3:46 AM, Georgi Ivanov wrote:
>>> Hi,
>>> Is there any specific reason one should/should not run postgre on
>>> virtual machine for production ?
>>> Is there any special tuning for virtual environment ?
>>> Client of mine is running 8.4 on  virtual machine (VMWare ) so I'm
>>> wandering if there is anything special I can tune to speed up a
>>> bit. Also if I must choose between more RAM and more virtual cores
>>> what should I choose ?
>>
>> the biggest issue tends to be disk IO performance, especially in a
>> transactional intensive database.  virtualized storage is just not as
>> fast as native storage.   the big iron folks deal with this by
>> dedicating fiberchannel cards to each virtual machine that needs IO
>> performance.  of course, this gets expensive.
>>
>> as far as memory vs cores goes, that depends a lot on the nature of
>> your access patterns.   larger databases benefit from more memory for
>> caching & buffers, while higher levels of concurrent connections
>> benefit from more CPU cores (*and* more memory).
> 
> Would using a raw partition in the VM alleviate some of the issues here?

Not really. Using accelerated virtualized I/O support (VT-IO or
similar) may help a fair bit, though.

My main concern with any virtualized setup would be that the guest's
writes were properly flushed to the host's disk(s) when programs on the
guest fsync(). If the guest's writes aren't properly flushed before more
disk commands are processed, you are at risk of data loss and/or corruption.

Because forced disk flushes are slow, I won't be at all surprised if
many virtualization systems don't honour them. If you trust your host OS
to be 100% totally reliable, your UPSes to be perfect, and you have
REALLY good backups that might be OK. For the rest of us living in the
real world, that's a problem that has to be fixed before systems can be
used in production.

A good start to test for this issue is to see how many fsync() calls per
second your virtualized guest gets. If it's significantly more than the
host system running the test program unvirtualized, be very VERY
suspicious.

-- 
Craig Ringer

Tech-related writing: http://soapyfrogs.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general