[ADMIN] dynamic name of table in select

2008-01-03 Thread giuseppe . derossi
Hi to all,
I need help with the following issue:
in my db I have three tables which share the same column (there are some
reasons for that). I'd like to perform a select statement like this: select
my_column from xxx_table,
where xxx would be chosen from another table's result.
for example :

select my_column from (select name_table from catalog where condition1)

now the internal select has to return the name of the table which is linked
to condition1 (let's say condition1_table), so after the inner select
has been performed, I should effectively have

select my_column from condition1_table

Is this possible?
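As a hedged sketch of the usual workaround: plain SQL cannot take a table name from a subquery, but a PL/pgSQL function can build the statement at run time with EXECUTE. The function name and the text return type below are only illustrative; catalog and name_table are taken from the question above.

```sql
-- Illustrative only: assumes my_column is text and catalog(name_table,
-- condition1) as described in the question.
CREATE OR REPLACE FUNCTION my_column_for(cond text) RETURNS SETOF text AS $$
DECLARE
    tbl text;
    r   record;
BEGIN
    -- Look up which table matches the condition ...
    SELECT name_table INTO tbl FROM catalog WHERE condition1 = cond;
    -- ... then build and run the outer select dynamically.
    FOR r IN EXECUTE 'SELECT my_column FROM ' || quote_ident(tbl) LOOP
        RETURN NEXT r.my_column;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT * FROM my_column_for('some condition');
```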

Second question: is there a system variable where the name of the current db
is stored? How can I read it?
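On the second question, there is no variable to set as such, but PostgreSQL has a built-in function that returns the name of the database the session is connected to:

```sql
SELECT current_database();
```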

Thanks in advance and best regards

Giu 



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


[ADMIN] get PGDATA as non-postgres user

2008-01-03 Thread Sofer, Yuval
Hi 

 

I need to get the PGDATA location when logged in as a non-postgres user.

I cannot see data_directory in the pg_settings view as a non-postgres user.

 

Any work around? Any other way to retrieve that information? 

 

If not, what is the quickest way to enable a non-postgres user to obtain
the PGDATA location?

Of course I could create a table as postgres and grant select on it to the
non-privileged user, but I wonder if there is a quicker solution.

 

Thanks, 

 

Yuval Sofer
BMC Software
CTMD Business Unit

DBA Team
972-52-4286-282
[EMAIL PROTECTED]

 



Re: [ADMIN] get PGDATA as non-postgres user

2008-01-03 Thread Peter Eisentraut
On Thursday, 3 January 2008, Sofer, Yuval wrote:
 I need to get PGDATA location when I logged in as non-postgres user.

Consider writing a security-definer function that retrieves the information.
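A minimal sketch of such a function, assuming it is created by a superuser (the function name is made up): with SECURITY DEFINER it runs with its owner's privileges, so any user granted EXECUTE can read the setting.

```sql
-- Run as a superuser; illustrative only.
CREATE FUNCTION get_data_directory() RETURNS text AS $$
    SELECT setting FROM pg_settings WHERE name = 'data_directory';
$$ LANGUAGE sql SECURITY DEFINER;

GRANT EXECUTE ON FUNCTION get_data_directory() TO PUBLIC;
```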

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



[ADMIN] Vacuum taking an age

2008-01-03 Thread Brian Modra
Hi,


I have a pretty live table: rows being inserted and updated more
than once per second, though far, far more inserts than updates.

There are currently over 3 million rows.

It has not been vacuumed for months.

Now a vacuum on that table takes hours, and I have not let it complete
because it keeps running into our daily busy time... but I've been
told it's necessary because the table is slowing down.

I have begun a cron job which will do a daily analyze, and am thinking
of a weekly vacuum...
Please advise on the best way to keep this table maintained, even if
it means regularly taking the service offline early on Sunday
morning...

thanks in advance
Brian

-- 
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 183 8059
6 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] Vacuum taking an age

2008-01-03 Thread Pascal Heraud
If you're using version 8.1 or later, you should consider using the
autovacuum daemon, which is the best way to do it:

http://www.postgresql.org/docs/8.1/static/maintenance.html
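As an illustrative postgresql.conf fragment for 8.1 (values are examples only; in 8.1 autovacuum also requires the row-level statistics collector to be on):

```
stats_start_collector = on
stats_row_level = on
autovacuum = on
autovacuum_naptime = 60        # seconds between checks of each database
```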

Pascal;
Brian Modra wrote:

  Hi,

  I have a pretty live table: rows being inserted and updated more
  than once per second, though far, far more inserts than updates.

  There are currently over 3 million rows.

  It has not been vacuumed for months.

  Now a vacuum on that table takes hours, and I have not let it complete
  because it stays running into our daily busy time... but I've been
  told it's necessary because the table is slowing down.

  I have begun a cron job which will do a daily analyze, and am thinking
  of a weekly vacuum...
  Please advise on the best way to keep this table maintained, even if
  it means regularly taking the service offline early on Sunday
  morning...

  thanks in advance
  Brian





Re: [ADMIN] MacOS X 10.5.1 and compiling for multiple Architectures

2008-01-03 Thread Chris Ruprecht

Hey Tom, Shane,

actually, PHP without the PG stuff builds just fine with x86_64 and i386
specified at the same time. I didn't bother with the PPC
architectures, since I don't have a Leopard-capable PPC box any more.
My old PowerBook G3/500 point-blank refuses to run anything higher
than 10.4.11 :).


When I tried the build, including the PG libs for both architectures,  
I got error messages for missing symbols only for the i386  
architecture (since I only built the PG libs for 64-bit). Everything  
else seemed to be present.


I'm good to run with the x86_64-only binary since my Apache is built
using that architecture. I'm also not too worried about any bugs in the
64-bit OS X code; this is for development only, no live data, ever. If
it crashes or mangles my data, so be it; backups must be good for
something, right? It would have been nice, though, to give the
completely built libphp5.so module to somebody and say 'here, run it'
regardless of what his hardware architecture supports.


best regards,
chris
--
chris ruprecht
network grunt and bit pusher extraordinaire





On Jan 3, 2008, at 00:41, Tom Lane wrote:


Chris Ruprecht [EMAIL PROTECTED] writes:

I am trying to build PG 8.3Beta4 for MacOS Leopard.
It works fine when using the default i386 architecture. However, this
is not an option as Apple, in their wisdom, have chosen to build
Apache as a 64-bit application and their PHP 5.2.4 module without PG
support as a Intel 32, Intel 64, PPC 32 and PPC 64 type module. For
those that don't know: Under MacOS, it is possible to build
executables that will run native on multiple architectures, all
contained in a single file - kind of weird, and pretty bloated.


You're not going to have a lot of luck building PG for those four
architectures from a single build run --- you really need a different
pg_config.h for each case.  The specific failure you're seeing seems
to stem from SIZEOF_DATUM not agreeing with sizeof(Datum), but you
can bet your bottom dollar that that's just the tip of the iceberg.

I'd be somewhat interested to see a build approach that supports this,
but I don't want to spend any of my own time on it.  I have a vague
suspicion that Apple probably built PHP four separate times and then
melded the executables after the fact.

regards, tom lane






Re: [ADMIN] MacOS X 10.5.1 and compiling for multiple Architectures

2008-01-03 Thread Shane Ambler

Chris Ruprecht wrote:

Hey Tom, Shane,


I'm good to run with the x86_64 only binary since my Apache is built 
using that architecture.


That's not a real reason to run pg in 64 bit.

Although you would need php and libpq to be the same architecture as
apache (the running architecture, if multiple are available) - the
postgresql server itself can be 32 bit. If apache is only 64 bit then
you will need a 64 bit libpq.



Apache calls functions in the php module, which then calls functions in
libpq - they must all have matching architectures available to do
this, which is why Apple supplies fat binary libs with OS X.



From there to postgresql is a network connection (or local socket), so
each program is just sending data to the other - they don't need to be
the same architecture, any more than you need a 64-bit SPARC browser to
connect to a 64-bit SPARC web server.




--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[ADMIN] Need some info on Postgresql

2008-01-03 Thread Suresh Gupta VG
Hi Team,

 

I am using Postgresql 7.4.2 on Solaris. There are about 30+ tables in
our database. I started to reindex the tables individually with reindex
table <table name>. Each one completed normally in less than 1 minute,
but one table is not responding at all even after 10 minutes. The
details of the table are...

1) It has 5 columns; one of them is the primary key.

2) There are only 5 rows in it.

3) This is the entry for the index on that table:

 Schema |     Name      | Type  | Owner |  Table
--------+---------------+-------+-------+----------
 public | currency_pkey | index | pgsql | currency

 

Can you please tell me why it is not executing this command on that
particular table, even though I am the owner of the DB? Can you please
suggest something? We have found lately that the database is taking more
time to execute any query. We do vacuum verbose analyze regularly at
low-traffic times.

Regards,

G. V. Suresh Gupta


-

Innovative Technology Solutions(ITS), Zensar Technologies 

Zensar Knowledge Park, Plot#5, MIDC IT Tower, 

Kharadi, Off Nagar Road, Pune - 411014

Landline :  +91-20-66453471   | +91-9890898688 

Email : [EMAIL PROTECTED] | website: www.zensar.com


Re: [ADMIN] best practices for separating data and logs

2008-01-03 Thread Alvaro Herrera
Chris Browne wrote:

  - Alternatively, to help establish common policies, for the less
frequent cases.
 
   env:   Sets up PATH, MAN_PATH, PGPORT with the values used by
  the backend in this init file

How does this work?  I have my own script to do things, and one of the
painful bits is setting up the environment.  What my script does is emit
the var definitions to stdout, so I have to run the script inside `` to
let the current shell pick up the definitions.
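The pattern described here can be sketched roughly like this (the helper name and values are made up): the helper prints shell assignments, and the caller evaluates its output so the definitions land in the current shell rather than a subshell.

```shell
# Hypothetical helper: prints environment settings for one cluster.
pg_env() {
    echo "export PGPORT=5433"
    echo "export PATH=/opt/pgsql-8.2/bin:\$PATH"
}

# The output must be eval'd; running the helper directly would set the
# variables only in a child process, where they are lost.
eval "$(pg_env)"
echo "$PGPORT"
```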

   logtail:  runs tail -f on the last log file for the cluster

Hmm, interesting.  Perhaps I oughta implement this.  Currently my script
starts the server by launching the postmaster in foreground, so there is
no need for this -- but I have frequently wished for an option to start
it in background, and be able to get at the log separately.

I use this script so frequently that I have also implemented bash
completion for it.  It is truly helpful.  (Of course, it deals with
multiple installations by getting the one to use as an argument, and
autocompletes that as well).

-- 
Alvaro Herrera                               http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] Need some info on Postgresql

2008-01-03 Thread Scott Marlowe
On Jan 3, 2008 5:33 AM, Suresh Gupta VG [EMAIL PROTECTED] wrote:

 Hi Team,

 I am using Postgresql 7.4.2 version on Solaris.

You need to update to 7.4.18, or whatever the latest 7.4 version is.
7.4.2 has known data-eating bugs, and if you value your data even a
little, you should update.  This is a relatively painless update, not
requiring a dump / restore.

 There are number of tables
 say about 30+ tables in our database. I started to reindex the tables
 individually. reindex table table name. All the queries executed
 normally with less than 1 minute of duration. But one table is not
 responding any thing even after 10 minutes. The details of the table are …

I've had large tables take well over 10 minutes to reindex on 7.4.
That's not necessarily a sign of a problem.  How many rows does this
table have?  Do you get any kind of error message, or does the database
just never return?  Have you tried vacuum full on this table instead?
Do the logs say anything about the reindex operation getting an error,
etc.?

 Can you please tell me why it not executing this command on the particular
 table and I am the owner of the DB? Can you please suggest some thing.

Is it really not executing?  Is it simply returning without a notice,
or hanging?

 We
 found these days, the database is taking more time to execute any query. We
 are doing vacuum Verbose analyze regularly at low traffic time.

You would do well to consider a migration to 8.2.5, or to 8.3 when it
comes out of beta / RC status.  7.4.x is getting rather old, and the
8.x series (8.0, 8.1, 8.2, and now 8.3) has had major improvements in
speed with each release.



Re: [ADMIN] Vacuum taking an age

2008-01-03 Thread Scott Marlowe
On Jan 3, 2008 6:48 AM, Brian Modra [EMAIL PROTECTED] wrote:
 Hi,


 I have a pretty live table: rows being inserted and updated more
 than once 1 per second, though far, far more inserts than updates.

 There are currently over 3 million rows.

 It has not been vacuumed for months.

How many rows per second?  1?  All of them?  It kinda makes a difference.

If it was 1 a second updated for 3 months, that's about 7 million dead
rows.  If it was all 3 million, then that's 7 million * 3 million dead
rows, also known as a whole bunch of rows.

Either way, you probably have a table so terribly bloated that a
regular vacuum will not help you in terms of speeding it up.  Regular
vacuums are like brushing your teeth three times a day.  If you've
forgotten for three months, brushing them once isn't likely to fix all
the cavities you've got.  Same thing here.  You'll either need a
vacuum full or a cluster.  Cluster is often faster.  Or you can try
selecting everything into a temp table, truncating the real table, and
inserting the data back in.  Truncation will remove all rows, dead or
otherwise.  The advantage is that it's often faster to truncate /
reload than it is to vacuum full.  If you have indexes, you might want
to drop them while re-inserting and then recreate them.
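The truncate-and-reload approach described above can be sketched like this (the table name is a placeholder; doing it inside one transaction means readers never see an empty table):

```sql
-- Illustrative only; drop/recreate indexes around the INSERT if the
-- table is heavily indexed.
BEGIN;
CREATE TEMP TABLE live_copy AS SELECT * FROM live_table;
TRUNCATE live_table;
INSERT INTO live_table SELECT * FROM live_copy;
COMMIT;
```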



Re: [ADMIN] Vacuum taking an age

2008-01-03 Thread Guido Neitzer

On 03.01.2008, at 05:48, Brian Modra wrote:

  I have a pretty live table: rows being inserted and updated more
  than once per second, though far, far more inserts than updates.

Not that busy ;-)

  It has not been vacuumed for months.

Not good.

  Now a vacuum on that table takes hours, and I have not let it complete
  because it stays running into our daily busy time... but I've been
  told it's necessary because the table is slowing down.

  I have begun a cron job which will do a daily analyze, and am thinking
  of a weekly vacuum...
  Please advise on the best way to keep this table maintained, even if
  it means regularly taking the service offline early on Sunday
  morning...


Two things you can consider:

1. Cluster the table on one of its indexes. This will be really
fast, but as far as I remember it is not transaction-safe in 8.2.x.

2. Use autovacuum to vacuum / analyze your database all the time. That
will keep the size small and the stats up to date.
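For point 1, the pre-8.3 form of the command looks like this (index and table names are placeholders; note CLUSTER takes an exclusive lock while it rewrites the table):

```sql
CLUSTER my_table_pkey ON my_table;
```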


cug

--
http://www.event-s.net




Re: [ADMIN] Vacuum taking an age

2008-01-03 Thread Brian Modra
Hi,
thanks for your reply.
The number of rows per second has been increasing rapidly, but it's
averaging about 1 row per second, with a far smaller number of updates.
So maybe there are not such a huge number of dead rows. I hope that a
normal vacuum will clean it up.
The total number of rows is about 3 million.

Last night, before I got your reply, I noticed that the number of
shared memory buffers was only 1000, so I increased shmmax; when I
next restart the server, its number of buffers will be 10000. The
server has 8GB of memory, so that will only be a small proportion of
its total memory.
I have not restarted postgres yet because a vacuum is still running.
Maybe I should kill that and restart postgres?

The reason I increased this is that I noticed that if I do a partial
count of rows (e.g. those inserted with a timestamp after midnight
last night), the first time takes about 17 seconds, and the
second time 1/4 of a second.

I started a vacuum on the table yesterday, and it's still running. I
guess that's because the table is live. I am pretty sure that if I take
it offline, the vacuum will complete relatively quickly. Am I
right? (I don't want to take it offline unless I really need to.)

On 04/01/2008, Scott Marlowe [EMAIL PROTECTED] wrote:
 On Jan 3, 2008 6:48 AM, Brian Modra [EMAIL PROTECTED] wrote:
  Hi,
 
 
  I have a pretty live table: rows being inserted and updated more
  than once 1 per second, though far, far more inserts than updates.
 
  There are currently over 3 million rows.
 
  It has not been vacuumed for months.

 How many rows per second?  1?  all of them?  Kinda makes a difference.

 If it was 1 a second updated for 3 months that's about 7million dead
 rows.  If it was all 3million, then that's 7million * 3million dead
 rows, also know as a whole bunch of rows.

 Either way, you probably have a table so terribly bloated that a
 regular vacuum will not help you in terms of speeding it up.  Regular
 vacuums are like brushing your teeth three times a day.  If you've
 forgotten for three months, brushing them once isn't likely to fix all
 the cavities you've got.  Same thing here.  You'll either need a
 vacuum full or a cluster.  Cluster is often faster.  Or you can try
 selecting everything into a temp table, truncating the real table, and
 inserting the data back in.  Truncation will remove all rows, dead or
 otherwise.  The advantage is that it's often faster to truncate /
 reload than it is to vacuum full.  If you have indexes, you might want
 to drop them while re-inserting and then recreated them.



-- 
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 183 8059
6 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa



Re: [ADMIN] When does VACUUM FULL not clean out all deleted data?

2008-01-03 Thread Tom Lane
James Cloos [EMAIL PROTECTED] writes:
 Right after the restore the db took up less than ten percent as much
 space as the backup.  (For the backup I stopped pg and used rsync on
 the /var/lib/postgresql/data directory.)

 Why was the db using that extra five plus gigs?

Smells like a hadn't-been-vacuuming-often-enough problem.

Please note that vacuum-full-once-in-a-long-while is *not* a substitute
for frequent-plain-vacuum, because VACUUM FULL isn't good at reducing
index space usage --- in fact it usually makes it worse.  Your numbers
didn't separate table vs index bloat, but I'm suspicious that a lot of
your problem was the latter.
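One way to separate table bloat from index bloat, sketched with placeholder relation names (pg_relation_size is available from 8.1 on):

```sql
SELECT pg_relation_size('my_table')      AS table_bytes,
       pg_relation_size('my_table_pkey') AS index_bytes;
```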

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend