[GENERAL] How do clients failover in hot standby/SR?

2012-02-01 Thread Herouth Maoz
We are looking at a replication solution aimed at high availability.

So we want to use PostgreSQL 9's streaming replication/hot standby. But I seem 
to be missing a very basic piece of information: suppose the primary is host1 
and the secondary is host2. Suppose that when host1 fails host2 detects that 
and creates the trigger file that causes the secondary to act as primary.

How do all clients, which have connection strings aimed at host1, know to
fail over and use host2?

Is there a good Internet resource for reading on this?

Thank you,
Herouth
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How do clients failover in hot standby/SR?

2012-02-01 Thread Andreas Kretschmer
Herouth Maoz hero...@unicell.co.il wrote:

 We are looking at a replication solution aimed at high availability.
 
 So we want to use PostgreSQL 9's streaming replication/hot standby.
 But I seem to be missing a very basic piece of information: suppose
 the primary is host1 and the secondary is host2. Suppose that when
 host1 fails host2 detects that and creates the trigger file that
 causes the secondary to act as primary.
 
 How do all clients, which have connection strings aimed at host1 know
 to fail over and use host2?

You can, for instance, use pgpool as a connection pooler. pgpool can
detect a failed node, create the trigger file, and then route clients
to the other server.

 
 Is there a good Internet resource for reading on this?

Google for pgpool, for instance. There are other solutions as well, such
as heartbeat (with a floating service IP).
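
A hedged sketch of what that setup looks like in pgpool-II's configuration
(parameter names follow the pgpool-II documentation; the hostnames and the
failover script path are placeholders, not anything from this thread):

```
# pgpool.conf fragment (illustrative only -- verify against your pgpool-II version)
backend_hostname0 = 'host1'        # current primary
backend_port0     = 5432
backend_hostname1 = 'host2'        # hot standby
backend_port1     = 5432
health_check_period = 10           # probe the backends every 10 seconds
# Run when a backend is detached; the script (hypothetical) would ssh to
# the standby and create the trigger file:
failover_command = '/usr/local/bin/failover.sh %d %h %p'
```

Clients then point their connection strings at pgpool's address instead of
host1, so a failover needs no client-side changes.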


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] Help speeding up a left join aggregate

2012-02-01 Thread Volodymyr Kostyrko

Nick wrote:

I have a pretty well tuned setup, with appropriate indexes and 16GB of
available RAM. Should this be taking this long? I forced it to not use
a sequential scan and that only knocked a second off the plan.

                                                        QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Hash Right Join  (cost=105882.35..105882.47 rows=3 width=118) (actual time=3931.567..3931.583 rows=4 loops=1)
   Hash Cond: (songs_downloaded.advertisement_id = a.id)
   ->  HashAggregate  (cost=105881.21..105881.26 rows=4 width=13) (actual time=3931.484..3931.489 rows=3 loops=1)
         ->  Seq Scan on songs_downloaded  (cost=0.00..95455.96 rows=1042525 width=13) (actual time=0.071..1833.680 rows=1034752 loops=1)
               Filter: (advertiser_id = 6553406)
   ->  Hash  (cost=1.10..1.10 rows=3 width=46) (actual time=0.050..0.050 rows=4 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 1kB
         ->  Seq Scan on advertisements a  (cost=0.00..1.10 rows=3 width=46) (actual time=0.037..0.041 rows=4 loops=1)
               Filter: (advertiser_id = 6553406)
 Total runtime: 3931.808 ms
(10 rows)


What indexes do you have? Can you show some?

I bet you need something like (advertiser_id, advertisement_id), because a
plain single-column index would not be ordered usefully for this
filter-then-group pattern.



SELECT a.id, sd.price, COALESCE(sd.downloads,0) AS downloads,
COALESCE(sd.download_revenue,0) AS download_revenue
FROM advertisements a
LEFT JOIN (SELECT advertisement_id, AVG(price) AS price, SUM(price) AS
download_revenue, COUNT(1) AS downloads FROM songs_downloaded WHERE
advertiser_id = 6553406 GROUP BY advertisement_id) AS sd ON a.id =
sd.advertisement_id
WHERE advertiser_id = 6553406
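
The two-column index suggested above would be, as a sketch (table and column
names taken from the quoted query; the index name is made up):

```sql
-- Illustrative: lets the planner walk the advertiser's rows already
-- ordered by advertisement_id instead of seq-scanning ~1M rows.
CREATE INDEX songs_downloaded_advertiser_ad_idx
    ON songs_downloaded (advertiser_id, advertisement_id);
```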


--
Sphinx of black quartz judge my vow.



Re: [GENERAL] Why Hard-Coded Version 9.1 In Names?

2012-02-01 Thread Bobby Dewitt
We get around this issue by creating a symbolic link called current that
points to the version of Postgres that we want our servers to use by
default:

ln -s /var/lib/pgsql/9.1 /var/lib/pgsql/current

The symbolic link is changed whenever we do an upgrade so it doesn't
interfere with anything that we may already have configured.

Thanks,
Bobby

On 1/31/12 8:14 AM, Marti Raudsepp ma...@juffo.org wrote:

On Tue, Jan 31, 2012 at 00:41, Jerry Richards
jerry.richa...@teotech.com wrote:
 I just installed postgreSQL 9.1 and noticed it hard-codes the folder
 /var/lib/pgsql/9.1 and it hard-codes the service name to be
postgresql91.

 Why is the hard-coded version included in the naming?

Note that this is done by Linux distributions, vanilla PostgreSQL
doesn't use version-specific paths.

The reason is that the PostgreSQL on-disk format is not
forward-compatible. In order to upgrade from one Postgres version to
the next, you need to have *both* versions installed at once. As
annoying as it is, version-specific paths are a pretty foolproof way to
enable that.

Regards,
Marti




Re: [GENERAL] parameter vacuum_defer_cleanup_age

2012-02-01 Thread Marti Raudsepp
On Mon, Jan 30, 2012 at 20:55, Tulio tu...@informidia.com.br wrote:
 I have 2 servers working with hot standby and streaming replication...
 and when we execute some rather large query it returns the message
 canceling statement due to statement timeout
 I want to know: how can I calculate a better value for
 vacuum_defer_cleanup_age in my case?

This error doesn't happen due to a recovery conflict, so
vacuum_defer_cleanup_age doesn't make any difference. The error
happens because the statement_timeout setting is configured somewhere
(maybe in per-database or per-user settings). Set it to 0 to disable the
statement timeout.
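
For example (a sketch; the database and role names are placeholders), you
can check where the setting comes from and clear any override:

```sql
SHOW statement_timeout;                        -- effective value in this session
SELECT * FROM pg_db_role_setting;              -- per-database / per-role overrides (9.0+)
ALTER DATABASE mydb RESET statement_timeout;   -- hypothetical database name
ALTER ROLE myuser RESET statement_timeout;     -- hypothetical role name
```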

Regards,
Marti



[GENERAL] Issue with CREATE EXTENSION tablefunc

2012-02-01 Thread Chris Travers
Hi all;

We have gotten a report from a user who is having issues with CREATE
EXTENSION tablefunc.  I figured I would ask for additional insight and
assistance at this point.

When the user tries to run CREATE EXTENSION tablefunc; the following occurs:

-bash-4.2$ dropdb ext_test
-bash-4.2$ createdb ext_test
-bash-4.2$ psql ext_test
psql (9.1.2)
Type "help" for help.

ext_test=# select version();
   version
-
 PostgreSQL 9.1.2 on x86_64-redhat-linux-gnu, compiled by gcc (GCC)
4.6.2 20111027 (Red Hat 4.6.2-1), 64-bit
(1 row)

ext_test=# CREATE EXTENSION tablefunc;
ERROR:  type "tablefunc_crosstab_2" already exists

This led me to conclude that we needed to CREATE EXTENSION FROM UNPACKAGED,
thinking this might be an upgrade issue.  However, no luck.

ext_test=# CREATE EXTENSION tablefunc FROM unpackaged;
ERROR:  function normal_rand(integer, double precision, double
precision) does not exist


What should be tried next?

Best Wishes,
Chris Travers


Re: [GENERAL] Issue with CREATE EXTENSION tablefunc

2012-02-01 Thread Merlin Moncure
On Wed, Feb 1, 2012 at 8:52 AM, Chris Travers chris.trav...@gmail.com wrote:
 Hi all;

 We have gotten a report from a user who is having issues with CREATE
 EXTENSION tablefunc.  I figured I would ask for additional insight and
 assistance at this point.

 When the user tries to run CREATE EXTENSION tablefunc; the following occurs:

 -bash-4.2$ dropdb ext_test
 -bash-4.2$ createdb ext_test
 -bash-4.2$ psql ext_test
 psql (9.1.2)
 Type help for help.

 ext_test=# select version();
                                                    version
 -
  PostgreSQL 9.1.2 on x86_64-redhat-linux-gnu, compiled by gcc (GCC)
 4.6.2 20111027 (Red Hat 4.6.2-1), 64-bit
 (1 row)

 ext_test=# CREATE EXTENSION tablefunc;
 ERROR:  type tablefunc_crosstab_2 already exists

 This lead me to conclude that we needed to CREATE EXTENSION FROM UNPACKAGED
 thinking this might be an upgrade issue.  However no luck.

 ext_test=# CREATE EXTENSION tablefunc FROM unpackaged;
 ERROR:  function normal_rand(integer, double precision, double
 precision) does not exist


 What should be tried next?

by any chance did you create the extension in the template1 database?

merlin



[GENERAL] Expanding psql variables

2012-02-01 Thread mephysto
Hello to everyone,
I'm trying to use variables in psql script to create some stored funtions.
The problem is that psql variables are not expanded if it is into a dollar
quoted string. This is my example:

\set my_schema foo

CREATE OR REPLACE FUNCTION foo() RETURNS VOID AS
$BODY$
SELECT * FROM :my_schema.my_table;
$BODY$
LANGUAGE sql;


In this manner I receive a syntax error near:


The question is: is there a manner to expand psql script variables inside
dollar quoted string?

Thanks in advance.

Mephysto
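
One commonly suggested workaround (a sketch, not tested here): psql never
expands variables inside dollar quotes, but \set concatenates its
arguments, so you can assemble the body first and then interpolate it as a
quoted literal with :'var'. The function below returns bigint rather than
void, since a SQL function declared to return void cannot end in a plain
SELECT:

```sql
\set my_schema foo
-- :my_schema expands here because it is outside any quotes:
\set body 'SELECT count(*) FROM ' :my_schema '.my_table;'
-- :'body' injects the variable as a properly quoted SQL literal:
CREATE OR REPLACE FUNCTION count_rows() RETURNS bigint AS :'body' LANGUAGE sql;
```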



Re: [GENERAL] Issue with CREATE EXTENSION tablefunc

2012-02-01 Thread Tom Lane
Merlin Moncure mmonc...@gmail.com writes:
 On Wed, Feb 1, 2012 at 8:52 AM, Chris Travers chris.trav...@gmail.com wrote:
 ext_test=# CREATE EXTENSION tablefunc;
 ERROR:  type tablefunc_crosstab_2 already exists
 
 This lead me to conclude that we needed to CREATE EXTENSION FROM UNPACKAGED
 thinking this might be an upgrade issue.  However no luck.
 
 ext_test=# CREATE EXTENSION tablefunc FROM unpackaged;
 ERROR:  function normal_rand(integer, double precision, double
 precision) does not exist

 by any chance did you create the extension in the template1 database?

The whole extension is not in template1, else CREATE FROM unpackaged
would have worked.  But evidently there are at least some conflicting
objects there.  I speculate that somebody installed a pre-9.1 version of
the extension in template1, realized their mistake, and then tried to
clean it out manually (piecemeal) instead of using the uninstall script.
And missed some things.

Anyway the solution is to connect to template1 and drop any cruft that's
lying around in it.
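
A minimal way to do that inspection (the object names below are inferred
from the error messages in this thread; check what the meta-commands
actually list before dropping anything):

```sql
\c template1
\dT tablefunc*        -- leftover types such as tablefunc_crosstab_2
\df normal_rand
\df crosstab*
-- then, for whatever cruft turns up, e.g.:
DROP TYPE IF EXISTS tablefunc_crosstab_2 CASCADE;
```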

regards, tom lane



Re: EXT :Re: [GENERAL] Intermittent occurrence of ERROR: could not open relation

2012-02-01 Thread Tom Lane
Nykolyn, Andy (AS) andrew.nyko...@ngc.com writes:
 8.4.what exactly, and did you update versions around the time this
 started happening?  I'm worried that this may represent a
 newly-introduced bug.  Can you provide a self-contained test case?
 It doesn't matter if it only fails occasionally, as long as we can
 keep running it till it does fail.

 It is version 8.4.1 and it has been that for almost 3 years.

8.4.1?  Well, the *first* thing you ought to do is update to 8.4.10
so we can see if this represents an already-fixed bug.  In a quick
look through the release notes I see at least two possibly related
bug fixes, and in any case there are a slew of known crash and data
corruption bugs you are vulnerable to.

regards, tom lane



Re: [GENERAL] Issue with CREATE EXTENSION tablefunc

2012-02-01 Thread Philip Rhoades

Tom,


On 2012-02-02 02:52, Tom Lane wrote:

 Merlin Moncure mmonc...@gmail.com writes:
 On Wed, Feb 1, 2012 at 8:52 AM, Chris Travers chris.trav...@gmail.com wrote:

 ext_test=# CREATE EXTENSION tablefunc;
 ERROR:  type "tablefunc_crosstab_2" already exists

 This led me to conclude that we needed to CREATE EXTENSION FROM UNPACKAGED
 thinking this might be an upgrade issue.  However no luck.

 ext_test=# CREATE EXTENSION tablefunc FROM unpackaged;
 ERROR:  function normal_rand(integer, double precision, double
 precision) does not exist

 by any chance did you create the extension in the template1 database?

 The whole extension is not in template1, else CREATE FROM unpackaged
 would have worked.  But evidently there are at least some conflicting
 objects there.  I speculate that somebody installed a pre-9.1 version of
 the extension in template1, realized their mistake, and then tried to
 clean it out manually (piecemeal) instead of using the uninstall script.
 And missed some things.

 Anyway the solution is to connect to template1 and drop any cruft that's
 lying around in it.


I am the user who has the situation - the background is:

- the previous installation on Fedora 14 x86_64 was PostgreSQL 8.x

- there were old (unused) versions of LedgerSMB

- during the move from Fedora 14 to Fedora 16 (x86_64), the data was 
dumped out of PG 8.x and restored to PG 9.1


I haven't done any manual messing around with template1 as far as I 
know...


Regards,

Phil.
--
Philip Rhoades

GPO Box 3411
Sydney NSW  2001
Australia
E-mail:  p...@pricom.com.au




[GENERAL] Server not starting problem

2012-02-01 Thread Pablo Fulco
Hi all, I'm very new to Postgres, and I'm having a problem with Postgres 8.1.

The thing is that the server won't start; when I checked the log it said:

 

Received fast shutdown request

Aborting any active transaction

FATAL: terminating connection due to administrator command

FATAL: terminating connection due to administrator command

FATAL: terminating connection due to administrator command

FATAL: terminating connection due to administrator command

FATAL: terminating connection due to administrator command

Shutting down

Database system is shut down

Logger shutting down

 

 

When I log into pgAdmin III the server appears with a red cross; when I
try to connect it says "server doesn't listen".

On the event viewer I have errors that say:

 

FATAL: syntax error in file c:/program
files/postgresql/8.1/data/postgresql.conf line 435, near token "MB"

 

In line 435 of the conf file I have shared_buffers = 256MB

 

When I check the running services, the PostgreSQL service is stopped;
when I try to start it, it says it started and then stopped because it had
no work to do.

 

Any suggestions? I would really appreciate it

 

Pablo



Re: [GENERAL] Server not starting problem

2012-02-01 Thread Guillaume Lelarge
On Wed, 2012-02-01 at 11:44 -0200, Pablo Fulco wrote:
 Hi all, im very new to postgres, and im having a problem with postgres 8.1.
 
 The thing is that the server wont start, when I checked the log it said:
 
  
 
 Received fast shutdown request
 
 Aborting any active transaction
 
 FATAL: terminating connection due to administrator command
 
 FATAL: terminating connection due to administrator command
 
 FATAL: terminating connection due to administrator command
 
 FATAL: terminating connection due to administrator command
 
 FATAL: terminating connection due to administrator command
 
 Shutting down
 
 Database system is shutdown
 
 Logger shutting down
 
  
 
 
 
 When I log into pgAdmin III the server appears with a red cross; when I
 try to connect it says "server doesn't listen"
 
 On the event viewer I have errors that say:
 
  
 
 FATAL: syntax error in file c:/program
 files/postgresql/8.1/data/postgresql.conf line 435, near token "MB"
 
  
 
 In line 435 of the conf file I have shared_buffers = 256MB
 
  
 
 When I check the running services, the PostgreSQL service is stopped;
 when I try to start it, it says it started and then stopped because it
 had no work to do.
 
  
 
 Any suggestions? I would really appreciate it
 

You cannot use units in the configuration file with the 8.1 release.
Units are only accepted starting with the 8.2 release.
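
So on 8.1 the value has to be given as a bare number of buffers (each 8 kB,
assuming the default block size); roughly 256 MB would be:

```
# postgresql.conf for 8.1: no unit suffixes; shared_buffers counts 8kB pages
# 32768 * 8kB = 256MB
shared_buffers = 32768
```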


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
PostgreSQL Sessions #3: http://www.postgresql-sessions.org




Re: [GENERAL] Question about (probably wrong) index scan cost for conditional indexes

2012-02-01 Thread Alex Lai

Tom Lane wrote:

Maxim Boguk maxim.bo...@gmail.com writes:
  

I know there is an issue with statistics over intarrays (it has been there
a very long time and sometimes complicates things a lot).



  

However,  the 100x cost difference between:
SELECT * from test order by id limit 100;  (over primary key (id) btree index)
Limit  (cost=0.00..3.43 rows=100 width=37)
vs
SELECT * from test where sections && '{2}' order by value limit 100;
(over test_value_in2section_key on test(value) where sections &&
'{2}'   btree index)
Limit  (cost=0.00..539.29 rows=100 width=37)
seems wrong for me.



[ shrug... ]  It really is the fault of the bad rowcount estimate.
The cost estimates for the complete indexscans are reasonable.  However,
in one case you've got LIMIT thinking that it will fetch the first 100
out of 100 index entries, so it divides the indexscan cost estimate
by 1, and gets something reasonably accurate.  In the other case,
LIMIT thinks it's going to fetch the first 100 out of 1000 index
entries, so it divides the indexscan cost estimate by 10, and comes out
with something not very accurate.  If the rowcount estimate for && had
been correct, those numbers would be much more alike.

  

Both queries perform the exact same task: fetch 100 entries from
the table based on the ideally suitable index (no post
processing/filtering done at all... just return 100 sorted tuples
based on a single index scan).



Well, you know that and I know that, but the exposed cost and rowcount
estimates for the IndexScan plan node imply something else entirely:
that the cost-per-tuple-fetched is a lot higher in the one case than
the other.  The LIMIT estimator has no reason, or method, to second
guess that.

  

And even if I drop the intarray index completely, then I still have a
wrong plan (bitmap scan + sort), because the planner cost for the index
scan over the conditional index is 100x more than it should be
(e.g. there is still an issue even in the absence of the intarray index).



Yeah, because it's not about the index, it's about the selectivity of
the && operator.  That estimate is wrong regardless of whether there
are any indexes involved.

  

Is absence of frequency statistics over intarrays somehow linked to
the wrong planner cost estimates for conditional index scan?



Well, we lack both the statistics and an operator selectivity function
that would know what to do with them.  Just a small matter of
programming ...

regards, tom lane

  

Tom,

I had the same situation in one of my queries.
Using a subquery sped it up almost 100 times.

explain analyse
select FileId as FileId, ESDT as ESDT, 1 as Position
from V_FileMeta_L3
where Archiveset = 61000 and ESDT = 'ESDT123'
  and Source = 'SOURCE1234'
  and (
        (StartTime between '2012-01-28 05:59:57.00Z'::timestamp - '-135 minutes'::interval
                       and '2012-01-28 07:41:27.00Z'::timestamp + '100 days'::interval)
     or (EndTime between '2012-01-28 05:59:57.00Z'::timestamp - '-135 minutes'::interval
                     and '2012-01-28 07:41:27.00Z'::timestamp + '100 days'::interval)
      )
order by starttime
limit 1;

 Limit  (cost=0.00..15.20 rows=1 width=22) (actual time=200.048..200.048 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..117596.32 rows=7736 width=22) (actual time=200.046..200.046 rows=1 loops=1)
         ->  Index Scan using ak_filemeta_l3_esdt_starttime_endtime on filemeta_l3 b  (cost=0.00..77200.55 rows=7736 width=22) (actual time=199.986..199.989 rows=2 loops=1)
               Index Cond: ((esdt)::text = 'ROLPT'::text)
               Filter: (((source)::text = 'OMPS-NPP'::text) AND (((starttime >= '2012-01-28 08:14:57'::timestamp without time zone) AND (starttime <= '2012-05-07 07:41:27'::timestamp without time zone)) OR ((endtime >= '2012-01-28 08:14:57'::timestamp without time zone) AND (endtime <= '2012-05-07 07:41:27'::timestamp without time zone))))
         ->  Index Scan using pk_filemeta_archiveset on filemeta_archiveset a  (cost=0.00..5.21 rows=1 width=4) (actual time=0.025..0.025 rows=0 loops=2)
               Index Cond: ((a.fileid = b.fileid) AND (a.archiveset = 61000))
 Total runtime: 200.102 ms
(8 rows)

explain analyse
select FileId as FileId, ESDT as ESDT, 1 as Position
from V_FileMeta_L3
where FileId in (select fileid from V_FileMeta_L3
                 where Archiveset = 61000 and ESDT = 'ESDT123'
                   and Source = 'SOUCE1234'
                   and (
                         (StartTime between '2012-01-28 05:59:57.00Z'::timestamp - '-135 minutes'::interval
                                        and '2012-01-28 07:41:27.00Z'::timestamp + '100 days'::interval)
                      or (EndTime between '2012-01-28

Re: [GENERAL] pg_dump -s dumps data?!

2012-02-01 Thread Dimitri Fontaine
Hi,

Sorry to be late in the thread, I'm too busy right now.  Cédric called
it to my immediate attention though.

Martijn van Oosterhout klep...@svana.org writes:
 Perhaps a better way of dealing with this is providing a way of dumping
 extensions explicitly. Then you could say:

 pg_dump --extension=postgis -s

That's something I'm working on in this commit fest under the “inline
extensions” topic, and we should have that facility in 9.2 barring major
obstacles (consensus has been reached).

Tom Lane t...@sss.pgh.pa.us writes:
 On Mon, Jan 30, 2012 at 11:18 PM, Tom Lanet...@sss.pgh.pa.us  wrote:
 What's not apparent to me is whether there's an argument for doing more
 than that.  It strikes me that the current design is not very friendly
 towards the idea of an extension that creates a table that's meant
 solely to hold user data --- you'd have to mark it as config which
 seems a bit unfortunate terminology for that case.  Is it important to
 do something about that, and if so what?

 My thought exactly --- maybe it's only a minor cosmetic issue that will
 affect few people, or maybe this will someday be a major use-case.
 I don't know.  I was hoping Dimitri had an opinion.

So, being able to stuff data into an extension has been made possible to
address two use cases:

 - postgis
 - (sql only) data extensions

The former is very specific and as we didn't hear back from them I guess
we addressed it well enough, the latter is still WIP. It's about being
able to ship data as an extension (think timezone updates, geo ip, bank
cards database, exchange rates, etc). You need to be able to easily ship
those (CSV isn't the best we can do here, as generally it doesn't
include the schema nor the COPY recipe that can be non-trivial) and to
easily update those.

The case for a table that is partly user data and partly extension data
is very thin, I think that if I had this need I would use inheritance
and a CHECK(user_data is true/false) constraint to filter the data.

So I sure would appreciate being able to call that data rather than
config, and to mark any table at once. If that doesn't need any pg_dump
stretching I think providing that in 9.2 would be great.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [GENERAL] Issue with CREATE EXTENSION tablefunc

2012-02-01 Thread Tom Lane
Philip Rhoades p...@pricom.com.au writes:
 On 2012-02-02 02:52, Tom Lane wrote:
 Anyway the solution is to connect to template1 and drop any cruft 
 that's lying around in it.

 I haven't done any manual messing around with template1 as far as I 
 know . .

Well, the behavior you describe indicates pretty strongly that there are
some non-factory-standard objects in template1.  If the database is a
few years old, that's not exactly hard to believe --- all it takes is
one time having connected to the wrong place and created some stuff.
Just go have a look and get rid of what you find ...

regards, tom lane



[GENERAL] Auditing, DDL and DML in same SQL Function

2012-02-01 Thread Christian Ramseyer
Hello list

I'm trying to build a little trigger-based auditing for various web
applications. They have many users in the application layer, but they
all use the same Postgres DB and DB user.

So I need some kind of session storage to save this application level
username for usage in my triggers, which AFAIK doesn't exist in
Postgres. Googling suggested to use a temporary table to achieve
something similar.

Question 1: Is this really the right approach to implement this, or are
there other solutions, e.g. setting application_name to user@application
and using this in the triggers or similar workarounds?

On to question 2:

So now I was trying this:

create or replace function audit_init(text, text) returns void as $$

    create temporary table application_session (
        user text,
        application text
    ) with ( oids = false);

    insert into application_session
        ( user, application) values ($1, $2);

$$
language sql volatile;

Which unfortunately can't be created or executed, as it says:

ERROR:  relation application_session does not exist
LINE 8: insert into application_session (user, application) ...

When I manually create the temporary table first, I can create the
function, but then when launching it in a new session that doesn't have
the table yet the error is the same.

If I split it up in two functions, one with the insert and one with the
create, it works fine. So apparently the objects in the DML must be
available at parse time of the function body. Is there an easy way
around this? Optimally, I'd just have my applications perform a single
call after connecting, e.g. audit_init('USERNAME', 'Name of application').


Thanks for your help.
Christian

PS: I'm aware that this solution falls flat on its face when the
applications are using persistent connections, pools etc, but this isn't
the case here. It's all straight and unshared Perl DBI-connect or PHP
pg_connect().





Re: [GENERAL] Auditing, DDL and DML in same SQL Function

2012-02-01 Thread Scott Marlowe
On Wed, Feb 1, 2012 at 3:29 PM, Christian Ramseyer r...@networkz.ch wrote:
 Hello list

 I'm trying to build a little trigger-based auditing for various web
 applications. They have many users in the application layer, but they
 all use the same Postgres DB and DB user.

 So I need some kind of session storage to save this application level
 username for usage in my triggers, which AFAIK doesn't exist in
 Postgres. Googling suggested to use a temporary table to achieve
 something similar.

 Question 1: Is this really the right approach to implement this, or are
 there other solutions, e.g. setting application_name to user@application
 and using this in the triggers or similar workarounds?

 On to question 2:

 So now I was trying this:

 create or replace function audit_init(text, text) returns void as $$

    create temporary table application_session (
        user text,
        application text
    ) with ( oids = false);

   insert into application_session
      ( user, application)  values ($1, $2);

 $$
 language sql volatile;

 Which unfortunately can't be created or executed, as it says:

 ERROR:  relation application_session does not exist
 LINE 8:     insert into application_session (user, application) ...

 When I manually create the temporary table first, I can create the
 function, but then when launching it in a new session that doesn't have
 the table yet the error is the same.

 If I split it up in two functions, one with the insert and one with the
 create, it works fine. So apparently the objects in the DML must be
 available at parse time of the function body. Is there an easy way
 around this? Optimally, I'd just have my applications perform a single
 call after connecting, e.g. audit_init('USERNAME', 'Name of application').

I think if you build the query as a string and EXECUTE it it will
work.  But I'm not guaranteeing it.
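
A sketch of that in PL/pgSQL (untested; EXECUTE ... USING needs 8.4 or
later, and the column is renamed here because "user" is a reserved word):

```sql
create or replace function audit_init(p_user text, p_app text)
returns void as $$
begin
    -- Both statements go through EXECUTE, so neither is parsed until the
    -- function runs and the temp table need not exist at creation time.
    execute 'create temporary table application_session (
                 app_user text,
                 application text
             )';
    execute 'insert into application_session (app_user, application)
             values ($1, $2)' using p_user, p_app;
end;
$$ language plpgsql volatile;
```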



Re: [GENERAL] Auditing, DDL and DML in same SQL Function

2012-02-01 Thread Scott Marlowe
On Wed, Feb 1, 2012 at 4:27 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Wed, Feb 1, 2012 at 3:29 PM, Christian Ramseyer r...@networkz.ch wrote:
 Hello list

 I'm trying to build a little trigger-based auditing for various web
 applications. They have many users in the application layer, but they
 all use the same Postgres DB and DB user.

 So I need some kind of session storage to save this application level
 username for usage in my triggers, which AFAIK doesn't exist in
 Postgres. Googling suggested to use a temporary table to achieve
 something similar.

 Question 1: Is this really the right approach to implement this, or are
 there other solutions, e.g. setting application_name to user@application
 and using this in the triggers or similar workarounds?

 On to question 2:

 So now I was trying this:

 create or replace function audit_init(text, text) returns void as $$

    create temporary table application_session (
        user text,
        application text
    ) with ( oids = false);

   insert into application_session
      ( user, application)  values ($1, $2);

 $$
 language sql volatile;

 Which unfortunately can't be created or executed, as it says:

 ERROR:  relation application_session does not exist
 LINE 8:     insert into application_session (user, application) ...

 When I manually create the temporary table first, I can create the
 function, but then when launching it in a new session that doesn't have
 the table yet the error is the same.

 If I split it up in two functions, one with the insert and one with the
 create, it works fine. So apparently the objects in the DML must be
 available at parse time of the function body. Is there an easy way
 around this? Optimally, I'd just have my applications perform a single
 call after connecting, e.g. audit_init('USERNAME', 'Name of application').

 I think if you build the query as a string and EXECUTE it it will
 work.  But I'm not guaranteeing it.

Note that you might have to build both queries and EXECUTE them to make it work.



Re: [GENERAL] Issue with CREATE EXTENSION tablefunc

2012-02-01 Thread Chris Travers
On Wed, Feb 1, 2012 at 1:31 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Philip Rhoades p...@pricom.com.au writes:
  On 2012-02-02 02:52, Tom Lane wrote:
  Anyway the solution is to connect to template1 and drop any cruft
  that's lying around in it.

  I haven't done any manual messing around with template1 as far as I
  know . .

 Well, the behavior you describe indicates pretty strongly that there are
 some non-factory-standard objects in template1.  If the database is a
 few years old, that's not exactly hard to believe --- all it takes is
 one time having connected to the wrong place and created some stuff.
 Just go have a look and get rid of what you find ...


What about running the 9.0 uninstall_tablefunc.sql?

Best Wishes,
Chris Travers


regards, tom lane



[GENERAL] Puzzling full database lock

2012-02-01 Thread Christopher Opena
Hello folks,

We've been running into some very strange issues of late with our
PostgreSQL database(s).  We have an issue where a couple of queries push
high CPU on a few of our processors and the entire database locks (reads,
writes, console cannot be achieved unless the high CPU query procs are
killed).  Further investigation shows ~59% total cpu usage (we have 16
total cores), low io, and mid-to-low memory usage (we have 74GB of memory,
shared_buffers=16GB).  We had previously seen some high io problems but
those turned out to be unconnected and ultimately solved, yet we are still
seeing a complete lock of the DB occasionally as previously described.

The queries themselves are not any different than normal usage on other
databases; they are pulling back a little more data but there's nothing
that stands out about them as far as query construction.

One thing that we aren't sure of is whether or not we are running into a
general connection pooling issue.  Our typical number of postgresql
processes fluctuates between 1,400 and 1,600 - most of which are idle - as
we have a number of application servers all connecting to a central
read/write master (the master replicates out to a secondary via streaming
replication).  We have max_connections set to 3,000 after tweaking some
kernel memory parameters so at least we know we aren't exceeding that, but
is there a practical real world limit or issue with setting this too high?

Ultimately, the problem we're seeing is a full read/write lock on a system
that is apparently at medium usage levels once we got rid of our high io
red herring.  Honestly I'm a little stumped as to where to look next; is
there some specific metric I might be missing here?
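
A sketch of the kind of check that can answer this during an event (using the 9.0-era catalogs, where pg_stat_activity still exposes procpid, waiting, and current_query):

```sql
-- Which backends are waiting on locks, and what is everyone running?
SELECT a.procpid,
       a.waiting,
       a.current_query,
       now() - a.query_start AS runtime
FROM pg_stat_activity a
WHERE a.waiting OR a.current_query <> '<IDLE>'
ORDER BY runtime DESC;
```

If the "locked" backends show waiting = true, pg_locks will identify what they are queued behind; if they show waiting = false, the stall is outside the lock manager.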

Any help is greatly appreciated,
-Chris.


Re: [GENERAL] Puzzling full database lock

2012-02-01 Thread Carlos Mennens
On Wed, Feb 1, 2012 at 7:38 PM, Christopher Opena counterv...@gmail.com wrote:
 Hello folks,

 We've been running into some very strange issues of late with our PostgreSQL
 database(s).  We have an issue where a couple of queries push high CPU on a
 few of our processors and the entire database locks (reads, writes, console
 cannot be achieved unless the high CPU query procs are killed).  Further
 investigation shows ~59% total cpu usage (we have 16 total cores), low io,
 and mid-to-low memory usage (we have 74GB of memory, shared_buffers=16GB).

Just out of curiosity, what OS are you running?



Re: EXT :Re: [GENERAL] Intermittent occurrence of ERROR: could not open relation

2012-02-01 Thread Adrian Klaver
On Wednesday, February 01, 2012 4:55:46 am Nykolyn, Andy (AS) wrote:

 
 Tom,
 
 It is version 8.4.1, and it has been at that version for almost 3 years.  I
 have attached a script that will create and load the tables, as well as the
 stored procedure required to run the case that sometimes causes this error.
  As I stated, it happens on different types of DML statements, but always
 on a temp table.  The last line of the script contains the call to the
 stored procedure where the error has occurred most often.  The line it
 usually happened on was the create temporary table t_bitgrid as select *
 from bitgrid.  Please let me know if you need any more information.
 Greatly appreciated.

Well, I have been running the function using the data you sent against both an 
8.4.1 and an 8.4.9 instance on and off for a good part of the day. At this point 
we are talking many thousands of runs. In neither case have I seen an error. So 
either I am incredibly lucky (I wish) or something is going on that is unique to 
your environment.  At this point I am not quite sure where to go, other than to 
say: do what Tom recommends, and upgrade to 8.4.10.


 
 Andy Nykolyn
 Northrop Grumman

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Puzzling full database lock

2012-02-01 Thread Christopher Opena
It's CentOS 5.5, PostgreSQL version 9.0.4 (x86_64).

On Wed, Feb 1, 2012 at 4:44 PM, Carlos Mennens carlos.menn...@gmail.comwrote:

 On Wed, Feb 1, 2012 at 7:38 PM, Christopher Opena counterv...@gmail.com
 wrote:
  Hello folks,
 
  We've been running into some very strange issues of late with our
 PostgreSQL
  database(s).  We have an issue where a couple of queries push high CPU
 on a
  few of our processors and the entire database locks (reads, writes,
 console
  cannot be achieved unless the high CPU query procs are killed).  Further
  investigation shows ~59% total cpu usage (we have 16 total cores), low
 io,
  and mid-to-low memory usage (we have 74GB of memory,
 shared_buffers=16GB).

 Just out of curiosity, what OS are you running?



Re: [GENERAL] Puzzling full database lock

2012-02-01 Thread Carlos Mennens
On Wed, Feb 1, 2012 at 7:51 PM, Christopher Opena counterv...@gmail.com wrote:
 It's CentOS 5.5, PostgreSQL version 9.0.4 (x86_64).

That seems extremely bleeding edge for CentOS. Did you compile this
package from source RPM or some 3rd party package maintainer for
PostgreSQL?



Re: [GENERAL] Puzzling full database lock

2012-02-01 Thread Alan Hodgson
 On Wed, Feb 1, 2012 at 7:38 PM, Christopher Opena counterv...@gmail.com 
wrote:
  Hello folks,
  
  We've been running into some very strange issues of late with our
  PostgreSQL database(s).  We have an issue where a couple of queries
  push high CPU on a few of our processors and the entire database locks
  (reads, writes, console cannot be achieved unless the high CPU query
  procs are killed).  Further investigation shows ~59% total cpu usage
  (we have 16 total cores), low io, and mid-to-low memory usage (we have
  74GB of memory, shared_buffers=16GB).

Define low I/O. The only things I've ever seen interfere with console access  
are running out of memory or excessive disk i/o. If you're seeing even 6-12% 
iowait on a 16 core machine you might very well have swamped your available 
disk I/O.




Re: [GENERAL] restart server on Lion

2012-02-01 Thread Scott Frankel

Hi all,

Problem resolved.  The postgres server now starts automatically when my OSX 
Lion machine reboots.  Hopefully the following info will be useful for anyone 
contemplating a macports installation of PostgreSQL.

The biggest impediment to getting the relaunch behavior I expected was from 
installing the postgresql84 package, rather than postgresql84-server.  The 
latter package contains the LaunchDaemon wrapper script that will ultimately 
relaunch postgres on reboot; it also apparently corrects the postgres username 
clobbering that Apple provides with PG on Lion.  Furthermore, it prints 
post-installation and db initialization instructions to the shell, which 
obviates having to glean that info from web searches ;)
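
For anyone following along, the relevant commands are typically along these lines (the plist name is assumed from the port name; verify against your own installation):

```shell
# Assumed names for the MacPorts postgresql84-server port.
sudo port load postgresql84-server
# or, equivalently, via launchd directly:
sudo launchctl load -w \
  /Library/LaunchDaemons/org.macports.postgresql84-server.plist
```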

Note that in my installation, I cleverly broke the LaunchDaemon wrapper's 
ability to relaunch postgres by making a minor change to MacPorts' default 
database directory structure. It turns out their postgres initdb instructions 
reflect paths that are hard-wired in the wrapper. Who knew?

I run PostgreSQL in a closed environment, for development and testing purposes, 
on my laptop, so installing a *-server package seemed like overkill.  Quite the 
contrary: that's exactly what I should have done from the beginning.

Hope this helps -
Scott



On Jan 30, 2012, at 2:03 PM, Scott Frankel wrote:

 
 Hi M,
 
 
 On Jan 30, 2012, at 11:46 AM, A.M. wrote:
 
 
 On Jan 30, 2012, at 2:40 PM, Scott Frankel wrote:
 
 
 Hi all,
 
 What's the best/correct way to cause the PostgreSQL server to startup 
 automatically when rebooting on OSX 10.7 Lion?  
 
 I'm using a macports install of postgres 8.4 and went through a couple 
 grueling days, sudo'd up to my eyeballs, to restore the postgres user and 
 have a working installation.
 
 To start the service, I'm currently invoking this on the cmd-line:
 
 sudo su postgres -c /opt/local/lib/postgresql84/bin/pg_ctl -D 
 /opt/local/var/postgresql84/defaultdb -l 
 /opt/local/var/postgresql84/defaultdb/data/logfile.txt start
 
 That's pretty cumbersome for each reboot.  I've also seen references to 
 manually invoking this on the cmd-line:
 
 sudo serveradmin start postgres
 
 But that yields postgres:error = CANNOT_LOAD_BUNDLE_ERR
 
 Is there an /etc or OSX-specific solution people are using for restarts?  
 My PG 8.3 server restarted automagically on OSX 10.5.  While I don't recall 
 setting up anything specifically to make that happen, memory fades...
 
 MacPorts includes a launchd plist to handle this. (Perhaps launchd is the 
 keyword you need to search.)
 
 Aha!  Nice to know which tree to bark up ;)
 
 
 /Library/LaunchDaemons/org.macports.postgresql90-server.plist (for 
 PostgreSQL 9.0, of course)
 http://od-eon.com/blogs/calvin/os-x-lion-postgresql/
 
 I installed macports: postgresql84 @8.4.10_0 and there's no trace of a 
 postgres launch daemon plist file having been installed on my machine.  My 
 best guess at this point is that the plist file may only come with the 
 postgresql84-server @8.4.10 port.  
 
 I'll test that theory tomorrow and keep this list posted.
 
 Thanks!
 Scott
 
 
 
 
 You can adjust the script to your liking.
 
 Cheers,
 M


Re: [GENERAL] Puzzling full database lock

2012-02-01 Thread Christopher Opena
It was installed from pgrpms.org's repository.

On Wed, Feb 1, 2012 at 4:55 PM, Carlos Mennens carlos.menn...@gmail.comwrote:

 On Wed, Feb 1, 2012 at 7:51 PM, Christopher Opena counterv...@gmail.com
 wrote:
  It's CentOS 5.5, PostgreSQL version 9.0.4 (x86_64).

 That seems extremely bleeding edge for CentOS. Did you compile this
 package from source RPM or some 3rd party package maintainer for
 PostgreSQL?



Re: [GENERAL] Puzzling full database lock

2012-02-01 Thread Ondrej Ivanič
Hi,

On 2 February 2012 11:38, Christopher Opena counterv...@gmail.com wrote:
 We've been running into some very strange issues of late with our PostgreSQL
 database(s).  We have an issue where a couple of queries push high CPU on a
 few of our processors and the entire database locks (reads, writes, console
 cannot be achieved unless the high CPU query procs are killed).  Further
 investigation shows ~59% total cpu usage (we have 16 total cores), low io,
 and mid-to-low memory usage (we have 74GB of memory, shared_buffers=16GB).

I think 16GB is too much. We started with 9GB (16 cores, 80GB RAM,
SAN) and then experimented with a lower value (6GB), but never used it in
production because we switched to a different database / storage
technology. Anyway, overall CPU utilisation was lower with 6GB.

If CPU util is high because of IO waits then it might be worth playing
with dirty_background_ratio and dirty_ratio. The problem is that the
value is a percentage and you have 74GB. CentOS has 10% and 40% as the
default values for dirty_background_ratio and dirty_ratio respectively.
10% of 74GB is 7.4GB, and there is no storage controller with 7.4GB of
cache, so you get IO waits (and high load). Writes will back up until
you hit the 40% hard limit (vm.dirty_ratio), which is even worse (~30GB
to flush). I think you should try lowering both. For example, try 1 and 10
for vm.dirty_background_ratio and vm.dirty_ratio respectively.
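
For reference, a sketch of how those values could be applied at runtime (illustrative only; persist them in /etc/sysctl.conf once tested on the actual workload):

```shell
# Illustrative only -- lower the dirty-page thresholds as suggested above.
sysctl -w vm.dirty_background_ratio=1   # background writeback from ~0.74GB
sysctl -w vm.dirty_ratio=10             # hard limit at ~7.4GB of dirty pages
```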

 One thing that we aren't sure of is whether or not we are running into a
 general connection pooling issue.  Our typical number of postgresql
 processes fluctuates between 1,400 and 1,600 - most of which are idle - as
 we have a number of application servers all connecting to a central
 read/write master (the master replicates out to a secondary via streaming
 replication).  We have max_connections set to 3,000 after tweaking some kernel
 memory parameters so at least we know we aren't exceeding that, but is there
 a practical real world limit or issue with setting this too high?

I would use a connection pooler like pgpool-II. It can add transparent
failover and you don't need max_connections set so high (plus the parallel
query feature could be useful).

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)



Re: [GENERAL] Puzzling full database lock

2012-02-01 Thread Christopher Opena
Do you mean 6-12% of total iowait, or per cpu?  Our average iowait in the
last week is 34.31% of a total 1600% with an average idle of 1451.76%.  Our
iowait *does* spike occasionally (today it went up to 148.01%) but it
doesn't coincide with the lock happening.  At the time of the lock we were
at 10.58% iowait, which is quite a bit below our average.

Thanks,
-Chris.

On Wed, Feb 1, 2012 at 4:55 PM, Alan Hodgson ahodg...@simkin.ca wrote:


 Define low I/O. The only things I've ever seen interfere with console
 access
 are running out of memory or excessive disk i/o. If you're seeing even
 6-12%
 iowait on a 16 core machine you might very well have swamped your available
 disk I/O.





Re: [GENERAL] Puzzling full database lock

2012-02-01 Thread Alan Hodgson
On Wednesday, February 01, 2012 05:13:15 PM Christopher Opena wrote:
 Do you mean 6-12% of total iowait, or per cpu?  Our average iowait in the
 last week is 34.31% of a total 1600% with an average idle of 1451.76%.  Our
 iowait *does* spike occasionally (today it went up to 148.01%) but it
 doesn't coincide with the lock happening.  At the time of the lock we were
 at 10.58% iowait, which is quite a bit below our average.
 

Total, but it doesn't sound like that's the problem.




Re: [GENERAL] Puzzling full database lock

2012-02-01 Thread Christopher Opena
Yeah, it's strange because we definitely have periods of high iowait but
this is not when the locks are happening.  If I could correlate it directly
to that it would be so much easier.  Thanks again for the response!

On Wed, Feb 1, 2012 at 8:42 PM, Alan Hodgson ahodg...@simkin.ca wrote:

 On Wednesday, February 01, 2012 05:13:15 PM Christopher Opena wrote:
  Do you mean 6-12% of total iowait, or per cpu?  Our average iowait in the
  last week is 34.31% of a total 1600% with an average idle of 1451.76%.
  Our
  iowait *does* spike occasionally (today it went up to 148.01%) but it
  doesn't coincide with the lock happening.  At the time of the lock we
 were
  at 10.58% iowait, which is quite a bit below our average.
 

 Total, but it doesn't sound like that's the problem.




[GENERAL] [GENERA]: Postgresql-9.1.1 synchronous replication issue

2012-02-01 Thread Venkat Balaji
Hello,

I was testing the Postgres-9.1.1 synchronous streaming replication on our
UAT system.

Without synchronous replication, everything was working fine.

But, when I enabled synchronous_standby_names='*', CREATE TABLE
started hanging for a long time.

When I pressed Ctrl+C I got the following message -

Cancel request sent
WARNING:  canceling wait for synchronous replication due to user request
DETAIL:  The transaction has already committed locally, but might not have
been replicated to the standby.
CREATE TABLE

Can someone please help us ?

Thanks
VB
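
For context, a sketch of the primary-side settings involved (parameter names as shipped in 9.1; values illustrative):

```
# postgresql.conf on the primary -- sketch only
synchronous_standby_names = '*'   # any connected standby may acknowledge
synchronous_commit = on           # each commit waits for that acknowledgement

# If no standby is connected and streaming, every commit -- including DDL
# such as CREATE TABLE -- blocks waiting for an acknowledgement that never
# arrives, which matches the hang described above.
```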


Re: [GENERAL] [GENERA]: Postgresql-9.1.1 synchronous replication issue

2012-02-01 Thread Raghavendra
What is the value of synchronous_commit ?

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/



On Thu, Feb 2, 2012 at 12:21 PM, Venkat Balaji venkat.bal...@verse.inwrote:

 Hello,

 I was testing the Postgres-9.1.1 synchronous streaming replication on our
 UAT system.

 Without synchronous replication, everything was working fine.

 But, when I enabled synchronous_standby_names='*', CREATE TABLE
 started hanging for a long time.

 When I pressed Ctrl+C I got the following message -

 Cancel request sent
 WARNING:  canceling wait for synchronous replication due to user request
 DETAIL:  The transaction has already committed locally, but might not have
 been replicated to the standby.
 CREATE TABLE

 Can someone please help us ?

 Thanks
 VB



Re: [GENERAL] Why Hard-Coded Version 9.1 In Names?

2012-02-01 Thread Gregg Jaskiewicz
It's because of pg_upgrade, the 'in place' upgrade capability that has been
in Postgres since 8.4. For that to work you need both the old and the new
(current) set of PostgreSQL binaries.
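
As an illustration (paths hypothetical), pg_upgrade is invoked with both sets of binaries and data directories, which is why the versioned names must coexist:

```shell
# Hypothetical paths -- pg_upgrade needs both the old and new bindir/datadir.
pg_upgrade \
  --old-bindir=/usr/pgsql-9.0/bin       --new-bindir=/usr/pgsql-9.1/bin \
  --old-datadir=/var/lib/pgsql/9.0/data --new-datadir=/var/lib/pgsql/9.1/data
```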



Re: [GENERAL] [GENERA]: Postgresql-9.1.1 synchronous replication issue

2012-02-01 Thread Venkat Balaji
synchronous_commit is on

Thanks
VB

On Thu, Feb 2, 2012 at 12:31 PM, Raghavendra 
raghavendra@enterprisedb.com wrote:

 What is the value of synchronous_commit ?

 ---
 Regards,
 Raghavendra
 EnterpriseDB Corporation
 Blog: http://raghavt.blogspot.com/



 On Thu, Feb 2, 2012 at 12:21 PM, Venkat Balaji venkat.bal...@verse.inwrote:

 Hello,

 I was testing the Postgres-9.1.1 synchronous streaming replication on our
 UAT system.

 Without synchronous replication, everything was working fine.

 But, when I enabled synchronous_standby_names='*', CREATE TABLE
 started hanging for a long time.

 When I pressed Ctrl+C I got the following message -

 Cancel request sent
 WARNING:  canceling wait for synchronous replication due to user request
 DETAIL:  The transaction has already committed locally, but might not
 have been replicated to the standby.
 CREATE TABLE

 Can someone please help us ?

 Thanks
 VB