Re: Setting up streaming replication problems

2018-01-21 Thread Andreas Kretschmer



Am 22.01.2018 um 07:39 schrieb Thiemo Kellner:

Hi all

I am trying to set up synchronous streaming replication as a try-out. I use my 
laptop with Debian 9 and PostgreSQL package 10+189.pgdg90+1. Of this 
PostgreSQL installation I have two clusters, main (master) and 
main2 (hot standby). I tried with Riggs' book and the PostgreSQL 
documentation and some pages on the web, but fail miserably.




you have one cluster with 2 nodes ;-)




Master postgresql.conf (possible) differences from stock:
wal_level = replica
archive_mode = off
max_wal_senders = 12
max_replication_slots = 12
synchronous_standby_names = 'main2,main'


Note: it's a bad idea to build a synchronous cluster with only 2 nodes; 
you need at least 3 nodes




wal_receiver_timeout = 60s
log_min_messages = debug5
log_connections = on
log_statement = 'ddl'
log_replication_commands = on
lc_messages = 'C.UTF-8'

Master pg_hba.conf (possible) differences from stock:
host    replication all 127.0.0.1/32 md5
host    replication all ::1/128 md5
local   replication repuser peer
host    replication repuser 0.0.0.1/0 md5
host    replication repuser ::1/0 md5

Master pg_hba.conf (possible) differences from stock:


that's the recovery.conf, not pg_hba.conf. And you don't need it on the 
master.
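
For PostgreSQL 10 a minimal recovery.conf on the standby might look like the sketch below; the user and the password placeholder are taken from the thread, while the master port (5432) and the application_name are assumptions, not verified against this setup:

```
# recovery.conf -- on the standby (main2) only
standby_mode = 'on'
primary_conninfo = 'host=localhost port=5432 user=repuser password=<plain password> application_name=main2'
```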



standby_mode = 'off'
primary_conninfo = 'host=localhost user=repuser port=5433 password=<value of password>'

restore_command = 'false'


why that?







Master pg_hba.conf (possible) differences from stock:


master or standby? confused...



standby_mode = 'on'
primary_conninfo = 'host=localhost user=repuser port=5433 password=<value of password>'


the same port as above?



restore_command = 'false'


why?






I have created repuser on master and equally on hot standby:
postgres=# \du+ repuser
                      List of roles
 Role name |   Attributes   | Member of | Description
-----------+----------------+-----------+-------------
 repuser   | Replication   +| {}        |
           | 2 connections  |           |


1) I am not sure whether to put the md5 hash of the repuser password 
into primary_conninfo or the plain one. I don't feel the documentation 
or the book is clear on that.


2) Starting the clusters, I do not see any attempt of the hot standby 
to connect to the master.


are the 2 nodes running on different ports?

You need only 1 recovery.conf, on the standby. restore_command = 'false' 
is useless; I'm guessing that's the reason the standby doesn't 
connect to the master.
And, again, synchronous replication needs at least 3 nodes. If the 
standby doesn't work, the master can't do any write operations; it has 
to wait for the standby - as you can see ;-)
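
Whether the standby has attached can be checked on the master (a sketch; pg_stat_replication has one row per connected standby, so an empty result means nothing is streaming):

```
-- run on the master with psql
SELECT application_name, state, sync_state
  FROM pg_stat_replication;
```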





Greetings from Dresden, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Setting up streaming replication problems

2018-01-21 Thread Thiemo Kellner

Hi all

I am trying to set up synchronous streaming replication as a try-out. I use my 
laptop with Debian 9 and PostgreSQL package 10+189.pgdg90+1. Of this 
PostgreSQL installation I have two clusters, main (master) and main2 (hot 
standby). I tried with Riggs' book and the PostgreSQL documentation and 
some pages on the web, but fail miserably.


Master postgresql.conf (possible) differences from stock:
wal_level = replica
archive_mode = off
max_wal_senders = 12
max_replication_slots = 12
synchronous_standby_names = 'main2,main'
wal_receiver_timeout = 60s
log_min_messages = debug5
log_connections = on
log_statement = 'ddl'
log_replication_commands = on
lc_messages = 'C.UTF-8'

Master pg_hba.conf (possible) differences from stock:
host    replication all     127.0.0.1/32    md5
host    replication all     ::1/128         md5
local   replication repuser                 peer
host    replication repuser 0.0.0.1/0       md5
host    replication repuser ::1/0           md5

Master pg_hba.conf (possible) differences from stock:
standby_mode = 'off'
primary_conninfo = 'host=localhost user=repuser port=5433 password=<value of password>'

restore_command = 'false'


Hot standby postgresql.conf (possible) differences from stock:
wal_level = replica
max_wal_senders = 12
max_replication_slots = 12
synchronous_standby_names = 'main,main2'
wal_receiver_timeout = 60s
log_min_messages = debug5
log_connections = on
log_statement = 'ddl'
log_replication_commands = on
lc_messages = 'C.UTF-8'

Hot standby pg_hba.conf (possible) differences from stock:
host    replication all     127.0.0.1/32    md5
host    replication all     ::1/128         md5
local   replication repuser                 peer
host    replication repuser 0.0.0.1/0       md5
host    replication repuser ::1/0           md5

Master pg_hba.conf (possible) differences from stock:
standby_mode = 'on'
primary_conninfo = 'host=localhost user=repuser port=5433 password=<value of password>'

restore_command = 'false'

I have created repuser on master and equally on hot standby:
postgres=# \du+ repuser
                      List of roles
 Role name |   Attributes   | Member of | Description
-----------+----------------+-----------+-------------
 repuser   | Replication   +| {}        |
           | 2 connections  |           |


1) I am not sure whether to put the md5 hash of the repuser password 
into primary_conninfo or the plain one. I don't feel the documentation 
or the book is clear on that.
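
(Note: primary_conninfo takes the plain password; the md5 string is only the form the server stores. To keep the password out of recovery.conf entirely, a ~/.pgpass file for the postgres OS user on the standby can be used instead; a sketch with hypothetical values:)

```
# ~/.pgpass (chmod 0600), one line per server:
# hostname:port:database:username:password
localhost:5432:replication:repuser:secret
```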


2) Starting the clusters, I do not see any attempt of the hot standby to 
connect to the master.


3) Executing 'create database test;' on the master gets stuck. After 
cancelling (Ctrl-C) I got the message:
psql:/home/thiemo/external_projects/act/test.pg_sql:1: WARNING:  canceling wait for synchronous replication due to user request
DETAIL:  The transaction has already committed locally, but might not have been replicated to the standby.

CREATE DATABASE
test exists now on master but not on hot standby.
                            List of databases
 Name |   Owner  | Encoding |   Collate   |    Ctype    | Access privileges
------+----------+----------+-------------+-------------+-------------------
 test | postgres | UTF8     | de_DE.UTF-8 | de_DE.UTF-8 |
(1 row)

postgres=# \l test
                   List of databases
 Name | Owner | Encoding | Collate | Ctype | Access privileges
------+-------+----------+---------+-------+-------------------
(0 rows)
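
The hang above is what synchronous replication does when no standby answers: every commit waits for a standby that never acknowledges. As a debugging sketch, the wait can be bypassed for a single session (this downgrades that session to asynchronous commit):

```
-- on the master; affects only the current session
SET synchronous_commit TO local;
CREATE DATABASE test;
```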


Where did I go wrong? Any hint would be appreciated.

Kind regards Thiemo

--
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

Re: array_agg and/or =ANY doesn't appear to be functioning as I expect

2018-01-21 Thread Rhys A.D. Stewart
Hannes,

Thanks for your observations. I will take a look at the data.

Regards,

Rhys

On Jan 20, 2018 11:00 PM, "Hannes Erven"  wrote:

Hi Rhys,




Am 2018-01-21 um 02:42 schrieb Rhys A.D. Stewart:

> Greetings All,
> I'm having an issue which is very perplexing. The having clause in a
> query doesn't appear to be working as I expect it. Either that or my
> understanding of array_agg() is flawed.
>
>
> [...]


> with listing as (
>     select start_vid, end_vid,
>            array_agg(node order by path_seq) node_array,
>            array_agg(edge order by path_seq) edge_array
>       from confounded.dataset
>      group by start_vid, end_vid
>     having true = ALL (array_agg(truth))
> )
> select count(*) from confounded.dataset
> where node in (select distinct unnest(node_array) from listing) and
> truth = false;
>
> I would expect the above query to return 0 rows.
>


the answer is in your data: "node" is not a UNIQUE field, and there are
node values with multiple rows.
e.g. node=977 has one row with truth=true and one with truth=false.

So what your second query really does is "select all node values from
listing for which another entry with truth=false exists in the dataset".

Presuming that "seq" is a primary key [although not declared], you probably
meant to restrict your query on that.
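
A sketch of that restriction (assuming "seq" uniquely identifies rows of confounded.dataset; untested against the real schema):

```
with listing as (
    select start_vid, end_vid,
           array_agg(seq  order by path_seq) seq_array,
           array_agg(node order by path_seq) node_array
      from confounded.dataset
     group by start_vid, end_vid
    having true = ALL (array_agg(truth))
)
select count(*)
  from confounded.dataset
 where seq in (select unnest(seq_array) from listing)
   and truth = false;   -- rows are now matched by seq, not by ambiguous node
```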


Best regards,

-hannes


Re: Best non-networked front end for postgresql

2018-01-21 Thread Michael Paquier
On Sun, Jan 21, 2018 at 11:57:39AM -0700, Sherman Willden wrote:
> Basic question 1: Which non-networked front-end would work best for me?
> 
> Basic question 2: I am seriously considering HTML fields to capture and
> process the information. So to connect with postgresql what do I need to
> know? Do I need to know javascript, python, and other languages? How is
> PERL for something like this?

PostgreSQL supports a wide range of drivers:
https://wiki.postgresql.org/wiki/List_of_drivers
If you are fluent with Perl, why not play with DBD::Pg then? Here is 
its documentation:
https://metacpan.org/pod/DBD::Pg
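
A minimal DBD::Pg sketch (database name, table, and credentials are hypothetical placeholders, not from the original post):

```perl
use strict;
use warnings;
use DBI;

# Connect; dbname/user/password are placeholders for your own setup.
my $dbh = DBI->connect("dbi:Pg:dbname=poker;host=localhost",
                       "sherman", "secret",
                       { AutoCommit => 1, RaiseError => 1 });

# Insert one record with a parameterized statement.
my $sth = $dbh->prepare(
    "INSERT INTO sessions (visit_date, venue, buy_in, cash_out)
     VALUES (?, ?, ?, ?)");
$sth->execute('2018-01-21', 'Some Casino', 100, 250);

$dbh->disconnect;
```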

> I am entering the below values by hand into a functional database. I
> thought that I would create some type of front-end to enter the values and
> then have the front-end enter the values into the postgresql database.
> 01). visit_date
> 02). start_time
> 03). end_time
> 04). venue (This is the casino name)
> 05). city
> 06). state
> 07). limit (4/8 20/40 etc)
> 08). game (7-card-stud etc)
> 09). variant (fixed-limit no-limit etc)
> 10). high-low (mixed-high-low high-only etc)
> 11). buy_in
> 12). cash_out

This is the kind of area where pgadmin can help I think:
https://www.pgadmin.org/

Still, the most interesting part of hacking is doing things 
yourself, so why not give Perl a shot and write your own set of 
scripts?
--
Michael




Re: How to measure query time - with warm up and cached data

2018-01-21 Thread Neto pr
2018-01-21 13:53 GMT-08:00 Peter J. Holzer :

> On 2018-01-21 12:45:54 -0800, Neto pr wrote:
> > I need to know the actual execution time of a query, but considering
> that the
> > data is already cached. I also need to make sure that cached data from
> other
> > queries is cleared.
> > I believe that in order to know the real time of a query it will be
> necessary
> > to "warm up" the data to be inserted in cache.
> >
> > Below are the steps suggested by a DBA for me:
> >
> > Step 1- run ANALYZE on all tables involved before the test;
> > Step 2- restart the DBMS (to clear the DBMS cache);
> > Step 3- erase the S.O. cache;
>
> Did you mean "OS cache" (operating system cache)?
>
>
Yes, operating system cache... S.O. = Sistema Operacional in Portuguese;
it was a translation error!

To restart the DBMS and clear the OS cache I execute these commands on
Debian 8:

/etc/init.d/pgsql stop
sync

echo "clear cache !!"

echo 3 > /proc/sys/vm/drop_caches
/etc/init.d/pgsql start


> Step 4- execute at least 5 times the same query.
> >
> > After the actual execution time of the query, it would have to take the
> time of
> > the query that is in the "median" among all.
>
> If you do this, clearing the caches before the tests will probably have
> little effect. The first query will fill the cache with the data needed for
> your query (possibly evicting other data) and the next 4 will work on
> the cached data.


Yes, I believe the first execution can be discarded, because the data 
is still settling into the cache; ideally only the runs after the first 
one are considered.


> Whether the cache was empty or full before the first
> query will make little difference to the median, because the first query
> will almost certainly be discarded as an outlier.
>
> Flushing out caches is very useful if you want to measure performance
> without caches (e.g. if you want to determine what the performance
> impact of a server reboot is).
>
>
> > Example:
> >
> > Execution 1: 07m 58s
> > Execution 2: 14m 51s
> > Execution 3: 17m 59s
> > Execution 4: 17m 55s
> > Execution 5: 17m 07s
>
> Are these real measurements or did you make them up? They look weird.
> Normally the first run is by far the slowest, then the others are very
> similar, sometimes with a slight improvement (especially between the 2nd
> and 3rd). But in your case it is just the opposite.
>


Yes, they are real measurements from TPC-H query 9.
I cannot understand why, in several tests I have done here, the first 
execution runs faster, even without indexes and theoretically 
without cache.

If someone wants to see the execution plans and other information the
worksheet with results is at the following link:
https://sites.google.com/site/eletrolareshop/repository/Result_80gb-SSD-10_exec_v4.ods

I thought it was because my CPU frequency was varying, but in the BIOS I 
configured it as "OS Control" and set the CPU governor to "performance" on 
Debian 8. See below:
---

user1@hp110deb8:~/Desktop$ cpufreq-info | grep 'current CPU fr'
  current CPU frequency is 2.80 GHz.
  current CPU frequency is 2.80 GHz.
  current CPU frequency is 2.80 GHz.
  current CPU frequency is 2.80 GHz.
--

Apparently the processor is not varying its frequency now.
Any idea why the first execution can be faster in many cases?

Best Regards
Neto






> > [cleardot]
>
> Sending Webbugs to a mailinglist?
>
> hp
>
> --
>_  | Peter J. Holzer| we build much bigger, better disasters now
> |_|_) || because we have much more sophisticated
> | |   | h...@hjp.at | management tools.
> __/   | http://www.hjp.at/ | -- Ross Anderson 
>


Re: How to measure query time - with warm up and cached data

2018-01-21 Thread Peter J. Holzer
On 2018-01-21 12:45:54 -0800, Neto pr wrote:
> I need to know the actual execution time of a query, but considering that the
> data is already cached. I also need to make sure that cached data from other
> queries is cleared.
> I believe that in order to know the real time of a query it will be necessary
> to "warm up" the data to be inserted in cache.
> 
> Below are the steps suggested by a DBA for me:
> 
> Step 1- run ANALYZE on all tables involved before the test;
> Step 2- restart the DBMS (to clear the DBMS cache);
> Step 3- erase the S.O. cache;

Did you mean "OS cache" (operating system cache)? 

> Step 4- execute at least 5 times the same query.
> 
> After the actual execution time of the query, it would have to take the time 
> of
> the query that is in the "median" among all.

If you do this, clearing the caches before the tests will probably have little
effect. The first query will fill the cache with the data needed for
your query (possibly evicting other data) and the next 4 will work on
the cached data. Whether the cache was empty or full before the first
query will make little difference to the median, because the first query
will almost certainly be discarded as an outlier.

Flushing out caches is very useful if you want to measure performance
without caches (e.g. if you want to determine what the performance
impact of a server reboot is).


> Example:
> 
> Execution 1: 07m 58s
> Execution 2: 14m 51s
> Execution 3: 17m 59s
> Execution 4: 17m 55s
> Execution 5: 17m 07s

Are these real measurements or did you make them up? They look weird.
Normally the first run is by far the slowest, then the others are very
similar, sometimes with a slight improvement (especially between the 2nd
and 3rd). But in your case it is just the opposite.

> [cleardot]

Sending Webbugs to a mailinglist?

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 




Re: Best non-networked front end for postgresql

2018-01-21 Thread Adrian Klaver

On 01/21/2018 10:57 AM, Sherman Willden wrote:

Name: Sherman

Single laptop: Compaq 6710b

Operating System: Ubuntu 17.10

Postgresql: 9.6

Used for: Just me and my home database

Seeking advice: Best non-networked front-end

Considerations: I am retired and want to create my own database and 
database captures. I have experience with PERL


Can you explain more what you mean by database captures?



Basic question 1: Which non-networked front-end would work best for me?

Basic question 2: I am seriously considering HTML fields to capture and 
process the information. So to connect with postgresql what do I need to 
know? Do I need to know javascript, python, and other languages? How is 
PERL for something like this?


So does the above mean you want to use a Web based application to enter 
data into your database?


The way I see it you have two components to your question, managing the 
database e.g. creating the database and contained objects and 
manipulating the data within the created objects. For the management 
component Vincenzo's suggestion of pgAdmin or using the Postgres command 
line client psql would work. For the second component, building 
entry/reporting forms it comes down to how involved you want to get into 
coding.




I am entering the below values by hand into a functional database. I 
thought that I would create some type of front-end to enter the values 
and then have the front-end enter the values into the postgresql database.

01). visit_date
02). start_time
03). end_time
04). venue (This is the casino name)
05). city
06). state
07). limit (4/8 20/40 etc)
08). game (7-card-stud etc)
09). variant (fixed-limit no-limit etc)
10). high-low (mixed-high-low high-only etc)
11). buy_in
12). cash_out

Thank you;

Sherman



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Best non-networked front end for postgresql

2018-01-21 Thread Tim Clarke
On 21/01/18 19:05, Vincenzo Romano wrote:
> 2018-01-21 19:57 GMT+01:00 Sherman Willden :
>> Name: Sherman
>>
>> Single laptop: Compaq 6710b
>>
>> Operating System: Ubuntu 17.10
>>
>> Postgresql: 9.6
>>
>> Used for: Just me and my home database
>>
>> Seeking advice: Best non-networked front-end
>>
>> Considerations: I am retired and want to create my own database and database
>> captures. I have experience with PERL
>>
>> Basic question 1: Which non-networked front-end would work best for me?
>>
>> Basic question 2: I am seriously considering HTML fields to capture and
>> process the information. So to connect with postgresql what do I need to
>> know? Do I need to know javascript, python, and other languages? How is PERL
>> for something like this?
>>
>> I am entering the below values by hand into a functional database. I thought
>> that I would create some type of front-end to enter the values and then have
>> the front-end enter the values into the postgresql database.
>> 01). visit_date
>> 02). start_time
>> 03). end_time
>> 04). venue (This is the casino name)
>> 05). city
>> 06). state
>> 07). limit (4/8 20/40 etc)
>> 08). game (7-card-stud etc)
>> 09). variant (fixed-limit no-limit etc)
>> 10). high-low (mixed-high-low high-only etc)
>> 11). buy_in
>> 12). cash_out
>>
>> Thank you;
>>
>> Sherman
> PGAdmin is among the best tools to manage Postgres.
>
> https://www.pgadmin.org/
>
> As far as a front-end program, perl can be used.
> As well as a number of other languages ranging from C, C++, Java, PHP.
> Almost all languages have a "module" to interact with Postgres databases.
> The best one is IMHO the one you know the best.
>
> P.S.
> The differences between a local Unix socket and a TCP one are rather
> subtle from your point of view.
>

+1 for pgadmin - indeed why bother with anything else for one flat
table? How many rows of data do you envisage?

-- 
Tim Clarke





How to measure query time - with warm up and cached data

2018-01-21 Thread Neto pr
Hi all,
I need to know the actual execution time of a query, but considering that
the data is already cached. I also need to make sure that cached data from
other queries is cleared.
I believe that in order to know the real time of a query it will be
necessary to "warm up" the data to be inserted in cache.

Below are the steps suggested by a DBA for me:

Step 1- run ANALYZE on all tables involved before the test;
Step 2- restart the DBMS (to clear the DBMS cache);
Step 3- erase the S.O. cache;
Step 4- execute at least 5 times the same query.

After the actual execution time of the query, it would have to take the
time of the query that is in the "median" among all.

Example:

Execution 1: 07m 58s
Execution 2: 14m 51s
Execution 3: 17m 59s
Execution 4: 17m 55s
Execution 5: 17m 07s

In this case to calculate the median, you must first order each execution
by its time:
Execution 1: 07m 58s
Execution 2: 14m 51s
Execution 5: 17m 07s
Execution 4: 17m 55s
Execution 3: 17m 59s

In this example the median would be execution 5 (17m 07s). Could someone
tell me if this is a good strategy?
Since this is scientific work, a reference to any article or book on this
subject would be very useful.
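
The median selection above can be sketched in a few lines of Python (times from the example, converted to seconds):

```python
from statistics import median

# Run times from the example, converted to seconds.
runs = {
    "Execution 1": 7 * 60 + 58,   # 07m 58s
    "Execution 2": 14 * 60 + 51,  # 14m 51s
    "Execution 3": 17 * 60 + 59,  # 17m 59s
    "Execution 4": 17 * 60 + 55,  # 17m 55s
    "Execution 5": 17 * 60 + 7,   # 17m 07s
}

med = median(runs.values())
print(med)  # 1027 seconds = 17m 07s, i.e. Execution 5
```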

Best Regards
Neto


Re: Best non-networked front end for postgresql

2018-01-21 Thread Vincenzo Romano
2018-01-21 19:57 GMT+01:00 Sherman Willden :
> Name: Sherman
>
> Single laptop: Compaq 6710b
>
> Operating System: Ubuntu 17.10
>
> Postgresql: 9.6
>
> Used for: Just me and my home database
>
> Seeking advice: Best non-networked front-end
>
> Considerations: I am retired and want to create my own database and database
> captures. I have experience with PERL
>
> Basic question 1: Which non-networked front-end would work best for me?
>
> Basic question 2: I am seriously considering HTML fields to capture and
> process the information. So to connect with postgresql what do I need to
> know? Do I need to know javascript, python, and other languages? How is PERL
> for something like this?
>
> I am entering the below values by hand into a functional database. I thought
> that I would create some type of front-end to enter the values and then have
> the front-end enter the values into the postgresql database.
> 01). visit_date
> 02). start_time
> 03). end_time
> 04). venue (This is the casino name)
> 05). city
> 06). state
> 07). limit (4/8 20/40 etc)
> 08). game (7-card-stud etc)
> 09). variant (fixed-limit no-limit etc)
> 10). high-low (mixed-high-low high-only etc)
> 11). buy_in
> 12). cash_out
>
> Thank you;
>
> Sherman

PGAdmin is among the best tools to manage Postgres.

https://www.pgadmin.org/

As far as a front-end program, perl can be used.
As well as a number of other languages ranging from C, C++, Java, PHP.
Almost all languages have a "module" to interact with Postgres databases.
The best one is IMHO the one you know the best.

P.S.
The differences between a local Unix socket and a TCP one are rather
subtle from your point of view.

-- 
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS



Best non-networked front end for postgresql

2018-01-21 Thread Sherman Willden
Name: Sherman

Single laptop: Compaq 6710b

Operating System: Ubuntu 17.10

Postgresql: 9.6

Used for: Just me and my home database

Seeking advice: Best non-networked front-end

Considerations: I am retired and want to create my own database and
database captures. I have experience with PERL

Basic question 1: Which non-networked front-end would work best for me?

Basic question 2: I am seriously considering HTML fields to capture and
process the information. So to connect with postgresql what do I need to
know? Do I need to know javascript, python, and other languages? How is
PERL for something like this?

I am entering the below values by hand into a functional database. I
thought that I would create some type of front-end to enter the values and
then have the front-end enter the values into the postgresql database.
01). visit_date
02). start_time
03). end_time
04). venue (This is the casino name)
05). city
06). state
07). limit (4/8 20/40 etc)
08). game (7-card-stud etc)
09). variant (fixed-limit no-limit etc)
10). high-low (mixed-high-low high-only etc)
11). buy_in
12). cash_out

Thank you;

Sherman


Re: Notify client when a table was full

2018-01-21 Thread Vincenzo Romano
2018-01-21 19:31 GMT+01:00 Francisco Olarte :
> On Sun, Jan 21, 2018 at 1:27 PM, Michael Paquier
>  wrote:
>> On Fri, Jan 19, 2018 at 03:40:01PM +, Raymond O'Donnell wrote:
> ...
>>> How do you define "full"?

The only possible and meaningful case, IMHO, as David stated
earlier, is "file system full",
which Postgres communicates with the "Class 53 — Insufficient
Resources" error codes.
Please refer to official documentation like:

https://www.postgresql.org/docs/10/static/errcodes-appendix.html

For specific programming languages more details need to be checked.



Re: Notify client when a table was full

2018-01-21 Thread Francisco Olarte
On Sun, Jan 21, 2018 at 1:27 PM, Michael Paquier
 wrote:
> On Fri, Jan 19, 2018 at 03:40:01PM +, Raymond O'Donnell wrote:
...
>> How do you define "full"?

> There could be two definitions here:
> 1) A table contains more data than a customly-defined amount of data
> on-disk.
> 2) The partition where the table data is located runs out of disk
> space.

I see a third definition: no more rows can be inserted without
previously deleting. The simplest case I can think of is "b boolean
primary key" with two rows already in.
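
That third case is easy to demonstrate (a sketch):

```
CREATE TABLE only_two (b boolean PRIMARY KEY);
INSERT INTO only_two VALUES (true), (false);
-- the table is now "full": any further insert violates the primary key
INSERT INTO only_two VALUES (true);
-- ERROR:  duplicate key value violates unique constraint "only_two_pkey"
```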

But I doubt the OP was referring to any of that.

Francisco Olarte



License question regarding distribution of binaries

2018-01-21 Thread Rafał Zabrowarny
Hi,

My name is Rafał and I would like to prepare a lib to set up and run Pg within 
integration tests.
To do it I would like to prepare, on Windows, a NuGet package containing the 
necessary Pg binaries. I would like to keep them in a separate folder with the 
Pg license enclosed. It would be distributed in one package with the rest of 
the lib.

I would like to know whether this is allowed in terms of licensing. Does 
distributing Pg's binaries in such a form violate the license?
The next question I have: is it possible to remove unneeded Pg binaries and 
only distribute part of them (for instance pg_ctl, the includes folder and so 
forth)? I would like to distribute the minimal set of binaries needed to run Pg.


TIA for responding


Re: Notify client when a table was full

2018-01-21 Thread Michael Paquier
On Fri, Jan 19, 2018 at 03:40:01PM +, Raymond O'Donnell wrote:
> On 19/01/18 15:34, hmidi slim wrote:
>> Hi,
>> I'm looking for a function in postgresql which notifies the client if a
>> table is full or not. So I found the function NOTIFY:
>> https://www.postgresql.org/docs/9.0/static/sql-notify.html.
>> This function sends a notification when a new action is done to the
>> table. Is there a way to send a notification only when the table is
>> full and no future actions (insertions of new rows, for example) will be
>> done? I am connected to an external API, saving the data received
>> from it to a postgres database, and I want to be notified when the table
>> is full and no more rows will be inserted. Is that realizable, or should I
>> create a trigger that listens for every insertion and notifies the client?
> 
> How do you define "full"?

There could be two definitions here:
1) A table contains more data than a customly-defined amount of data
on-disk.
2) The partition where the table data is located runs out of disk
space.
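
For definition 1, the on-disk footprint can be polled with a sketch like this (table name and threshold are arbitrary):

```
-- heap + indexes + TOAST, in bytes
SELECT pg_total_relation_size('my_table') > 1073741824 AS over_1gb;
```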
--
Michael

