RE: How to double-quote a double quoted identifier?

2020-06-15 Thread Edson Richter
From: Tom Lane 
Sent: Monday, June 15, 2020 22:24
To: Edson Richter 
Cc: pgsql-general 
Subject: Re: How to double-quote a double quoted identifier?

Edson Richter  writes:
> How do I remotely execute the following command:
> CREATE EXTENSION "uuid-ossp"
> SCHEMA public
> VERSION "1.1";

> I'm using PostgreSQL 9.6, Linux x64, and bash.

> I've tried the following:

> ssh -l postgres -2 -C -p 2022 192.168.0.70 "psql -h 127.0.0.1 -d test_db -p 
> 5432 -c \"CREATE EXTENSION ""uuid-ossp"" SCHEMA pg_catalog;\" "

Yeah, nesting quoting levels in shell is kind of a bear.

> I know this is a kind of "bash-psql" issue, but I would appreciate if anyone 
> could share experience with this.

Since you're using bash, you can get bash to do the work for you,
via multiple iterations of the ${variable@Q} construct.

$ SQLCMD='CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;'
$ echo $SQLCMD
CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;
$ echo ${SQLCMD@Q}
'CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;'
-- not too exciting so far, but wrap it into another layer of quoting:
$ SHCMD="psql -h 127.0.0.1 -d test_db -p 5432 -c ${SQLCMD@Q}"
$ echo $SHCMD
psql -h 127.0.0.1 -d test_db -p 5432 -c 'CREATE EXTENSION "uuid-ossp" SCHEMA 
pg_catalog;'
$ echo ${SHCMD@Q}
'psql -h 127.0.0.1 -d test_db -p 5432 -c '\''CREATE EXTENSION "uuid-ossp" 
SCHEMA pg_catalog;'\'''
-- and now your answer is:
$ echo ssh -l postgres -2 -C -p 2022 192.168.0.70 ${SHCMD@Q}
ssh -l postgres -2 -C -p 2022 192.168.0.70 'psql -h 127.0.0.1 -d test_db -p 
5432 -c '\''CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;'\'''

You could, of course, just use the end result of that -- but it's
probably better to have a shell script recalculate it on the fly
given the desired SQL command as input.

regards, tom lane
Ingenious!
Thanks a lot!

Edson Richter



Re: How to double-quote a double quoted identifier?

2020-06-15 Thread Tom Lane
Edson Richter  writes:
> How do I remotely execute the following command:
> CREATE EXTENSION "uuid-ossp"
> SCHEMA public
> VERSION "1.1";

> I'm using PostgreSQL 9.6, Linux x64, and bash.

> I've tried the following:

> ssh -l postgres -2 -C -p 2022 192.168.0.70 "psql -h 127.0.0.1 -d test_db -p 
> 5432 -c \"CREATE EXTENSION ""uuid-ossp"" SCHEMA pg_catalog;\" "

Yeah, nesting quoting levels in shell is kind of a bear.

> I know this is a kind of "bash-psql" issue, but I would appreciate if anyone 
> could share experience with this.

Since you're using bash, you can get bash to do the work for you,
via multiple iterations of the ${variable@Q} construct.

$ SQLCMD='CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;'
$ echo $SQLCMD
CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;
$ echo ${SQLCMD@Q}
'CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;'
-- not too exciting so far, but wrap it into another layer of quoting:
$ SHCMD="psql -h 127.0.0.1 -d test_db -p 5432 -c ${SQLCMD@Q}"
$ echo $SHCMD
psql -h 127.0.0.1 -d test_db -p 5432 -c 'CREATE EXTENSION "uuid-ossp" SCHEMA 
pg_catalog;'
$ echo ${SHCMD@Q}
'psql -h 127.0.0.1 -d test_db -p 5432 -c '\''CREATE EXTENSION "uuid-ossp" 
SCHEMA pg_catalog;'\'''
-- and now your answer is:
$ echo ssh -l postgres -2 -C -p 2022 192.168.0.70 ${SHCMD@Q}
ssh -l postgres -2 -C -p 2022 192.168.0.70 'psql -h 127.0.0.1 -d test_db -p 
5432 -c '\''CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;'\'''

You could, of course, just use the end result of that -- but it's
probably better to have a shell script recalculate it on the fly
given the desired SQL command as input.
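For instance, a small wrapper along those lines could look like this (an
untested sketch, reusing the host, port and database from the example above):

#!/bin/bash
# remote_sql.sh -- run one SQL command on the remote server
# usage: ./remote_sql.sh 'CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;'
SQLCMD="$1"
SHCMD="psql -h 127.0.0.1 -d test_db -p 5432 -c ${SQLCMD@Q}"
# the remote shell performs one round of quote parsing, so it should receive
# the command with exactly one level of ${...@Q} quoting applied:
ssh -l postgres -2 -C -p 2022 192.168.0.70 "$SHCMD"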

regards, tom lane




How to double-quote a double quoted identifier?

2020-06-15 Thread Edson Richter
How do I remotely execute the following command:

CREATE EXTENSION "uuid-ossp"
SCHEMA public
VERSION "1.1";

I'm using PostgreSQL 9.6, Linux x64, and bash.

I've tried the following:

ssh -l postgres -2 -C -p 2022 192.168.0.70 "psql -h 127.0.0.1 -d test_db -p 
5432 -c \"CREATE EXTENSION ""uuid-ossp"" SCHEMA pg_catalog;\" "

I know this is a kind of "bash-psql" issue, but I would appreciate if anyone 
could share experience with this.

Thanks a lot,

Edson


Re: Something else about Redo Logs disappearing

2020-06-15 Thread Laurenz Albe
On Mon, 2020-06-15 at 19:00 +0200, Peter wrote:
> And that is one of a couple of likely pitfalls I perceived when
> looking at that new API.

That is a property of my scripts, *not* of the non-exclusive
backup API...

> We could fix that, but that will then get more complicated - and
> people will usually not do that. And that's why I consider that
> new API as rather dangerous.

... so this is moot.

Yours,
Laurenz Albe





Re: Should I enforce ssl/local socket use?

2020-06-15 Thread Bruce Momjian
On Sun, Jun  7, 2020 at 10:32:39AM +1000, Tim Cross wrote:
> 
> Michel Pelletier  writes:
> 
> > Hello,
> >
> > I'm the author of the pgsodium cryptography library.  I have a question
> > about a best practice I'm thinking of enforcing.  Several functions in
> > pgsodium generate secrets, I want to check the Proc info to enforce that
> > those functions can only be called using a local domain socket or an ssl
> > connection.  If the connection isn't secure by that definition, secret
> > generating functions will fail.
> >
> > If someone really wants to point the gun at their foot, they can connect
> > with an unsecured proxy.  My goal would be to make bypassing the check
> > annoying.
> >
> > Any thoughts?  Is this an insufferably rude attitude?  Are there scenarios
> > where one can foresee needing to generate secrets not over ssl or a domain
> > socket?
> >
> 
> I'm never very fond of enforcing a particular behaviour as it assumes we
> understand all environments and use cases. Far better to make this the
> default behaviour, but allow users to disable it if they want and
> clearly document that option as insecure. I also suspect that without
> the ability to somehow disable the checks, people will find elaborate
> ways to work around them which are almost certainly going to be even
> worse from a security perspective. 

You also have to allow a way to disable it that is secure or it is
useless, which makes it even more complex.
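
For what it's worth, the kind of check being discussed can be approximated at
the SQL level like this (a sketch only; an extension such as pgsodium would
presumably perform the equivalent test server-side):

-- true if the current backend's connection uses SSL:
SELECT coalesce(ssl, false) AS uses_ssl
FROM pg_stat_ssl WHERE pid = pg_backend_pid();

-- a NULL client address means the connection came in over a Unix-domain socket:
SELECT inet_client_addr() IS NULL AS via_local_socket;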

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Index no longer being used, destroying and recreating it restores use.

2020-06-15 Thread Michael Lewis
On Tue, Jun 9, 2020 at 6:24 AM Koen De Groote  wrote:

> Right. In that case, the function I ended up with is this:
>
> create or replace function still_needs_backup(bool, bool)
> returns BOOLEAN as $$
> BEGIN
> PERFORM 1 from item where shouldbebackedup=$1 and backupperformed=$2;
>   IF FOUND THEN
> RETURN TRUE;
>   ELSE
> RETURN FALSE;
>   END IF;
> END;
> $$
> language plpgsql;
>

I meant something like the below (not tested)-

create or replace function still_needs_backup(shouldbebackedup bool,
backupperformed bool)
returns BOOLEAN as $$
   -- plain SQL body; IMMUTABLE is required for use in an index predicate
   select $1 AND NOT $2;
$$
language sql immutable;

CREATE INDEX CONCURRENTLY index_test ON item USING btree (itemCreated)
WHERE still_needs_backup(shouldbebackedup, backupperformed);
ANALYZE item;

>


Re: Move configuration files with pg_upgrade

2020-06-15 Thread Bruce Momjian
On Thu, Jun  4, 2020 at 03:26:07PM -0700, Adrian Klaver wrote:
> On 6/4/20 12:52 PM, Tanja Savic wrote:
> > Hello,
> > 
> > I did Postgresql upgrade from v10 to v12 using pg_upgrade (on Linux
> > Ubuntu server).
> > Of course I wanted to keep the settings in pg_hba.conf and postgresql.conf,
> > but after the upgrade there were new configuration files and I moved them
> > manually.
> 
> That is documented:
> 
> https://www.postgresql.org/docs/12/pgupgrade.html
> 
> "
> Restore pg_hba.conf
> 
> If you modified pg_hba.conf, restore its original settings. It might also be
> necessary to adjust other configuration files in the new cluster to match
> the old cluster, e.g. postgresql.conf.
> "
> 
> The new version's conf files  will probably have new settings available so
> this is something that should be reviewed before moving over. One way to
> deal with this is include file(s):
> 
> https://www.postgresql.org/docs/12/config-setting.html#CONFIG-INCLUDES
> 
> Using them you create the settings you want to manage in the include file.
> Then copy that over and set the include 'some_file' in the new
> postgresql.conf and you are good.

Yes, the community instructions require you to reconfigure the new
server to match the old one.  Some packagers who automate pg_upgrade
might do that configuration migration automatically.
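
For example, the include-file approach described above could look like this
(file name and settings below are just placeholders):

# at the end of the new cluster's postgresql.conf:
include 'custom.conf'

# custom.conf -- only the settings you manage yourself:
shared_buffers = '4GB'
work_mem = '64MB'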

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Index no longer being used, destroying and recreating it restores use.

2020-06-15 Thread Bruce Momjian
On Tue, Jun  9, 2020 at 02:23:51PM +0200, Koen De Groote wrote:
> Right. In that case, the function I ended up with is this:
> 
> create or replace function still_needs_backup(bool, bool)
> returns BOOLEAN as $$
> BEGIN                
>     PERFORM 1 from item where shouldbebackedup=$1 and backupperformed=$2;
>       IF FOUND THEN
>         RETURN TRUE;
>       ELSE
>         RETURN FALSE;
>       END IF;
> END;
> $$
> language plpgsql;
> 
> 
> And the index for that should be this, I think: "CREATE INDEX CONCURRENTLY
> index_test ON item USING btree (still_needs_backup, itemCreated, filepath)
> WHERE still_needs_backup(true, false) = true;"
> However postgres throws an error here, saying "ERROR:  functions in index
> predicate must be marked IMMUTABLE".
> 
> I tried it also without the first argument, same error.
> 
> And I don't think I can do that, because the return is not IMMUTABLE. It is at
> best STABLE, but certainly not IMMUTABLE.
> 
> So yeah, I'm probably not understanding the suggestion properly? Either way, I
> still have questions about the earlier function I created, namely how reliable
> that performance is. If not the same thing will happen as with the re-created
> index.

I think you need to look at EXPLAIN ANALYZE and see how close the
estimate is from the actual counts for various stages.  The original
query had these quite different, leading to misestimation and wrong
plans.  If the new EXPLAIN ANALYZE has estimates closer to actual, the
problem should not reappear.
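
For example (a sketch; table and column names are the ones used earlier in
this thread):

EXPLAIN (ANALYZE, BUFFERS)
SELECT itemCreated, filepath
FROM item
WHERE still_needs_backup(shouldbebackedup, backupperformed)
ORDER BY itemCreated;

-- on each plan node, compare the planner's "rows=" estimate with the
-- "actual ... rows=" figure; a large discrepancy points at misestimation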

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Something else about Redo Logs disappearing

2020-06-15 Thread Peter
On Mon, Jun 15, 2020 at 03:19:29PM +0200, Laurenz Albe wrote:
! On Mon, 2020-06-15 at 14:50 +0200, Peter wrote:
! > ! An example:
! > ! 
! > ! - Backup #1 calls "pgpre.sh"
! > ! - Backup #1 starts copying files
! > ! - Backup #2 calls "pgpre.sh".
! > !   This will cancel the first backup.
! > ! - Backup #1 completes copying files.
! > ! - Backup #1 calls "pgpost.sh".
! > !   It will receive an error.
! > !   So it has to invalidate the backup.
! > ! - Backup #2 completes copying files.
! > ! - Backup #2 calls "pgpost.sh".
! > !   It gets a "backup_label" file and completes the backup.
! > 
! > That's not true.
! 
! Ah, yes, you are right.

Thank You.

! Since "pgpre.sh" and "pgpost.sh" are independent, there
! is no way to tell which of them belongs to which other.

Correct.
 
! So calling "pgpost.sh" indeed ends the most recently started
! backup and returns "backup_label" accordingly.
! 
! That means: the caller of the scripts has to make sure
! not to start a second backup while the first one is running.

Never run two backups in parallel with such an approach, exactly.

And that is one of a couple of likely pitfalls I perceived when
looking at that new API.

We could fix that, but that will then get more complicated - and
people will usually not do that. And that's why I consider that
new API as rather dangerous.


cheerio,
PMc




Re: Assigning values to a range in Pgsql and inclusive / exclusive bounds

2020-06-15 Thread Ron Clarke
Thanks for that, perfect... I missed the use of tstzrange() as a 'function' in
the documentation.

Best regards
Ron

On Fri, 12 Jun 2020 at 21:02, Adrian Klaver 
wrote:

> On 6/12/20 11:45 AM, Ron Clarke wrote:
> > Hi,
> >
> > I've got a simple problem, but I'm convinced that there must be an
> > elegant solution. I'm a refugee from the world of MSSQL, so I'm still
> > finding some aspects of PostgreSQL alien.
> >
> > I'm trying to use the /tstzrange /datatype. My issue is correctly
> > setting the bound types when assigning values to a range in code (PGSQL).
> >
> > So if i declare this : e.g.
> >
> > declare tx tstzrange := '[today, tomorrow)' ;
> >
> >
> > I get the variable tx as expected with the Inclusive '[' lower bound and
> > exclusive upper ')' bound.
> >
> > But if I attempt to reassign the value in code within pgsql I can do
> > this simply, only with '(' syntax for the lower bound i.e. with an
> > exclusive lower bound, e.g so this works:-
> >
> > tx= (Timestamptz 'today', timestamptz 'now' + interval '1 hour');
> >
> > but if I try
> > tx= [Timestamptz 'today', timestamptz 'now' + interval '1 hour');
> > this will have syntax errors - as the hidden 'select [' upsets the
> > parser.   I've tried to include a '[)' in variations of the expression,
> > but just get various syntax errors..
> >
> > I've tried many combinations and I can get it to work using casts and
> > concatenations, e.g. :-
> >
> > tx = CONCAT('[', 'today'::Timestamptz,',',('now' ::Timestamptz +
> > interval '1 hour'):: timestamptz , ')'):: tstzrange ;
> >
> > works but I can't help thinking that I'm missing something much simpler
> > and more elegant.
> > How should this actually be done?
>
> Realized what you want is:
>
> select tstzrange('today', ('now'::timestamptz + interval '1 hour'), '[)');
>tstzrange
> --
>   ["06/12/2020 00:00:00 PDT","06/12/2020 13:59:27.554229 PDT")
>
>
> tx tstzrange := tstzrange('today', ('now'::timestamptz + interval '1
> hour'), '[)') ;
>
> >
> > Thanks in advance for your advice.
> >
> > Ron
> > Stay safe everyone.
> >
> >
> > here's an example script to show what I mean:-
> >
> > do
> > $$
> > DECLARE
> >
> > tx tstzrange := '[today, tomorrow)' ;
> > answer text;
> >
> > BEGIN
> >
> > RAISE NOTICE 'Start %', tx;
> >
> > answer = tx @> 'today'::Timestamptz;
> > RAISE NOTICE 'today  %', answer;
> >
> > answer = tx @> 'tomorrow'::Timestamptz;
> > RAISE NOTICE 'tomorrow  %', answer;
> >
> > -- ( works
> > -- tx= (Timestamptz 'today', timestamptz 'now' + interval '1 hour');
> > -- [ doesn't work
> > -- tx= [Timestamptz 'today', timestamptz 'now' + interval '1 hour');
> > -- working around the parser??
> > tx = CONCAT('[', 'today'::Timestamptz,',',('now' ::Timestamptz
> > + interval '1 hour'):: timestamptz , ')'):: tstzrange ;
> >
> > RAISE NOTICE 'reassign  %', tx;
> >
> > answer = tx @> 'today'::Timestamptz;
> > RAISE NOTICE 'today  %', answer;
> >
> > answer = tx @> 'now'::Timestamptz;
> > RAISE NOTICE 'now%', answer;
> >
> > END;
> > $$
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


SV: SV: pg_service.conf and client support

2020-06-15 Thread Niels Jespersen
From: Laurenz Albe  
Sent: June 15, 2020 15:36
To: Niels Jespersen ; pgsql-gene...@postgresql.org
Subject: Re: SV: pg_service.conf and client support

On Mon, 2020-06-15 at 11:58 +, Niels Jespersen wrote:
> > For your examples that means:
> > - The PostgreSQL ODBC driver can use pg_service.conf
> > - NpgSQL cannot use the file.
> 
> You mention that the PostgreSQL ODBC driver can use pg_service.conf. 
> But I cannot find any examples of how to construct a connection string that 
> PostgreSQL ODBC will accept?

After looking at the code, I am no longer sure.

There doesn't seem to be a way to specify a general connection string.

You could try setting the environment variable PGSERVICE to specify the service.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com

That is the way to do it. It works. Connection string 

'Driver={PostgreSQL Unicode(x64)}' 

And environment variables 

PGSERVICEFILE and PGSERVICE

Will connect you. Thank you for helping with this. 
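
For anyone finding this later, the pieces fit together roughly like this
(service name, file path and connection details below are placeholders):

# pg_service.conf (the file pointed to by PGSERVICEFILE, or ~/.pg_service.conf):
[mydb]
host=dbhost.example.com
port=5432
dbname=test_db

# environment, set before the ODBC application starts:
export PGSERVICEFILE=/path/to/pg_service.conf
export PGSERVICE=mydb

# ODBC connection string:
'Driver={PostgreSQL Unicode(x64)}'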

Regards Niels



Re: TOAST table size in bytes growing despite working autovacuum

2020-06-15 Thread Laurenz Albe
On Mon, 2020-06-15 at 16:42 +0300, Kristjan Mustkivi wrote:
> > You should schedule down time and run a VACUUM (FULL) on that table.
> > That will rewrite the table and get rid of the bloat.
> 
> But in order to avoid the situation happening again (as it will with
> the current settings), I should likely make the autovacuuming on the
> TOAST table even more aggressive via
> toast.autovacuum_vacuum_scale_factor tinkering, right?

No, the correct way is to reduce "autovacuum_vacuum_cost_delay".
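
For example (a sketch; the table name is the one from earlier in the thread):

-- during the scheduled downtime, rewrite the table (and its TOAST table)
-- to get rid of the existing bloat:
VACUUM (FULL) awards.player_data_states;

-- then make autovacuum on the TOAST table less throttled
-- (0 removes the cost-based delay entirely; a small positive value also works):
ALTER TABLE awards.player_data_states
    SET (toast.autovacuum_vacuum_cost_delay = 0);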

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Mixed Locales and Upgrading

2020-06-15 Thread Don Seiler
Good morning,

Back with a follow-up question to all this. I'm wondering if we shouldn't
also change the locale settings for postgres/template0/template1 to match
our desired setting of en_US.UTF-8 with UTF8 encoding. We haven't written
anything to postgres. Some of our DB clusters have changed these already
(but still leaving things mixed), but I want to make things uniform so that
future upgrades go a lot more smoothly.

Some examples of current DB clusters that need to be fixed. The first one
has the more problematic incompatible mixing within the DBs (the original
point of this thread), the others just have different settings across DBs
that I'd like to change going forward as well.

  datname   | encoding | datcollate | datctype | size_mb
+--++--+-
 xx | UTF8 | en_US  | en_US| 1390789
 postgres   | UTF8 | en_US  | en_US|   6
 template0  | UTF8 | en_US  | en_US|   6
 template1  | UTF8 | en_US  | en_US|   6

   datname| encoding | datcollate |  datctype  | size_mb
--+--+++-
 xxx  | UTF8 | en_US.utf8 | en_US.utf8 |2178
 postgres | LATIN1   | en_US  | en_US  |   7
 template0| LATIN1   | en_US  | en_US  |   7
 template1| UTF8 | en_US.utf8 | en_US.utf8 |   7

  datname  | encoding | datcollate |  datctype  | size_mb
---+--+++-
 xx| UTF8 | en_US.utf8 | en_US.utf8 | 345
 postgres  | LATIN1   | en_US  | en_US  |   7
 template0 | UTF8 | en_US.utf8 | en_US.utf8 |   7
 template1 | UTF8 | en_US.utf8 | en_US.utf8 |   7

For the smaller DBs I was planning to just dump/restore into a new cluster
as I upgrade to PG12 as well. However two that have the problem are the two
biggest ones where the downtime for dump/restore would be too great. So I'm
wondering if there is risk or harm in running an UPDATE pg_database command
on postgres/template0/template1 as needed and re-indexing afterward.

-- 
Don Seiler
www.seiler.us


RE: Getting error on

2020-06-15 Thread M Tarkeshwar Rao
Thanks for the reply.

Can you please suggest why this error occurs?
Is there any resource crunch on the Postgres server?

Regards
Tarkeshwar
From: Ron 
Sent: Monday, June 15, 2020 7:17 PM
To: pgsql-general@lists.postgresql.org
Subject: Re: Getting error on

On 6/15/20 8:43 AM, M Tarkeshwar Rao wrote:

Hi all,

I have a C++ Postgres client. When I restarted my client process, I got the 
following error on insertion.

Can you please suggest why this error is coming and how to resolve it?

The error hint says how to resolve it.

2020-04-17 06:30:04.498 PRINT_DEFAULT_ERROR: Processor-DefaultCollectorGroup-0 
:DRViewerPoint.cc :notify --> Failed to insert the data
ERROR:  index "1_rasciistruct_thefields_2_pkey" contains unexpected zero page 
at block 291
HINT:  Please REINDEX it.

REINDEX INDEX "1_rasciistruct_thefields_2_pkey";
--
Angular momentum makes the world go 'round.


Re: Getting error on

2020-06-15 Thread Ron

On 6/15/20 8:43 AM, M Tarkeshwar Rao wrote:


Hi all,

I have a C++ Postgres client. When I restarted my client process, I got the 
following error on insertion.


Can you please suggest why this error is coming and how to resolve it?



The error hint says how to resolve it.


2020-04-17 06:30:04.498 PRINT_DEFAULT_ERROR: 
Processor-DefaultCollectorGroup-0 :DRViewerPoint.cc :notify --> Failed to 
insert the data


ERROR:  index "1_rasciistruct_thefields_2_pkey" contains unexpected zero 
page at block 291


HINT:  Please REINDEX it.



REINDEX INDEX "1_rasciistruct_thefields_2_pkey";

--
Angular momentum makes the world go 'round.


Getting error on

2020-06-15 Thread M Tarkeshwar Rao
Hi all,

I have a C++ Postgres client. When I restarted my client process, I got the 
following error on insertion.

Can you please suggest why this error is coming and how to resolve it?

2020-04-17 06:30:04.498 PRINT_DEFAULT_ERROR: Processor-DefaultCollectorGroup-0 
:DRViewerPoint.cc :notify --> Failed to insert the data
ERROR:  index "1_rasciistruct_thefields_2_pkey" contains unexpected zero page 
at block 291
HINT:  Please REINDEX it.

Regards
Tarkeshwar


Re: TOAST table size in bytes growing despite working autovacuum

2020-06-15 Thread Kristjan Mustkivi
On Mon, Jun 15, 2020 at 4:37 PM Laurenz Albe  wrote:
>
> On Mon, 2020-06-15 at 13:47 +0300, Kristjan Mustkivi wrote:
> > Still, pgstattuple reveals that the table size is 715MB while live
> > tuple len is just 39MB and 94% of the table is vacant. I do not have
> > much experience in interpreting this but it would seem that it is
> > still getting bloated. Should the autovacuum be made even more
> > aggressive? E.g toast.autovacuum_vacuum_scale_factor=0.01 instead of
> > 0.05 and tweaked further when necessary until the size stabilizes
> > (more precisely pgstattuple will reflect the bloat to be under
> > control):
> >
> > SELECT * FROM pgstattuple('pg_toast.pg_toast_293406');
> > ─[ RECORD 1 ]──┬──
> > table_len  │ 715776000
> > tuple_count│ 25545
> > tuple_len  │ 39241366
> > tuple_percent  │ 5.48
> > dead_tuple_count   │ 1116
> > dead_tuple_len │ 1930508
> > dead_tuple_percent │ 0.27
> > free_space │ 669701052
> > free_percent   │ 93.56
>
> Indeed, the table is almost entirely air.
>
> You should schedule down time and run a VACUUM (FULL) on that table.
> That will rewrite the table and get rid of the bloat.

Hello!

But in order to avoid the situation happening again (as it will with
the current settings), I should likely make the autovacuuming on the
TOAST table even more aggressive via
toast.autovacuum_vacuum_scale_factor tinkering, right?

Sorry to pester with this and thank you for the feedback - it is much
appreciated!
-- 
Kristjan Mustkivi




Re: TOAST table size in bytes growing despite working autovacuum

2020-06-15 Thread Laurenz Albe
On Mon, 2020-06-15 at 13:47 +0300, Kristjan Mustkivi wrote:
> Still, pgstattuple reveals that the table size is 715MB while live
> tuple len is just 39MB and 94% of the table is vacant. I do not have
> much experience in interpreting this but it would seem that it is
> still getting bloated. Should the autovacuum be made even more
> aggressive? E.g toast.autovacuum_vacuum_scale_factor=0.01 instead of
> 0.05 and tweaked further when necessary until the size stabilizes
> (more precisely pgstattuple will reflect the bloat to be under
> control):
> 
> SELECT * FROM pgstattuple('pg_toast.pg_toast_293406');
> ─[ RECORD 1 ]──┬──
> table_len  │ 715776000
> tuple_count│ 25545
> tuple_len  │ 39241366
> tuple_percent  │ 5.48
> dead_tuple_count   │ 1116
> dead_tuple_len │ 1930508
> dead_tuple_percent │ 0.27
> free_space │ 669701052
> free_percent   │ 93.56

Indeed, the table is almost entirely air.

You should schedule down time and run a VACUUM (FULL) on that table.
That will rewrite the table and get rid of the bloat.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: SV: pg_service.conf and client support

2020-06-15 Thread Laurenz Albe
On Mon, 2020-06-15 at 11:58 +, Niels Jespersen wrote:
> > For your examples that means:
> > - The PostgreSQL ODBC driver can use pg_service.conf
> > - NpgSQL cannot use the file.
> 
> You mention that the PostgreSQL ODBC driver can use pg_service.conf. But I 
> cannot
> find any examples of how to construct a connection string that PostgreSQL ODBC 
> will accept? 

After looking at the code, I am no longer sure.

There doesn't seem to be a way to specify a general connection string.

You could try setting the environment variable PGSERVICE to specify
the service.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Replication error

2020-06-15 Thread Sonam Sharma
I"m running postgres DB on docker & database replication have been enabled.
After configuring repmgr not able to switch over/failover operation.DB &
errors details are provided below.

Please help on this.


  ssh -o Batchmode=yes -q -o ConnectTimeout=10 10.0.0.76
/usr/lib/postgresql/9.4/bin/repmgr -f /etc/repmgr.conf --version
2>/dev/null && echo "1" || echo "0"
ERROR: unable to execute "repmgr" on "10.0.0.76"
HINT: check "pg_bindir" is set to the correct path in "repmgr.conf";

The value of pg_bindir is set correctly, but I am still getting the error. It is
appending psql after the bin directory and then repmgr, i.e.
/usr/pgsql/pgsql9.5/bin/psql/repmgr. But on the primary, when I run "which
repmgr" it shows the correct path.
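
If the problem is what the HINT points at, the likely fix is to make pg_bindir
point at the directory holding the binaries rather than at a binary itself --
a guess based on the paths above, to be adapted to the actual installation:

# repmgr.conf
pg_bindir='/usr/lib/postgresql/9.4/bin'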


Re: Something else about Redo Logs disappearing

2020-06-15 Thread Laurenz Albe
On Mon, 2020-06-15 at 14:50 +0200, Peter wrote:
> ! An example:
> ! 
> ! - Backup #1 calls "pgpre.sh"
> ! - Backup #1 starts copying files
> ! - Backup #2 calls "pgpre.sh".
> !   This will cancel the first backup.
> ! - Backup #1 completes copying files.
> ! - Backup #1 calls "pgpost.sh".
> !   It will receive an error.
> !   So it has to invalidate the backup.
> ! - Backup #2 completes copying files.
> ! - Backup #2 calls "pgpost.sh".
> !   It gets a "backup_label" file and completes the backup.
> 
> That's not true.

Ah, yes, you are right.
It's a while since I wrote these scripts.

Since "pgpre.sh" and "pgpost.sh" are independent, there
is no way to tell which of them belongs to which other.

So calling "pgpost.sh" indeed ends the most recently started
backup and returns "backup_label" accordingly.

That means: the caller of the scripts has to make sure
not to start a second backup while the first one is running.
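
For reference, the non-exclusive API that the scripts wrap boils down to
something like this (a sketch; the session that runs pg_start_backup has to
stay connected until pg_stop_backup):

SELECT pg_start_backup('some label', false, false);  -- third argument: exclusive = false
-- ... copy the data directory while this session stays open ...
SELECT labelfile, spcmapfile FROM pg_stop_backup(false);
-- the returned "labelfile" is what must be saved into the backup as "backup_label"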

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Something else about Redo Logs disappearing

2020-06-15 Thread Peter
On Mon, Jun 15, 2020 at 11:44:33AM +0200, Laurenz Albe wrote:
! On Sat, 2020-06-13 at 19:48 +0200, Peter wrote:
! > ! >  4. If, by misconfiguration and/or operator error, the backup system
! > ! > happens to start a second backup. in parallel to the first,
! > ! > then do I correctly assume, both backups will be rendered
! > ! > inconsistent while this may not be visible to the operator; and
! > ! > the earlier backup would be flagged as apparently successful while
! > ! > carrying the wrong (later) label?
! > ! 
! > ! If you are using my scripts and start a second backup while the first
! > ! one is still running, the first backup will be interrupted.
! > 
! > This is not what I am asking. It appears correct to me, that, on
! > the database, the first backup will be interrupted. But on the
! > tape side, this might go unnoticed, and on completion it will
! > successfully receive the termination code from the *SECOND*
! > backup - which means that on tape we will have a seemingly
! > successful backup, which
! >  1. is corrupted, and
! >  2. carries a wrong label.
! 
! That will only happen if the backup that uses my scripts does the
! wrong thing.

Yes. Occasionally software does the wrong thing, it's called "bugs".

! An example:
! 
! - Backup #1 calls "pgpre.sh"
! - Backup #1 starts copying files
! - Backup #2 calls "pgpre.sh".
!   This will cancel the first backup.
! - Backup #1 completes copying files.
! - Backup #1 calls "pgpost.sh".
!   It will receive an error.
!   So it has to invalidate the backup.
! - Backup #2 completes copying files.
! - Backup #2 calls "pgpost.sh".
!   It gets a "backup_label" file and completes the backup.

That's not true.


Now let me see how to compile a bash... and here we go:

! An example:
! 
! - Backup #1 calls "pgpre.sh"

> $ ./pgpre.sh
> backup starting location: 1/C858
> $

We now have:
> 24129 10  SJ   0:00.00 /usr/local/bin/bash ./pgpre.sh
> 24130 10  SJ   0:00.00 /usr/local/bin/bash ./pgpre.sh
> 24131 10  SJ   0:00.01 psql -Atq
> 24158 10  SCJ  0:00.00 sleep 5

And:
> postgres=# \d
>  List of relations
>  Schema |  Name  | Type  |  Owner   
> ++---+--
>  public | backup | table | postgres
> (1 row)
>  
> postgres=# select * from backup;
>  id |  state  |  pid  | backup_label | tablespace_map 
> +-+---+--+
>   1 | running | 24132 |  | 
> (1 row)

! - Backup #1 starts copying files

Let's suppose it does now.

! - Backup #2 calls "pgpre.sh".

> $ ./pgpre.sh
> backup starting location: 1/C924
> $ FATAL:  terminating connection due to administrator command
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> connection to server was lost
> Backup failed
> ./pgpre.sh: line 93: ${PSQL[1]}: ambiguous redirect
> 
> $ echo $?
> 0

!   This will cancel the first backup.

Yes, it seems it did:

> 25279 10  SJ   0:00.00 /usr/local/bin/bash ./pgpre.sh
> 25280 10  IWJ  0:00.00 /usr/local/bin/bash ./pgpre.sh
> 25281 10  SJ   0:00.01 psql -Atq
> 25402 10  SCJ  0:00.00 sleep 5

> postgres=# \d
>  List of relations
>  Schema |  Name  | Type  |  Owner   
> ++---+--
>  public | backup | table | postgres
> (1 row)
> 
> postgres=# select * from backup;
>  id |  state  |  pid  | backup_label | tablespace_map 
> +-+---+--+
>   1 | running | 25282 |  | 
> (1 row)

! - Backup #1 completes copying files.
! - Backup #1 calls "pgpost.sh".

> $ ./pgpost.sh 
> START WAL LOCATION: 1/C924 (file 0001000100C9)
> CHECKPOINT LOCATION: 1/C958
> BACKUP METHOD: streamed
> BACKUP FROM: master
> START TIME: 2020-06-15 14:09:41 CEST
> LABEL: 2020-06-15 14:09:40
> START TIMELINE: 1
>
> $ echo $?
> 0

!   It will receive an error.
!   So it has to invalidate the backup.

Where is the error?

What we now have is this:
No processes anymore.

>  id |  state   |  pid  |  backup_label
>   | tablespace_map 
> +--+---++
>   1 | complete | 25282 | START WAL LOCATION: 1/C924 (file 
> 0001000100C9)+| 
> |  |   | CHECKPOINT LOCATION: 1/C958  
>  +| 
> |  |   | BACKUP METHOD: streamed  
>  +| 
> |  |   | BACKUP FROM: master  
>  +| 
> |  |   | START TIME: 2020-06-15 14:09:41 CEST 
>  +| 
> |  |   | LABEL: 2020-06-15 14:09:40   
>  +| 
> |  |   | START TIMELINE: 1
>  +| 
> |  |   |

SV: pg_service.conf and client support

2020-06-15 Thread Niels Jespersen


-----Original Message-----
From: Laurenz Albe  
Sent: June 15, 2020 11:24
To: Niels Jespersen ; pgsql-gene...@postgresql.org
Subject: Re: pg_service.conf and client support

On Sun, 2020-06-14 at 05:03 +, Niels Jespersen wrote:
> I just found out about the pg service file. 
> https://www.postgresql.org/docs/current/libpq-pgservice.html
> 
> I don’t know why it took me so long finding this. I have been looking for 
> ways to abstract physical details of data location away for clients (much 
> like Oracle’s tnsnames).
> 
> Want to move a cluster to a new host. Sure, move it, edit pg_service.conf and 
> clients will not know the difference.
> 
> It works great for at least psql and psycopg2.
> 
> But I cannot find anything on pg_service.conf and the Postgres ODBC 
> driver and Npgsql for .Net
> 
> I know pg_service.conf support is implemented through libpq and support for 
> drivers not using libpq is not a given thing.
> 
> But I think the need for the abstraction of connection details is a general 
> one.
> 
> Can anyone shed some light on the ubiquity of support for 
> pg_service.conf?

You got it right: pg_service.conf is only used by libpq, so all clients that 
use libpq automatically support it.  Other clients don't support it.

For your examples that means:
- The PostgreSQL ODBC driver can use pg_service.conf
- NpgSQL cannot use the file.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com

Thank you Laurenz

You mention that the PostgreSQL ODBC driver can use pg_service.conf. But I 
cannot find any examples of how to construct a connection string that PostgreSQL 
ODBC will accept? 

Regards Niels





Re: TOAST table size in bytes growing despite working autovacuum

2020-06-15 Thread Kristjan Mustkivi
On Mon, Jun 15, 2020 at 12:17 PM Laurenz Albe  wrote:
>
> On Mon, 2020-06-15 at 11:51 +0300, Kristjan Mustkivi wrote:
> > I have a table which contains a "json" column and it gets heavily
> > updated. Before introducing toast.autovacuum_vacuum_scale_factor=0.05
> > and toast.autovacuum_vacuum_cost_limit=1000 this table bloated to
> > nearly 1TB in a short while. Now the n_dead_tup value is nicely under
> > control but still, the table is slowly growing in size but not in
> > rows. The odd thing is that the value of n_live_tup in the TOAST is
> > twice of that in the main table. I know it is a statistical value, but
> > this does not feel right.
> >
> > Why is that? What to do to make it stop growing?
>
> It is not surprising if there are more entries in the TOAST table than
> in the base table:  a big value will be split in several chunks,
> each of which is an entry in the TOAST table.
>
> To see if the TOAST table is bloated, use pgstattuple:
>
> SELECT * FROM pgstattuple('pg_toast.pg_toast_293406');
>
> Vacuum does not remove existing bloat, it just prevents increased bloat.

Thank you Laurenz,

So the TOAST table entries exceeding the base table entries are due to
that the toasted value is split and each chunk is considered as a
separate entry - good to know!

Still, pgstattuple reveals that the table size is 715MB while live
tuple len is just 39MB and 94% of the table is vacant. I do not have
much experience in interpreting this but it would seem that it is
still getting bloated. Should the autovacuum be made even more
aggressive? E.g toast.autovacuum_vacuum_scale_factor=0.01 instead of
0.05 and tweaked further when necessary until the size stabilizes
(more precisely pgstattuple will reflect the bloat to be under
control):

SELECT * FROM pgstattuple('pg_toast.pg_toast_293406');
─[ RECORD 1 ]──┬──
table_len  │ 715776000
tuple_count│ 25545
tuple_len  │ 39241366
tuple_percent  │ 5.48
dead_tuple_count   │ 1116
dead_tuple_len │ 1930508
dead_tuple_percent │ 0.27
free_space │ 669701052
free_percent   │ 93.56

With my best,
-- 
Kristjan Mustkivi




Re: Something else about Redo Logs disappearing

2020-06-15 Thread Laurenz Albe
On Sat, 2020-06-13 at 19:48 +0200, Peter wrote:
> ! >  4. If, by misconfiguration and/or operator error, the backup system
> ! > happens to start a second backup. in parallel to the first,
> ! > then do I correctly assume, both backups will be rendered
> ! > inconsistent while this may not be visible to the operator; and
> ! > the earlier backup would be flagged as apparently successful while
> ! > carrying the wrong (later) label?
> ! 
> ! If you are using my scripts and start a second backup while the first
> ! one is still running, the first backup will be interrupted.
> 
> This is not what I am asking. It appears correct to me, that, on
> the database, the first backup will be interrupted. But on the
> tape side, this might go unnoticed, and on completion it will
> successfully receive the termination code from the *SECOND*
> backup - which means that on tape we will have a seemingly
> successful backup, which
>  1. is corrupted, and
>  2. carries a wrong label.

That will only happen if the backup that uses my scripts does the
wrong thing.

An example:

- Backup #1 calls "pgpre.sh"
- Backup #1 starts copying files
- Backup #2 calls "pgpre.sh".
  This will cancel the first backup.
- Backup #1 completes copying files.
- Backup #1 calls "pgpost.sh".
  It will receive an error.
  So it has to invalidate the backup.
- Backup #2 completes copying files.
- Backup #2 calls "pgpost.sh".
  It gets a "backup_label" file and completes the backup.

So the only way that something can go wrong would be if
backup #1 somehow does *not* invalidate the backup.

> ! This is specific to my scripts, PostgreSQL's non-exclusive backup
> ! can perform more than one concurrent backup successfully.
> ! I tried to keep things simple.
> 
> I understand. But the operator may not know that and/or accidentially
> start a second backup while one is still running. And this will then
> result in ...
> 
> ! If you have the wrong "backup_label", you end up with silent data 
> corruption.
> 
>  ... this.

Of course, if you do arbitrary nonsense like restoring a
backup without "backup_label", you will get arbitrary data
corruption.

It is a fundamental principle that, apart from "backup_label", there
is no way to tell a backup from a crashed data directory.
Any backup/recovery software must refuse to use a backup that lacks that file.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Oracle vs. PostgreSQL - a comment

2020-06-15 Thread Laurenz Albe
On Sun, 2020-06-14 at 09:17 -0400, Bruce Momjian wrote:
> On Sat, Jun 13, 2020 at 09:27:25PM -0400, Bruce Momjian wrote:
> > On Sat, Jun 13, 2020 at 05:06:37PM -0500, Ron wrote:
> > > On 6/13/20 1:46 PM, Bruce Momjian wrote:
> > > > On Wed, Jun  3, 2020 at 08:53:45PM +0200, Andreas Joseph Krogh wrote:
> > > > > I agree these are all technical issues, but nevertheless - 
> > > > > "implementation
> > > > > details", which DBAs don't care about. What's important from a DBA's
> > > > > perspective is not whether WAL is cluster-wide or database-wide, but 
> > > > > whether
> > > > > it's possible to manage backups/PITR/restores of individual databases 
> > > > > in a more
> > > > > convenient matter, which other RDBMS-vendors seem to provide.
> > > > > I love PG, have been using it professionally since 6.5, and our 
> > > > > company depends
> > > > > on it, but there are things other RDBMS-vendors do better...
> > > > The bigger issue is that while we _could_ do this, it would add more
> > > > problems and complexity, and ultimately, I think would make the
> > > > software less usable overall and would be a net-negative.  We know of no
> > > > way to do it without a ton of negatives.
> > > 
> > > How do other RDBMSs do it with ease?  (I know it's an architectural issue,
> > > but what's the architectural issue?)
> > 
> > I don't know.
> 
> I don't know the details, but I do know the general issues.  Other
> vendors must have sacrificed architectural simplicity, features,
> reliability, or performance to allow these things.  For example, it
> wouldn't be hard to just make databases another level of container above
> schemas to allow for simple cross-database queries, but we would lose
> the security isolation of databases (connection control. private system
> tables and extensions) to do that.  Having per-database WAL causes loss
> of performance, reliability issues, and architectural complexity.  Those
> problems might be solvable, but you will need to take a hit in one of
> these areas.

One example of what may be difficult:

If you GRANT a permission on a table to a user, you may get an entry in
"pg_catalog.pg_shdepend", which is a global table (it is shared by all
databases).

Now if you want to recover a single database, and you get a WAL entry
for that table, you'd have to "logically decode" that entry to figure
out if it should be applied or not (because it references a certain
database or not).

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com


Re: pg_service.conf and client support

2020-06-15 Thread Laurenz Albe
On Sun, 2020-06-14 at 05:03 +, Niels Jespersen wrote:
> I just found out about the pg service file. 
> https://www.postgresql.org/docs/current/libpq-pgservice.html
> 
> I don’t know why it took me so long finding this. I have been looking for 
> ways to abstract physical details of data location away for clients (much 
> like Oracle’s tnsnames).
> 
> Want to move a cluster to a new host. Sure, move it, edit pg_service.conf and 
> clients will not know the difference.
> 
> It works great for at least psql and psycopg2.
> 
> But I cannot find anything on pg_service.conf and the Postgres ODBC driver 
> and Npgsql for .Net
> 
> I know pg_service.conf support is implemented through libpq and support for 
> drivers not using libpq is not a given thing.
> 
> But I think the need for the abstraction of connection details is a general 
> one.
> 
> Can anyone shed some light on the ubiquity of support for 
> pg_service.conf?

You got it right: pg_service.conf is only used by libpq, so all clients that
use libpq automatically support it.  Other clients don't support it.

For your examples that means:
- The PostgreSQL ODBC driver can use pg_service.conf
- NpgSQL cannot use the file.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: TOAST table size in bytes growing despite working autovacuum

2020-06-15 Thread Laurenz Albe
On Mon, 2020-06-15 at 11:51 +0300, Kristjan Mustkivi wrote:
> Dear all,
> 
> I have a table which contains a "json" column and it gets heavily
> updated. Before introducing toast.autovacuum_vacuum_scale_factor=0.05
> and toast.autovacuum_vacuum_cost_limit=1000 this table bloated to
> nearly 1TB in a short while. Now the n_dead_tup value is nicely under
> control but still, the table is slowly growing in size but not in
> rows. The odd thing is that the value of n_live_tup in the TOAST is
> twice of that in the main table. I know it is a statistical value, but
> this does not feel right.
> 
> Why is that? What to do to make it stop growing?
> 
> select n_live_tup, n_dead_tup, last_autovacuum, autovacuum_count from
> pg_stat_all_tables where relname = 'player_data_states';
> ─[ RECORD 1 ]┬─
> n_live_tup   │ 84730
> n_dead_tup   │ 8336
> last_autovacuum  │ 2020-06-15 08:23:58.88791+00
> autovacuum_count │ 11306
> 
> select n_live_tup, n_dead_tup, last_autovacuum, autovacuum_count from
> pg_stat_all_tables where relname = 'pg_toast_293406';
> ─[ RECORD 1 ]┬──
> n_live_tup   │ 168486
> n_dead_tup   │ 9835
> last_autovacuum  │ 2020-06-15 08:33:22.566087+00
> autovacuum_count │ 41021
> 
> The PG server is 11.7 (Debian 11.7-2.pgdg90+1)
> 
> And the table is
> 
>  Column │   Type│ Nullable │ Storage
> ┼───┼──┼
>  id │ bigint│ not null │ plain
>  cage_player_id │ bigint│ not null │ plain
>  cage_code  │ integer   │ not null │ plain
>  player_data│ json  │  │ extended
>  update_time│ timestamp with tz │ not null │ plain
> Indexes:
> "player_data_states_pk" PRIMARY KEY, btree (id)
> "player_data_states_uk1" UNIQUE CONSTRAINT, btree (cage_player_id,
> cage_code)
> Referenced by:
> TABLE "awards.player_data_state_changes" CONSTRAINT
> "player_data_state_changes_fk1" FOREIGN KEY (player_data_state_id)
> REFERENCES awards.player_data_states(id)
> Publications:
> "awards"
> Options: fillfactor=90, toast.autovacuum_vacuum_scale_factor=0.05,
> toast.autovacuum_vacuum_cost_limit=1000

It is not surprising if there are more entries in the TOAST table than
in the base table:  a big value will be split in several chunks,
each of which is an entry in the TOAST table.

To see if the TOAST table is bloated, use pgstattuple:

SELECT * FROM pgstattuple('pg_toast.pg_toast_293406');
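
pgstattuple ships as an extension; if it is not installed yet, a one-time
setup per database is needed:

CREATE EXTENSION IF NOT EXISTS pgstattuple;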

Vacuum does not remove existing bloat, it just prevents increased bloat.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





TOAST table size in bytes growing despite working autovacuum

2020-06-15 Thread Kristjan Mustkivi
Dear all,

I have a table which contains a "json" column and it gets heavily
updated. Before introducing toast.autovacuum_vacuum_scale_factor=0.05
and toast.autovacuum_vacuum_cost_limit=1000 this table bloated to
nearly 1TB in a short while. Now the n_dead_tup value is nicely under
control but still, the table is slowly growing in size but not in
rows. The odd thing is that the value of n_live_tup in the TOAST is
twice of that in the main table. I know it is a statistical value, but
this does not feel right.

Why is that? What to do to make it stop growing?

select n_live_tup, n_dead_tup, last_autovacuum, autovacuum_count from
pg_stat_all_tables where relname = 'player_data_states';
─[ RECORD 1 ]┬─
n_live_tup   │ 84730
n_dead_tup   │ 8336
last_autovacuum  │ 2020-06-15 08:23:58.88791+00
autovacuum_count │ 11306

select n_live_tup, n_dead_tup, last_autovacuum, autovacuum_count from
pg_stat_all_tables where relname = 'pg_toast_293406';
─[ RECORD 1 ]┬──
n_live_tup   │ 168486
n_dead_tup   │ 9835
last_autovacuum  │ 2020-06-15 08:33:22.566087+00
autovacuum_count │ 41021

The PG server is 11.7 (Debian 11.7-2.pgdg90+1)

And the table is

 Column │   Type│ Nullable │ Storage
┼───┼──┼
 id │ bigint│ not null │ plain
 cage_player_id │ bigint│ not null │ plain
 cage_code  │ integer   │ not null │ plain
 player_data│ json  │  │ extended
 update_time│ timestamp with tz │ not null │ plain
Indexes:
"player_data_states_pk" PRIMARY KEY, btree (id)
"player_data_states_uk1" UNIQUE CONSTRAINT, btree (cage_player_id,
cage_code)
Referenced by:
TABLE "awards.player_data_state_changes" CONSTRAINT
"player_data_state_changes_fk1" FOREIGN KEY (player_data_state_id)
REFERENCES awards.player_data_states(id)
Publications:
"awards"
Options: fillfactor=90, toast.autovacuum_vacuum_scale_factor=0.05,
toast.autovacuum_vacuum_cost_limit=1000

Best regards
-- 
Kristjan Mustkivi


Re: A query in Streaming Replication

2020-06-15 Thread Sreerama Manoj
Hello,

  Can anyone check this and respond please!!

Thanks and regards,
Manoj.


On Sat, Jun 13, 2020, 20:26 Sreerama Manoj 
wrote:

> Hello,
>   I use streaming replication in async mode. When master gets down,
> slave will be promoted using a trigger file. During this process ".partial"
> file will be created and a WAL file with same ID will be created in a
> different time line in slave. When master comes back as slave, it will be
> synced to the current master. This is the normal procedure as far as I
> understood. But in some cases, ".partial" file is not getting created and
> peer DB which comes back as slave is unable to sync when this happens..
> Please suggest if this happens in any scenario and how to overcome this.
>
> Thanks & Regards:
> Manoj.
>
>
>