RE: [GENERAL] Trouble with regexp_matches

2016-11-05 Thread Edson Richter
> -Original Message-
> From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
> Sent: Saturday, November 5, 2016 15:13
> To: Edson Richter ; pgsql-
> gene...@postgresql.org
> Subject: Re: [GENERAL] Trouble with regexp_matches
> 
> On 11/05/2016 10:01 AM, Edson Richter wrote:
> > Dear list,
> >
> >
> >
> > Version string: PostgreSQL 9.4.10 on x86_64-unknown-linux-gnu,
> > compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
> >
> >
> >
> > I'm running the query below, and it is limiting results as if
> > "regexp_matches" were in the where clause.
> >
> > IMHO, this is wrong: when there is no match, it should return null or an
> > empty array – not remove the row from the result set!
> >
> >
> >
> > Is this a side effect of using regexp_matches in the column list?
> >
> > If so, shouldn't this information be BOLD RED FLASHING in the
> > documentation (or is it already, and some kind soul could point me to
> where)?
> 
> https://www.postgresql.org/docs/9.4/static/functions-matching.html


I see - I always believed that this page was about the WHERE clause, or about 
using functions the PostgreSQL way (which is, in turn, an alternative to the 
"from" syntax) - not about the select clause.
But now that you mention it, after re-reading all the information, I can 
understand the implications.

Nevertheless, it would be nice to put a huge warning on the "String functions" 
page about this behavior...

> 
> "It is possible to force regexp_matches() to always return one row by using a
> sub-select; this is particularly useful in a SELECT target list when you want 
> all
> rows returned, even non-matching ones:
> 
> SELECT col1, (SELECT regexp_matches(col2, '(bar)(beque)')) FROM tab; "
> 

Thanks, this worked - as did removing "regexp_matches" and using 
"SUBSTRING(text FROM pattern)" instead.

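For reference, a minimal sketch of that substring-based variant (same table 
and columns as in the query quoted below; substring() is a plain scalar 
function, so a non-matching row yields NULL instead of disappearing):

-- sketch only: substring() returns NULL when the pattern does not match
select codigoocorrencia, datahoraocorrencia, datahoraimportacao, observacao,
       substring(observacao from '\d\d/\d\d/\d\d\d\d')
from batchloteocorrencia
where codigoocorrencia = '091'
and observacao is not null
order by datahoraimportacao DESC;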

I really appreciate your help.


Kind regards,

Edson Richter


> 
> 
> >
> >
> >
> > -- First query (that is limiting results)
> > -
> >
> > select codigoocorrencia, datahoraocorrencia, datahoraimportacao,
> > observacao, regexp_matches(observacao, '\d\d/\d\d/\d\d\d\d')
> >
> > from batchloteocorrencia
> >
> > where codigoocorrencia = '091'
> >
> > and observacao is not null
> >
> > order by datahoraimportacao DESC
> >
> >
> >
> > Total results = 59
> >
> >
> >
> > --Second query (that is not limiting results, as I did
> > expect)-------
> > 
> >
> > select codigoocorrencia, datahoraocorrencia, datahoraimportacao,
> > observacao
> >
> > from batchloteocorrencia
> >
> > where codigoocorrencia = '091'
> >
> > and observacao is not null
> >
> > order by datahoraimportacao DESC
> >
> >
> >
> > Total results = 3826
> >
> >
> >
> >
> >
> > Why is that?
> >
> >
> >
> > Regards,
> >
> >
> >
> > Edson Richter
> >
> 
> 
> --
> Adrian Klaver
> adrian.kla...@aklaver.com




RE: [GENERAL] Trouble with regexp_matches

2016-11-05 Thread Edson Richter
> -Original Message-
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Saturday, November 5, 2016 15:21
> To: Edson Richter 
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Trouble with regexp_matches
> 
> Edson Richter  writes:
> > I'm running the query below, and it is limiting results as if
> "regexp_matches" were in the where clause.
> > IMHO, it is wrong: when there is no match, it should return null or an
> > empty array – not remove the row from the result set!
> 
> Well, no, because regexp_matches() returns a set.  If there's no match,
> there's zero rows in the set.

For me, it is strange behavior - putting something in the select clause 
restricts the results as if it were in a join or where clause.
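
A minimal demonstration of the set-returning behavior (literal values only, 
runnable anywhere):

-- regexp_matches() returns a set; with no match the set is empty,
-- and the enclosing result row vanishes with it:
select 1 as col, regexp_matches('no date here', '\d\d/\d\d/\d\d\d\d');
-- => 0 rows
-- a scalar sub-select converts the empty set into a NULL:
select 1 as col, (select regexp_matches('no date here', '\d\d/\d\d/\d\d\d\d'));
-- => 1 row, NULL in the second column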


> 
> The standard workaround is to use a scalar sub-select, which has the effect
> of converting a zero-row result into a NULL:
> 
> select codigoocorrencia, datahoraocorrencia, datahoraimportacao,
>observacao, (select regexp_matches(observacao, '\d\d/\d\d/\d\d\d\d'))
> from ...
> 
> As of v10 there will be a less confusing solution: use regexp_match() instead.
> 
>   regards, tom lane

OK, for now, I've changed regexp_matches() to "... substring(observacao from 
'\d\d/\d\d/\d\d\d\d')", which does the job gracefully.
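
For the record, a sketch of the v10 form Tom mentions (regexp_match() is an 
ordinary function returning a text array, or NULL on no match, so it never 
drops rows):

-- PostgreSQL 10+ only:
select codigoocorrencia, regexp_match(observacao, '\d\d/\d\d/\d\d\d\d')
from batchloteocorrencia;  -- non-matching rows stay, with a NULL match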

I still believe an alert should be added to the docs page (String functions), 
because this seems confusing and error-prone.


Thanks,

Edson Richter





[GENERAL] Trouble with regexp_matches

2016-11-05 Thread Edson Richter
Dear list,

Version string: PostgreSQL 9.4.10 on x86_64-unknown-linux-gnu, compiled by 
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit

I'm running the query below, and it is limiting results as if "regexp_matches" 
were in the where clause.
IMHO, this is wrong: when there is no match, it should return null or an empty 
array – not remove the row from the result set!

Is this a side effect of using regexp_matches in the column list?
If so, shouldn't this information be BOLD RED FLASHING in the documentation 
(or is it already, and some kind soul could point me to where)?

-- First query (the one that limits results) ----------------------------------
select codigoocorrencia, datahoraocorrencia, datahoraimportacao, observacao, 
regexp_matches(observacao, '\d\d/\d\d/\d\d\d\d')
from batchloteocorrencia
where codigoocorrencia = '091'
and observacao is not null
order by datahoraimportacao DESC

Total results = 59

-- Second query (the one that does not limit results, as I expected) ----------
select codigoocorrencia, datahoraocorrencia, datahoraimportacao, observacao
from batchloteocorrencia
where codigoocorrencia = '091'
and observacao is not null
order by datahoraimportacao DESC

Total results = 3826


Why is that?

Regards,

Edson Richter


Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-05 Thread Edson Richter
Devrim Gündüz wrote:

> Hi again,
>
> On Mon, 2016-10-03 at 00:18 +0000, Edson Richter wrote:
> > It is not working, I get the following error:
> >
> > [root@backup1 yum.repos.d]# LANG=C yum install pgadmin4 pgadmin4-web
> > Loaded plugins: ulninfo
> > epel/x86_64/metalink | 2.6 kB  00:00:00
> > ol7_UEKR3 | 1.2 kB  00:00:00
> > ol7_latest | 1.4 kB  00:00:00
> > pgdg96 | 4.1 kB  00:00:00
> > https://download.postgresql.org/pub/repos/yum/testing/9.6/redhat/rhel-7Server
> > -x86_64/repodata/repomd.xml:
> > [Errno 14] HTTPS Error 404 - Not Found
> > Trying other mirror.
>
> Can you please try again with this?
>
> yum --enablerepo pgdg96-updates-testing install pgadmin4 pgadmin4-web
>
> Just tested on:
> $ cat /etc/oracle-release
> Oracle Linux Server release 7.2
>
> If it also works for you, I'll push the updated packages to updates repo.

The command runs without errors! Thanks.

After installing, I'll let you know if there are any additional issues.

Thank you!

Regards,

Edson Richter

>
> Regards,
>
> --
> Devrim GÜNDÜZ
> EnterpriseDB: http://www.enterprisedb.com
> PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
> Twitter: @DevrimGunduz , @DevrimGunduzTR


Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-04 Thread Edson Richter
Hi!


(sorry for the top post, but I'm summarizing the current status, to avoid a 
deep search through the details)


In short:

OS = Oracle Enterprise Linux 7.2 with Oracle Unbreakable Kernel (a.k.a. RHEL 
7.2)

Environment = Linux backup1.simfrete.com 3.8.13-118.11.2.el7uek.x86_64 #2 SMP 
Wed Sep 21 11:23:36 PDT 2016 x86_64 x86_64 x86_64 GNU/Linux

PostgreSQL = PostgreSQL 9.6 final, installed from the official yum repository.

yum.repos with "pgdg96-updates-testing" enabled.

Manually installed all "flask..." packages.


Result of installation process:


[root@backup1 pgsql96]# LANG=C yum install pgadmin4-web
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-psycopg2-debug >= 2.6.2 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-passlib >= 1.6.2 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-html5lib >= 1.0b3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-crypto >= 2.6.1 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-blinker >= 1.3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-click for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Running transaction check
---> Package python-blinker.noarch 0:1.4-1.rhel7 will be installed
---> Package python-click.noarch 0:6.3-1.el7 will be installed
---> Package python-html5lib.noarch 1:0.999-5.el7 will be installed
---> Package python-passlib.noarch 0:1.6.2-2.el7 will be installed
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
---> Package python2-crypto.x86_64 0:2.6.1-9.el7 will be installed
--> Processing Dependency: libtomcrypt.so.0()(64bit) for package: 
python2-crypto-2.6.1-9.el7.x86_64
--> Running transaction check
---> Package libtomcrypt.x86_64 0:1.17-23.el7 will be installed
--> Processing Dependency: libtommath >= 0.42.0 for package: 
libtomcrypt-1.17-23.el7.x86_64
--> Processing Dependency: libtommath.so.0()(64bit) for package: 
libtomcrypt-1.17-23.el7.x86_64
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
--> Running transaction check
---> Package libtommath.x86_64 0:0.42.0-4.el7 will be installed
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
--> Finished Dependency Resolution
Error: Package: python-psycopg2-debug-2.6.2-2.rhel7.x86_64 (pgdg96)
   Requires: libpython2.7_d.so.1.0()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest


If I can help with anything, please let me know.



Regards,


Edson



From: pgsql-general-ow...@postgresql.org  on 
behalf of Edson Richter 
Sent: Monday, October 3, 2016 13:24
To: Devrim Gündüz; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

On 03/10/2016 10:12, Devrim Gündüz wrote:
> Hi Edson,
>
> On Mon, 2016-10-03 at 00:18 +, Edson Richter wrote:
>> https://download.postgresql.org/pub/repos/yum/testing/9.6/redhat/rhel-7Server
>> -x86_64/repodata/repomd.xml:
>> [Errno 14] HTTPS Error 404 - Not Found
>> Trying other mirror.
> Fixed this error, sorry for that.
Thanks! OK, this one is working - but I still get the following error:


[root@backup1 yum.repos.d]# LANG=C yum install pgadmin4-web
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-psycopg2-debug >= 2.6.2 for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-passlib >= 1.6.2 for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-html5lib >= 1.0b3 for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-crypto >= 2.6.1 for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-blinker >= 1.3 for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-click for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Running transaction check
---> Package python-blinker.noarch 0:1.4-1.rhel7 will be installed
---> Package python-click.noarch 0:6.3-1.el7 will be installed
---> Package python-html5lib.noarch 1:0.999-5.el7 will be installed
---> Package

Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-04 Thread Edson Richter
Please ignore this double post; it came from the wrong e-mail address.

The issue is already being discussed in another thread.


[GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-03 Thread Edson Richter
Dear community,


I'm trying to install pgAdmin4 in Oracle EL 7.

I've already installed PostgreSQL 9.6 final on the same server (with EPEL 
enabled, as well as the pgdg 9.6 repos).

Running "yum install pgadmin4-web" I get the following result:


"[root@backup1 yum.repos.d]# LANG=C yum install pgadmin4-web

Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-psycopg2-debug >= 2.6.2 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-passlib >= 1.6.2 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-html5lib >= 1.0b3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-crypto >= 2.6.1 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-blinker >= 1.3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-click for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Running transaction check
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-blinker >= 1.3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
---> Package python-click.noarch 0:6.3-1.el7 will be installed
---> Package python-html5lib.noarch 1:0.999-5.el7 will be installed
---> Package python-passlib.noarch 0:1.6.2-2.el7 will be installed
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
---> Package python2-crypto.x86_64 0:2.6.1-9.el7 will be installed
--> Processing Dependency: libtomcrypt.so.0()(64bit) for package: 
python2-crypto-2.6.1-9.el7.x86_64
--> Running transaction check
---> Package libtomcrypt.x86_64 0:1.17-23.el7 will be installed
--> Processing Dependency: libtommath >= 0.42.0 for package: 
libtomcrypt-1.17-23.el7.x86_64
--> Processing Dependency: libtommath.so.0()(64bit) for package: 
libtomcrypt-1.17-23.el7.x86_64
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-blinker >= 1.3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
--> Running transaction check
---> Package libtommath.x86_64 0:0.42.0-4.el7 will be installed
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-blinker >= 1.3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
--> Finished Dependency Resolution
Error: Package: pgadmin4-web-1.0-1.rhel7.noarch (pgdg96)
   Requires: python-blinker >= 1.3
Error: Package: python-psycopg2-debug-2.6.2-2.rhel7.x86_64 (pgdg96)
   Requires: libpython2.7_d.so.1.0()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest"

(the "LANG=C" at the beginning of the line is to force output in English; 
please ignore it)

Has anyone faced a similar issue, or does anyone know how to correct this error?

Thanks in advance,

Edson Richter



Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-03 Thread Edson Richter
On 03/10/2016 10:12, Devrim Gündüz wrote:
> Hi Edson,
>
> On Mon, 2016-10-03 at 00:18 +0000, Edson Richter wrote:
>> https://download.postgresql.org/pub/repos/yum/testing/9.6/redhat/rhel-7Server
>> -x86_64/repodata/repomd.xml:
>> [Errno 14] HTTPS Error 404 - Not Found
>> Trying other mirror.
> Fixed this error, sorry for that.
Thanks! OK, this one is working - but I still get the following error:


[root@backup1 yum.repos.d]# LANG=C yum install pgadmin4-web
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-psycopg2-debug >= 2.6.2 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-passlib >= 1.6.2 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-html5lib >= 1.0b3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-crypto >= 2.6.1 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-blinker >= 1.3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-click for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Running transaction check
---> Package python-blinker.noarch 0:1.4-1.rhel7 will be installed
---> Package python-click.noarch 0:6.3-1.el7 will be installed
---> Package python-html5lib.noarch 1:0.999-5.el7 will be installed
---> Package python-passlib.noarch 0:1.6.2-2.el7 will be installed
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
---> Package python2-crypto.x86_64 0:2.6.1-9.el7 will be installed
--> Processing Dependency: libtomcrypt.so.0()(64bit) for package: 
python2-crypto-2.6.1-9.el7.x86_64
--> Running transaction check
---> Package libtomcrypt.x86_64 0:1.17-23.el7 will be installed
--> Processing Dependency: libtommath >= 0.42.0 for package: 
libtomcrypt-1.17-23.el7.x86_64
--> Processing Dependency: libtommath.so.0()(64bit) for package: 
libtomcrypt-1.17-23.el7.x86_64
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
--> Running transaction check
---> Package libtommath.x86_64 0:0.42.0-4.el7 will be installed
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
--> Finished Dependency Resolution
Error: Package: python-psycopg2-debug-2.6.2-2.rhel7.x86_64 (pgdg96)
Requires: libpython2.7_d.so.1.0()(64bit)
  You could try using --skip-broken to work around the problem
  You could try running: rpm -Va --nofiles --nodigest

Regards,

Edson

>
>>>> Error: Package: python-psycopg2-debug-2.6.2-2.rhel7.x86_64
>>>> (pgdg96)   Requires: libpython2.7_d.so.1.0()(64bit)
>>>> You could try using --skip-broken to work around the problem
>>>> You could try running: rpm -Va --nofiles --nodigest"
>>> This is also something that I need to test, but it is too late in here now,
>>> will look tomorrow.
> Hmm, AFAICS, RHEL and OEL do not have a python-debug package. I'll take a look.
>
> Regards,
> -- 
> Devrim GÜNDÜZ
> EnterpriseDB: http://www.enterprisedb.com
> PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
> Twitter: @DevrimGunduz , @DevrimGunduzTR




Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-02 Thread Edson Richter

On 02/10/2016 17:04, Devrim Gündüz wrote:
> Hi,
>
> On Sun, 2016-10-02 at 17:52 +0000, Edson Richter wrote:
>> I'm trying to install pgAdmin4 in Oracle EL 7.
>>
>> I've already installed PostgreSQL 9.6 final in same server (have EPEL
>> enabled, as well pgdg 9.6 repos).
>>
>> Running "yum install pgadmin4-web" I get the following result:
>> Error: Package: pgadmin4-web-1.0-1.rhel7.noarch (pgdg96)
>> Requires: python-blinker >= 1.3
> I've tested pgadmin4 RPMs on CentOS 7 so far. It seems that, even though
> Fedora git says something different, blinker does not appear in the RHEL and
> OEL repos.
>
> So, just added blinker to PGDG repo, under testing. If you have the latest
> repository file, you can test this package with:
>
> yum --enablerepo pgdg96-updates-testing install pgadmin4-web
>
> (the package will appear in next hour)

It is not working, I get the following error:

[root@backup1 yum.repos.d]# LANG=C yum install pgadmin4 pgadmin4-web
Loaded plugins: ulninfo
epel/x86_64/metalink | 2.6 kB  00:00:00
ol7_UEKR3 | 1.2 kB  00:00:00
ol7_latest | 1.4 kB  00:00:00
pgdg96 | 4.1 kB  00:00:00
https://download.postgresql.org/pub/repos/yum/testing/9.6/redhat/rhel-7Server-x86_64/repodata/repomd.xml:
 
[Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.


  One of the configured repositories failed (PostgreSQL 9.6 7Server - 
x86_64),
  and yum doesn't have enough cached data to continue. At this point the 
only
  safe thing yum can do is fail. There are a few ways to work "fix" this:

  1. Contact the upstream for the repository and get them to fix the 
problem.

  2. Reconfigure the baseurl/etc. for the repository, to point to a 
working
 upstream. This is most often useful if you are using a newer
 distribution release than is supported by the repository (and the
 packages for the previous distribution release still work).

  3. Disable the repository, so yum won't use it by default. Yum 
will then
 just ignore the repository until you permanently enable it 
again or use
 --enablerepo for temporary usage:

 yum-config-manager --disable pgdg96-updates-testing

  4. Configure the failing repository to be skipped, if it is 
unavailable.
 Note that yum will try to contact the repo. when it runs most 
commands,
 so will have to try and fail each time (and thus. yum will be 
be much
 slower). If it is a very temporary problem though, this is 
often a nice
 compromise:

 yum-config-manager --save 
--setopt=pgdg96-updates-testing.skip_if_unavailable=true

failure: repodata/repomd.xml from pgdg96-updates-testing: [Errno 256] No 
more mirrors to try.
https://download.postgresql.org/pub/repos/yum/testing/9.6/redhat/rhel-7Server-x86_64/repodata/repomd.xml:
 
[Errno 14] HTTPS Error 404 - Not Found


>
>> Error: Package: python-psycopg2-debug-2.6.2-2.rhel7.x86_64
>> (pgdg96)   Requires: libpython2.7_d.so.1.0()(64bit)
>>   You could try using --skip-broken to work around the problem
>>   You could try running: rpm -Va --nofiles --nodigest"
> This is also something that I need to test, but it is too late in here now,
> will look tomorrow.

I have several machines with OEL 7, and I'll be delighted to help debug. I 
can even provide root access to a virtual machine with OEL 7 if you need one.

Regards,

Edson Richter


>
> Regards,
> --
> Devrim GÜNDÜZ
> EnterpriseDB: http://www.enterprisedb.com
> PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
> Twitter: @DevrimGunduz , @DevrimGunduzTR




Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-02 Thread Edson Richter

On 02/10/2016 16:05, Adrian Klaver wrote:
> On 10/02/2016 10:52 AM, Edson Richter wrote:
>> Dear community,
>>
>>
>> I'm trying to install pgAdmin4 in Oracle EL 7.
>>
>> I've already installed PostgreSQL 9.6 final in same server (have EPEL
>> enabled, as well pgdg 9.6 repos).
>>
>> Running "yum install pgadmin4-web" I get the following result:
>>
>>
>> "[root@backup1 yum.repos.d]# LANG=C yum install pgadmin4-web
>>
>> Loaded plugins: ulninfo
>> Resolving Dependencies
>> --> Running transaction check
>> ---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
>> --> Processing Dependency: python-psycopg2-debug >= 2.6.2 for package:
>> pgadmin4-web-1.0-1.rhel7.noarch
>> --> Processing Dependency: python-passlib >= 1.6.2 for package:
>> pgadmin4-web-1.0-1.rhel7.noarch
>> --> Processing Dependency: python-html5lib >= 1.0b3 for package:
>> pgadmin4-web-1.0-1.rhel7.noarch
>> --> Processing Dependency: python-crypto >= 2.6.1 for package:
>> pgadmin4-web-1.0-1.rhel7.noarch
>> --> Processing Dependency: python-blinker >= 1.3 for package:
>> pgadmin4-web-1.0-1.rhel7.noarch
>> --> Processing Dependency: python-click for package:
>> pgadmin4-web-1.0-1.rhel7.noarch
>> --> Running transaction check
>> ---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
>> --> Processing Dependency: python-blinker >= 1.3 for package:
>> pgadmin4-web-1.0-1.rhel7.noarch
>> ---> Package python-click.noarch 0:6.3-1.el7 will be installed
>> ---> Package python-html5lib.noarch 1:0.999-5.el7 will be installed
>> ---> Package python-passlib.noarch 0:1.6.2-2.el7 will be installed
>> ---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be
>> installed
>> --> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package:
>> python-psycopg2-debug-2.6.2-2.rhel7.x86_64
>> ---> Package python2-crypto.x86_64 0:2.6.1-9.el7 will be installed
>> --> Processing Dependency: libtomcrypt.so.0()(64bit) for package:
>> python2-crypto-2.6.1-9.el7.x86_64
>> --> Running transaction check
>> ---> Package libtomcrypt.x86_64 0:1.17-23.el7 will be installed
>> --> Processing Dependency: libtommath >= 0.42.0 for package:
>> libtomcrypt-1.17-23.el7.x86_64
>> --> Processing Dependency: libtommath.so.0()(64bit) for package:
>> libtomcrypt-1.17-23.el7.x86_64
>> ---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
>> --> Processing Dependency: python-blinker >= 1.3 for package:
>> pgadmin4-web-1.0-1.rhel7.noarch
>> ---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be
>> installed
>> --> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package:
>> python-psycopg2-debug-2.6.2-2.rhel7.x86_64
>> --> Running transaction check
>> ---> Package libtommath.x86_64 0:0.42.0-4.el7 will be installed
>> ---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
>> --> Processing Dependency: python-blinker >= 1.3 for package:
>> pgadmin4-web-1.0-1.rhel7.noarch
>> ---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be
>> installed
>> --> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package:
>> python-psycopg2-debug-2.6.2-2.rhel7.x86_64
>> --> Finished Dependency Resolution
>> Error: Package: pgadmin4-web-1.0-1.rhel7.noarch (pgdg96)
>>Requires: python-blinker >= 1.3
>> Error: Package: python-psycopg2-debug-2.6.2-2.rhel7.x86_64 (pgdg96)
>>Requires: libpython2.7_d.so.1.0()(64bit)
>>  You could try using --skip-broken to work around the problem
>>  You could try running: rpm -Va --nofiles --nodigest"
>>
>> (the "LANG=C" at the beginning of the line is to force output in English,
>> please ignore it)
>>
>> Has anyone faced a similar issue, or does anyone know how to correct this error?
>
> First what version of Python is your system default?

python.x86_64 2.7.5-39.0.1.el7_2 @ol7_latest


>
> Second there seems to be some unfilled dependencies:
>
> python-blinker >= 1.3

[root@backup1 yum.repos.d]# yum search python-blinker
Loaded plugins: ulninfo
=== Matched: python-blinker ===

python-flask-principal.noarch : Identity management for Flask applications


>
> libpython2.7_d.so.1.0()(64bit)
>
> At a guess you can try to YUM install python-blinker for the first.
>

Not sure if I did it right, but python-flask-principal.noarch is already 
installed - could this be a dependency problem? Is the package not where it 
is supposed to be?

>
> For the second I am thinking you need something along the lines of
> python-devel installed.
>

That was my first thought:

[root@backup1 yum.repos.d]# LANG=C yum install python-devel
Loaded plugins: ulninfo
Package python-devel-2.7.5-39.0.1.el7_2.x86_64 already installed and 
latest version
Nothing to do


So it is already installed.

Regards,

Edson Richter


>
>>
>> Thanks in advance,
>>
>> Edson Richter
>>
>
>





[GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-02 Thread Edson Richter
Dear community,


I'm trying to install pgAdmin4 in Oracle EL 7.

I've already installed PostgreSQL 9.6 final on the same server (with EPEL 
enabled, as well as the pgdg 9.6 repos).

Running "yum install pgadmin4-web" I get the following result:


"[root@backup1 yum.repos.d]# LANG=C yum install pgadmin4-web

Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-psycopg2-debug >= 2.6.2 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-passlib >= 1.6.2 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-html5lib >= 1.0b3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-crypto >= 2.6.1 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-blinker >= 1.3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-click for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Running transaction check
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-blinker >= 1.3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
---> Package python-click.noarch 0:6.3-1.el7 will be installed
---> Package python-html5lib.noarch 1:0.999-5.el7 will be installed
---> Package python-passlib.noarch 0:1.6.2-2.el7 will be installed
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
---> Package python2-crypto.x86_64 0:2.6.1-9.el7 will be installed
--> Processing Dependency: libtomcrypt.so.0()(64bit) for package: 
python2-crypto-2.6.1-9.el7.x86_64
--> Running transaction check
---> Package libtomcrypt.x86_64 0:1.17-23.el7 will be installed
--> Processing Dependency: libtommath >= 0.42.0 for package: 
libtomcrypt-1.17-23.el7.x86_64
--> Processing Dependency: libtommath.so.0()(64bit) for package: 
libtomcrypt-1.17-23.el7.x86_64
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-blinker >= 1.3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
--> Running transaction check
---> Package libtommath.x86_64 0:0.42.0-4.el7 will be installed
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-blinker >= 1.3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
--> Finished Dependency Resolution
Error: Package: pgadmin4-web-1.0-1.rhel7.noarch (pgdg96)
   Requires: python-blinker >= 1.3
Error: Package: python-psycopg2-debug-2.6.2-2.rhel7.x86_64 (pgdg96)
   Requires: libpython2.7_d.so.1.0()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest"

(the "LANG=C" at the beginning of the line is to force output in English; 
please ignore it)

Has anyone faced a similar issue, or does anyone know how to correct this error?

Thanks in advance,

Edson Richter



Re: [GENERAL] IDE for function/stored proc development.

2016-09-06 Thread Edson Richter
I do use Dbwrench, but it is pretty basic - no fancy procedure/function 
development support.

Sent from my Sony Xperia™ smartphone

Pavel Stehule wrote:

Hi

2016-09-03 11:36 GMT+02:00 Tim Uckun :

> Does anybody use an IDE for doing heavy duty stored proc development?
> PGadmin is decent but I am looking for something better.
>
> I have tried jetbrains with the db browser plugin and on the surface it
> seems like a good choice but it's really buggy when working with procs.
>
> I also tried datagrip by jetbrains and that too seems to be all over the
> place. It has some amazing features for working with the database but some
> of the simplest stuff is lacking or half baked.
>
> I looked at atom and could not find any useful plugins for PG.
>
> Anybody have experience with something awesome?
>

I am using Emacs - but any editor should be fine. There is one rule -
edit the file first, and import it into the database as the next step. PGadmin
is a pretty bad tool for maintaining stored procedures.

Regards

Pavel


Re: [GENERAL] IDE for function/stored proc development.

2016-09-06 Thread Edson Richter


Martijn Tonies (Upscene Productions) wrote:

> Good morning,
> 
>  >I looked at your purchase, and did not see any Postgres version. Am I 
> missing (/misunderstanding) something here?
> 
>  It’s not yet available, please wait until the end of the week 
> 
>  That being said, the pricing will be the same as for MySQL.
> 
>  With regards,
> 
> Martijn Tonies
> Upscene Productions
> http://www.upscene.com


It would be nice to have a table listing all features (in rows) and databases 
(in columns), with each intersection showing a "since date ...", "from date ..." 
or "planned" status...

You could split the table by bundle (basic/enterprise), so you help your 
visitors better understand your offer... 

Just my 2c. 

Sorry for being off topic... 

Edson Richter 

Re: [GENERAL] Proposal "stack trace" like debugging option in PostgreSQL

2016-07-31 Thread Edson Richter


> From: t...@sss.pgh.pa.us
> To: edsonrich...@hotmail.com
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Proposal "stack trace" like debugging option in 
> PostgreSQL
> Date: Sun, 31 Jul 2016 11:56:11 -0400
> 
> Edson Richter  writes:
> > But in production this is not possible, and I would like to propose a
> > feature that has less impact on production than a debug extension: a
> > stack trace of calls.
> 
> > Similar to Java stack traces, but disabled by default. When enabled, in
> > case of an event like "duplicate key" (or a function raising an exception)
> > or other similar problems that won't allow the database to execute the SQL
> > command, the stack trace will bring the complete list of function calls.
> 
> Uh, doesn't the CONTEXT field of error messages give you that already?
> 
>   regards, tom lane

Would you give me an example of where I can get the info you mention above? Do 
I need to enable some kind of parameter to get this context field?
I usually receive an error saying something about the duplicate key (or FK 
violation, or check constraint), but not the function (trigger) call chain 
that led to that error.
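
For illustration, a minimal sketch of what the CONTEXT field shows (no 
parameter needed - psql prints it by default; the function names here are 
invented, and the line numbers are approximate):

create function inner_fn() returns void language plpgsql as $$
begin
  raise exception 'boom';   -- stands in for a duplicate key / FK failure
end $$;

create function outer_fn() returns void language plpgsql as $$
begin
  perform inner_fn();       -- one level of function/trigger nesting
end $$;

select outer_fn();
-- ERROR:  boom
-- CONTEXT:  PL/pgSQL function inner_fn() line 3 at RAISE
--   SQL statement "SELECT inner_fn()"
--   PL/pgSQL function outer_fn() line 3 at PERFORM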
Regards,
Edson Richter
  

Re: [GENERAL] Proposal "stack trace" like debugging option in PostgreSQL

2016-07-31 Thread Edson Richter


> Subject: Re: [GENERAL] Proposal "stack trace" like debugging option in 
> PostgreSQL
> To: edsonrich...@hotmail.com; pgsql-general@postgresql.org
> From: adrian.kla...@aklaver.com
> Date: Sat, 30 Jul 2016 15:26:42 -0700
> 
> On 07/30/2016 10:52 AM, Edson Richter wrote:
> > Dear community,
> >
> >
> > Sorry if this is not the right place for proposing new features. Also,
> > sorry if I'm proposing something already existing.
> >
> > I do currently use the "debug" extension to better understand the
> > entropy of my application regarding the database.
> 
> Can you be more specific about what you mean by debug extension?
> 

pgAdmin III debugger:

shared_preload_libraries = 'plugin_debugger'    # (change requires restart)

> It might help provide folks with an idea of what you are looking for.
>
> > But in production this is not possible, and I would like to propose a
> > feature that has less impact on production than a debug extension: a
> > stack trace of calls.
> >
> > Similar to Java stack traces, but disabled by default. When enabled, in
> > case of an event like "duplicate key" (or a function raising an exception)
> > or other similar problems that won't allow the database to execute the
> > SQL command, the stack trace will bring the complete list of function
> > calls.
>
> Have you tried cranking up the log level:
>
> https://www.postgresql.org/docs/9.5/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN
>
> to one of the debug levels. Though that will result in a lot of log output
> if you leave it on for any length of time.

Yes - I get the statements, but in case of an exception, I don't get the 
stack trace showing how deep my function call is, or in which order the calls 
took place. One update, insert, or delete can fire a trigger that cascades 
many levels down until the problem is found.

And, in general, it is a bad idea (IMHO) to enable higher levels of logging 
in production systems when nothing is wrong. The stack trace is helpful when 
you don't have the logging, but something unexpected happens and you need 
more details. Also, it is unlikely you will be able to reproduce the sequence 
of the error without that additional information.

Edson Richter

> > This would help to track down problems that escaped the development and
> > test environments, and reached the production systems.
> >
> > If this feature already exists, please kindly point me to the docs. If
> > not, please consider adding this in a future release.
> >
> > Thanks,
>
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com

[GENERAL] Proposal "stack trace" like debugging option in PostgreSQL

2016-07-30 Thread Edson Richter

Dear community,


Sorry if this is not the right place for proposing new features. Also, 
sorry if I'm proposing something already existing.


I do currently use the "debug" extension to better understand the 
entropy of my application regarding the database.


But in production this is not possible, and I would like to propose a feature 
that has less impact on production than a debug extension: a 
stack trace of calls.


Similar to Java stack traces, but disabled by default. When enabled, in 
case of an event like "duplicate key" (or a function raising an exception) or 
other similar problems that won't allow the database to execute the SQL 
command, the stack trace will bring the complete list of function calls.


This would help to track down problems that escaped the development and 
test environments, and reached the production systems.



If this feature already exists, please kindly point me to the docs. If 
not, please consider adding this in a future release.



Thanks,


--
Kind regards,

Edson Carlos Ericksson Richter





Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-28 Thread Edson Richter

On 28/07/2016 13:07, Chris Travers wrote:



On Thu, Jul 28, 2016 at 3:38 PM, Scott Marlowe wrote:


On Wed, Jul 27, 2016 at 9:51 AM, Geoff Winkless wrote:
> On 27 July 2016 at 15:22, Scott Mead wrote:
>>
>>  "The bug we ran into only affected certain releases of
Postgres 9.2 and
>> has been fixed for a long time now. However, we still find it
worrisome that
>> this class of bug can happen at all. A new version of Postgres
could be
>> released at any time that has a bug of this nature, and because
of the way
>> replication works, this issue has the potential to spread into
all of the
>> databases in a replication hierarchy."
>>
>>
>> ISTM that they needed a tire swing and were using a dump
truck.  Hopefully
>> they vectored somewhere in the middle and got themselves a nice
sandbox.
>
>
> At least his bug got fixed. The last 2 bugs I reported to MySQL
resulted in
> an initial refusal to accept any problem existed, followed by
(once that
> particular strategy had run out of steam) the developer simply
ignoring the
> bug until it was closed automatically by their bug system. As
far as I'm
> aware those bugs still exist in the most recent version.

Best / worst MySQL bug was one introduced and fixed twice. Someone put
in a shortcut that sped up order by by quite a bit. It also meant
that order by desc would actually get order by asc output. It was
inserted into the code due to poor oversight / code review practices,
then fixed about 9 months later, then introduced again, and again
took about a year to fix.

The fact that it was introduced into a General Release mid stream with
no testing or real reviews speaks volumes about MySQL and its
developers. The fact that it took months to years to fix each time
does as well.



As for MySQL issues, personally I love the fact that a single query 
inserting a bunch of rows can sometimes deadlock against itself.  And 
I love the fact that this is obliquely documented as expected 
behavior.  May I mention I am *really glad* PostgreSQL doesn't go the 
whole multi-threaded backend route and that this is exhibit A as to 
why (I am sure it is a thread race issue between index and table 
updates)?


Sorry, I think this is a biased view. Multi-threading will show as 
many problems as multi-process - both have to handle simultaneous access 
(or, at least, the right semaphore implementation to serialize writes and 
synchronize reads).
The fact is, on this point at least, that Postgres is correctly 
implemented, and MySQL is faulty.
I've faced the "lost FK integrity hell" (caused by the problem above) 
with MySQL long before I decided to migrate all systems to PostgreSQL.
My personal experience is that MySQL is excellent for data that is not 
sensitive (web site, e-mail settings, etc.). Everything else goes to 
PostgreSQL (or Oracle, or MS SQL Server, or Sybase, or DB2 - in *my* 
order of preference).



Regards,

Edson Richter



Re: [GENERAL] Index seems "lost" after consecutive deletes

2016-06-15 Thread Edson Richter

On 14/06/2016 12:02, Edson Richter wrote:

On 14/06/2016 10:32, David G. Johnston wrote:
On Tue, Jun 14, 2016 at 12:51 AM, Edson Richter 
wrote:


On 14/06/2016 01:33, David G. Johnston wrote:

On Monday, June 13, 2016, Edson Richter wrote:

On 13/06/2016 23:36, Edson Richter wrote:

On 13/06/2016 23:18, rob stone wrote:

On Mon, 2016-06-13 at 22:41 -0300, Edson Richter wrote:

On 13/06/2016 22:33, Edson Richter wrote:

I've a table "A" with 4,000,000 records.

I've decided to delete records from oldest to newest but I can't
delete records that have references in tables "B", "C" or "D".


so, I've


with qry as (

 select A.id

   from A

 where not exists (select 1 from B where B.a_id = A.id)

and not exists (select 1 from C where C.a_id = A.id)

and not exists (select 1 from D where D.a_id = A.id)

and A.creation_date < (now()::date - interval '12 month')

   order by A.id DESC

   limit 2000

)

delete from A where id in (select id from qry);


All three referenced tables have indexes (B.a_id; C.a_id; D.a_id)
in
order to make query faster.

So for the first 2 million rows it worked really well, taking about 1
minute to delete each group of 2000 records.

Then, after a while I just started to get errors like:


ERROR: update or delete in "A" violates foreign key "fk_C_A" in
"C".

DETAIL:  Key (id)=(3240124) is still referenced by table "C".


Seems to me that indexes got lost in the path - the query is
really
specific and no "C" referenced records can be in my deletion.

Has anyone faced a behavior like this?

Am I doing something wrong?



Of course:
Version string: PostgreSQL 9.4.8 on x86_64-unknown-linux-gnu,
compiled
by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
Oracle Linux 7 x64 with all updates. Running on EXT4 file system.
Computer is Dell R420 with mirrored disks, 80GB of RAM (database has
<
40GB in total).

Sorry for not putting the info in the first e-mail.

Edson



What does:-

SELECT COUNT(*) FROM C WHERE C.a_id = 3240124;

return?

Is it a many-to-one or a one-to-one relationship?




SELECT COUNT(*) FROM C WHERE C.a_id = 3240124;
 count
---
 1
(1 row)


A.id is the primary key of table A. Each table has its own
primary key.

Relationship to the other tables is 1-N, with N = {0,1}


A.id -> B.a_id (being B.a_id unique but not enforced by
unique key)

A.id -> C.a_id (being C.a_id unique but not enforced by
unique key)

A.id -> D.a_id (being D.a_id unique but not enforced by
unique key)


Regards,

Edson


Just in case, I've run:

- vacuum full analyze verbose;
- reindex index ix_c_a_id;

Result I get same error. So, I'm inclined to discard that
this is a index error.


Interesting:

with qry as (select A.id
 from A
where creatingdate < (now()::date - interval '12 month')
and not exists (select 1 from B where B.a_id = A.id)
and not exists (select 1 from C where C.a_id = A.id)
and not exists (select 1 from D where D.a_id = A.id)
order by A.id limit 2000)

select * from qry where id = 3240124;


Why do you assume 3240124 is within the first 2000 qualified
records that the CTE is limited to checking?


Because this is the key causing the error on delete.


Wasn't the point, though I do suspect your expectations are 
acceptable in this instance. If you truly want to see if qry 
contains 3240124, you should lose the LIMIT 2000.




At the risk of losing focus: if the records are ordered in the 
query, limiting them will always produce the same result.








Total query runtime: 2.2 secs
0 rows retrieved.


Why does the delete cause an error, but the query doesn't?


Given the error message this answer seems self-evident...


Not self-evident to me: the delete says I'm trying to delete the
record with id = 3240124 and I can't. But the select says the
record with id = 3240124 is not there!


This error:
 ERROR: update or delete in "A" violates foreign key "fk_C_A" in "C".
is impossible to encounter when executing a pure select...


Yes, but it is also impossible to get this error if the record is

Re: [GENERAL] Index seems "lost" after consecutive deletes

2016-06-14 Thread Edson Richter

On 14/06/2016 10:32, David G. Johnston wrote:
On Tue, Jun 14, 2016 at 12:51 AM, Edson Richter wrote:


On 14/06/2016 01:33, David G. Johnston wrote:

On Monday, June 13, 2016, Edson Richter wrote:

On 13/06/2016 23:36, Edson Richter wrote:

On 13/06/2016 23:18, rob stone wrote:

On Mon, 2016-06-13 at 22:41 -0300, Edson Richter wrote:

On 13/06/2016 22:33, Edson Richter wrote:

I've a table "A" with 4,000,000 records.

I've decided to delete records from oldest to newest but I can't
delete records that have references in tables "B", "C" or "D".


so, I've


with qry as (

 select A.id

   from A

 where not exists (select 1 from B where B.a_id = A.id)

and not exists (select 1 from C where C.a_id = A.id)

and not exists (select 1 from D where D.a_id = A.id)

and A.creation_date < (now()::date - interval '12 month')

   order by A.id DESC

   limit 2000

)

delete from A where id in (select id from qry);


All three referenced tables have indexes (B.a_id; C.a_id; D.a_id)
in
order to make query faster.

So for the first 2 million rows it worked really well, taking about 1
minute to delete each group of 2000 records.

Then, after a while I just started to get errors like:


ERROR: update or delete in "A" violates foreign key "fk_C_A" in
"C".

DETAIL:  Key (id)=(3240124) is still referenced by table "C".


Seems to me that indexes got lost in the path - the query is
really
specific and no "C" referenced records can be in my deletion.

Has anyone faced a behavior like this?

Am I doing something wrong?



Of course:
Version string: PostgreSQL 9.4.8 on x86_64-unknown-linux-gnu,
compiled
by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
Oracle Linux 7 x64 with all updates. Running on EXT4 file system.
Computer is Dell R420 with mirrored disks, 80GB of RAM (database has
<
40GB in total).

Sorry for not putting the info in the first e-mail.

Edson



What does:-

SELECT COUNT(*) FROM C WHERE C.a_id = 3240124;

return?

Is it a many-to-one or a one-to-one relationship?




SELECT COUNT(*) FROM C WHERE C.a_id = 3240124;
 count
---
 1
(1 row)


A.id is the primary key of table A. Each table has its own
primary key.

Relationship to the other tables is 1-N, with N = {0,1}


A.id -> B.a_id (being B.a_id unique but not enforced by
unique key)

A.id -> C.a_id (being C.a_id unique but not enforced by
unique key)

A.id -> D.a_id (being D.a_id unique but not enforced by
unique key)


Regards,

Edson


Just in case, I've run:

- vacuum full analyze verbose;
- reindex index ix_c_a_id;

Result I get same error. So, I'm inclined to discard that
this is a index error.


Interesting:

with qry as (select A.id
 from A
where creatingdate < (now()::date - interval '12 month')
and not exists (select 1 from B where B.a_id = A.id)
and not exists (select 1 from C where C.a_id = A.id)
and not exists (select 1 from D where D.a_id = A.id)
order by A.id limit 2000)

select * from qry where id = 3240124;


Why do you assume 3240124 is within the first 2000 qualified
records that the CTE is limited to checking?


Because this is the key causing the error on delete.


Wasn't the point, though I do suspect your expectations are acceptable 
in this instance. If you truly want to see if qry contains 3240124, 
you should lose the LIMIT 2000.




At the risk of losing focus: if the records are ordered in the 
query, limiting them will always produce the same result.








Total query runtime: 2.2 secs
0 rows retrieved.


Why does the delete cause an error, but the query doesn't?


Given the error message this answer seems self-evident...


Not self-evident to me: the delete says I'm trying to delete the
record with id = 3240124 and I can't. But the select says the
record with id = 3240124 is not there!


This error:
 ERROR: update or delete in "A" violates foreign key "fk_C_A" in "C".
is impossible to encounter when executing a pure select...


Yes, but it is also impossible to get this error if the record is not in

Re: [GENERAL] Index seems "lost" after consecutive deletes

2016-06-13 Thread Edson Richter

On 14/06/2016 01:33, David G. Johnston wrote:
On Monday, June 13, 2016, Edson Richter wrote:


On 13/06/2016 23:36, Edson Richter wrote:

On 13/06/2016 23:18, rob stone wrote:

On Mon, 2016-06-13 at 22:41 -0300, Edson Richter wrote:

On 13/06/2016 22:33, Edson Richter wrote:

I've a table "A" with 4,000,000 records.

I've decided to delete records from oldest to newest but I can't
delete records that have references in tables "B", "C" or "D".


so, I've


with qry as (

 select A.id

   from A

 where not exists (select 1 from B where B.a_id = A.id)

and not exists (select 1 from C where C.a_id = A.id)

and not exists (select 1 from D where D.a_id = A.id)

and A.creation_date < (now()::date - interval '12 month')

   order by A.id DESC

   limit 2000

)

delete from A where id in (select id from qry);


All three referenced tables have indexes (B.a_id; C.a_id; D.a_id)
in
order to make query faster.

So for the first 2 million rows it worked really well, taking about 1
minute to delete each group of 2000 records.

Then, after a while I just started to get errors like:


ERROR: update or delete in "A" violates foreign key "fk_C_A" in
"C".

DETAIL:  Key (id)=(3240124) is still referenced by table "C".


Seems to me that indexes got lost in the path - the query is
really
specific and no "C" referenced records can be in my deletion.

Has anyone faced a behavior like this?

Am I doing something wrong?



Of course:
Version string: PostgreSQL 9.4.8 on x86_64-unknown-linux-gnu,
compiled
by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
Oracle Linux 7 x64 with all updates. Running on EXT4 file system.
Computer is Dell R420 with mirrored disks, 80GB of RAM (database has
<
40GB in total).

Sorry for not putting the info in the first e-mail.

Edson



What does:-

SELECT COUNT(*) FROM C WHERE C.a_id = 3240124;

return?

Is it a many-to-one or a one-to-one relationship?




SELECT COUNT(*) FROM C WHERE C.a_id = 3240124;
 count
---
 1
(1 row)


A.id is the primary key of table A. Each table has its own primary key.

Relationship to the other tables is 1-N, with N = {0,1}


A.id -> B.a_id (being B.a_id unique but not enforced by unique key)

A.id -> C.a_id (being C.a_id unique but not enforced by unique key)

A.id -> D.a_id (being D.a_id unique but not enforced by unique key)


Regards,

Edson


Just in case, I've run:

- vacuum full analyze verbose;
- reindex index ix_c_a_id;

Result I get same error. So, I'm inclined to discard that this is
a index error.


Interesting:

with qry as (select A.id
 from A
where creatingdate < (now()::date - interval '12 month')
and not exists (select 1 from B where B.a_id = A.id)
and not exists (select 1 from C where C.a_id = A.id)
and not exists (select 1 from D where D.a_id = A.id)
order by A.id limit 2000)

select * from qry where id = 3240124;


Why do you assume 3240124 is within the first 2000 qualified records 
that the CTE is limited to checking?


Because this is the key causing the error on delete.




Total query runtime: 2.2 secs
0 rows retrieved.


Why does the delete cause an error, but the query doesn't?


Given the error message this answer seems self-evident...


Not self-evident to me: the delete says I'm trying to delete the record with 
id = 3240124 and I can't. But the select says the record with id = 
3240124 is not there!




Could it be a bug when using delete ... where id in (subquery)?

I'm unsure regarding the solution, but I suspect the problem is that 
between selecting the A row and deleting it, another concurrent process 
added a record to C that, if you were to re-run the select, would cause 
the row from A to be skipped. But the single query doesn't have that 
option, so it ends up failing.


There is a FOR UPDATE clause you can add to the select but I don't 
think that works here since table C is the one being altered and at 
the time of the query there is nothing to lock.


I doubt this is a bug - more likely a misunderstanding of concurrency. 
Sorry I cannot enlighten further at the moment.
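
For completeness, a defensive sketch for this batch delete (same tables and 
filter as in the thread; the plpgsql wrapper and the NOTICE wording are my 
own): catching the FK violation lets a batch that races with a concurrent 
insert be skipped and retried, instead of aborting the whole job:

-- sketch only: one 2000-row batch, tolerant of concurrent references
do $$
begin
  with qry as (
    select A.id
      from A
     where not exists (select 1 from B where B.a_id = A.id)
       and not exists (select 1 from C where C.a_id = A.id)
       and not exists (select 1 from D where D.a_id = A.id)
       and A.creation_date < (now()::date - interval '12 month')
     order by A.id desc
     limit 2000
  )
  delete from A where id in (select id from qry);
exception when foreign_key_violation then
  raise notice 'batch skipped: %', sqlerrm;
end $$;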


There is no other process adding/updating records - and even if there were, 
the ID would be so high (because it is a sequence) that it could not be 
3240124 (this record was generated more than 2 years ago - that's why my 
query has the "creationdate" filter - I don't touch records created within 
the last 12 months).


Also, I've tried the same with "for update": same error!


Edson


David J.





Re: [GENERAL] Index seems "lost" after consecutive deletes

2016-06-13 Thread Edson Richter

On 13/06/2016 23:36, Edson Richter wrote:

On 13/06/2016 23:18, rob stone wrote:

On Mon, 2016-06-13 at 22:41 -0300, Edson Richter wrote:

On 13/06/2016 22:33, Edson Richter wrote:

I've a table "A" with 4,000,000 records.

I've decided to delete records from oldest to newest but I can't
delete records that have references in tables "B", "C" or "D".


so, I've


with qry as (

 select A.id

   from A

 where not exists (select 1 from B where B.a_id = A.id)

and not exists (select 1 from C where C.a_id = A.id)

and not exists (select 1 from D where D.a_id = A.id)

and A.creation_date < (now()::date - interval '12 month')

   order by A.id DESC

   limit 2000

)

delete from A where id in (select id from qry);


All three referenced tables have indexes (B.a_id; C.a_id; D.a_id)
in
order to make query faster.

So for the first 2 million rows it worked really well, taking about 1
minute to delete each group of 2000 records.

Then, after a while I just started to get errors like:


ERROR: update or delete in "A" violates foreign key "fk_C_A" in
"C".

DETAIL:  Key (id)=(3240124) is still referenced by table "C".


Seems to me that indexes got lost in the path - the query is
really
specific and no "C" referenced records can be in my deletion.

Has anyone faced a behavior like this?

Am I doing something wrong?



Of course:
Version string: PostgreSQL 9.4.8 on x86_64-unknown-linux-gnu,
compiled
by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
Oracle Linux 7 x64 with all updates. Running on EXT4 file system.
Computer is Dell R420 with mirrored disks, 80GB of RAM (database has
<
40GB in total).

Sorry for not putting the info in the first e-mail.

Edson



What does:-

SELECT COUNT(*) FROM C WHERE C.a_id = 3240124;

return?

Is it a many-to-one or a one-to-one relationship?




SELECT COUNT(*) FROM C WHERE C.a_id = 3240124;
 count
---
 1
(1 row)


A.id is the primary key of table A. Each table has its own primary key.

Relationship to the other tables is 1-N, with N = {0,1}


A.id -> B.a_id (being B.a_id unique but not enforced by unique key)

A.id -> C.a_id (being C.a_id unique but not enforced by unique key)

A.id -> D.a_id (being D.a_id unique but not enforced by unique key)


Regards,

Edson


Just in case, I've run:

- vacuum full analyze verbose;
- reindex index ix_c_a_id;

Result: I get the same error. So I'm inclined to rule out an index error.



Interesting:

with qry as (select A.id
               from A
              where creation_date < (now()::date - interval '12 month')
                and not exists (select 1 from B where B.a_id = A.id)
                and not exists (select 1 from C where C.a_id = A.id)
                and not exists (select 1 from D where D.a_id = A.id)
              order by A.id limit 2000)

select * from qry where id = 3240124;

Total query runtime: 2.2 secs
0 rows retrieved.


Why does the delete cause an error, but the query doesn't?
Could it be a bug in delete ... where id in (subquery)?

Regards,

Edson


Re: [GENERAL] Index seems "lost" after consecutive deletes

2016-06-13 Thread Edson Richter

Em 13/06/2016 23:18, rob stone escreveu:

On Mon, 2016-06-13 at 22:41 -0300, Edson Richter wrote:

Em 13/06/2016 22:33, Edson Richter escreveu:

I've a table "A" with 4,000,000 records.

I've decided to delete records from oldest to newest but I can't
delete records that have references in tables "B", "C" or "D".


So I wrote:


with qry as (
    select A.id
      from A
     where not exists (select 1 from B where B.a_id = A.id)
       and not exists (select 1 from C where C.a_id = A.id)
       and not exists (select 1 from D where D.a_id = A.id)
       and A.creation_date < (now()::date - interval '12 month')
     order by A.id DESC
     limit 2000
)
delete from A where id in (select id from qry);


All three referenced tables have indexes (B.a_id; C.a_id; D.a_id) in order to make the query faster.

So for the first 2 million rows it worked really well, taking about 1 minute to delete each group of 2000 records.

Then, after a while I just started to get errors like:


ERROR: update or delete in "A" violates foreign key "fk_C_A" in
"C".

DETAIL:  Key (id)=(3240124) is still referenced by table "C".


It seems to me that the indexes got lost along the way - the query is really specific, and no records referenced by "C" can be in my deletion set.

Has anyone faced a behavior like this?

Am I doing something wrong?



Of course:
Version string: PostgreSQL 9.4.8 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
Oracle Linux 7 x64 with all updates. Running on EXT4 file system.
Computer is a Dell R420 with mirrored disks, 80GB of RAM (database has < 40GB in total).

Sorry for not putting the info in the first e-mail.

Edson



What does:-

SELECT COUNT(*) FROM C WHERE C.a_id = 3240124;

return?

Is it a many-to-one or a one-to-one relationship?




SELECT COUNT(*) FROM C WHERE C.a_id = 3240124;

 count
-------
     1
(1 registro)


A.id is the primary key of table A. Each table has its own primary key.

The relationship to the other tables is 1-N, where N = {0,1}:

A.id -> B.a_id (B.a_id is unique, but not enforced by a unique key)

A.id -> C.a_id (C.a_id is unique, but not enforced by a unique key)

A.id -> D.a_id (D.a_id is unique, but not enforced by a unique key)


Regards,

Edson


Re: [GENERAL] Index seems "lost" after consecutive deletes

2016-06-13 Thread Edson Richter


Em 13/06/2016 22:33, Edson Richter escreveu:

I've a table "A" with 4,000,000 records.

I've decided to delete records from oldest to newest but I can't 
delete records that have references in tables "B", "C" or "D".



So I wrote:


with qry as (
    select A.id
      from A
     where not exists (select 1 from B where B.a_id = A.id)
       and not exists (select 1 from C where C.a_id = A.id)
       and not exists (select 1 from D where D.a_id = A.id)
       and A.creation_date < (now()::date - interval '12 month')
     order by A.id DESC
     limit 2000
)
delete from A where id in (select id from qry);


All three referenced tables have indexes (B.a_id; C.a_id; D.a_id) in order to make the query faster.


So for the first 2 million rows it worked really well, taking about 1 minute to delete each group of 2000 records.


Then, after a while I just started to get errors like:


ERROR: update or delete in "A" violates foreign key "fk_C_A" in "C".

DETAIL:  Key (id)=(3240124) is still referenced by table "C".


It seems to me that the indexes got lost along the way - the query is really specific, and no records referenced by "C" can be in my deletion set.


Has anyone faced a behavior like this?

Am I doing something wrong?




Of course:
Version string: PostgreSQL 9.4.8 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit

Oracle Linux 7 x64 with all updates. Running on EXT4 file system.
Computer is a Dell R420 with mirrored disks, 80GB of RAM (database has < 40GB in total).


Sorry for not putting the info in the first e-mail.

Edson




[GENERAL] Index seems "lost" after consecutive deletes

2016-06-13 Thread Edson Richter

I've a table "A" with 4,000,000 records.

I've decided to delete records from oldest to newest but I can't delete 
records that have references in tables "B", "C" or "D".



So I wrote:


with qry as (
    select A.id
      from A
     where not exists (select 1 from B where B.a_id = A.id)
       and not exists (select 1 from C where C.a_id = A.id)
       and not exists (select 1 from D where D.a_id = A.id)
       and A.creation_date < (now()::date - interval '12 month')
     order by A.id DESC
     limit 2000
)
delete from A where id in (select id from qry);


All three referenced tables have indexes (B.a_id; C.a_id; D.a_id) in order to make the query faster.


So for the first 2 million rows it worked really well, taking about 1 minute to delete each group of 2000 records.


Then, after a while I just started to get errors like:


ERROR: update or delete in "A" violates foreign key "fk_C_A" in "C".

DETAIL:  Key (id)=(3240124) is still referenced by table "C".


It seems to me that the indexes got lost along the way - the query is really specific, and no records referenced by "C" can be in my deletion set.

Has anyone faced a behavior like this?

Am I doing something wrong?


--
Atenciosamente,

Edson Carlos Ericksson Richter





Re: [GENERAL] Columnar store as default for PostgreSQL 10?

2016-05-17 Thread Edson Richter

Em 17/05/2016 11:07, Merlin Moncure escreveu:

On Mon, Apr 25, 2016 at 8:48 PM, Adam Brusselback
 wrote:

It is not difficult to simulate column store in a row store system if
you're willing to decompose your tables into (what is essentially)
BCNF fragments.  It simply is laborious for designers and programmers.

I could see a true column store having much better performance than tricking
a row based system into it.  Just think of the per-row overhead we currently
have at 28 bytes per row.  Breaking up data manually like that may help a
little, but if you don't have a very wide table to begin with, it could turn
out you save next to nothing by doing so.  A column store wouldn't have this
issue, and could potentially have much better performance.

FYI tuple header is 23 bytes, not 28 bytes
(http://www.postgresql.org/docs/9.5/static/storage-page-layout.html).
Personally I think column stores are a bit overrated.  They are faster
at certain things (in some cases much faster) but tend to put pretty
onerous requirements on application design so that they are very much
a special case vehicle.

merlin



+1 (to not change current defaults).

I would tend to avoid columnar store "as default" because this would badly affect hundreds of thousands of applications around the world. Columnar store should have its own niche, but it certainly doesn't fit my needs.

Whether to offer an "option to change the store" is another story.

Since I work with objects on the programming side, and ORM works just so well, it would be a real waste of time (and other resources) to change systems that have been working well for the past 10 or more years.


Just my 2c,

Edson Richter







Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-13 Thread Edson Richter

Em 13/04/2016 11:18, Adrian Klaver escreveu:

On 04/13/2016 06:58 AM, Edson Richter wrote:




Another trouble I've found: I've used "pg_dump" and "pg_restore" to
create the new CustomerTest database in my cluster. Immediately,
replication started to replicate the 60Gb data into slave, causing big
trouble.
Does mark it as "template" avoids replication of that "copied" database?
How can I mark a database to "do not replicate"?


With the Postgres built-in binary replication you can't; it replicates the entire cluster. There are third-party solutions that offer that choice:


http://www.postgresql.org/docs/9.5/interactive/different-replication-solutions.html 



Table 25-1. High Availability, Load Balancing, and Replication Feature 
Matrix


Thanks, I'll look at that.

It has been mentioned before: running a non-production database on the same cluster as the production database is generally not a good idea. Per previous suggestions I would host your CustomerTest database on another instance/cluster of Postgres listening on a different port. Then all your customers have to do is create a connection that points at the new port.


Thanks for the concern.
This "CustomerTest" database is a staging, for customer approval before 
upgrading the production system.
I bet the users will only open the system, and say it is ok. As crowded 
as people are those days, I doubt they will validate something that is 
already validated by our development team.

But our contractor requires it, and we provide it.
Since we have "express delivery" of new versions (almost 2 per week), we would like to automate the staging environment.


Thanks,

Edson





Thanks,

Edson











Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-13 Thread Edson Richter

Em 12/04/2016 12:53, Edson Richter escreveu:


*De: *Adrian Klaver <mailto:adrian.kla...@aklaver.com>
*Enviado:*terça-feira, 12 de abril de 2016 12:04
*Para: *Edson Richter <mailto:edsonrich...@hotmail.com>; 
pgsql-general@postgresql.org <mailto:pgsql-general@postgresql.org>

*Assunto: *Re: [GENERAL] Fastest way to duplicate a quite large database

On 04/12/2016 07:51 AM, Edson Richter wrote:
> Same machine, same cluster - just different database name.

Hmm, running tests against the same cluster you are running the
production database would seem to be a performance hit against the
production database and potentially dangerous should the tests trip a
bug that crashes the server.

>
> Atenciosamente,
>
> Edson Carlos Ericksson Richter
>
> Em 12/04/2016 11:46, John R Pierce escreveu:
>> On 4/12/2016 7:25 AM, Edson Richter wrote:
>>>
>>> I have a database "Customer" with about 60Gb of data.
>>> I know I can backup and restore, but this seems too slow.
>>>
>>> Is there any other option to duplicate this database as
>>> "CustomerTest" as fast as possible (even fastar than backup/restore)
>>> - better if in one operation (something like "copy database A to B")?
>>> I would like to run this everyday, overnight, with minimal impact to
>>> prepare a test environment based on production data.
>>
>>
>> copy to the same machine, or copy to a different test server?
>> different answers.
>>
>>
>>
>
>
>


--
Adrian Klaver
adrian.kla...@aklaver.com

Hi Adrian,

Thanks for your insight. This is not a “test system” in the sense of testing the database server code.


This is a kind of “pre-production evaluation”, the stage where the customer will say “yes” or “no” to publishing a new version of our system into production.


Also, the server has plenty of RAM and processor cores, so I don’t foresee any kind of trouble here.


The risk is lower than running a heavy reporting system on the database server.


The point is that customers want to test the new version of our system as close as possible to the production environment.


Thanks,

Edson




Another trouble I've found: I've used "pg_dump" and "pg_restore" to 
create the new CustomerTest database in my cluster. Immediately, 
replication started to replicate the 60Gb data into slave, causing big 
trouble.

Does mark it as "template" avoids replication of that "copied" database?
How can I mark a database to "do not replicate"?

Thanks,

Edson




RES: [GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread Edson Richter

De: Adrian Klaver
Enviado:terça-feira, 12 de abril de 2016 12:04
Para: Edson Richter; pgsql-general@postgresql.org
Assunto: Re: [GENERAL] Fastest way to duplicate a quite large database

On 04/12/2016 07:51 AM, Edson Richter wrote:
> Same machine, same cluster - just different database name.

Hmm, running tests against the same cluster you are running the 
production database would seem to be a performance hit against the 
production database and potentially dangerous should the tests trip a 
bug that crashes the server.

>
> Atenciosamente,
>
> Edson Carlos Ericksson Richter
>
> Em 12/04/2016 11:46, John R Pierce escreveu:
>> On 4/12/2016 7:25 AM, Edson Richter wrote:
>>>
>>> I have a database "Customer" with about 60Gb of data.
>>> I know I can backup and restore, but this seems too slow.
>>>
>>> Is there any other option to duplicate this database as
>>> "CustomerTest" as fast as possible (even fastar than backup/restore)
>>> - better if in one operation (something like "copy database A to B")?
>>> I would like to run this everyday, overnight, with minimal impact to
>>> prepare a test environment based on production data.
>>
>>
>> copy to the same machine, or copy to a different test server?
>> different answers.
>>
>>
>>
>
>
>


-- 
Adrian Klaver
adrian.kla...@aklaver.com


Hi Adrian,

Thanks for your insight. This is not a “test system” in the sense of testing the database server code.
This is a kind of “pre-production evaluation”, the stage where the customer will say “yes” or “no” to publishing a new version of our system into production.

Also, the server has plenty of RAM and processor cores, so I don’t foresee any kind of trouble here.

The risk is lower than running a heavy reporting system on the database server.

The point is that customers want to test the new version of our system as close as possible to the production environment.

Thanks,

Edson


Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread Edson Richter

Same machine, same cluster - just different database name.

Atenciosamente,

Edson Carlos Ericksson Richter

Em 12/04/2016 11:46, John R Pierce escreveu:

On 4/12/2016 7:25 AM, Edson Richter wrote:


I have a database "Customer" with about 60Gb of data.
I know I can backup and restore, but this seems too slow.

Is there any other option to duplicate this database as 
"CustomerTest" as fast as possible (even fastar than backup/restore) 
- better if in one operation (something like "copy database A to B")?
I would like to run this everyday, overnight, with minimal impact to 
prepare a test environment based on production data. 



copy to the same machine, or copy to a different test server? 
different answers.










[GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread Edson Richter

Hi!

I have a database "Customer" with about 60Gb of data.
I know I can backup and restore, but this seems too slow.

Is there any other option to duplicate this database as "CustomerTest" as fast as possible (even faster than backup/restore) - ideally in one operation (something like "copy database A to B")?
I would like to run this every day, overnight, with minimal impact, to prepare a test environment based on production data.
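For the same-cluster case, one well-known fast path is a template copy - a sketch, which assumes no other sessions are connected to the source database while it runs:

DROP DATABASE IF EXISTS "CustomerTest";
CREATE DATABASE "CustomerTest" TEMPLATE "Customer";

This clones the database at the file level, typically much faster than dump/restore, but it fails if anyone else is connected to "Customer".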


Thanks,

--
Atenciosamente,

Edson Carlos Ericksson Richter





Re: [GENERAL] Best approach for multi-database system

2016-03-10 Thread Edson Richter

Em 10/03/2016 17:18, Andy Colson escreveu:

please don't top post.
> Em 10/03/2016 16:56, Andy Colson escreveu:

On 3/10/2016 1:41 PM, Edson Richter wrote:

Hi!

Using async replication between geographically distributed database servers. I've some 9.3 and some 9.4 servers, so the version doesn't matter (the only reason I've not migrated all to 9.4 yet is because I'm short on time).

I've experienced some synchronization failures between master-slave servers (for example, if the connection is lost for 4 to 6 hours, the number of wal_keep_segments may not suffice).
I've created some bash scripts that recreate the master-slave pair using rsync, which seems to be fairly easy - but really slow. Depending on server size (130Gb, for example) it takes really long on a 5Mbps link - in some cases, all night long.

This is the first approach: all databases in one server; if I lose the sync between master and slave, all data needs to be transferred between master and slave, and that is really slow.

The other approach I've tested is to split each database into its own cluster. Using the second approach, I can resync each cluster separately (for example, 15Gb instead of 130Gb). This approach has the problem that you need to configure and maintain another cluster for each database.

Is there a best practice for multi-database systems with async replication to avoid such "resync" problems?


Thanks for your orientation,



I wrote about my setup here:

http://www.postgresql.org/message-id/548f3954.8040...@squeakycode.net

Would be happy to answer questions.

-Andy









On 3/10/2016 2:14 PM, Edson Richter wrote:
> Interesting. This logic (only remove WAL files after they have been applied)
> should be available in PostgreSQL by default.
> I mean, an option whereby either the master queries the slave for
> completeness every 15 minutes, or the slave notifies the master of the
> last successfully imported WAL.
>
>
> Atenciosamente,
>
> Edson Carlos Ericksson Richter
>


It is.  Starting in 9.4 you can use replication slots.

-Andy
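A minimal sketch of the 9.4 feature mentioned above (the slot name is made up): with a slot, the master retains WAL until the standby has consumed it, instead of relying on a wal_keep_segments guess.

-- On the master (9.4+): create a physical slot for the standby
SELECT pg_create_physical_replication_slot('standby1_slot');

-- On the standby, in recovery.conf:
--   primary_slot_name = 'standby1_slot'

-- Monitor retained WAL: an abandoned slot will keep filling pg_xlog
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;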




Sorry for top-posting. This is the only list where I have this rule, and sometimes I just forget...

I'll learn about replication slots.
Thanks for the tip.

Regards,

Edson Richter





Re: [GENERAL] Best approach for multi-database system

2016-03-10 Thread Edson Richter
Interesting. This logic (only remove WAL files after they have been applied) should be available in PostgreSQL by default.
I mean, an option whereby either the master queries the slave for completeness every 15 minutes, or the slave notifies the master of the last successfully imported WAL.



Atenciosamente,

Edson Carlos Ericksson Richter

Em 10/03/2016 16:56, Andy Colson escreveu:

On 3/10/2016 1:41 PM, Edson Richter wrote:

Hi!

Using async replication between geographically distributed database servers. I've some 9.3 and some 9.4 servers, so the version doesn't matter (the only reason I've not migrated all to 9.4 yet is because I'm short on time).

I've experienced some synchronization failures between master-slave servers (for example, if the connection is lost for 4 to 6 hours, the number of wal_keep_segments may not suffice).
I've created some bash scripts that recreate the master-slave pair using rsync, which seems to be fairly easy - but really slow. Depending on server size (130Gb, for example) it takes really long on a 5Mbps link - in some cases, all night long.

This is the first approach: all databases in one server; if I lose the sync between master and slave, all data needs to be transferred between master and slave, and that is really slow.

The other approach I've tested is to split each database into its own cluster. Using the second approach, I can resync each cluster separately (for example, 15Gb instead of 130Gb). This approach has the problem that you need to configure and maintain another cluster for each database.

Is there a best practice for multi-database systems with async replication to avoid such "resync" problems?


Thanks for your orientation,



I wrote about my setup here:

http://www.postgresql.org/message-id/548f3954.8040...@squeakycode.net

Would be happy to answer questions.

-Andy









[GENERAL] Best approach for multi-database system

2016-03-10 Thread Edson Richter

Hi!

Using async replication between geographically distributed database servers. I've some 9.3 and some 9.4 servers, so the version doesn't matter (the only reason I've not migrated all to 9.4 yet is because I'm short on time).

I've experienced some synchronization failures between master-slave servers (for example, if the connection is lost for 4 to 6 hours, the number of wal_keep_segments may not suffice).
I've created some bash scripts that recreate the master-slave pair using rsync, which seems to be fairly easy - but really slow. Depending on server size (130Gb, for example) it takes really long on a 5Mbps link - in some cases, all night long.


This is the first approach: all databases in one server; if I lose the sync between master and slave, all data needs to be transferred between master and slave, and that is really slow.

The other approach I've tested is to split each database into its own cluster. Using the second approach, I can resync each cluster separately (for example, 15Gb instead of 130Gb). This approach has the problem that you need to configure and maintain another cluster for each database.

Is there a best practice for multi-database systems with async replication to avoid such "resync" problems?



Thanks for your orientation,

--
Atenciosamente,

Edson Carlos Ericksson Richter





Re: [GENERAL] XMLEXISTS on legacy XML with malformed xmlns

2016-02-12 Thread Edson Richter


2016-02-12 1:53 GMT+01:00 Edson Richter <mailto:edsonrich...@hotmail.com>>:


Hi!

I have some records (about 1M) containing legacy XML that I would like to parse and apply XMLEXISTS to.

This is the query:

select * from xmllog
 where xpath_exists(('//MyDocument[@DocNum = ''000411828'']'::text),
                    xmlparse(document cdataout));

This is the error:

ERRO: could not parse XML document
SQL state: 2200M
Detail: line 2: xmlns:leg: 'LEGACYAPP - SEND MSG EVENTS ABOUT' is not a valid URI
^

This is the sample XML with the malformed xmlns (I've shortened the data, but the important thing here is the malformed xmlns):

"

  

  0
  PostgreSQL uses libxml2, but the usage isn't too configurable. So my 
advice is using defensive strategy and clean/fix wrong namespace with 
string tools - replace function.
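A sketch of that suggestion; the broken namespace value here is taken from the error message quoted above, so adjust it to the real payload:

select *
  from xmllog
 where xpath_exists('//MyDocument[@DocNum = ''000411828'']'::text,
                    xmlparse(document replace(cdataout,
                        'xmlns:leg="LEGACYAPP - SEND MSG EVENTS ABOUT"',
                        'xmlns:leg="urn:legacyapp"')));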


Regards

Pavel


Thanks, Pavel.

I suspected that. But I get about 10,000 new records each week, and I have no control over the system that generates them.

It is a shame, but sometimes we have to live with such problems.

Regards,

Edson Richter




[GENERAL] XMLEXISTS on legacy XML with malformed xmlns

2016-02-11 Thread Edson Richter

Hi!

I have some records (about 1M) containing legacy XML that I would like to parse and apply XMLEXISTS to.


This is the query:

select * from xmllog
 where xpath_exists(('//MyDocument[@DocNum = ''000411828'']'::text),
                    xmlparse(document cdataout));


This is the error:

ERRO: could not parse XML document
SQL state: 2200M
Detail: line 2: xmlns:leg: 'LEGACYAPP - SEND MSG EVENTS ABOUT' is not a valid URI
^

This is the sample XML with the malformed xmlns (I've shortened the data, but the important thing here is the malformed xmlns):

[XML sample truncated by the list archive]

I can easily read this XML in Notepad++, and also in Java - but PostgreSQL always throws an error.


Can you please tell me how I can make PostgreSQL ignore this malformed xmlns and proceed with processing the XML?


Thanks,

--
Atenciosamente,

Edson Carlos Ericksson Richter





Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread Edson Richter

Em 10/02/2016 13:32, Andy Colson escreveu:

On 2/9/2016 10:10 PM, ioan ghip wrote:

I have a Firebird SQL database running on one of my servers which has
about 50k inserts, about 100k updates and about 30k deletes every day.
There are about 4 million records in 24 tables. I have a bunch of stored
procedures, triggers, events and views that I'm using.
Firebird works fairly well, but from time to time the database gets corrupted, and I haven't figured out yet (after many years of running it) what the reason is. When this happens I run "gfix -mend -full -ignore", back up and restore the db, and everything is fine until the next problem a week, or a month, later.

I never used PostgreSQL. Yesterday I installed it on my development
machine and after few tests I saw that it's fairly easy to use.

Does anyone have experience with both Firebird and PostgreSQL? Does PostgreSQL perform much better than Firebird? Is it worth the effort of moving away from Firebird? Would I gain stability and increased performance?


Thanks.




One of our Windows apps runs on a client/server setup in the office, and then on a laptop for remote use. We use Firebird (FB) for both. It's a quick, simple install, runs in 8 meg of RAM, and has zero maintenance.


The only times I've seen corruption are from anti-virus scanning the db, and from HD/RAID problems.


FB is a nice little db. That said, I can wholeheartedly recommend PG! It could still run on a laptop, and might require a bit more maintenance, but on a dedicated server it would be able to grow and use all the resources available.


If you have HD/RAID problems, then you won't gain stability. Upgrading between major versions is also more difficult.


That said, yes, you'd gain stability and performance, and not only 
that, a huge amount of functionality.  A Huge Amount!


FB has replace() for string ops, oh, and substring(). Bah. That's nothing compared to PG's string functions. It's like that Aladdin song: 'a whole new world'!


You know, in FB, when one person does a large delete on a table? The next person that runs a select performs the vacuum on it. It's the person running the select that pays the price for a huge delete. In PG, there is a background vacuum task, so users don't pay the price.


Respect for FB, but my heart belongs to PG.

-Andy



+1

Also, running an office server, be it small or huge, you can have a replicated server - so it is virtually impossible to lose data.


Synchronous and asynchronous replication are really easy to implement in PG, and they make it a strong (in terms of avoiding data loss) database server - even when compared with Oracle, MS SQL or IBM DB2. You can have several database servers, one updatable and the others for queries - that makes reporting faster, for example!


By using Barman you can have online incremental backups on a third server, which is invaluable for online transactions and operations. You may never lose data again - except if your database server crashes (hardware failure) and your replicated server crashes at the same time (hardware failure also), and even then you may lose only up to the last 15Mb of changes (the amount of data transferred to the backup server on each incremental step).


So if your concern is safety: keep your servers geographically separated or, at minimum, in different electrical and network installations, preferably in different buildings, using good hardware (with twin disks, power lines and network interfaces - all doubled). Personally, I like Dell R420 servers as database servers - they are really reliable in my setup.


Finally, you can have an embedded database running together with your app - and despite the additional maintenance mentioned, you can schedule it or even trigger it from inside your app.


You will see it is possible to have 99.999% database uptime with no hassle, running for years without additional DBA intervention.


Also, the tooling to help plan indices and test query performance is really good, and PgAdmin III has been quite strong (it has some flaws, but nothing that really interferes with its usage).



Regards,

Edson Richter





[GENERAL] Statement timeout

2016-01-29 Thread Edson Richter

I've set the statement timeout in postgresql.conf to 300s.
Now, I have a schema update procedure at application startup that I would like to run without a timeout, or with a significantly larger timeout (let's say, 1s).

Is it possible to change the statement timeout at runtime, before issuing the command (for one connection only), without changing postgresql.conf?


I'm using JDBC, and I've imagined something like


java.sql.Connection cn = getConnection(); // returns a connection from the connection pool

cn.setAutoCommit(false);
java.sql.Statement st = cn.createStatement();
// SET LOCAL scopes the change to the current transaction only; 0 disables
// the timeout. Use execute(), since SET returns no result set.
st.execute("SET LOCAL statement_timeout = 0");
st.executeUpdate("update xyz set value_non_zero = 1 "
        + "where value_non_zero is null or value_non_zero = 0");
st.executeUpdate("alter table xyz "
        + "alter column value_non_zero set default 1, "
        + "alter column value_non_zero set not null");

cn.commit();

Something like that - the point is changing the timeout for one connection only.


--
Atenciosamente,

Edson Carlos Ericksson Richter





Re: [GENERAL] BDR and TX obeyance

2016-01-05 Thread Edson Richter

Em 05/01/2016 11:42, Riley Berton escreveu:

Edson Richter  writes:


BTW, I'm also looking for a "synchronous multi-master" solution... If you find one, please share :-)
The only solution I've found so far is a middleware that comes close, C-JDBC/Sequoia, which seems not to have been actively maintained for a while now.

See Postgres-R for sync multi-master.
http://www.postgres-r.org/documentation/

Note that it is specifically geared towards low-latency environments and
is likely not suitable for geo-distributed applications. It hasn't been
touched in 4 years so likely not actively maintained.

riley


That seems to be what I'm looking for...
As soon as I get some free time, I'll give it a try.

Regards,

Edson






Re: [GENERAL] BDR and TX obeyance

2016-01-04 Thread Edson Richter
BTW, I'm also looking for a "synchronous multi-master" solution... If you find one, please share :-)
The only solution I've found so far is a middleware that comes close, C-JDBC/Sequoia, which seems not to have been actively maintained for a while now.


Regards,

Edson

Atenciosamente,

Edson Carlos Ericksson Richter

Em 04/01/2016 18:09, Riley Berton escreveu:

I have been experimenting with BDR and have a question about how BDR
interacts with transactions.

bdrdemo=# create table thingy (id INT, value TEXT, PRIMARY KEY(id));
CREATE TABLE
bdrdemo=# create table tx_log(id INT, msg TEXT, PRIMARY KEY(id));
CREATE TABLE
bdrdemo=# insert into thingy (id, value) VALUES (1, 'insert from node1');
INSERT 0 1

 From node1:

bdrdemo=# begin;
BEGIN
bdrdemo=# update thingy set value='update from node1' where id=1;
UPDATE 1
bdrdemo=# insert into tx_log (id, msg) values (1, 'tx log insert from node1');
INSERT 0 1
bdrdemo=# commit;
COMMIT

Simultaneously from node2:

bdrdemo=# begin;
BEGIN
bdrdemo=# update thingy set value='update from node2' where id=1;
UPDATE 1
bdrdemo=# insert into tx_log (id, msg) values (2, 'tx log insert from node2');
INSERT 0 1
bdrdemo=# commit;
COMMIT

...

bdrdemo=# select * from tx_log ;
 id |           msg
----+--------------------------
  1 | tx log insert from node1
  2 | tx log insert from node2
(2 rows)

bdrdemo=# select * from thingy ;
 id |       value
----+-------------------
  1 | update from node2
(1 row)

The conflict on the "thingy" table has resulted in node2 winning based
on last_update wins default resolution.  However, both inserts have
applied.  My expectation is that the entire TX applies or does not
apply.  This expectation is clearly wrong.

Question is: is there a way (via a custom conflict handler) to have the
TX obeyed?  I can't see a way to even implement a simple bank account
database that changes multiple tables in a single transaction without
having the data end up in an inconsistent state.  Am I missing something
obvious here?

Thanks in advance for any help.

riley







Re: [GENERAL] BDR and TX obeyance

2016-01-04 Thread Edson Richter

I think this is the nature of "async multi-master"...
IMHO, it would need to be "sync multi-master" (with two-phase commit?) to get the behavior you expect.


Atenciosamente,

Edson Carlos Ericksson Richter

Em 04/01/2016 18:09, Riley Berton escreveu:

I have been experimenting with BDR and have a question about how BDR
interacts with transactions.

bdrdemo=# create table thingy (id INT, value TEXT, PRIMARY KEY(id));
CREATE TABLE
bdrdemo=# create table tx_log(id INT, msg TEXT, PRIMARY KEY(id));
CREATE TABLE
bdrdemo=# insert into thingy (id, value) VALUES (1, 'insert from node1');
INSERT 0 1

 From node1:

bdrdemo=# begin;
BEGIN
bdrdemo=# update thingy set value='update from node1' where id=1;
UPDATE 1
bdrdemo=# insert into tx_log (id, msg) values (1, 'tx log insert from node1');
INSERT 0 1
bdrdemo=# commit;
COMMIT

Simultaneously from node2:

bdrdemo=# begin;
BEGIN
bdrdemo=# update thingy set value='update from node2' where id=1;
UPDATE 1
bdrdemo=# insert into tx_log (id, msg) values (2, 'tx log insert from node2');
INSERT 0 1
bdrdemo=# commit;
COMMIT

...

bdrdemo=# select * from tx_log ;
 id |           msg
----+--------------------------
  1 | tx log insert from node1
  2 | tx log insert from node2
(2 rows)

bdrdemo=# select * from thingy ;
 id |       value
----+-------------------
  1 | update from node2
(1 row)

The conflict on the "thingy" table has resulted in node2 winning based
on last_update wins default resolution.  However, both inserts have
applied.  My expectation is that the entire TX applies or does not
apply.  This expectation is clearly wrong.

Question is: is there a way (via a custom conflict handler) to have the
TX obeyed?  I can't see a way to even implement a simple bank account
database that changes multiple tables in a single transaction without
having the data end up in an inconsistent state.  Am I missing something
obvious here?

Thanks in advance for any help.

riley







Re: [GENERAL] Overhead changing varchar(2000) to text

2015-12-15 Thread Edson Richter

Em 15/12/2015 00:27, Jim Nasby escreveu:

On 12/9/15 5:43 PM, Edson Richter wrote:

Actually, the biggest change is that I don't have to keep another
constraint between app and database - if I want to increase the user
perceived space, now I just have to change the application (of course,
under the limits).


For what it's worth, I usually put some limit on fields that a webapp 
can write to in the database. That way a bug in the app (or malicious 
action) can't just start allocating gigabytes of stuff in your database.
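A sketch of both ideas together (the table and column names are made up): converting varchar(n) to text should be a metadata-only change, since the two types share the same on-disk format, and a CHECK constraint keeps a server-side cap without going back to varchar(n):

ALTER TABLE orders ALTER COLUMN remarks TYPE text;
ALTER TABLE orders ADD CONSTRAINT remarks_len
    CHECK (length(remarks) <= 8000);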


Thanks, Jim. That is wise advice.
I already do that today, but I'll start double-checking all text fields with special care.


Kind regards,

Atenciosamente,

Edson Carlos Ericksson Richter





Re: [GENERAL] Overhead changing varchar(2000) to text

2015-12-09 Thread Edson Richter

Thanks.
After reading, I've run some tests and found no storage changes in tables after moving from varchar(2000) to text.
Actually, the biggest gain is that I no longer have to keep another constraint in sync between app and database - if I want to increase the user-perceived space, I now just have to change the application (within limits, of course).


Atenciosamente,

Edson Carlos Ericksson Richter

Em 09/12/2015 21:17, Kevin Grittner escreveu:

On Wed, Dec 9, 2015 at 5:13 PM, Edson Richter  wrote:


I have several tables that use varchar(2000) to store remarks.
Lately, one customer needs to store more than 2000 characters, and I'm considering changing from varchar(2000) to text.

What is the overhead?

None -- they are stored in exactly the same format; the only
difference is whether the length is limited.


Is there any place where I can learn about the storage impact of each data type?

http://www.postgresql.org/docs/current/interactive/datatype-character.html





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Overhead changing varchar(2000) to text

2015-12-09 Thread Edson Richter

Hi!

I have several tables that use varchar(2000) to store remarks.
Lately, one customer needs to store more than 2000 characters, and I'm considering changing from varchar(2000) to text.


What is the overhead?

Is there any place where I can learn about the storage impact of each data type?


Thanks,

--
Atenciosamente,

Edson Carlos Ericksson Richter





Re: [GENERAL] PostgreSQL Timezone and Brazilian DST

2015-10-28 Thread Edson Richter

Perfect explanation and indeed useful suggestions.
I'll play a bit with a development server.

Thanks,

Atenciosamente,

Edson Carlos Ericksson Richter

Em 28/10/2015 12:15, Yves Dorfsman escreveu:

On 2015-10-27 20:29, Edson Richter wrote:

Hi!

Using PostgreSQL 9.3.10 x86_64 Oracle EL7 compiled by gcc (GCC) 4.8.3
20140911, installed using yum repository.

In postgresql.conf, I do have:

timezone="America/Sao_Paulo"

Since DST is now in effect in Brazil, is it enough to "systemctl reload postgresql-9.3" to make it effective?
Or is a complete restart required?

First, a side note: if you work with systems and people in more than one time zone, I strongly recommend configuring all your servers in UTC - it will make your life much simpler (looking at logs, reasoning about automated/cron jobs, etc.).

Note that you can set the timezone on a per session basis with `set
timezone="America/Sao_Paulo"`.

Now, if you do want to work in the local time zone - America/Sao_Paulo in your case:

If the timezone is set correctly, you do not need to restart anything when the DST switch happens.

PostgreSQL will follow the time zone set at the OS level; run the command `timedatectl` to check that your Red Hat system is set to what you expect. In Postgres, use `show timezone` and verify that it says "localtime".

If you want Postgres to use a different time zone than the OS, then configure timezone in postgresql.conf, and yes, you will need to restart Postgres once you have made that change.









Re: [GENERAL] PostgreSQL Timezone and Brazilian DST

2015-10-28 Thread Edson Richter

Thanks, Tom.
I'll play a bit with this in a development server.


Regards,

Atenciosamente,

Edson Carlos Ericksson Richter

Em 28/10/2015 12:06, Tom Lane escreveu:

Adrian Klaver  writes:

No, if the above does not indicate a problem, then the issue is
probably, as Francisco said, in the timezone definitions. The thing is,
you are on 9.3.10 which has the latest time zone data file:

Since OEL is a naked ripoff of Red Hat, I would assume that they configure
Postgres the same way Red Hat does, ie --with-system-tzdata.  So what will
matter in that respect is whether your "tzdata" package is up to date, not
which PG version you're running.

However, AFAICT the spring DST transition rule in Brazil hasn't changed
since 2008 or so, so it seems rather unlikely that anybody would have
tzdata old enough for that to be a problem.

I'm betting the OP simply didn't have Postgres' timezone parameter set
properly.  Yes, that can be fixed with a reload (as a moment's
experimentation would have shown).

regards, tom lane








Re: [GENERAL] PostgreSQL Timezone and Brazilian DST

2015-10-27 Thread Edson Richter


 Adrian Klaver escreveu 

> On 10/27/2015 07:29 PM, Edson Richter wrote:
> > Hi!
> >
> > Using PostgreSQL 9.3.10 x86_64 Oracle EL7 compiled by gcc (GCC) 4.8.3
> > 20140911, installed using yum repository.
> >
> > In postgresql.conf, I do have:
> >
> > timezone="America/Sao_Paulo"
> >
> > Since DST is now in effect in Brazil, is it enough to "systemctl reload
> > postgresql-9.3" to make it effective?
> 
> Are you saying that you just changed the timezone setting or that time 
> in Brazil just went from standard time to DST and you want to know 
> whether Postgres will automatically pick that up with the existing 
> setting as above?

Configured since the first setup. Brazil went into DST a few days ago.
Today, executing

select now()

resulted in

2015-10-28 02:45:37-03:00

I expected

2015-10-28 03:45:37-02:00

I suspect that Postgres is ignoring the DST change. 

I just want to know if a reload or a restart is enough to get Postgres back on track.

Thanks,

Edson

> 
> > Or is a complete restart required?
> >
> > Thanks,
> >
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 
> 

[GENERAL] PostgreSQL Timezone and Brazilian DST

2015-10-27 Thread Edson Richter

Hi!

Using PostgreSQL 9.3.10 x86_64 Oracle EL7 compiled by gcc (GCC) 4.8.3 
20140911, installed using yum repository.


In postgresql.conf, I do have:

timezone="America/Sao_Paulo"

Since DST is now in effect in Brazil, is it enough to "systemctl reload postgresql-9.3" to make it effective?

Or is a complete restart required?

Thanks,

--
Atenciosamente,

Edson Carlos Ericksson Richter





Re: [GENERAL] Replication with 9.4

2015-10-03 Thread Edson Richter


 Madovsky escreveu 

> Hi,
> 
> I would like to fix an issue I'm facing with 9.4 streaming replication.
> Is it possible to set synchronous commit on the fly on the master (or
> standby?) so that it only sync-commits against the hot standby node used
> by the client who has a read-only SQL session open?
> example:
> node1, node2 and node3 are hot standbys with (physical) replication slots
> on master 1
> node1 has priority 1, since it's first on the names list
> now a client opens a read-only session on node3 and starts a transaction
> update

Sorry for my ignorance, but can you start a "transaction update" over a read-only connection?

Edson


but
> within this transaction a sql select is done on the hot standby from the 
> updated table
> 
> thanks
> 
> Franck
> 
> 

[GENERAL] Grouping sets, cube and rollup

2015-08-25 Thread Edson Richter
Any chance to get those amazing wonderful features backported to 9.4?

Thanks, 

Edson 

Sent from my Sony Xperia™ smartphone

Re: [GENERAL] Sync replication + high latency server

2015-08-13 Thread Edson Richter


Em 13/08/2015 00:40, Joshua D. Drake escreveu:


On 08/12/2015 05:33 PM, Edson Richter wrote:

Hi!

I have a situation where I would like to keep sync replication, where servers have a 10Mbps network connection but high latency (normally ~20ms, but sometimes 1000ms~2000ms, even 3000ms when the network is under load).

Considering that I will keep enough WAL files (let's say, 200 WAL
segments on a server with low write rate):

What happens if I configure the timeout on the master server to, let's say, 10 seconds?
Will sync replication survive if the timeout happens and the network then "recovers" itself?


I think you are misunderstanding how sync rep works.


I don't think so: sync replication will force commit on both databases 
before returning OK to my app server.






Or will I be obliged to use async replication in such a scenario?



async is your friend here.


And what about the timeout when using sync replication? What will happen?

Thanks,

Edson




JD








[GENERAL] Sync replication + high latency server

2015-08-12 Thread Edson Richter
Hi!

I have a situation where I would like to keep sync replication, where servers have a 10Mbps network connection but high latency (normally ~20ms, but sometimes 1000ms~2000ms, even 3000ms when the network is under load).

Considering that I will keep enough WAL files (let's say, 200 WAL segments on a 
server with low write rate):

What happens if I configure the timeout on the master server to, let's say, 10 seconds?
Will sync replication survive if the timeout happens and the network then "recovers" itself?

Or will I be obliged to use async replication in such a scenario?

I'll appreciate your expert insight.


Thanks,

Edson

  

[GENERAL] WAL archive "resend policy"

2015-06-29 Thread Edson Richter

Dear all,

Another question about WAL archiving: what is the "resend policy" if the remote storage runs out of space?
Will the failed archives be resent automatically in the future, or is manual intervention needed?


Thanks,

--
Atenciosamente,

Edson Carlos Ericksson Richter





Re: [GENERAL] archive_timeout and WAL size

2015-06-29 Thread Edson Richter

Thanks, Adrian.

That's the reference I was looking for.


Atenciosamente,

Edson Carlos Ericksson Richter

Em 29/06/2015 15:55, Adrian Klaver escreveu:

On 06/29/2015 11:49 AM, Edson Richter wrote:

Dear community,

I'm using PostgreSQL 9.3.6 on Linux x64.

This may sound like a stupid question, and sorry if it was already asked before: if I set "archive_timeout" and then have the files sent every minute (for example), are the files still 16MB in size, or are they truncated and sent in smaller sizes as well?
I've not found this information in the docs.


See here:

http://www.postgresql.org/docs/9.4/interactive/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVING 



"archive_timeout (integer)

The archive_command is only invoked for completed WAL segments. 
Hence, if your server generates little WAL traffic (or has slack 
periods where it does so), there could be a long delay between the 
completion of a transaction and its safe recording in archive storage. 
To limit how old unarchived data can be, you can set archive_timeout 
to force the server to switch to a new WAL segment file periodically. 
When this parameter is greater than zero, the server will switch to a 
new segment file whenever this many seconds have elapsed since the 
last segment file switch, and there has been any database activity, 
including a single checkpoint. (Increasing checkpoint_timeout will 
reduce unnecessary checkpoints on an idle system.) Note that archived

^^
files that are closed early due to a forced switch are still the same
^
length as completely full files. Therefore, it is unwise to use a very
^^^
short archive_timeout — it will bloat your archive storage. 
archive_timeout settings of a minute or so are usually reasonable. You 
should consider using streaming replication, instead of archiving, if 
you want data to be copied off the master server more quickly than 
that. This parameter can only be set in the postgresql.conf file or on 
the server command line.

"


Thanks,










[GENERAL] archive_timeout and WAL size

2015-06-29 Thread Edson Richter

Dear community,

I'm using PostgreSQL 9.3.6 on Linux x64.

This may sound like a stupid question, and sorry if it was already asked before: if I set "archive_timeout" and then have the files sent every minute (for example), are the files still 16MB in size, or are they truncated and sent in smaller sizes as well?

I've not found this information in the docs.

Thanks,

--
Atenciosamente,

Edson Carlos Ericksson Richter





Re: [GENERAL] Invalid memory alloc

2015-04-23 Thread Edson Richter

On 23-04-2015 16:55, Marc-André Goderre wrote:

Hello, I'm processing a 100-million-row table.
I get an error message about memory, and I'd like to know what can cause this issue.

...
psql:/home/ubuntu/create_topo.sql:12: NOTICE:  104855000 edges processed
psql:/home/ubuntu/create_topo.sql:12: NOTICE:  104856000 edges processed
psql:/home/ubuntu/create_topo.sql:12: NOTICE:  104857000 edges processed
psql:/home/ubuntu/create_topo.sql:12: NOTICE:  invalid memory alloc request 
size 1677721600
psql:/home/ubuntu/create_topo.sql:12: NOTICE:  UPDATE public.way_noded SET 
source = 88374866,target = 88362922 WHERE id =  142645362
  pgr_createtopology

  FAIL
(1 row)

The server has 10Gb of shared_buffers.
Do you think this quantity of memory should normally be enough to process the data?

Thanks
Marc


My question may sound stupid... you have a 10Gb shared buffer, but how much physical memory is in this server?
How have you configured the kernel swappiness, overcommit_memory and overcommit_ratio?

Have you set anything different in shmmax or shmall?

Edson




[GENERAL] Intermittent connection could corrupt slave database?

2015-04-10 Thread Edson Richter

Hi, folks!

I have an async master-to-slave database replication, both 9.3.5, running on Oracle Linux 7 x64.
Today an intermittent link ended with an abrupt interruption, which made it necessary to promote the slave database to master.
After promotion, two medium databases (<5Gb) became active without issues, and two of our biggest databases (around 60Gb) became corrupt with the error "... read only 0 of 8192" (the already-known error). The original databases are protected - though not yet available due to link issues - so I'm on the "safe side".


But I'm curious if this "intermitent link" scenario is already covered 
(perhaps in later releases), or this is something we cannot foresee/deal 
with?



--
Atenciosamente,

Edson Carlos Ericksson Richter





Re: [GENERAL] Can I unite 2 selects?

2015-01-29 Thread Edson Richter

You mean union?

select 1 as t1
union
select 2 as t2
union
select 3 as t3

?
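If the goal was one row with several columns rather than stacked rows, each scalar subquery can be wrapped separately - a sketch:

select (select 1) as t1,
       (select 2) as t2,
       (select 3) as t3;

A scalar subquery in a select list must return a single column (and at most one row), which is why the original form was rejected.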

Atenciosamente,

Edson Carlos Ericksson Richter

On 29-01-2015 16:58, Sterpu Victor wrote:

Hello
Can I write a query where I receive a single result set from many queries?
Something like this: SELECT (SELECT 1 AS t1, 2 AS t2), (SELECT 3 AS t3)
I tried exactly this, but the error is: "ERROR: subquery must return only one column".

But I don't see why it must have only one column.
Thank you



   











Re: [GENERAL] Replication: How to query current segments allocation relative to "Wal keep segments"?

2015-01-05 Thread Edson Richter

On 05-01-2015 10:02, Michael Paquier wrote:

On Mon, Jan 5, 2015 at 6:51 PM, Edson Carlos Ericksson Richter
 wrote:

Would this kind of count being recorded somewhere else?
How does the server knows that the wal_segments have been exhausted?

You should evaluate the amount of wal_keep_segments necessary using
the replication lag in terms of WAL position differences (LSN) between
the master and its slaves. pg_stat_replication gives you the WAL
position (LSN) up to where each slave has received WAL information.
Combine it with pg_current_xlog_location() to determine what is the
current location master is writing WAL and you can evaluate the number
of WAL files that need to be retained on master. Knowing that each WAL
file is normally 16MB, simply use pg_xlog_location_diff to calculate
the WAL lag as a difference of bytes (for 9.4 a simple difference
operation is possible with the data type pg_lsn), and then guess from
it the number of WAL files that are actually necessary.
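A sketch of that calculation on a 9.3/9.4 master (one row per connected standby):

SELECT application_name,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS lag_bytes,
       ceil(pg_xlog_location_diff(pg_current_xlog_location(),
                                  replay_location)
            / (16 * 1024.0 * 1024.0)) AS lag_in_segments
  FROM pg_stat_replication;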

If you care that much about WAL retention btw, consider using
replication slots with 9.4, just be careful to monitor the partition
where pg_xlog sits in.
Despite that being a completely valid statement, I have two contrary thoughts about it:

1) I cannot migrate production servers at will. It requires long planning, and will probably happen only in one or two years from now (end of 2015 or 2016).
2) I prefer to monitor how many WAL segments I'm really using (and need), and then set the limit about 25% above that, rather than risk blowing my disk space (I mean, it is preferable to stop replication than to put the whole database at risk because of disk space).


Thanks,

Edson





Re: [GENERAL] Divergences in view source code - both servers 9.3.5

2014-12-08 Thread Edson Richter

Right, Tom! (again :-) )
Ubuntu is running 9.3.4 (I've installed 9.3.5, but for some reason it has not started).
I suppose I'll uninstall 9.3.4 and then install 9.3.5. Since it is a 
development machine, I can do that safely.


Thanks for your fast and precise answer, as always!

Atenciosamente,

Edson Carlos Ericksson Richter

On 08-12-2014 16:53, Tom Lane wrote:

select version()






[GENERAL] Divergences in view source code - both servers 9.3.5

2014-12-08 Thread Edson Richter

I do have two servers running 9.3.5.
One server is CentOS 6.5 x64 with all updates.
The other server is Ubuntu 14.04-1 x64 with all updates.
Both run PostgreSQL 9.3.5 x64 - one installed using YUM, other with APT.

When I compare source code of the same view, I get the following 
differences:


*_On "Ubuntu" server:_*

( ( ( ( select
e.id,
e.nome as "empresa",
'1. nfe'::text as "tipo",
( select sum(arquivosimportados.contasucesso) as sum


_*On CentOS server:*_

select
e.id,
e.nome as "empresa",
'1. nfe'::text as "tipo",
( select sum(arquivosimportados.contasucesso) as sum


Why does the same CREATE VIEW statement result in different source code 
on the server side?
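
A quick way to confirm the version suspicion on both servers (a sketch;
'myview' is a placeholder for the view being compared):

SELECT version();
-- The deparsing/pretty-printing done by pg_get_viewdef has changed across
-- releases, which can make identical views print differently:
SELECT pg_get_viewdef('myview'::regclass, true);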



Regards,

--
Atenciosamente,

Edson Carlos Ericksson Richter



Re: [GENERAL] Centos 7 - anyone?

2014-11-10 Thread Edson Richter

I've found this very useful:

http://people.planetpostgresql.org/devrim/index.php?/archives/82-Running-more-than-one-9.4-9.3-instance-in-parallel-on-RHEL-7.html
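
In short, on EL7 the PGDG packages are driven through systemd; the commands
are along these lines (a sketch based on the PGDG 9.3 layout):

/usr/pgsql-9.3/bin/postgresql93-setup initdb
systemctl enable postgresql-9.3.service
systemctl start postgresql-9.3.service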



Atenciosamente,

Edson Carlos Ericksson Richter

On 10-11-2014 16:45, Edson Richter wrote:

Hi!

I've just received a new server, and I've the intention to install 
Centos7 and PostgreSQL 9.3.5.
After installing the yum repo (as described in 
http://www.postgresql.org/download/linux/redhat/), I've installed 
everything I need with the command


yum install postgresql93 postgresql93-server postgresql93-contrib

Which eventually worked very well.
After that, I've created a /etc/sysconfig/pgsql/postgresql-9.3 
configuration file (as I used to do in CentOS 6.5), and then ran "service 
postgresql-9.3 initdb" - crash, bang, boom.

It won't work, failing with this error:

"[root@localhost pgsql]# service postgresql-9.3 initdb
The service command supports only basic LSB actions (start, stop, 
restart, try-restart, reload, force-reload, status). For other 
actions, please try to use systemctl.

[root@localhost pgsql]# systemctl --help
systemctl [OPTIONS...] {COMMAND} ..."

So, I've learned that there is a new way of doing things... Reading the yum 
repository documentation, I've found:


"postgresql-setup initdb
systemctl enable postgresql.service"

This works partially: it has not picked up my 
/etc/sysconfig/postgresql-9.3 configuration file for paths and other 
things.


Can anyone point me what I've done wrong?

Thanks,
--
Atenciosamente,

Edson Carlos Ericksson Richter




[GENERAL] Centos 7 - anyone?

2014-11-10 Thread Edson Richter

Hi!

I've just received a new server, and I've the intention to install 
Centos7 and PostgreSQL 9.3.5.
After installing the yum repo (as described in 
http://www.postgresql.org/download/linux/redhat/), I've installed 
everything I need with the command


yum install postgresql93 postgresql93-server postgresql93-contrib

Which eventually worked very well.
After that, I've created a /etc/sysconfig/pgsql/postgresql-9.3 
configuration file (as I used to do in CentOS 6.5), and then ran "service 
postgresql-9.3 initdb" - crash, bang, boom.

It won't work, failing with this error:

"[root@localhost pgsql]# service postgresql-9.3 initdb
The service command supports only basic LSB actions (start, stop, 
restart, try-restart, reload, force-reload, status). For other actions, 
please try to use systemctl.

[root@localhost pgsql]# systemctl --help
systemctl [OPTIONS...] {COMMAND} ..."

So, I've learned that there is a new way of doing things... Reading the yum 
repository documentation, I've found:


"postgresql-setup initdb
systemctl enable postgresql.service"

This works partially: it has not picked up my 
/etc/sysconfig/postgresql-9.3 configuration file for paths and other things.


Can anyone point me what I've done wrong?

Thanks,

--
Atenciosamente,

Edson Carlos Ericksson Richter



Re : Re : [GENERAL] Query "top 10 and others"

2014-07-05 Thread Edson Richter
Thanks! I'll investigate (explain) performance for both versions.

Atenciosamente,

Edson Richter

-- Mensagem original --
De: David Johnston
Data: 04/07/2014 21h20
Para: Edson Richter
Cc: pgsql-general@postgresql.org
Assunto: Re: Re : [GENERAL] Query "top 10 and others"

> with QRY as (select C1.country, C1.state, sum(C1.population)
>   from places C1
>   group by 1, 2
>    order by 3 DESC
>   limit 10)
> 
> select * from QRY
> union
> select 'others' as "country", '' as "state", sum(population)
>   from places
>  where not exists (select 1 from QRY where country = QRY.country and state
> = QRY.state)
> (not tested)

with QRY as (
  SELECT country, state, sum(population) AS st_pop
  FROM places
  GROUP BY country, state
), u1 AS (
  SELECT country, state, st_pop
  FROM QRY
  ORDER BY st_pop DESC
  LIMIT 10
), u2 AS (
  SELECT 'other' AS country, '' AS state, sum(st_pop)
  FROM QRY
  WHERE NOT EXISTS (
    SELECT 1 FROM u1
    WHERE (QRY.country, QRY.state) = (u1.country, u1.state)
  )
)
SELECT * FROM u1
UNION ALL
SELECT * FROM u2;

David J.



Re : [GENERAL] Query "top 10 and others"

2014-07-04 Thread Edson Richter
Would you please provide an example, even hypothetical?

Atenciosamente,

Edson Richter

-- Mensagem original --
De: David G Johnston
Data: 04/07/2014 19h22
Para: pgsql-general@postgresql.org
Assunto: Re: [GENERAL] Query "top 10 and others"

Edson Richter wrote
> I would like to construct a query, to be used in graphics (Pie Chart, to
> be more precise), and to avoid having 500 slices, I would like to make a
> query that returns the top ten, and then all the rest summed.
> 
> I know I can do it by using some repetition, like:
> 
> a) Assume a "places" table with population, with structure
> 
> create table places (
>   id integer primary key,
>   country varchar(3),
>   state varchar(50),
>   city varchar(50),
>   population integer
> )
> 
> b) There are not so many records in the table (my country has ~5500 cities,
> and 27 states), and initially, there will be only 1 country.
> 
> with QRY as (select C1.country, C1.state, sum(C1.population)
>   from places C1
>   group by 1, 2
>    order by 3 DESC
>   limit 10)
> 
> select * from QRY
> union
> select 'others' as "country", '' as "state", sum(population)
>   from places
>  where not exists (select 1 from QRY where country = QRY.country and state
> = QRY.state)
> 
> 
> Can I simplify the query by using some sort of window function or other
> PostgreSQL feature I don't know yet?

This is the best solution; though it may be faster to calculate all the sums
in the CTE, then LIMIT 10 in the first union part and sum the remaining sums in
the second part.  That way you do not have to scan the entire places table
twice.

David J.






--
View this message in context: http://postgresql.1045698.n5.nabble.com/Query-top-10-and-others-tp5810597p5810601.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] Query "top 10 and others"

2014-07-04 Thread Edson Richter
I would like to construct a query, to be used in graphics (Pie Chart, to be 
more precise), and to avoid having 500 slices, I would like to make a query 
that returns the top ten, and then all the rest summed.

I know I can do it by using some repetition, like:

a) Assume a "places" table with population, with structure

create table places (
  id integer primary key,
  country varchar(3),
  state varchar(50),
  city varchar(50),
  population integer
)

b) There are not so many records in the table (my country has ~5500 cities, and 27 
states), and initially, there will be only 1 country.

with QRY as (select C1.country, C1.state, sum(C1.population)
  from places C1
  group by 1, 2
   order by 3 DESC
  limit 10)

select * from QRY
union
select 'others' as "country", '' as "state", sum(population)
  from places
 where not exists (select 1 from QRY where country = QRY.country and state = 
QRY.state)


Can I simplify the query by using some sort of window function or other 
PostgreSQL feature I don't know yet?


Thanks,

Edson Richter
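
For the archive, a window-function variant of the same idea that scans
"places" only once (a sketch, not from the thread itself; treat as untested):

with ranked as (
  select country, state, sum(population) as st_pop,
         row_number() over (order by sum(population) desc) as rn
  from places
  group by country, state
)
select case when rn <= 10 then country else 'others' end as country,
       case when rn <= 10 then state   else ''       end as state,
       sum(st_pop) as population
from ranked
group by 1, 2
order by min(rn);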

  

Re: Re : [GENERAL] Global value/global variable?

2014-06-19 Thread Edson Richter
I've ended up creating a table to store database parameters, and using a subquery, 
I can achieve the "global variable" effect.
Thanks for your insight,
Edson.
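
A minimal sketch of that pattern (table, names and connection string are all
hypothetical; the dblink extension must be installed):

CREATE TABLE app_parameters (
    name  text PRIMARY KEY,
    value text NOT NULL
);

INSERT INTO app_parameters (name, value)
VALUES ('dblink_connstr', 'host=10.0.0.1 dbname=master user=master');

-- Views read the "global variable" through a scalar subquery, so updating
-- the row changes every view at once:
CREATE VIEW remote_pessoa AS
SELECT t.id, t.nome
FROM dblink(
       (SELECT value FROM app_parameters WHERE name = 'dblink_connstr'),
       'SELECT id, nome FROM pessoa'
     ) AS t(id integer, nome text);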

From: edsonrich...@hotmail.com
To: pgsql-general@postgresql.org
Subject: Re : [GENERAL] Global value/global variable?
Date: Thu, 19 Jun 2014 14:39:45 +

Yes, it's similar, but consider I'm using a connection pool, so I've no 
control over when the connections are established/destroyed. I think I need 
something with scope broader than session...

Atenciosamente,

Edson Richter 

-- Mensagem original --
De: John McKown
Data: 19/06/2014 8h44
Para: Edson Richter;
Assunto:Re: [GENERAL] Global value/global variable?

A psql session is shown below:
tsh009=# \set lpar '\'LIH1\''
tsh009=# select * from capped where lpar=:lpar limit 5;
 lpar |       started       |        ended
------+---------------------+---------------------
 LIH1 | 2014-06-09 21:57:13 | 2014-06-09 22:21:21
 LIH1 | 2014-06-09 22:42:12 | 2014-06-09 23:06:22
 LIH1 | 2014-06-09 23:22:12 | 2014-06-09 23:39:12
 LIH1 | 2014-06-09 23:52:12 | 2014-06-10 00:01:21
 LIH1 | 2014-06-10 01:07:11 | 2014-06-10 01:07:51
(5 rows)

Is this close to what you are looking for?


On Wed, Jun 18, 2014 at 9:50 PM, Edson Richter  wrote:




Is it possible to define a global value/variable in PostgreSQL in a way that I 
can use it in any query/view/function? For example, I do have a connection 
string I use for dblink connections in several places (especially, inside views).
Then, if I want to change the connection string, I do have to change every view 
manually. If I can set a kind of global variable, then I just use it inside 
every view - then, at my application startup (or even at PostgreSQL startup, if 
I can set this string at postgresql.conf level), I would set this "global 
variable" to point to the current string.

Your enlightenment will be really welcome.

Regards,
Edson
  


-- 
There is nothing more pleasant than traveling and meeting new people!
Genghis Khan

Maranatha! <><

John McKown



  

Re : [GENERAL] Global value/global variable?

2014-06-19 Thread Edson Richter
Yes, it's similar, but consider I'm using a connection pool, so I've no 
control over when the connections are established/destroyed. I think I need 
something with scope broader than session...

Atenciosamente,

Edson Richter

-- Mensagem original --
De: John McKown
Data: 19/06/2014 8h44
Para: Edson Richter
Assunto: Re: [GENERAL] Global value/global variable?

A psql session is shown below:

tsh009=# \set lpar '\'LIH1\''
tsh009=# select * from capped where lpar=:lpar limit 5;
 lpar |       started       |        ended
------+---------------------+---------------------
 LIH1 | 2014-06-09 21:57:13 | 2014-06-09 22:21:21
 LIH1 | 2014-06-09 22:42:12 | 2014-06-09 23:06:22
 LIH1 | 2014-06-09 23:22:12 | 2014-06-09 23:39:12
 LIH1 | 2014-06-09 23:52:12 | 2014-06-10 00:01:21
 LIH1 | 2014-06-10 01:07:11 | 2014-06-10 01:07:51
(5 rows)

Is this close to what you are looking for?

On Wed, Jun 18, 2014 at 9:50 PM, Edson Richter <edsonrich...@hotmail.com> wrote:

Is it possible to define a global value/variable in PostgreSQL in a way 
that I can use it in any query/view/function? For example, I do have a 
connection string I use for dblink connections in several places 
(especially, inside views). Then, if I want to change the connection 
string, I do have to change every view manually. If I can set a kind of 
global variable, then I just use it inside every view - then, at my 
application startup (or even at PostgreSQL startup, if I can set this 
string at postgresql.conf level), I would set this "global variable" to 
point to the current string.

Your enlightenment will be really welcome.

Regards,
Edson

-- 
There is nothing more pleasant than traveling and meeting new people!
Genghis Khan

Maranatha! <><

John McKown




[GENERAL] Global value/global variable?

2014-06-18 Thread Edson Richter
Is it possible to define a global value/variable in PostgreSQL in a way that I 
can use it in any query/view/function? For example, I do have a connection 
string I use for dblink connections in several places (especially, inside 
views). Then, if I want to change the connection string, I do have to change 
every view manually. If I can set a kind of global variable, then I just use it 
inside every view - then, at my application startup (or even at PostgreSQL 
startup, if I can set this string at postgresql.conf level), I would set this 
"global variable" to point to the current string.

Your enlightenment will be really welcome.

Regards,
Edson
  

Re: [GENERAL] Backups over slave instead master?

2014-05-02 Thread Edson Richter

Em 01/05/2014 16:39, bricklen escreveu:


On Thu, May 1, 2014 at 8:54 AM, Shaun Thomas <stho...@optionshouse.com> wrote:


On 05/01/2014 10:31 AM, Edson Richter wrote:

I'm wondering if it would be possible to execute these backups on
the slave server instead, so I can avoid the overhead of backups on
the master system?


If you're on PostgreSQL 9.3, you can backup the slave server
safely. If not, you'll need to run this command on the master
system first:

SELECT pg_start_backup('some-label');

After the backup is done, run this on the master server:

SELECT pg_stop_backup();


Or alternatively, if "backup" = pg_dump, then backups can taken from 
the slave too. Have a look at pg_xlog_replay_pause() + pg_dump + 
pg_xlog_replay_resume().

http://www.postgresql.org/docs/current/static/functions-admin.html#FUNCTIONS-RECOVERY-CONTROL-TABLE
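
A sketch of that sequence (function names as of 9.1+; host, database and file
names are placeholders):

-- In psql, on the standby: pause replay so pg_dump sees a frozen snapshot
SELECT pg_xlog_replay_pause();

-- Then, from a shell:
--   pg_dump -h standby.example.com -U master -F c -f /backup/master.dump master

-- And afterwards, back in psql on the standby:
SELECT pg_xlog_replay_resume();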

Yes, backup = pg_dump.
Can I use it on the slave even on 9.2.7?

Edson



[GENERAL] Backups over slave instead master?

2014-05-01 Thread Edson Richter
I've a basic setup with async replication between two distant, 
geographically separated servers over vpn.

Replication happens every 0.5 seconds or so, and is incredibly reliable.
Today, I've been running a backup on the master server every twelve hours.
I'm wondering if it would be possible to execute these backups on the slave 
server instead, so I can avoid the overhead of backups on the master system?


Thanks,

Edson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select max(id) causes AccessExclusiveLock?

2014-04-28 Thread Edson Richter

Em 28/04/2014 12:01, Tom Lane escreveu:

Edson Richter  writes:

When I issue a
select max(id) from MyTableName
Postgres 9.2.7 is escalating to an AccessExclusiveLock and causing large delays.

Really?

regression=# create table MyTableName (
regression(#id integer not null primary key,
regression(#seqYear char(4),
regression(#seqCount integer);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "mytablename_pkey" for 
table "mytablename"
CREATE TABLE
regression=# begin;
BEGIN
regression=# select max(id) from MyTableName;
  max
-
 
(1 row)


regression=# select * from pg_locks;
  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |      mode       | granted | fastpath
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+-----------------+---------+----------
 relation   |    16384 |    11069 |      |       |            |               |         |       |          | 2/120853           | 12432 | AccessShareLock | t       | t
 relation   |    16384 |    42142 |      |       |            |               |         |       |          | 2/120853           | 12432 | AccessShareLock | t       | t
 relation   |    16384 |    42139 |      |       |            |               |         |       |          | 2/120853           | 12432 | AccessShareLock | t       | t
 virtualxid |          |          |      |       | 2/120853   |               |         |       |          | 2/120853           | 12432 | ExclusiveLock   | t       | t
(4 rows)

I see nothing higher than AccessShareLock on the table.

regards, tom lane




Really big sorry!!!

The programmer added a


LOCK TABLE MyTableName


just before issuing the select max(id) from MyTableName.


I do suspect this is the cause, right?

Really sorry, I should have looked into the code before asking!
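
(And yes: with no explicit mode, LOCK TABLE acquires ACCESS EXCLUSIVE, so the
application was effectively doing the following, which blocks all concurrent
access to the table until commit:)

BEGIN;
LOCK TABLE MyTableName;              -- defaults to ACCESS EXCLUSIVE MODE
SELECT max(id) FROM MyTableName;
COMMIT;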

Thanks for your (always) fast support.

Regards,


Edson



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Select max(id) causes AccessExclusiveLock?

2014-04-28 Thread Edson Richter

On table

create table MyTableName (
  id integer not null primary key,
  seqYear char(4),
  seqCount integer,
  ...)

Where id is primary key and (seqYear, seqCount) is a unique index.
Table facts: about 40 fields, 1 million records.

When I issue a

select max(id) from MyTableName


Postgres 9.2.7 is escalating to an AccessExclusiveLock and causing large delays.

Is that expected? Is there a way to avoid the AccessExclusiveLock?

Thanks,

Edson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why does "checkpointer" is consumig ~1.2Gb of RAM?

2014-03-31 Thread Edson Richter

  
  
Em 31/03/2014 17:06, Scott Marlowe escreveu:
> It's not. If the RES shows 1215M And the SHR is 1214M, then checkpointer
> is only using 1M. The difference between the two is the shared memory.
> RES includes SHR.

Thanks, now I understand.

Regards,

Edson

> On Mon, Mar 31, 2014 at 1:44 PM, Edson Richter <edsonrich...@hotmail.com> wrote:
>> Em 31/03/2014 00:38, Scott Marlowe escreveu:
>>> On Sun, Mar 30, 2014 at 8:43 PM, Edson Richter <edsonrich...@hotmail.com> wrote:
>>>> I'm curious about the "checkpointer" process and its configuration.
>>>> What are the configuration options that affect the checkpointer process?
>>>> Currently, under load, this process goes up to about 1.2GB of RAM.
>>>
>>> No it really doesn't. VIRT is every thing it touches whether it uses it
>>> individually or with other processes. RES is what THIS process is using
>>> all by itself. SHR is what it's accessing of shared memory. Here's a
>>> short explanation of what those three values mean:
>>> http://linuxpoison.blogspot.com/2009/10/what-is-difference-among-virt-res-and.html
>>>
>>> Also for a more technical one run "man top" and search for RES, SHR, and VIRT
>>
>> Yes, I understand that. That's why I've asked: why is the checkpointer
>> process consuming 1.2GB of RAM (1215M RES, more precisely), and which
>> parameter affects its memory consumption?
>>
>> Thanks,
>>
>> Edson
>
> --
> To understand recursion, one must first understand recursion.
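
A quick way to check those numbers for the checkpointer alone (a sketch;
assumes Linux and the 9.2-style process title "postgres: checkpointer process"):

# RES includes SHR, so the process-private part is roughly RES - SHR:
top -b -n 1 -p $(pgrep -f 'checkpointer process')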

  

  
  

  



Re: [GENERAL] Why does "checkpointer" is consumig ~1.2Gb of RAM?

2014-03-31 Thread Edson Richter

  
  
Em 31/03/2014 00:38, Scott Marlowe escreveu:
> On Sun, Mar 30, 2014 at 8:43 PM, Edson Richter <edsonrich...@hotmail.com> wrote:
>> I'm curious about the "checkpointer" process and its configuration.
>> What are the configuration options that affect the checkpointer process?
>> Currently, under load, this process goes up to about 1.2GB of RAM.
>
> No it really doesn't. VIRT is every thing it touches whether it uses it
> individually or with other processes. RES is what THIS process is using
> all by itself. SHR is what it's accessing of shared memory. Here's a
> short explanation of what those three values mean:
> http://linuxpoison.blogspot.com/2009/10/what-is-difference-among-virt-res-and.html
>
> Also for a more technical one run "man top" and search for RES, SHR, and VIRT

Yes, I understand that. That's why I've asked: why is the checkpointer
process consuming 1.2GB of RAM (1215M RES, more precisely), and which
parameter affects its memory consumption?

Thanks,

Edson


[GENERAL] Why does "checkpointer" is consumig ~1.2Gb of RAM?

2014-03-30 Thread Edson Richter

  
  
I'm curious about the "checkpointer" process and its configuration.
What are the configuration options that affect the checkpointer process?
Currently, under load, this process goes up to about 1.2GB of RAM.


Re: [GENERAL] Add column name to error message?

2014-03-18 Thread Edson Richter

Em 18/03/2014 17:44, Tom Lane escreveu:

Edson Richter  writes:

Em 18/03/2014 17:17, Francisco Olarte escreveu:

Maybe they are not hiding it, but the error is raised by a value
checking routine which does not know where the value comes from / goes
to ( ie, it's a 'check_varchar(xx, maxlen)', which is used to check
columns, function parameters, type casts ) ( that happens to me in
my own code in some places where the context / error message is
extremely difficult to propagate down / up  ) .

Thanks, Francisco. I don't know the internals.
But doesn't the "value checking routine" caller know which column is
being tested?

No.  Typically all it would know is that it's evaluating a targetlist; the
fact that the resulting tuple is going to be inserted into some particular
table is known only at much higher levels (not even the same plan node).

I've thought about using our "errcontext()" mechanism to help with this,
but there's still a lot of additional infrastructure that would be needed
... and that infrastructure would not be without cost.  I'm afraid it
would result in measurable slowdowns in query execution.  Not sure if
people would think that's a good tradeoff.

regards, tom lane



Thanks, Tom.
I don't know much about Postgres source code. I was wondering if it 
works like in Java apps, where the exception thrown is capture by higher 
layer until having enough information to provide complete erros message 
to the user (being a user a real user or kind of interface - jdbc or 
odbc - user).


Now I understand better, and I'm also afraid of getting slowdown. I do 
prefer the higher performance at price of having to discover the right 
problem on my own.


Regards,

Edson



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Add column name to error message?

2014-03-18 Thread Edson Richter

Em 18/03/2014 17:17, Francisco Olarte escreveu:

Hi:


On Tue, Mar 18, 2014 at 8:22 PM, Edson Richter 
mailto:edsonrich...@hotmail.com>> wrote:


Since my table can have more than one "character varying(20)"
inside, it would be useful to know which one has thrown the error...
Is there any reasoning (security, perhaps) for hiding the column name
in the error?


Maybe they are not hiding it, but the error is raised by a value 
checking routine which does not know where the value comes from / goes 
to ( ie, it's a 'check_varchar(xx, maxlen)', which is used to check 
columns, function parameters, type casts ) ( that happens to me in 
my own code in some places where the context / error message is 
extremely difficult to propagate down / up  ) .


Francisco Olarte.


Thanks, Francisco. I don't know the internals.
But doesn't the "value checking routine" caller know which column is 
being tested?


Regards,
--
Edson Carlos Ericksson Richter




[GENERAL] Add column name to error message?

2014-03-18 Thread Edson Richter

Hi!

I would like to have a improved error message with column name. Today 
(9.2.3), I receive the following error:


"ERROR: value too long for type character varying(20)"

Why not this more intuitive error message:

"ERROR: value too long for type character varying(20) at column XYZ"

Since my table can have more than one "character varying(20)" inside, 
it would be useful to know which one has thrown the error...
Is there any reasoning (security, perhaps) for hiding the column name in the 
error?


Thanks,
--
Edson Carlos Ericksson Richter




Re: [GENERAL] Help with connection issue - started today

2014-02-06 Thread Edson Richter

Em 06/02/2014 14:25, Bret Stern escreveu:

You checked pg_hba_conf for host 10.68.73.6?
Is there a matching log entry for the rejection?


Thanks for your effort, Bret!
The problem is already solved (I posted the solution to the list in the past days).
The problem was a "context.xml" with a wrong database server reference - why 
Tomcat started considering that file is a mystery to me, but after removing 
the file, everything became normal again.


Regards,

Edson




On Wed, 2014-02-05 at 09:21 -0200, Edson Richter wrote:

Dear all, I need your advice. I've found a tricky situation.

Without any changes in the configuration files, a **local** connection
to a local VPN IP address could not be established with the following error:

2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException:
ERROR: could not establish connection
Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user
"master", database "master", SSL off

org.postgresql.util.PSQLException: ERROR: could not establish connection
Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user
"master", database "master", SSL off

  at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
  at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
  at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
  at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
  at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
  at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:331)
  at
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
  at
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
  at
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
  at br.com...

If I run from the Local Server:
[root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.

master=> select count(*) from pessoa;
   count
---
   9
(1 registro)


If I run from a Remote Server:

[root@myremoteserver ssh]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.

master=> select count(*) from pessoa;
   count
---
   9
(1 registro)


So, it seems that only JDBC is having trouble connecting to a local IP
address.
Current interfaces (and yes, the tunneling is running - otherwise the
remote server could not connect):

[root@mylocalserver logs]# ifconfig
eth0  Link encap:Ethernet  Endereço de HW XX:XX:XX:XX:XX:XX
inet end.: xxx.xx.xxx.85  Bcast:xxx.xx.xxx.87
Masc:255.255.255.248
endereço inet6: ::xxx:::/64 Escopo:Link
UP BROADCASTRUNNING MULTICAST  MTU:1500  Métrica:1
RX packets:172557 errors:0 dropped:0 overruns:0 frame:0
TX packets:293439 errors:0 dropped:0 overruns:0 carrier:0
colisões:0 txqueuelen:1000
RX bytes:24537681 (23.4 MiB)  TX bytes:227413210 (216.8 MiB)
IRQ:28 Memória:fb00-fb7f

loLink encap:Loopback Local
inet end.: 127.0.0.1  Masc:255.0.0.0
endereço inet6: ::1/128 Escopo:Máquina
UP LOOPBACKRUNNING  MTU:16436  Métrica:1
RX packets:138156 errors:0 dropped:0 overruns:0 frame:0
TX packets:138156 errors:0 dropped:0 overruns:0 carrier:0
colisões:0 txqueuelen:0
RX bytes:31993170 (30.5 MiB)  TX bytes:31993170 (30.5 MiB)

tun0  Link encap:Não Especificado  Endereço de HW
00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet end.: 10.68.73.6  P-a-P:10.68.73.5 Masc:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500 Métrica:1
RX packets:149106 errors:0 dropped:0 overruns:0 frame:0
TX packets:274534 errors:0 dropped:199 overruns:0 carrier:0
colisões:0 txqueuelen:100
RX bytes:6291572 (6.0 MiB)  TX bytes:381732404 (364.0 MiB)














--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Help with connection issue - started today

2014-02-06 Thread Edson Richter

Dear all, I need your advice. I've found a tricky situation.

Without any changes in the configuration files, a **local** connection 
to a local VPN IP address could not be established with the following error:


2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException: 
ERROR: could not establish connection
  Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user 
"master", database "master", SSL off


org.postgresql.util.PSQLException: ERROR: could not establish connection
  Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user 
"master", database "master", SSL off


at 
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
at 
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
at 
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:331)
at 
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
at 
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
at 
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)

at br.com...

If I run from the Local Server:
[root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.

master=> select count(*) from pessoa;
 count
---
 9
(1 registro)


If I run from a Remote Server:

[root@myremoteserver ssh]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.

master=> select count(*) from pessoa;
 count
---
 9
(1 registro)


So, it seems that only JDBC is having trouble connecting to a local IP 
address.
Current interfaces (and yes, the tunneling is running - otherwise the 
remote server could not connect):


[root@mylocalserver logs]# ifconfig
eth0  Link encap:Ethernet  Endereço de HW XX:XX:XX:XX:XX:XX
  inet end.: xxx.xx.xxx.85  Bcast:xxx.xx.xxx.87 
Masc:255.255.255.248

  endereço inet6: ::xxx:::/64 Escopo:Link
  UP BROADCASTRUNNING MULTICAST  MTU:1500  Métrica:1
  RX packets:172557 errors:0 dropped:0 overruns:0 frame:0
  TX packets:293439 errors:0 dropped:0 overruns:0 carrier:0
  colisões:0 txqueuelen:1000
  RX bytes:24537681 (23.4 MiB)  TX bytes:227413210 (216.8 MiB)
  IRQ:28 Memória:fb00-fb7f

loLink encap:Loopback Local
  inet end.: 127.0.0.1  Masc:255.0.0.0
  endereço inet6: ::1/128 Escopo:Máquina
  UP LOOPBACKRUNNING  MTU:16436  Métrica:1
  RX packets:138156 errors:0 dropped:0 overruns:0 frame:0
  TX packets:138156 errors:0 dropped:0 overruns:0 carrier:0
  colisões:0 txqueuelen:0
  RX bytes:31993170 (30.5 MiB)  TX bytes:31993170 (30.5 MiB)

tun0  Link encap:Não Especificado  Endereço de HW 
00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00

  inet end.: 10.68.73.6  P-a-P:10.68.73.5 Masc:255.255.255.255
  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500 Métrica:1
  RX packets:149106 errors:0 dropped:0 overruns:0 frame:0
  TX packets:274534 errors:0 dropped:199 overruns:0 carrier:0
  colisões:0 txqueuelen:100
  RX bytes:6291572 (6.0 MiB)  TX bytes:381732404 (364.0 MiB)






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connection problems - local IP address refused!

2014-02-05 Thread Edson Richter

Em 05/02/2014 15:08, Adrian Klaver escreveu:

On 02/05/2014 08:53 AM, Edson Richter wrote:

Em 05/02/2014 13:03, Adrian Klaver escreveu:

On 02/05/2014 05:43 AM, Edson Richter wrote:

Em 05/02/2014 11:12, Alban Hertroys escreveu:

Are you connecting to the correct host?

On 5 February 2014 12:37, Edson Richter 
wrote:

Without any changes in the configuration files, a **local**
connection to a
local VPN IP address could not be established with the following
error:

The error (JDBC connection using driver version 1002):
2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException:
ERROR:
could not establish connection
   Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user
"master",
database "master", SSL off

Sure! On exactly the same server:

[root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.


The error message:

 Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user
"master", database "master", SSL off

indicates your JDBC connection is not connecting to a database with
the pg_hba.conf you think it is.

What are the connection parameters for the JDBC connection?

So where is the database running?

What interface is it listening on?

I know it has been asked, but is there another Postgres instance 
running?




Well, there is only one PostgreSQL database running on this machine. It
has three interfaces:

lo (127.0.0.1)
eth0 (xxx.xxx.xxx.xxx)
tun0 (10.68.73.6)

PostgreSQL is configured with listen_addresses = '*'.
The firewall allows everything from lo and tun0.
AFAIK (please correct me if I'm wrong) JDBC knows nothing about
pg_hba.conf; this error message comes from PostgreSQL, doesn't it?
I can connect on the very same machine using psql. It is a really weird
problem.


Your JDBC connection does not 'know' about pg_hba.conf but it is 
triggering the error and capturing the exception:


2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException:
ERROR: could not establish connection

It would seem something about the JDBC connection is incorrect. This 
is why I asked for the connection parameters in the JDBC connection. 
For some reason it is not connecting in the same manner as psql. The 
chore now is to find out why?


Thanks, Adrian!

Your ideas helped a lot to get me to the solution.
My problem was I had a "context.xml" file in my deployment folder.
I can't say why Tomcat wasn't reading this file in the past - but 
anyway, removing the context.xml file from the deployment solved the 
problem.


Regards,

Edson






I'll need to dig a bit more.

Edson










--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connection problems - local IP address refused!

2014-02-05 Thread Edson Richter

Em 05/02/2014 11:51, rob stone escreveu:

On Wed, 2014-02-05 at 09:37 -0200, Edson Richter wrote:
Without any changes in the configuration files, a **local** connection to
a local VPN IP address could not be established with the following
error:

The error (JDBC connection using driver version 1002):
2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException:
ERROR: could not establish connection
   Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user
"master", database "master", SSL off

org.postgresql.util.PSQLException: ERROR: could not establish
connection
   Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user
"master", database "master", SSL off

 at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
 at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
 at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
 at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
 at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
 at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:331)
 at
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
 at
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
 at
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
 at br.com...

Current pg_hba.conf:
local   all   all                     peer
host    all   all   127.0.0.1/32      md5
host    all   all   10.68.73.0/24     md5

If I run from the Local Server:
[root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.

master=> select count(*) from pessoa;
  count
---
  9
(1 registro)


If I run from a Remote Server:
[root@myremoteserver ssh]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.

master=> select count(*) from pessoa;
  count
---
  9
(1 registro)


So, it seems that only JDBC is having trouble connecting to a local
IP address.
Current interfaces (and yes, the tunneling is running - otherwise the
remote server could not connect):

[root@mylocalserver logs]# ifconfig
eth0  Link encap:Ethernet  Endereço de HW XX:XX:XX:XX:XX:XX
   inet end.: xxx.xx.xxx.85  Bcast:xxx.xx.xxx.87
Masc:255.255.255.248
   endereço inet6: ::xxx:::/64 Escopo:Link
   UP BROADCASTRUNNING MULTICAST  MTU:1500  Métrica:1
   RX packets:172557 errors:0 dropped:0 overruns:0 frame:0
   TX packets:293439 errors:0 dropped:0 overruns:0 carrier:0
   colisões:0 txqueuelen:1000
   RX bytes:24537681 (23.4 MiB)  TX bytes:227413210 (216.8
MiB)
   IRQ:28 Memória:fb00-fb7f

loLink encap:Loopback Local
   inet end.: 127.0.0.1  Masc:255.0.0.0
   endereço inet6: ::1/128 Escopo:Máquina
   UP LOOPBACKRUNNING  MTU:16436  Métrica:1
   RX packets:138156 errors:0 dropped:0 overruns:0 frame:0
   TX packets:138156 errors:0 dropped:0 overruns:0 carrier:0
   colisões:0 txqueuelen:0
   RX bytes:31993170 (30.5 MiB)  TX bytes:31993170 (30.5 MiB)

tun0  Link encap:Não Especificado  Endereço de HW
00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
   inet end.: 10.68.73.6  P-a-P:10.68.73.5
Masc:255.255.255.255
   UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500 Métrica:1
   RX packets:149106 errors:0 dropped:0 overruns:0 frame:0
   TX packets:274534 errors:0 dropped:199 overruns:0 carrier:0
   colisões:0 txqueuelen:100
   RX bytes:6291572 (6.0 MiB)  TX bytes:381732404 (364.0 MiB)



The current JDBC version is 1100.
From where did you obtain version 1002?


I've downloaded the source, add some debug information and build from 
the sources. I've been using this for a long time.



I think that your PostgreSQL version and JDBC version are incompatible.

Well, should I use 9.3 1100 with a 9.2.6 server database?

I use ExecuteQuery and JDBC drivers daily for testing, etc.


I've processed > 1.000.000 transactions with this driver in this server 
last month...



The error you are pulling appears to be an issue with versions.


Well, I'll give a try with a new driver, but I do suspect something 
wrong in identifying network names...


Thanks for your help. I'll try your suggestion.

Regards,

Edson


HTH

Robert







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connection problems - local IP address refused!

2014-02-05 Thread Edson Richter

Em 05/02/2014 15:08, Adrian Klaver escreveu:

On 02/05/2014 08:53 AM, Edson Richter wrote:

Em 05/02/2014 13:03, Adrian Klaver escreveu:

On 02/05/2014 05:43 AM, Edson Richter wrote:

Em 05/02/2014 11:12, Alban Hertroys escreveu:

Are you connecting to the correct host?

On 5 February 2014 12:37, Edson Richter 
wrote:

Without any changes in the configuration files, a **local**
connection to a
local VPN IP address could not be established with the following
error:

The error (JDBC connection using driver version 1002):
2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException:
ERROR:
could not establish connection
   Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user
"master",
database "master", SSL off

Sure! On exactly the same server:

[root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.


The error message:

 Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user
"master", database "master", SSL off

indicates your JDBC connection is not connecting to a database with
the pg_hba.conf you think it is.

What are the connection parameters for the JDBC connection?

So where is the database running?

What interface is it listening on?

I know it has been asked, but is there another Postgres instance 
running?




Well, there is only one PostgreSQL database running on this machine. It
has three interfaces:

lo (127.0.0.1)
eth0 (xxx.xxx.xxx.xxx)
tun0 (10.68.73.6)

PostgreSQL is configured with listen_addresses = '*'.
The firewall allows everything from lo and tun0.
AFAIK (please correct me if I'm wrong) JDBC knows nothing about
pg_hba.conf; this error message comes from PostgreSQL, doesn't it?
I can connect on the very same machine using psql. It is a really weird
problem.


Your JDBC connection does not 'know' about pg_hba.conf but it is 
triggering the error and capturing the exception:


2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException:
ERROR: could not establish connection

It would seem something about the JDBC connection is incorrect. This 
is why I asked for the connection parameters in the JDBC connection. 
For some reason it is not connecting in the same manner as psql. The 
chore now is to find out why?




You are right. And then, the problem becomes even weirder. The JDBC 
connection is:


 

It must be something related to reverse address resolution (converting from 
127.0.0.1 to "MyLocalServer", and then "MyLocalServer" being mapped to 
10.68.73.6), which must be affected by the VPN.

I just can't figure out why and where...

Edson



I'll need to dig a bit more.

Edson










--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connection problems - local IP address refused!

2014-02-05 Thread Edson Richter

Em 05/02/2014 13:03, Adrian Klaver escreveu:

On 02/05/2014 05:43 AM, Edson Richter wrote:

Em 05/02/2014 11:12, Alban Hertroys escreveu:

Are you connecting to the correct host?

On 5 February 2014 12:37, Edson Richter  
wrote:

Without any changes in the configuration files, a **local**
connection to a
local VPN IP address could not be established with the following 
error:


The error (JDBC connection using driver version 1002):
2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException:
ERROR:
could not establish connection
   Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user
"master",
database "master", SSL off

Sure! On exactly the same server:

[root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.


The error message:

 Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user 
"master", database "master", SSL off


indicates your JDBC connection is not connecting to a database with 
the pg_hba.conf you think it is.


What are the connection parameters for the JDBC connection?

So where is the database running?

What interface is it listening on?

I know it has been asked, but is there another Postgres instance running?


Well, there is only one PostgreSQL database running on this machine. It 
has three interfaces:

lo (127.0.0.1)
eth0 (xxx.xxx.xxx.xxx)
tun0 (10.68.73.6)

PostgreSQL is configured with listen_addresses = '*'.
The firewall allows everything from lo and tun0.
AFAIK (please correct me if I'm wrong) JDBC knows nothing about 
pg_hba.conf; this error message comes from PostgreSQL, doesn't it?
I can connect on the very same machine using psql. It is a really weird 
problem.


I'll need to dig a bit more.

Edson



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connection problems - local IP address refused!

2014-02-05 Thread Edson Richter

Em 05/02/2014 11:12, Alban Hertroys escreveu:

Are you connecting to the correct host?

On 5 February 2014 12:37, Edson Richter  wrote:

Without any changes in the configuration files, a **local** connection to a
local VPN IP address could not be established with the following error:

The error (JDBC connection using driver version 1002):
2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException: ERROR:
could not establish connection
   Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user "master",
database "master", SSL off

Sure! On exactly the same server:

[root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.

master=> select count(*) from pessoa;
 count
---
 9
(1 registro)


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Connection problems - local IP address refused!

2014-02-05 Thread Edson Richter

Dear all, I need your advice. I've found a tricky situation.

Without any changes in the configuration files, a **local** connection 
to a local VPN IP address could not be established with the following 
error:


The error (JDBC connection using driver version 1002):
2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException: 
ERROR: could not establish connection
  Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user 
"master", database "master", SSL off


org.postgresql.util.PSQLException: ERROR: could not establish connection
  Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user 
"master", database "master", SSL off


at 
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
at 
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
at 
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:331)
at 
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
at 
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
at 
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)

at br.com...

Current pg_hba.conf:
local   all   all                     peer
host    all   all   127.0.0.1/32      md5
host    all   all   10.68.73.0/24     md5

If I run from the Local Server:
[root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.

master=> select count(*) from pessoa;
 count
---
 9
(1 registro)


If I run from a Remote Server:
[root@myremoteserver ssh]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.

master=> select count(*) from pessoa;
 count
---
 9
(1 registro)


So, it seems that only JDBC is having trouble connecting to a local IP 
address.
Current interfaces (and yes, the tunneling is running - otherwise the 
remote server could not connect):


[root@mylocalserver logs]# ifconfig
eth0  Link encap:Ethernet  Endereço de HW XX:XX:XX:XX:XX:XX
  inet end.: xxx.xx.xxx.85  Bcast:xxx.xx.xxx.87 
Masc:255.255.255.248

  endereço inet6: ::xxx:::/64 Escopo:Link
  UP BROADCASTRUNNING MULTICAST  MTU:1500  Métrica:1
  RX packets:172557 errors:0 dropped:0 overruns:0 frame:0
  TX packets:293439 errors:0 dropped:0 overruns:0 carrier:0
  colisões:0 txqueuelen:1000
  RX bytes:24537681 (23.4 MiB)  TX bytes:227413210 (216.8 MiB)
  IRQ:28 Memória:fb00-fb7f

loLink encap:Loopback Local
  inet end.: 127.0.0.1  Masc:255.0.0.0
  endereço inet6: ::1/128 Escopo:Máquina
  UP LOOPBACKRUNNING  MTU:16436  Métrica:1
  RX packets:138156 errors:0 dropped:0 overruns:0 frame:0
  TX packets:138156 errors:0 dropped:0 overruns:0 carrier:0
  colisões:0 txqueuelen:0
  RX bytes:31993170 (30.5 MiB)  TX bytes:31993170 (30.5 MiB)

tun0  Link encap:Não Especificado  Endereço de HW 
00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00

  inet end.: 10.68.73.6  P-a-P:10.68.73.5 Masc:255.255.255.255
  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500 Métrica:1
  RX packets:149106 errors:0 dropped:0 overruns:0 frame:0
  TX packets:274534 errors:0 dropped:199 overruns:0 carrier:0
  colisões:0 txqueuelen:100
  RX bytes:6291572 (6.0 MiB)  TX bytes:381732404 (364.0 MiB)



Re: [GENERAL] Transparent exchange BDE from Oracle to PostgreSQL

2014-02-02 Thread Edson Richter

Em 02/02/2014 04:24, John R Pierce escreveu:

On 2/1/2014 4:26 PM, Adrian Klaver wrote:

On 02/01/2014 03:26 PM, John R Pierce wrote:

On 2/1/2014 3:18 PM, Edson Richter wrote:

Is it possible to put a PostgreSQL database in substitution for an
Oracle database for an app running BDE?
Any tips (beyond the obvious "check Oracle-specific functionality")?


what is BDE  ?


http://en.wikipedia.org/wiki/Borland_Database_Engine


if it's using ODBC or something to connect to Oracle, then you should 
be able to use ODBC to connect to PostgreSQL.


Will the Borland TurboWhatever code run as-is?  Well, that depends on 
what it's doing with Oracle.  Good luck.




Thanks - my thought, perhaps, was that someone with experience on BDE 
could shed some light here :-) - I'm not a BDE expert myself; that's why 
I'm looking for help.
BDE is some kind of "obscure" technology - it needs some special 
drivers, and I honestly don't know if it will work with pure ODBC. Even 
with Oracle, I had trouble because I had to install that software in a 
certain sequence, otherwise the drivers are not added to the Windows 
registry (I hate it).

I appreciate your insights and your time.

Regards,

Edson



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Transparent exchange BDE from Oracle to PostgreSQL

2014-02-01 Thread Edson Richter
Is it possible to put a PostgreSQL database in substitution for an Oracle 
database for an app running BDE?

Any tips (beyond the obvious "check Oracle-specific functionality")?

Regards,

Edson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replicating SQL 2000 to PostgreSQL 9.x

2014-01-30 Thread Edson Richter

Em 29/01/2014 12:42, Edson Richter escreveu:
I'm trying to setup SQL 2000 replication to PostgreSQL 9.x, and follow 
those steps:


1) setup the publisher and distributor in SQL 2000
2) setup the article (12 tables)
3) setup the linked server to PostgreSQL and executed some queries to 
test (with success)
4) created the 12 tables in PostgreSQL using correct data types (I've 
used Microsoft guide to Oracle compatibility, which seems logical to me)

5) setup the Push Subscriber without creating schema/tables

After a few seconds, it fails, and the only error I have is quite 
vague (a generic "An access violation occurred" error).
I've tried to configure PostgreSQL ODBC driver for logging (without 
success: no logs generated).


Is there someone successfully setup SQL 2k to PostgreSQL replication 
that could share some thoughts?


Thanks,

Edson



Problem solved!

Just in case someone faces this issue in the future, it is now 
documented here :-). I could not find any useful tip on the internet, 
and saw many people saying "problem solved" but with no clue about what 
was done, so here it is:


Updating the ODBC driver from 8.04 to 9.01 solved the Access Violation 
error. Be warned, the latest ODBC drivers will not run on Windows 2000 
(some C headers reference DLLs that only exist in XP and later). They 
have to be recompiled with special headers in order to work.



Regards,

Edson Richter



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Replicating SQL 2000 to PostgreSQL 9.x

2014-01-29 Thread Edson Richter
I'm trying to setup SQL 2000 replication to PostgreSQL 9.x, and follow 
those steps:


1) setup the publisher and distributor in SQL 2000
2) setup the article (12 tables)
3) setup the linked server to PostgreSQL and executed some queries to 
test (with success)
4) created the 12 tables in PostgreSQL using correct data types (I've 
used Microsoft guide to Oracle compatibility, which seems logical to me)

5) setup the Push Subscriber without creating schema/tables

After a few seconds, it fails, and the only error I have is quite vague 
(a generic "An access violation occurred" error).
I've tried to configure PostgreSQL ODBC driver for logging (without 
success: no logs generated).


Is there someone successfully setup SQL 2k to PostgreSQL replication 
that could share some thoughts?


Thanks,

Edson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A Simple web application

2014-01-27 Thread Edson Richter

Em 27/01/2014 13:19, Raymond O'Donnell escreveu:

On 27/01/2014 15:16, Edson Richter wrote:

Em 27/01/2014 10:42, Andreas Kretschmer escreveu:

Raymond O'Donnell  wrote:


On 27/01/2014 12:11, Manoj Soni wrote:

Name, Age and Sex
Which tutorial, may help me in this regard.

I did a Google search on "tomcat netbeans postgresql tutorial" and
turned up some likely-looking links that should work for you too. :-)

Right, and as a hint: don't store the AGE of a person, store the
birthdate instead.


Andreas

Another hint: don't store "sex" (do/don't do?), but "gender" (even
better if tristate: male/female/not informed, or "other").
I already had lots of trouble with customers asking me to adjust my
systems to this new "situation"; then I had to change systems and
databases...

+1 to this... there was a lengthy thread on this list some years ago on
this very subject, and it was an eye-opener to see the possibilities
that emerged.

Ray.


Yes - indeed, this is so complex that my current mapping is a 
one-to-many mapping with an aux "gender" table - so each customer can 
add as many options as they want.
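
A minimal sketch of that mapping (all names are hypothetical):

CREATE TABLE gender (
    id    serial PRIMARY KEY,
    label text NOT NULL UNIQUE    -- each customer adds rows as needed
);

CREATE TABLE person (
    id        serial PRIMARY KEY,
    name      text NOT NULL,
    birthdate date,               -- per the hint above: birthdate, not age
    gender_id integer REFERENCES gender(id)
);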


Regards,

Edson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A Simple web application

2014-01-27 Thread Edson Richter

Em 27/01/2014 10:42, Andreas Kretschmer escreveu:

Raymond O'Donnell  wrote:


On 27/01/2014 12:11, Manoj Soni wrote:

Name, Age and Sex
Which tutorial, may help me in this regard.

I did a Google search on "tomcat netbeans postgresql tutorial" and
turned up some likely-looking links that should work for you too. :-)

Right, and as a hint: don't store the AGE of a person, store the
birthdate instead.


Andreas


Another hint: don't store "sex" (do/don't do?), but "gender" (even 
better if tristate: male/female/not informed, or "other").
I already had lots of trouble with customers asking me to adjust my 
systems to this new "situation"; then I had to change systems and databases...


Regards,

Edson Richter



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of size ???

2013-11-22 Thread Edson Richter

Em 19/11/2013 02:30, Brian Wong escreveu:
I've tried every work_mem value from 1GB all the way up to 40GB, with no 
effect on the error.  I'd like to think of this problem as a server 
process memory (not the server's buffers) or client process memory 
issue, primarily because when we tested the error there was no other 
load whatsoever.  Unfortunately, the error doesn't say what kind of 
memory ran out.


--- Original Message ---

From: "bricklen" 
Sent: November 18, 2013 7:25 PM
To: "Brian Wong" 
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request 
of size ???


On Mon, Nov 18, 2013 at 12:40 PM, Brian Wong wrote:


We'd like to seek out your expertise on postgresql regarding this
error that we're getting in an analytical database.

Some specs:
proc: Intel Xeon X5650 @ 2.67Ghz dual procs 6-core, hyperthreading on.
memory: 48GB
OS: Oracle Enterprise Linux 6.3
postgresql version: 9.1.9
shared_buffers: 18GB

After doing a lot of googling, I've tried setting FETCH_COUNT on
psql AND/OR setting work_mem.  I'm just not able to work around
this issue, unless I take most of the MAX() functions out, leaving
just one.



Excuse me (or just ignore me) if this is a stupid question, but have you 
configured sysctl.conf accordingly?
For instance, to use larger memory settings, I had to configure my 
Enterprise Linux as follows:


# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296


Regards,

Edson




What is your work_mem set to?
Did testing show that shared_buffers set to 18GB was effective? That 
seems about 2 to 3 times beyond what you probably want.
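
For anyone replaying this thread: the knobs discussed above can be 
inspected and set from a psql session; a minimal sketch with purely 
illustrative values (FETCH_COUNT is a psql client variable, the rest 
are server settings):

\set FETCH_COUNT 1000      -- fetch results in batches instead of buffering them all

SET work_mem = '256MB';    -- per-session override (illustrative value)
SELECT name, setting, unit
  FROM pg_settings
 WHERE name IN ('work_mem', 'maintenance_work_mem', 'shared_buffers');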




Re: [GENERAL] Browsing postgres database using the Eclipse QuantumDB plugin

2013-11-22 Thread Edson Richter

Em 22/11/2013 12:54, Adrian Klaver escreveu:

On 11/22/2013 05:46 AM, Mike Kienenberger wrote:

Has anyone successfully connected and browsed a postgres database
using the Eclipse QuantumDB plugin?


You might get an answer sooner here:

http://sourceforge.net/p/quantum/discussion/24178/



I can connect and execute SQL, but the existing table list is always
empty, as if no meta information is ever provided to the browser
plugin.  At first, I thought it might be a permission problem with
the database user I provided, so I also tried it with the postgres
user, but had the same results.

I've always had QuantumDB work with Oracle, hsqldb, h2database, and
other platforms.





Or you can try NetBeans 7.4 (or perhaps 7.4.1, which is being released 
soon). Works like a charm for me.


Regards,

Edson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG replication across DataCenters

2013-11-22 Thread Edson Richter

Em 22/11/2013 08:43, Kaushal Shriyan escreveu:

Hi,

I have read on the web that PostgreSQL supports replication across 
data centers. Are there any real-life use-case examples where it has 
been implemented? Please also help me understand the caveats I need 
to take care of if I implement this setup.


Regards,

Kaushal

We have used asynchronous replication across datacenters with 100% 
success since 9.1. Currently we use 9.2.
Our setup involves an internet tunnel between the servers, which are 
about 2,000 km apart.
The only points that need attention are tuning wal_keep_segments and 
the replication timeout, and the initial load (which can be huge, 
depending on your data).
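
A hedged sketch of the primary-side settings being referred to, with 
purely illustrative values (parameter names as of 9.1/9.2; 
replication_timeout was later renamed to wal_sender_timeout):

# postgresql.conf on the primary (illustrative values only)
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 512      # retain enough WAL to ride out tunnel outages
replication_timeout = 60s    # detect dead standby connections promptly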


Regards,

Edson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres as In-Memory Database?

2013-11-19 Thread Edson Richter

Em 20/11/2013 01:30, Jeff Janes escreveu:



On Tuesday, November 19, 2013, Edson Richter wrote:

Em 19/11/2013 22:29, Jeff Janes escreveu:

On Sun, Nov 17, 2013 at 4:46 PM, Edson Richter wrote:

Yes, those optimizations I was talking about: having the database
server store the transaction log on high-speed solid-state disks
and consider it done, while a background thread updates the data
on slower disks...

There is no reason to wait for fsync on slow disks to
guarantee consistency... If the database server crashes, it
just needs to "redo" the logged transactions from the fast disk
into the slower data storage, and the database server is ready
to go (I think this has been the Sybase/MS SQL strategy for years).



Using a nonvolatile write cache for pg_xlog is certainly possible
and often done with PostgreSQL.  It is not important that the
nonvolatile write cache is fronting for SSD; fronting for HDD is
fine, as the write cache turns the xlog into pure sequential
writes, and HDD should not have a problem keeping up.

Cheers,

Jeff

Hmm... I agree about the technology (SSD vs. HDD, etc.), but maybe I
misunderstood: I have read that to keep data always safe, I
must use fsync, and as a result every transaction must wait for data
to be written to disk before returning as success.


A transaction must wait for the *xlog* to be fsynced to "disk", but 
a non-volatile write cache counts as disk.  It does not need to wait for 
the ordinary data files to be fsynced.  Checkpoints do need to wait 
for the ordinary data files to be fsynced, but the checkpoint process 
is a background process and it can wait for that without impeding user 
processes.


If the checkpointer falls far enough behind, then things do start to 
fall apart, but I think that this is true of any system. So you can't 
just get a BBU for the xlog and ignore all other IO 
entirely--eventually the other data does need to reach disk, and if it 
gets dirtied faster than it gets cleaned for a prolonged period then 
things will freeze up.


By using the approach I've described you will have fsync (and data
will be 100% safe), but a transaction is considered successful once
it is written to the transaction log, which is purely sequential (and
even pre-allocated space, with no need to ask the OS for new files or
new space) - and there is also no need to wait for slow operations
to write data to the data pages.

Am I wrong?


No user-facing process needs to wait for the data pages to fsync, 
unless things have really gotten fouled up.


Cheers,

Jeff

OK, I still have one doubt (I'm learning a lot, thanks!):

What happens, then, if data has been committed (so it is in the xlog), but 
it is not in the data pages yet, and it doesn't fit in the memory buffers 
anymore: how would PostgreSQL query that data without having to wait for a 
checkpoint to happen and the data to become available in the data pages?


Regards,

Edson


Re: [GENERAL] Postgres as In-Memory Database?

2013-11-19 Thread Edson Richter

Em 19/11/2013 22:29, Jeff Janes escreveu:
On Sun, Nov 17, 2013 at 4:46 PM, Edson Richter wrote:


Yes, those optimizations I was talking about: having the database
server store the transaction log on high-speed solid-state disks and
consider it done, while a background thread updates the data on
slower disks...

There is no reason to wait for fsync on slow disks to guarantee
consistency... If the database server crashes, it just needs to
"redo" the logged transactions from the fast disk into the slower
data storage, and the database server is ready to go (I think this
has been the Sybase/MS SQL strategy for years).



Using a nonvolatile write cache for pg_xlog is certainly possible and 
often done with PostgreSQL.  It is not important that the nonvolatile 
write cache is fronting for SSD; fronting for HDD is fine, as the write 
cache turns the xlog into pure sequential writes, and HDD should not 
have a problem keeping up.


Cheers,

Jeff
Hmm... I agree about the technology (SSD vs. HDD, etc.), but maybe I 
misunderstood: I have read that to keep data always safe, I must use 
fsync, and as a result every transaction must wait for data to be written 
to disk before returning as success.
By using the approach I've described you will have fsync (and data will 
be 100% safe), but a transaction is considered successful once it is 
written to the transaction log, which is purely sequential (and even 
pre-allocated space, with no need to ask the OS for new files or new 
space) - and there is also no need to wait for slow operations to write 
data to the data pages.


Am I wrong?

Edson



Re: [GENERAL] Postgres as In-Memory Database?

2013-11-17 Thread Edson Richter

Em 17/11/2013 22:02, Gavin Flower escreveu:

On 18/11/13 12:53, Stefan Keller wrote:

Hi Martijn

2013/11/17 Martijn van Oosterhout wrote:

> If your dataset fits in memory then the problem is trivial: any decent
> programming language provides you with all the necessary tools to deal
> with data purely in memory.

What about atomicity, concurrency, the SQL query language, and 
the extension mechanisms of Postgres? To me, that's not trivial.


> There are also quite a lot of databases that cover this area.

Agreed. That's partially what triggered my question. It's notably 
Oracle TimesTen, MS SQL Server 2014 (project Hekaton), (distributed) 
"MySQL Cluster", SAP HANA, or SQLite >3. To me this rather confirms 
that an architecture and/or configuration for in-memory operation could 
be an issue in Postgres as well.


The current architecture of Postgres assumes that memory resources are 
expensive and optimizes to avoid disk I/O. Having more memory 
available affects database design, e.g. it can optimize for a 
working set that is stored entirely in main memory.


--Stefan


[...]

It would allow optimised indexes that store memory pointers to 
individual records, rather than pointers to a block (followed by a 
search for the record), as well as other optimisations that only make 
sense when data is known to be in RAM (and RAM is plentiful).  As big 
servers can already have a terabyte or more of RAM, that will become 
more & more commonplace.  I have 32GB on my development box.



Cheers,
Gavin


Yes, those optimizations I was talking about: having the database server 
store the transaction log on high-speed solid-state disks and consider it 
done, while a background thread updates the data on slower disks...


There is no reason to wait for fsync on slow disks to guarantee 
consistency... If the database server crashes, it just needs to "redo" 
the logged transactions from the fast disk into the slower data storage, 
and the database server is ready to go (I think this has been the 
Sybase/MS SQL strategy for years).


Also, consider having lazy loading (the current behavior?) or eager 
loading (perhaps; I just learned a bit about pg_warmcache).
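
(The extension meant here is possibly pgfincore, or the pg_prewarm 
extension that later shipped with PostgreSQL 9.4; a minimal sketch 
assuming pg_prewarm and an illustrative table name:)

CREATE EXTENSION pg_prewarm;
SELECT pg_prewarm('my_table');        -- load the table's heap into shared_buffers
SELECT pg_prewarm('my_table_pkey');   -- indexes can be warmed the same way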


And, of course, indexes that would point to memory areas instead of 
disk pages when the data is in RAM - as you just mentioned.



Regards,

Edson


Re: [GENERAL] Postgres as In-Memory Database?

2013-11-17 Thread Edson Richter

Em 17/11/2013 20:46, Andreas Brandl escreveu:

Edson,


Em 17/11/2013 19:26, Stefan Keller escreveu:

Hi Edson

As Rob wrote: Having a feature like an in-memory table like SQLite
has [1] would make application caches obsolete, and it is interesting
to discuss (but that wasn't exactly what I asked above).

Hi, Stefan,

I don't think any feature you add to the database server would make
app server caches obsolete: app server caches simply have no lag at
all:

1) They don't need a network connection to the database server.
2) They don't need to materialize results (for instance, I have in mind a
Java or .Net app server running hundreds of thousands of objects in
memory).

IMHO, no matter how much you improve the database, app server caches
provide an additional level of speed that cannot be achieved by the
database.

That said, I can still see huge improvements in the database server.
Having strong in-memory operation would bring substantial
improvements.
For instance, if you have an in-memory database (tables, indexes, etc.)
for all sorts of queries, and just **commit** to disk, then you will
have unprecedented performance.
I would benefit from this architecture, since a typical customer
database is < 64GB in size (after 2 or 3 years of data recording). So
a database server with 64GB of memory would keep everything in
memory, and just commit data to disk.

In this case, committed data would be instantly available to queries
(because it is all in memory) while the log (changes) is recorded on a
fast disk (an SSD, perhaps), and those changes are then made persistent,
written asynchronously to slow massive disks (SCSI or SAS).

This would also allow hybrid operation (keeping as many data pages
as possible in memory, with a target of 50% or more in memory).

When the database server is started, it could do a lazy load (data is
loaded and kept in memory as it is used) or an eager load (for slower
startup but faster execution).

Not sure I fully understand your point. Isn't this the typical 
mode of operation plus added cache warming?

Anyway, I just wanted to point you to [1], which gives a good overview of 
cache-warming techniques.

Regards,
Andreas

[1] http://raghavt.blogspot.fr/2012/04/caching-in-postgresql.html



Wonderful, I never knew about it.
I'll read it ASAP.

Regards

Edson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres as In-Memory Database?

2013-11-17 Thread Edson Richter

Em 17/11/2013 19:26, Stefan Keller escreveu:

Hi Edson

As Rob wrote: Having a feature like an in-memory table like SQLite has 
[1] would make application caches obsolete, and it is interesting to 
discuss (but that wasn't exactly what I asked above).


Hi, Stefan,

I don't think any feature you add to the database server would make 
app server caches obsolete: app server caches simply have no lag at 
all:


1) They don't need a network connection to the database server.
2) They don't need to materialize results (for instance, I have in mind a 
Java or .Net app server running hundreds of thousands of objects in memory).


IMHO, no matter how much you improve the database, app server caches 
provide an additional level of speed that cannot be achieved by the database.


That said, I can still see huge improvements in the database server.
Having strong in-memory operation would bring substantial improvements.
For instance, if you have an in-memory database (tables, indexes, etc.) 
for all sorts of queries, and just **commit** to disk, then you will have 
unprecedented performance.
I would benefit from this architecture, since a typical customer 
database is < 64GB in size (after 2 or 3 years of data recording). So 
a database server with 64GB of memory would keep everything in memory, 
and just commit data to disk.


In this case, committed data would be instantly available to queries 
(because it is all in memory) while the log (changes) is recorded on a 
fast disk (an SSD, perhaps), and those changes are then made persistent, 
written asynchronously to slow massive disks (SCSI or SAS).
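
(This description maps fairly closely onto PostgreSQL's existing WAL 
design; a hedged sketch of the related postgresql.conf knobs, with 
purely illustrative values:)

# commits wait only for the WAL fsync; data files are flushed in the background
synchronous_commit = on
wal_buffers = 16MB
checkpoint_completion_target = 0.9   # spread data-file writes across the checkpoint interval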


This would also allow hybrid operation (keeping as many data pages as 
possible in memory, with a target of 50% or more in memory).


When the database server is started, it could do a lazy load (data is 
loaded and kept in memory as it is used) or an eager load (for slower 
startup but faster execution).


Maybe I'm just wondering too much, since I don't know PostgreSQL 
internals...



Regards,

Edson



--Stefan


[1] http://www.sqlite.org/inmemorydb.html
[2] http://www.postgresql.org/docs/9.1/static/non-durability.html
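
For reference, the non-durability settings described in [2] amount to a 
short postgresql.conf sketch (illustrative values; every one of these 
trades crash safety for speed):

# trade durability for speed -- data can be lost on a crash
fsync = off
synchronous_commit = off
full_page_writes = off
checkpoint_segments = 64          # fewer, larger checkpoints
# unlogged tables (CREATE UNLOGGED TABLE ...) skip WAL entirely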

2013/11/17 Edson Richter wrote:


Em 17/11/2013 12:15, rob stone escreveu:


On Sun, 2013-11-17 at 12:25 +0100, Stefan Keller wrote:

How can Postgres be used and configured as an In-Memory
Database?


Does anybody know of thoughts or presentations about this
"NoSQL
feature" - beyond e.g. "Perspectives on NoSQL" from Gavin
Roy at PGCon
2010)?


Given, say 128 GB memory or more, and (read-mostly) data
that fit's
into this, what are the hints to optimize Postgres
(postgresql.conf
etc.)?


-- Stefan

Not as being completely "in memory".
Back in the "good ol' days" of DMS II (written in Algol and run on
Burroughs mainframes) and Linc II (also Burroughs) it was possible to
define certain tables as being memory resident. This was useful for
low-volatility data such as salutations, street types, county or
state codes, time zones, preferred languages, etc.
It saved disk I/O twice: firstly when building your drop-down lists,
and secondly when the entered data hit the server and was validated.
It would be good to have a similar feature in PostgreSQL.
If a table was altered by, say, inserting a new street type, then the
database engine has to refresh the cache. This is the only overhead.

Cheers,
Robert


For this purpose (building drop-down lists: salutations, street
types, county or state codes), I keep a permanent data cache on the
app server side (after all, they will be shared among all users,
even in a multi-tenant application). This avoids a network round
trip, and keeps database server memory available for database
operations (like reporting and transactions).
But I agree there are a lot of gains in having an "in memory" option
for tables and so on. I believe PostgreSQL already does that
automatically (the most used tables are kept in the memory cache).




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general






