Unable to Connect to DB Instance

2018-07-04 Thread Boblitz John
Forwarded from pgadmin-support – wrong list

From: Ashesh Vashi [mailto:ashesh.va...@enterprisedb.com]
Sent: Mittwoch, 4. Juli 2018 12:14
To: Boblitz John 
Cc: pgadmin-support 
Subject: Re: Unable to Connect to DB Instance


On Wed, Jul 4, 2018, 15:19 Boblitz John <john.bobl...@bertschi.com> wrote:
Good Morning,

Beginning yesterday morning, users have been unable to fully connect to our DB 
Instance.


1.   At the time of the initial report – I was connected to the DB via 
pgAdmin and could perform queries without problem.

2.   Users reported messages similar to “could not open file 
"global/11801": No such file or directory”

3.   At that time, connection logging was turned off and there were no 
messages in the log files.

4.   As this is a development environment, I turned logging on in the 
config and restarted the DB

5.   After restart, neither I, nor the Users could fully reconnect.

6.   I have performed a SYS Level backup (tar of the whole postgres 
directory tree)

7.   I cannot perform a DB level Backup (same errors occur)

System Details
Linux Debian  7.11
Postgres  9.1 (9.1.24lts-0+d)

Please send your queries to 
pgsql-gene...@postgresql.org for database 
server issues.
This is a pgAdmin support list.

-- Thanks, Ashesh
It appears that we can connect to the DB Server itself as I get “connection 
received” and “connection authorized” – but when trying to access the DB 
itself, several errors are raised (see below).
I am assuming that some internals are no longer consistent – the file 
“global/11801” for instance really does not exist on the system.

Questions:


1.   Is there any way to recover from this (backup is unfortunately rather 
old)

2.   What are possible causes?  I’d like to prevent this from happening on 
my production servers.

** I am aware that we are on older releases, and yes, we plan to migrate to 
more current releases “soon” ™ …


Thanks in advance.

John Boblitz


Excerpt from Log:
2018-07-04 09:15:13 CEST 192.168.250.50(28559) [unknown]LOG:  connection 
received: host=192.168.250.50 port=28559
2018-07-04 09:15:14 CEST 192.168.250.50(28559) postgresLOG:  connection 
authorized: user=dbadmin database=postgres
2018-07-04 09:15:14 CEST 192.168.250.50(28559) postgresERROR:  could not open 
file "global/11801": No such file or directory
2018-07-04 09:15:14 CEST 192.168.250.50(28559) postgresSTATEMENT:  SELECT 
usecreatedb, usesuper, CASE WHEN usesuper THEN pg_postmaster_start_time() ELSE 
NULL END as upsince, CASE WHEN usesuper THEN pg_conf_load_time() ELSE NULL END 
as confloadedsince, CASE WHEN usesuper THEN pg_is_in_recovery() ELSE NULL END 
as inrecovery, CASE WHEN usesuper THEN pg_last_xlog_receive_location() ELSE 
NULL END as receiveloc, CASE WHEN usesuper THEN pg_last_xlog_replay_location() 
ELSE NULL END as replayloc, CASE WHEN usesuper THEN 
pg_last_xact_replay_timestamp() ELSE NULL END as replay_timestamp, CASE WHEN 
usesuper AND pg_is_in_recovery() THEN pg_is_xlog_replay_paused() ELSE NULL END 
as isreplaypaused
  FROM pg_user WHERE usename=current_user
2018-07-04 09:15:19 CEST 192.168.250.50(28559) postgresERROR:  could not open 
file "global/11801": No such file or directory
2018-07-04 09:15:19 CEST 192.168.250.50(28559) postgresSTATEMENT:  SELECT 
rolcreaterole, rolcreatedb FROM pg_roles WHERE rolname = current_user;
2018-07-04 09:15:22 CEST 192.168.250.50(28561) [unknown]LOG:  connection 
received: host=192.168.250.50 port=28561
2018-07-04 09:15:22 CEST 192.168.250.50(28561) g11BaseLOG:  connection 
authorized: user=dbadmin database=g11Base
2018-07-04 09:15:23 CEST 192.168.250.50(28561) g11BaseERROR:  could not open 
file "pg_tblspc/24579/PG_9.1_201105231/24580/11866": No such file or directory
2018-07-04 09:15:23 CEST 192.168.250.50(28561) g11BaseSTATEMENT:  SELECT CASE 
WHEN nspname LIKE E'pg\\_temp\\_%' THEN 1
WHEN (nspname LIKE E'pg\\_%') THEN 0
ELSE 3 END AS nsptyp,
   nsp.nspname, nsp.oid, pg_get_userbyid(nspowner) AS 
namespaceowner, nspacl, description,   has_schema_privilege(nsp.oid, 
'CREATE') as cancreate,
(SELECT array_agg(label) FROM pg_seclabels sl1 WHERE 
sl1.objoid=nsp.oid) AS labels,
(SELECT array_agg(provider) FROM pg_seclabels sl2 WHERE 
sl2.objoid=nsp.oid) AS providers
  FROM pg_namespace nsp
  LEFT OUTER JOIN pg_description des ON (des.objoid=nsp.oid AND 
des.classoid='pg_namespace'::regclass)
WHERE NOT ((nspname = 'pg_catalog' AND EXISTS (SELECT 1 FROM 
pg_class WHERE relname = 'pg_class' AND relnamespace = nsp.oid LIMIT 1)) OR
(nspname = 'pgagent' AND EXISTS (SELECT 1 FROM pg_class WHERE 
relname = 'pga_job' AND relnamespace = nsp.oid LIMIT 1)) OR
(nspname = 'information_schema' AND EXISTS (SELECT 1 FROM 
pg

Re: Parallel Aware

2018-07-04 Thread jbrant
You're not alone - I have the same issue on PostgreSQL 10.3, even with
"force_parallel_mode" enabled and a query plan that includes bitmap heap
scans (which should be parallelizable).
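
One way to check whether the planner will produce a parallel plan at all is to
relax the parallel cost thresholds for a single session and compare EXPLAIN
output. The table and column names below are placeholders, and zeroing the
costs is purely a diagnostic step, not a recommended setting:

SET max_parallel_workers_per_gather = 4;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
SET min_parallel_index_scan_size = 0;
EXPLAIN (ANALYZE, VERBOSE)
  SELECT count(*) FROM some_table WHERE some_indexed_col BETWEEN 1 AND 1000;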






Return select statement with sql case statement

2018-07-04 Thread hmidi slim
Hi,
I need to use a conditional expression in my query, so I want to make a query
like this:
select numberOfPremiumDays
case  when numberOfPremiumDays = date_part('day',
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then
select product_id,
price
from product
where occupation_type_id = 1
group by product_id, occupation_type_id

However, the documentation says that CASE returns a value, whereas in my case
I want to return a select statement.
How can I use a conditional expression in an SQL query?
Best Regards.


Postgres sometimes stalling on 'percentile_cont'

2018-07-04 Thread Tom van Tilburg
Hi,

I have a set of relatively complex queries producing tables (including
postgis and pgpointcloud functions) that I run consecutively, and sometimes
(depending on the base-data) my 5th query stalls (CPU 100%, runs forever)
seemingly on the percentile_cont function. *When I replace percentile_cont
with just a value it passes*.

The setup roughly looks like this:
query1 - creates 77 records with polygons
query2 - creates 89 records with polygons
query3 - creates ~350k records with points (inside above polygons)
query4 - creates ~220k records with points clustered by height (from result
query3)
query5 - creates ~102k records with point clustered by normal (from result
query3)

The odd thing is, when I run query5 directly after query4, it will stall on
some datasets (always same sets). Though when I cancel the query and run it
again, it will pass in about 2 seconds.
When I run  query5 from a different shell directly after query4, it also
passes.

Does anyone have a clue on how to research this further? How can I look
into the process itself to see what is cycling the CPU? How to break this
problem down to smaller chunks? I am a bit out of options myself.
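
(For reference: one low-impact way to see what the stuck backend is doing is to
query pg_stat_activity from a second session while query5 appears hung; the
wait_event columns used below exist in PostgreSQL 10.)

SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS running_for,
       left(query, 60) AS query
  FROM pg_stat_activity
 WHERE state <> 'idle';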

For what it's worth:

SELECT version();
"PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
6.3.0-12ubuntu2) 6.3.0 20170406, 64-bit"

SELECT postgis_full_version();
"POSTGIS="2.5.0beta1dev r16609" [EXTENSION] PGSQL="100"
GEOS="3.6.2-CAPI-1.10.2 4d2925d6" SFCGAL="1.3.1" PROJ="Rel. 5.1.0, June
1st, 2018" GDAL="GDAL 2.3.1, released 2018/06/22" LIBXML="2.9.4"
LIBJSON="0.12.1" LIBPROTOBUF="1.2.1" RASTER"

The part of query5 where I replace percentile_cont with just a value:
SELECT
c1.id,c1.cid, c1.blockid,
CASE
WHEN aspectmedian > 360 THEN aspectmedian - 360
ELSE aspectmedian
END as aspect,
pt geom
FROM clustered c1
JOIN (
SELECT cid, blockid, aspectclass,
--find median aspect and median height within bucket
--percentile_cont(0.5) within group (order by aspect) as aspectmedian
0 as aspectmedian --replaced by value 0
FROM clustered
GROUP BY blockid,cid, aspectclass
) c2 ON (c1.cid = c2.cid AND c1.blockid = c2.blockid AND c1.aspectclass =
c2.aspectclass)
WHERE c1.cid Is Not Null;


Best,
 Tom


Re: Return select statement with sql case statement

2018-07-04 Thread Ron

On 07/04/2018 07:48 AM, hmidi slim wrote:

Hi,
I need to use conditional expression in my query, So I want to make a 
query like this:

select numberOfPremiumDays
            case  when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

                select product_id,
                price
                from product
                where occupation_type_id = 1
                group by product_id, occupation_type_id

However, in the documentation I found that the return of case was a value, 
not like in my case I want to return a select statement.

How can I use a conditional expression in a sql query?
Best Regards.


The CASE clause is used to return one of many choices.  Based on this 
example, you need to do this:


select numberOfPremiumDays,
   product_id,
   price
   from product
where occupation_type_id = 1
  and  numberOfPremiumDays = date_part('day', ('2018-11-05'::timestamp) - 
('2018-11-01'::timestamp))

group by product_id, occupation_type_id



--
Angular momentum makes the world go 'round.



Re: pg_dump out of memory

2018-07-04 Thread Andy Colson

On 07/04/2018 12:31 AM, David Rowley wrote:

On 4 July 2018 at 14:43, Andy Colson  wrote:

I moved a physical box to a VM, and set its memory to 1Gig.  Everything
runs fine except one backup:


/pub/backup# pg_dump -Fc -U postgres -f wildfire.backup wildfire

pg_dump: Dumping the contents of table "ofrrds" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR:  out of memory
DETAIL:  Failed on request of size 1073741823.
pg_dump: The command was: COPY public.ofrrds (id, updateddate, bytes) TO stdout;


There will be less memory pressure on the server if the pg_dump was
performed from another host. When running pg_dump locally the 290MB
bytea value will be allocated in both the backend process pg_dump is
using and pg_dump itself. Running the backup remotely won't require
the latter to be allocated on the server.


I've been reducing my memory settings:

maintenance_work_mem = 80MB
work_mem = 5MB
shared_buffers = 200MB


You may also get it to work by reducing shared_buffers further.
work_mem won't have any effect, neither will maintenance_work_mem.

Failing that, the suggestions of more RAM and/or swap look good.



Adding more RAM to the VM is the simplest option.  It just seems a waste because of 
one backup.

Thanks all.

-Andy
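
A quick way to gauge how large the problem rows are (column names are taken
from the COPY statement in the error above; purely diagnostic):

SELECT id, octet_length(bytes) AS bytes_len
  FROM public.ofrrds
 ORDER BY octet_length(bytes) DESC
 LIMIT 5;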



Re: Return select statement with sql case statement

2018-07-04 Thread hmidi slim
Actually, I need to use CASE because, based on the numberOfPremiumDays,
there are different types of treatment:
select numberOfPremiumDays
case  when numberOfPremiumDays = date_part('day',
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then
select product_id,
premium_price,
period_price
from product
where occupation_type_id = 1
group by product_id, occupation_type_id
   else
select product_id,
classic_price,
period_price
from product1
where occupation_type_id = 1
group by product_id, occupation_type_id


Re: Unable to Connect to DB Instance

2018-07-04 Thread Tom Lane
Boblitz John  writes:
> 2.   Users reported messages similar to "could not open file 
> "global/11801": No such file or directory"

I'd try "select relname from pg_class where pg_relation_filenode(oid) =
11801" to see if you can identify the problematic relation that way.

If you're lucky, this is just loss of some system catalog index in which
case reindexing will fix it.  The fact that you're able to get through
connecting, and the errors only show up with queries, is somewhat
promising given that the problem looks like it's related to pg_authid
or pg_db_role_setting.

regards, tom lane
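
A variant of the same check that maps several filenodes at once, run in each
affected database (the two OIDs below are simply the ones from the posted log;
pg_relation_filenode() is available in 9.1):

SELECT oid::regclass AS relation, relkind, pg_relation_filenode(oid) AS filenode
  FROM pg_class
 WHERE pg_relation_filenode(oid) IN (11801, 11866);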



Re: Unable to Connect to DB Instance

2018-07-04 Thread Adrian Klaver

On 07/04/2018 04:08 AM, Boblitz John wrote:



Good Morning,

Beginning yesterday morning, users have been unable to fully connect
to our DB Instance.

1.At the time of the initial report – I was connected to the DB via
pgAdmin and could perform queries without problem.

2.Users reported messages similar to “could not open file
"global/11801": No such file or directory”

3.At that time, connection logging was turned off and there were no
messages in the log files.

4.As this is a development environment, I turned logging on in the
config and restarted the DB

5.After restart, neither I, nor the Users could fully reconnect.

6.I have performed a SYS Level backup (tar of the whole postgres
directory tree)

7.I cannot perform a DB level Backup (same errors occur)

System Details

     Linux Debian  7.11

     Postgres  9.1 (9.1.24lts-0+d)




It appears that we can connect to the DB Server itself as I get
“connection received” and “connection authorized” – but when trying
to access the DB itself, several errors are raised (see below).

I am assuming that some internals are no longer consistent – the
file “global/11801” for instance really does not exist on the system.

Questions:

1.Is there any way to recover from this (backup is unfortunately
rather old) >
2.What are possible causes?  I’d like to prevent this from happening
on my production servers.


Looks like something/someone deleted files from portions of  the $DATA 
directory. In particular from ~/global and ~/pg_tblspc. Without those 
files you can't really proceed.


Can you recover by creating a new instance and restoring from a dump of 
the production server?


As to the exact cause, the only thing I can think of is to look at the 
system logs at the time of the initial failure and see if there is 
anything there that would shed light.




** I am aware that we are on older releases, and yes, we plan to
migrate to more current releases “soon” ™ …




Thanks in advance.

John Boblitz

Excerpt from Log:

2018-07-04 09:15:13 CEST 192.168.250.50(28559) [unknown]LOG: 
connection received: host=192.168.250.50 port=28559


2018-07-04 09:15:14 CEST 192.168.250.50(28559) postgresLOG: 
connection authorized: user=dbadmin database=postgres


2018-07-04 09:15:14 CEST 192.168.250.50(28559) postgresERROR:  could
not open file "global/11801": No such file or directory

2018-07-04 09:15:14 CEST 192.168.250.50(28559) postgresSTATEMENT: 
SELECT usecreatedb, usesuper, CASE WHEN usesuper THEN

pg_postmaster_start_time() ELSE NULL END as upsince, CASE WHEN
usesuper THEN pg_conf_load_time() ELSE NULL END as confloadedsince,
CASE WHEN usesuper THEN pg_is_in_recovery() ELSE NULL END as
inrecovery, CASE WHEN usesuper THEN pg_last_xlog_receive_location()
ELSE NULL END as receiveloc, CASE WHEN usesuper THEN
pg_last_xlog_replay_location() ELSE NULL END as replayloc, CASE WHEN
usesuper THEN pg_last_xact_replay_timestamp() ELSE NULL END as
replay_timestamp, CASE WHEN usesuper AND pg_is_in_recovery() THEN
pg_is_xlog_replay_paused() ELSE NULL END as isreplaypaused

       FROM pg_user WHERE usename=current_user

2018-07-04 09:15:19 CEST 192.168.250.50(28559) postgresERROR:  could
not open file "global/11801": No such file or directory

2018-07-04 09:15:19 CEST 192.168.250.50(28559) postgresSTATEMENT: 
SELECT rolcreaterole, rolcreatedb FROM pg_roles WHERE rolname =

current_user;

2018-07-04 09:15:22 CEST 192.168.250.50(28561) [unknown]LOG: 
connection received: host=192.168.250.50 port=28561


2018-07-04 09:15:22 CEST 192.168.250.50(28561) g11BaseLOG: 
connection authorized: user=dbadmin database=g11Base


2018-07-04 09:15:23 CEST 192.168.250.50(28561) g11BaseERROR:  could
not open file "pg_tblspc/24579/PG_9.1_201105231/24580/11866": No
such file or directory

2018-07-04 09:15:23 CEST 192.168.250.50(28561) g11BaseSTATEMENT: 
SELECT CASE WHEN nspname LIKE E'pg\\_temp\\_%' THEN 1


         WHEN (nspname LIKE E'pg\\_%') THEN 0

         ELSE 3 END AS nsptyp,

        nsp.nspname, nsp.oid,
pg_get_userbyid(nspowner) AS namespaceowner, nspacl,
description,   has_schema_privilege(nsp.oid, 'CREATE') as cancreate,

     (SELECT array_agg(label) FROM pg_seclabels sl1
WHERE sl1.objoid=nsp.oid) AS labels,

     (SELECT array_agg(provider) FROM pg_seclabels sl2
WHERE sl2.objoid=nsp.oid) AS providers

       FROM pg_namespace nsp

       LEFT OUTER JOIN pg_description des ON
(des.objoid=nsp.oid AND des.classoid='pg_namespace'::regclass)

     WHERE NOT ((nspname = 'pg_catalog' AND EXISTS
(SELECT 1 FROM pg_class WH

Re: Postgres sometimes stalling on 'percentile_cont'

2018-07-04 Thread Tom Lane
Tom van Tilburg  writes:
> I have a set of relatively complex queries producing tables (including
> postgis and pgpointcloud functions) that I run consecutively, and sometimes
> (depending on the base-data) my 5th query stalls (CPU 100%, runs forever)
> seemingly on the percentile_cont function. *When I replace percentile_cont
> with just a value it passes*.

> The setup roughly looks like this:
> query1 - creates 77 records with polygons
> query2 - creates 89 records with polygons
> query3 - creates ~350k records with points (inside above polygons)
> query4 - creates ~220k records with points clustered by height (from result
> query3)
> query5 - creates ~102k records with point clustered by normal (from result
> query3)

> The odd thing is, when I run query5 directly after query4, it will stall on
> some datasets (always same sets). Though when I cancel the query and run it
> again, it will pass in about 2 seconds.

Hard to say for sure with just this much detail, but what this smells
like is a bad query plan choice based on out-of-date statistics.  The
fact that the query is fast when you retry could then be explained by
supposing that the autovacuum daemon has gotten in there and updated
the stats while you were waiting.  So I'd try inserting a manual ANALYZE
of the table(s) that the earlier queries modify.

If that doesn't fix it, we'd need much more detail to offer help.  See

https://wiki.postgresql.org/wiki/Slow_Query_Questions

It'd be particularly useful to compare EXPLAIN output in both the
"slow" and "fast" states.

regards, tom lane
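
Concretely, that might look like the following before re-running query5. The
table name is taken from the posted fragment; substitute whichever tables
query3 and query4 actually write, and capture the plan in both the slow and
the fast state:

ANALYZE clustered;                      -- repeat for the other tables query3/query4 populate
EXPLAIN (ANALYZE, BUFFERS) <query5>;    -- compare this output between the slow and fast runs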



Re: Return select statement with sql case statement

2018-07-04 Thread legrand legrand
Hello,

Sorry, your description is not clear...
Why do you use a GROUP BY on product without an aggregate function (min, max,
sum)?

Where is numberOfPremiumDays defined?


Maybe using UNION can solve your problem:

select 
numberOfPremiumDays,
product_id,
premium_price,
period_price
from product, PremiumDays
where occupation_type_id = 1
and numberOfPremiumDays = date_part('day',
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp))
UNION ALL
select 
numberOfPremiumDays,
product_id,
classic_price,
period_price
from product1, PremiumDays
where occupation_type_id = 1
and numberOfPremiumDays != date_part('day',
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp))

Regards
PAscal






RE: Unable to Connect to DB Instance

2018-07-04 Thread Boblitz John
Hello Tom,

Thanks - I get "pg_db_role_setting" as a response.

I have already attempted to reindex system but get:

NOTICE:  table "pg_catalog.pg_class" was reindexed
NOTICE:  table "pg_catalog.pg_statistic" was reindexed
NOTICE:  table "pg_catalog.pg_type" was reindexed
NOTICE:  table "pg_catalog.pg_attribute" was reindexed
NOTICE:  table "pg_catalog.pg_authid" was reindexed
ERROR:  could not open file "base/11919/11680": No such file or directory


John


> -Original Message-
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Mittwoch, 4. Juli 2018 17:50
> To: Boblitz John 
> Cc: pgsql-gene...@postgresql.org
> Subject: Re: Unable to Connect to DB Instance
> 
> Boblitz John  writes:
> > 2.   Users reported messages similar to "could not open file
> "global/11801": No such file or directory"
> 
> I'd try "select relname from pg_class where pg_relation_filenode(oid) =
> 11801" to see if you can identify the problematic relation that way.
> 
> If you're lucky, this is just loss of some system catalog index in which case
> reindexing will fix it.  The fact that you're able to get through connecting,
> and the errors only show up with queries, is somewhat promising given that
> the problem looks like it's related to pg_authid or pg_db_role_setting.
> 
>   regards, tom lane



Re: Cloning schemas

2018-07-04 Thread Łukasz Jarych
Hi Melvin,

thank you very much. Awesome!!!

Best,
Jacek


On Tue, 3 Jul 2018 at 15:34, Melvin Davidson wrote:

>
>
>
>> ERROR: BŁĄD: you cannot add to column "TopoToVersion_ID" DETAIL: Column
>> "TopoToVersion_ID" is an identity column defined as GENERATED ALWAYS. HINT:
>> Use OVERRIDING SYSTEM VALUE to override. CONTEXT: wyrażenie SQL "INSERT
>> INTO version4.t_topotoversions SELECT * FROM public.t_topotoversions;"
>> funkcja PL/pgSQL clone_schema(text,text,boolean), wiersz 212 w EXECUTE
>>
>
> Lukasz,
> That ERROR is occurring because you chose to copy the data (include_recs =
> TRUE).
> I have added OVERRIDING SYSTEM VALUE to the insert statement and attached
> revised version.
> CAUTION: The value of TopoToVersion_ID and any other IDENTITY columns may
> be changed.
>
> The revised version is attached.
>
>
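
For reference, the shape of the INSERT Melvin describes above (schema and table
names are taken from the quoted error message; this is a sketch of the generated
statement, not the attached function):

INSERT INTO version4.t_topotoversions
OVERRIDING SYSTEM VALUE
SELECT * FROM public.t_topotoversions;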


Re: Cloning schemas

2018-07-04 Thread Melvin Davidson
On Wed, Jul 4, 2018 at 2:27 PM, Łukasz Jarych  wrote:

> Hi Melvin,
>
>
>
> Best,
> Jacek
>
>
> On Tue, 3 Jul 2018 at 15:34, Melvin Davidson wrote:
>
>>
>>
>>
>>> ERROR: BŁĄD: you cannot add to column "TopoToVersion_ID" DETAIL: Column
>>> "TopoToVersion_ID" is an identity column defined as GENERATED ALWAYS. HINT:
>>> Use OVERRIDING SYSTEM VALUE to override. CONTEXT: wyrażenie SQL "INSERT
>>> INTO version4.t_topotoversions SELECT * FROM public.t_topotoversions;"
>>> funkcja PL/pgSQL clone_schema(text,text,boolean), wiersz 212 w EXECUTE
>>>
>>
>> Lukasz,
>> That ERROR is occurring because you chose to copy the data (include_recs
>> = TRUE).
>> I have added OVERRIDING SYSTEM VALUE to the insert statement and
>> attached revised version.
>> CAUTION: The value of TopoToVersion_ID and any other IDENTITY columns
>> may be changed.
>>
>> The revised version is attached.
>>
>>
> thank you very much. Awesome!!!

Jacek,
You are quite welcome.



-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Cloning schemas

2018-07-04 Thread DiasCosta

Hi Melvin,

I'm new to clone_schema.
Can I use it on PostgreSQL 9.6?

TIA
DCosta



On 03-07-2018 14:34, Melvin Davidson wrote:




ERROR: BŁĄD: you cannot add to column "TopoToVersion_ID" DETAIL:
Column "TopoToVersion_ID" is an identity column defined as
GENERATED ALWAYS. HINT: Use OVERRIDING SYSTEM VALUE to override.
CONTEXT: wyrażenie SQL "INSERT INTO version4.t_topotoversions
SELECT * FROM public.t_topotoversions;" funkcja PL/pgSQL
clone_schema(text,text,boolean), wiersz 212 w EXECUTE


Lukasz,
That ERROR is occurring because you chose to copy the data 
(include_recs = TRUE).
I have added OVERRIDING SYSTEM VALUE to the insert statement and 
attached revised version.
CAUTION: The value of TopoToVersion_ID and any other IDENTITY columns 
may be changed.


The revised version is attached.



--
J. M. Dias Costa
Telef. 214026948 Telem. 939307421

If you forward this message to third parties, please:
1. Delete my e-mail address and my name.
2. Also delete your friends' addresses before distributing.
3. Address YOUR recipients as blind copies (Cc or Bcc).
By doing so you will hinder the spread of "viruses", "spam" and
"banners" and help preserve everyone's privacy.
Thank you.

Note: Any words in my writing that do not respect the ill-fated
orthographic agreement are not there by accident.



Re: Unable to Connect to DB Instance

2018-07-04 Thread Tom Lane
Boblitz John  writes:
> Thanks - I get "pg_db_role_setting" as a response.

Hm ... not its index?  If the table itself is gone, it's surprising
that you can get through session startup.

> I have already attempted to reindex system but get:

I had in mind just reindexing the specific table you're having trouble
with ... but this:

> ERROR:  could not open file "base/11919/11680": No such file or directory

shows that there's another table that also has a problem, and there
may be more :-(.  I don't know what the odds are that you can get out
of this completely.  I would NOT recommend "reindex system" as a
blunderbuss solution.  You do not know how much is corrupted and there's
a significant chance of making things worse by tromping over the whole
database using catalogs of uncertain reliability.

Did you identify which table 11680 is?

In the case of pg_db_role_setting, a possible solution is to "touch" the
missing file so it exists; it'll be empty, which means that you'll have
lost any ALTER DATABASE/ROLE SET settings, but that's better than not
being able to dump at all.  (You might then need to REINDEX
pg_db_role_setting to get its indexes in sync with it being empty.)

Whether an equally drastic answer is tolerable for your other missing
table(s) depends on what they are...

regards, tom lane
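
A sketch of how those two steps fit together for pg_db_role_setting. The path
comes from the server itself and is relative to the data directory; creating
the empty file has to happen at the OS level, outside SQL:

-- 1. find the file the server expects, relative to the data directory
SELECT pg_relation_filepath('pg_db_role_setting');
-- 2. after creating that file as an empty file at the OS level,
--    rebuild the indexes so they agree with the now-empty table
REINDEX TABLE pg_db_role_setting;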



Re: Unable to Connect to DB Instance

2018-07-04 Thread Adrian Klaver

On 07/04/2018 12:36 PM, Tom Lane wrote:

Boblitz John  writes:

Thanks - I get "pg_db_role_setting" as a response.


Hm ... not its index?  If the table itself is gone, it's surprising
that you can get through session startup.


I have already attempted to reindex system but get:


I had in mind just reindexing the specific table you're having trouble
with ... but this:


ERROR:  could not open file "base/11919/11680": No such file or directory


shows that there's another table that also has a problem, and there
may be more :-(.  I don't know what the odds are that you can get out
of this completely.  I would NOT recommend "reindex system" as a
blunderbuss solution.  You do not know how much is corrupted and there's
a significant chance of making things worse by tromping over the whole
database using catalogs of uncertain reliability.

Did you identify which table 11680 is?


There is also this from the OP:

2018-07-04 09:15:23 CEST 192.168.250.50(28561) g11BaseERROR:  could not 
open file "pg_tblspc/24579/PG_9.1_201105231/24580/11866": No such file 
or directory


Which failed on a query that references pgAgent and Slony.

So are you using either or both of those programs?



In the case of pg_db_role_setting, a possible solution is to "touch" the
missing file so it exists; it'll be empty, which means that you'll have
lost any ALTER DATABASE/ROLE SET settings, but that's better than not
being able to dump at all.  (You might then need to REINDEX
pg_db_role_setting to get its indexes in sync with it being empty.)

Whether an equally drastic answer is tolerable for your other missing
table(s) depends on what they are...

regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Return select statement with sql case statement

2018-07-04 Thread Ron

On 07/04/2018 10:32 AM, hmidi slim wrote:
Actually, I need the use of case because based on the numberOfPremiumDays 
there are different type of treatment:

select numberOfPremiumDays
            case  when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

                select product_id,
                premium_price,
    period_price
                from product
                where occupation_type_id = 1
                group by product_id, occupation_type_id
   else
    select product_id,
                classic_price,
    period_price
                from product1
                where occupation_type_id = 1
                group by product_id, occupation_type_id



Then try:
select product_id,
case when numberOfPremiumDays = date_part('day', ('2018-11-05'::timestamp) - 
('2018-11-01'::timestamp)) then

       premium_price
   else
   period_price
   end as the_price
from product
where occupation_type_id = 1
order by product_id, occupation_type_id


--
Angular momentum makes the world go 'round.



Re: Return select statement with sql case statement

2018-07-04 Thread Adrian Klaver

On 07/04/2018 03:03 PM, Ron wrote:

On 07/04/2018 10:32 AM, hmidi slim wrote:
Actually, I need the use of case because based on the 
numberOfPremiumDays there are different type of treatment:

select numberOfPremiumDays
            case  when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

                select product_id,
                premium_price,
    period_price
                from product
                where occupation_type_id = 1
                group by product_id, occupation_type_id
   else
    select product_id,
                classic_price,
    period_price
                from product1
                where occupation_type_id = 1
                group by product_id, occupation_type_id



Then try:
select product_id,
case when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

        premium_price
    else
    period_price
    end as the_price
from product
where occupation_type_id = 1
order by product_id, occupation_type_id


The issue with the above is that the table changes from product to product1 
in the OP's desired behavior, so the price switch alone will not work :(








--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Return select statement with sql case statement

2018-07-04 Thread Ron




On 07/04/2018 05:08 PM, Adrian Klaver wrote:

On 07/04/2018 03:03 PM, Ron wrote:

On 07/04/2018 10:32 AM, hmidi slim wrote:
Actually, I need the use of case because based on the 
numberOfPremiumDays there are different type of treatment:

select numberOfPremiumDays
            case  when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

                select product_id,
                premium_price,
    period_price
                from product
                where occupation_type_id = 1
                group by product_id, occupation_type_id
   else
    select product_id,
                classic_price,
    period_price
                from product1
                where occupation_type_id = 1
                group by product_id, occupation_type_id



Then try:
select product_id,
case when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

        premium_price
    else
    period_price
    end as the_price
from product
where occupation_type_id = 1
order by product_id, occupation_type_id


The issue with the above is that table changes from product to product1 in 
the OP's desired behavior so the price switch alone will not work:(


Ah, didn't notice that.  Then... dynamic sql constructed by the programming 
language executing the query?



--
Angular momentum makes the world go 'round.



Re: Cloning schemas

2018-07-04 Thread Melvin Davidson
On Wed, Jul 4, 2018 at 2:48 PM, DiasCosta  wrote:

> Hi Melvin,
>
> I'm new to clone_schema.
> Can I use it on PostgreSQL 9.6?
>
> TIA
> DCosta
>

> Can I use it on PostgreSQL 9.6?
Yes, but because the developer(s) once again monkeyed with the system
catalogs, there are now
two versions. One for 10 and one for 9.6 and below. I've attached the 9.6
version for you.
-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2017-04-17 by Melvin Davidson
--  Added SELECT REPLACE for schema views
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  func_oid oid;
  con_oid  oid;
  v_path   text;
  v_func   text;
  v_args   text;
  v_conname    text;
  v_rule   text;
  v_trig   text;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  v_column text;
  qry  text;
  dest_qry text;
  v_def    text;
  v_stat   integer;
  seqval   bigint;
  sq_last_value    bigint;
  sq_max_value bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value bigint;
  sq_cache_value   bigint;
  sq_log_cnt   bigint;
  sq_is_called boolean;
  sq_is_cycled boolean;
  sq_cycled    char(10);

BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
  IF NOT FOUND
THEN 
RAISE NOTICE 'source schema % does not exist!', source_schema;
RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
  IF FOUND
THEN 
RAISE NOTICE 'dest schema % already exists!', dest_schema;
RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;

  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
   AND objsubid = 0;
  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;

SET search_path TO dest_schema, source_schema, 'public', pg_catalog;  
SELECT current_schemas(TRUE) INTO v_path;
RAISE NOTICE 'search path = %', v_path;

  -- Create sequences
  -- TODO: Find a way to make this sequence's owner the correct table.
  FOR object IN
SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)
  LOOP
EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object);
srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

EXECUTE 'SELECT last_value, max_value, start_value, increment_by, 
min_value, cache_value, log_cnt, is_cycled, is_called 
  FROM ' || quote_ident(source_schema) || '.' || 
quote_ident(object) || ';' 
  INTO sq_last_value, sq_max_value, sq_start_value, 
sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, 
sq_is_called ; 

IF sq_is_cycled 
  THEN 
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;

EXECUTE 'ALTER SEQUENCE '   || quote_ident(dest_schema) || '.' || 
quote_ident(object) 
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value 
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH '   || sq_start_value
|| ' RESTART '  || sq_min_value 
|| ' CACHE '|| sq_cache_value 
|| sq_cycled || ' ;' ;

buffer := quote_ident(dest_schema) || '.' || quote_ident(object);
IF include_recs 
THEN
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_last_value || 
', ' || sq_is_called || ');' ; 
ELSE
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_start_value 
|| ', ' || sq_is_called || ');' ;
END IF;

  -- add sequence comments
SELECT oid INTO tbl_oid
  FROM pg_class 
 WHERE relkind = 'S'
   AND relnamespace = src_oid
   AND relname = quote_ident(object);

SELECT description INTO v_def
  FROM pg_description
 WHERE objoid = tbl_oid
   AND objsubid = 0;

IF FOUND
  THEN  
EXECUTE 'COMMENT ON SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object)
 || ' IS ''' || v_def || ''';';
END IF;


  END LOOP;

-- Create tables 
  FOR object IN
SELECT TABLE_NAME:

Re: Cloning schemas

2018-07-04 Thread Adrian Klaver

On 07/04/2018 03:38 PM, Melvin Davidson wrote:



On Wed, Jul 4, 2018 at 2:48 PM, DiasCosta > wrote:


Hi Melvin,

I'm new to clone_schema.
Can I use it on PostgreSQL 9.6?

TIA
DCosta


 > Can I use it on PostgreSQL 9.6?

Yes, but because the developer(s) once again monkeyed with the system 
catalogs, there are now


Well that is one of the things that distinguish a major release so it 
should be no surprise.


two versions. One for 10 and one for 9.6 and below. I've attached the 
9.6 version for you.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Cloning schemas

2018-07-04 Thread Melvin Davidson
On Wed, Jul 4, 2018 at 6:48 PM, Adrian Klaver 
wrote:

> On 07/04/2018 03:38 PM, Melvin Davidson wrote:
>
>>
>>
>> On Wed, Jul 4, 2018 at 2:48 PM, DiasCosta > > wrote:
>>
>> Hi Melvin,
>>
>> I'm new to clone_schema.
>> Can I use it on PostgreSQL 9.6?
>>
>> TIA
>> DCosta
>>
>>
>>  > Can I use it on PostgreSQL 9.6?
>>
>> Yes, but because the developer(s) once again monkeyed with the system
>> catalogs, there are now
>>
>
> Well that is one of the things that distinguish a major release so it
> should be no surprise.
>
>
>
The problem is, AFAICS, none of the changes introduced were really necessary
or improved performance.


*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Cloning schemas

2018-07-04 Thread Adrian Klaver

On 07/04/2018 03:53 PM, Melvin Davidson wrote:








The problem is, AFAICS, none of the changes induced were really 
necessary or increased performance.


The folks that wanted transactional ALTER SEQUENCE might disagree:):

https://www.postgresql.org/docs/10/static/release-10.html
"Move sequences' metadata fields into a new pg_sequence system catalog 
(Peter Eisentraut)


A sequence relation now stores only the fields that can be modified by 
nextval(), that is last_value, log_cnt, and is_called. Other sequence 
properties, such as the starting value and increment, are kept in a 
corresponding row of the pg_sequence catalog. ALTER SEQUENCE updates are 
now fully transactional, implying that the sequence is locked until 
commit. The nextval() and setval() functions remain nontransactional.


...
"




*Melvin Davidson**
Maj. Database & Exploration Specialist**
Universe Exploration Command – UXC***
Employment by invitation only!



--
Adrian Klaver
adrian.kla...@aklaver.com
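
For reference, a sketch of the catalog change that forces the two clone_schema
variants (my_seq is a placeholder sequence name; the pg_sequences view exists
only in 10 and later):

-- PostgreSQL 9.6 and earlier: metadata is stored in the sequence relation itself
SELECT last_value, start_value, increment_by, max_value, min_value, is_cycled
  FROM public.my_seq;

-- PostgreSQL 10+: the relation keeps only last_value, log_cnt and is_called;
-- the rest moved to the pg_sequence catalog / pg_sequences view
SELECT start_value, increment_by, max_value, min_value, cycle, last_value
  FROM pg_sequences
 WHERE schemaname = 'public' AND sequencename = 'my_seq';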



Re: Cloning schemas

2018-07-04 Thread Melvin Davidson
>The folks that wanted transactional ALTER SEQUENCE might disagree:):
Ah, so you mean the previous version was not working or sufficient?

https://www.postgresql.org/docs/9.6/static/sql-altersequence.html

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Return select statement with sql case statement

2018-07-04 Thread David G. Johnston
On Wednesday, July 4, 2018, Ron  wrote:
>
> Ah, didn't notice that.  Then... dynamic sql constructed by the
> programming language executing the query?
>

That, the UNION idea, or pull the common stuff into the from clause and
write two left joins then coalesce whichever one provided the row.  In
short, the OP cannot do what they thought they needed to do but hasn't
really provided any info for others to make alternative suggestions.

David J.
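
A rough sketch of the left-join-and-coalesce idea, assuming product and
product1 both carry product_id, occupation_type_id and the price columns shown
earlier, and that numberOfPremiumDays lives in some table the OP has not shown
(premium_days below is purely hypothetical):

SELECT d.numberOfPremiumDays,
       COALESCE(p.product_id, p1.product_id)       AS product_id,
       COALESCE(p.premium_price, p1.classic_price) AS price,
       COALESCE(p.period_price, p1.period_price)   AS period_price
  FROM premium_days d
  LEFT JOIN product p
         ON p.occupation_type_id = 1
        AND d.numberOfPremiumDays =  date_part('day', ('2018-11-05'::timestamp) - ('2018-11-01'::timestamp))
  LEFT JOIN product1 p1
         ON p1.occupation_type_id = 1
        AND d.numberOfPremiumDays <> date_part('day', ('2018-11-05'::timestamp) - ('2018-11-01'::timestamp));

At most one of the two joins matches for each row, so the COALESCE picks
whichever table applied.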


Re: Cloning schemas

2018-07-04 Thread Łukasz Jarych
Hi Melvin.

Folks wrote only that it is an old version and didn't care :)

Thank you very much,
Best,
Jacek

On Thu, 5 Jul 2018 at 01:09, Melvin Davidson wrote:

>
> >The folks that wanted transactional ALTER SEQUENCE might disagree:):
> Ah, so you mean the previous version was not working or sufficient?
>
> https://www.postgresql.org/docs/9.6/static/sql-altersequence.html
>
> --
> *Melvin Davidson*
> *Maj. Database & Exploration Specialist*
> *Universe Exploration Command – UXC*
> Employment by invitation only!
>


Re: Cloning schemas

2018-07-04 Thread Łukasz Jarych
The function from the link is not working.

On Thu, 5 Jul 2018 at 07:49, Łukasz Jarych wrote:

> Hi Melvin.
>
> Folks wrote only that it is an old version and didn't care :)
>
> Thank you very much,
> Best,
> Jacek
>
> On Thu, 5 Jul 2018 at 01:09, Melvin Davidson wrote:
>
>>
>> >The folks that wanted transactional ALTER SEQUENCE might disagree:):
>> Ah, so you mean the previous version was not working or sufficient?
>>
>> https://www.postgresql.org/docs/9.6/static/sql-altersequence.html
>>
>> --
>> *Melvin Davidson*
>> *Maj. Database & Exploration Specialist*
>> *Universe Exploration Command – UXC*
>> Employment by invitation only!
>>
>