Re: [GENERAL] Poor performance on 9.4.4 when matching multiple columns from left side of LATERAL join

2015-11-13 Thread Steven Grimm




David G. Johnston 
November 13, 2015 at 10:48 PM
​
​IN semantics w.r.t NULL can result in atrocious performance in some 
instances.  I cannot speak to this one in particular but I'm curious if

[...]
WHERE setting_id = 1
AND setting_value = 'common_1'
AND (
owner_id = mid.id1
OR
owner_id = mid.id2
OR
owner_id = mid.id3
)​

placed into an EXISTS would work any better.


Better, but still bad. Execution time on my test system goes from 6583ms 
to 4394ms, whereas the version with just one "=" takes 12ms.


  It seems pointless to include a LATERAL if you are not going to 
output any of the fields from the laterally joined relation.


True. Our actual query is more complex than the stripped-down one here. 
I wanted to reduce it to the bare minimum to demonstrate the performance 
problem so as to make it easier to figure out what was going on. 
Including columns from the LATERAL query or leaving them out doesn't 
have much impact on execution time.


-Steve


Re: [GENERAL] Poor performance on 9.4.4 when matching multiple columns from left side of LATERAL join

2015-11-13 Thread David G. Johnston
On Fri, Nov 13, 2015 at 11:25 PM, Steven Grimm 
wrote:

> We want to find all the rows from multi_id where any of the IDs (including
> its primary key) have a certain setting with a certain value.
>
> LATERAL seemed like the tool for the job, so we tried the following:


> -
> SELECT mid.id1
> FROM multi_id AS mid,
> LATERAL (
> SELECT 1
> FROM settings
> WHERE setting_id = 1
> AND setting_value = 'common_1'
> AND owner_id IN (mid.id1, mid.id2, mid.id3)
> ) AS setting_matcher;
> -
>

​
​IN semantics w.r.t NULL can result in atrocious performance in some
instances.  I cannot speak to this one in particular but I'm curious if
[...]
WHERE setting_id = 1
AND setting_value = 'common_1'
AND (
owner_id = mid.id1
OR
owner_id = mid.id2
OR
owner_id = mid.id3
)​


placed into an EXISTS would work any better.  It seems pointless to include
a LATERAL if you are not going to output any of the fields from the
laterally joined relation.  If you want a join, I'm not sure that INNER
wouldn't be just as good, with an ON clause of (owner_id = mid.id1 OR
owner_id = mid.id2 OR owner_id = mid.id3).

David J.


[GENERAL] Poor performance on 9.4.4 when matching multiple columns from left side of LATERAL join

2015-11-13 Thread Steven Grimm
We have a table, call it "multi_id", that contains columns with IDs of 
various kinds of objects in my system, and another table that's a 
generic owner/key/value store for object attributes (think configuration 
settings, and I'll refer to this table as "settings"). To wit:


-
CREATE TABLE multi_id (
  id1 INTEGER PRIMARY KEY,
  id2 INTEGER,
  id3 INTEGER
);
CREATE TABLE settings (
  owner_id INTEGER,
  setting_id INTEGER,
  setting_value TEXT,
  PRIMARY KEY (owner_id, setting_id)
);
CREATE UNIQUE INDEX multi_id_idx_id1 ON multi_id (id1, id2);
CREATE UNIQUE INDEX multi_id_idx_id2 ON multi_id (id2, id1);
CREATE INDEX settings_idx_setting_id ON settings (setting_id, 
setting_value);

-

We want to find all the rows from multi_id where any of the IDs 
(including its primary key) have a certain setting with a certain value.


LATERAL seemed like the tool for the job, so we tried the following:

-
SELECT mid.id1
FROM multi_id AS mid,
LATERAL (
SELECT 1
FROM settings
WHERE setting_id = 1
AND setting_value = 'common_1'
AND owner_id IN (mid.id1, mid.id2, mid.id3)
) AS setting_matcher;
-

When we're searching for a common value, this query takes a LONG time. 
It turns out the culprit is the IN clause in the subquery. If I change 
"owner_id IN (mid.id1, mid.id2, mid.id3)" to "owner_id = mid.id1", the 
query executes in about 1/900 the time. It remains that fast if I change 
mid.id1 to mid.id2 or mid.id3, meaning if I do a UNION of those three 
queries to get the same result set as the query above, the whole thing 
is roughly 300x faster.
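
For reference, the UNION version looks roughly like this (a sketch, with the
single-ID subquery repeated once per column):

SELECT mid.id1
FROM multi_id AS mid,
LATERAL (
    SELECT 1 FROM settings
    WHERE setting_id = 1 AND setting_value = 'common_1' AND owner_id = mid.id1
) AS setting_matcher
UNION
SELECT mid.id1
FROM multi_id AS mid,
LATERAL (
    SELECT 1 FROM settings
    WHERE setting_id = 1 AND setting_value = 'common_1' AND owner_id = mid.id2
) AS setting_matcher
UNION
SELECT mid.id1
FROM multi_id AS mid,
LATERAL (
    SELECT 1 FROM settings
    WHERE setting_id = 1 AND setting_value = 'common_1' AND owner_id = mid.id3
) AS setting_matcher;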


Execution plan for the IN version followed by the = version (for just 
one of the IDs):


-
Nested Loop  (cost=5.39..8107.18 rows=285 width=4) (actual time=1.230..6456.567 rows=4499 loops=1)
   Join Filter: (settings.owner_id = ANY (ARRAY[mid.id1, mid.id2, mid.id3]))
   Rows Removed by Join Filter: 22495501
   ->  Seq Scan on multi_id mid  (cost=0.00..78.00 rows=5000 width=12) (actual time=0.010..1.385 rows=5000 loops=1)
   ->  Materialize  (cost=5.39..310.66 rows=95 width=4) (actual time=0.000..0.263 rows=4500 loops=5000)
         ->  Bitmap Heap Scan on settings  (cost=5.39..310.19 rows=95 width=4) (actual time=1.207..3.210 rows=4500 loops=1)
               Recheck Cond: ((setting_id = 1) AND (setting_value = 'common_1'::text))
               Heap Blocks: exact=1405
               ->  Bitmap Index Scan on settings_idx_setting_id  (cost=0.00..5.37 rows=95 width=0) (actual time=0.930..0.930 rows=4500 loops=1)
                     Index Cond: ((setting_id = 1) AND (setting_value = 'common_1'::text))
 Planning time: 0.178 ms
 Execution time: 6456.897 ms

 Hash Join  (cost=145.98..472.93 rows=103 width=4) (actual time=2.677..6.890 rows=4500 loops=1)
   Hash Cond: (settings.owner_id = mid.id1)
   ->  Bitmap Heap Scan on settings  (cost=5.48..330.50 rows=103 width=4) (actual time=1.194..3.477 rows=4500 loops=1)
         Recheck Cond: ((setting_id = 1) AND (setting_value = 'common_1'::text))
         Heap Blocks: exact=1405
         ->  Bitmap Index Scan on settings_idx_setting_id  (cost=0.00..5.45 rows=103 width=0) (actual time=0.854..0.854 rows=4500 loops=1)
               Index Cond: ((setting_id = 1) AND (setting_value = 'common_1'::text))
   ->  Hash  (cost=78.00..78.00 rows=5000 width=4) (actual time=1.463..1.463 rows=5000 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 176kB
         ->  Seq Scan on multi_id mid  (cost=0.00..78.00 rows=5000 width=4) (actual time=0.007..0.717 rows=5000 loops=1)
 Planning time: 0.311 ms
 Execution time: 7.166 ms
-

What am I doing wrong in the IN version of the query, if anything?

I wrote a script to populate a test database with a simplified version 
of our real data model and demonstrate the behavior I'm seeing: 
https://gist.github.com/sgrimm-sg/2722068ef844d3e02129


Thanks!





[GENERAL] fast refresh materialized view

2015-11-13 Thread Pradhan, Sabin
Does Postgres have a fast-refresh materialized view that supports incremental
refresh? In Oracle, this is achieved with a materialized view log. Not sure how
to implement it in Postgres.



Re: [GENERAL] postgres sometimes returns no data

2015-11-13 Thread Adrian Klaver

On 11/13/2015 09:56 AM, db042190 wrote:

I looked at event viewer in both Application and System.  Neither shows
anything unusual around that time, just Volume Shadow Copy service stopping.


Hmm, is something trying to run a backup over your Postgres data 
directory at the same time you are running the Pentaho task?


I am not much of a Windows user anymore, so the below is coming from
possibly faulty memory.


Check Windows Backup and Scheduler for backup jobs.





--
View this message in context: 
http://postgresql.nabble.com/postgres-sometimes-returns-no-data-tp5873720p5873841.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] postgres sometimes returns no data

2015-11-13 Thread db042190
OK, I poked around and see a reload config option in pgAdmin III.  I guess
I'm ready to go.



--
View this message in context: 
http://postgresql.nabble.com/postgres-sometimes-returns-no-data-tp5873720p5873846.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] postgres sometimes returns no data

2015-11-13 Thread db042190
I looked at event viewer in both Application and System.  Neither shows
anything unusual around that time, just Volume Shadow Copy service stopping.



--
View this message in context: 
http://postgresql.nabble.com/postgres-sometimes-returns-no-data-tp5873720p5873841.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] postgres sometimes returns no data

2015-11-13 Thread db042190
Run this in pgAdmin III to reload: SELECT pg_reload_conf() ?
Also, I now see the format for multiple log line prefixes: a blank after
each choice, including the last.  I will try that once I'm comfortable with how
I'll be reloading Postgres.



--
View this message in context: 
http://postgresql.nabble.com/postgres-sometimes-returns-no-data-tp5873720p5873834.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] postgres sometimes returns no data

2015-11-13 Thread Adrian Klaver

On 11/13/2015 08:39 AM, db042190 wrote:

currently, log line prefix is log_line_prefix = '%t '.

Not sure what the blank following the t does.


The blank is so this happens:

2015-11-10 00:55:11 EST HINT
   ^

It is a literal space to separate the log prefix from the log message to 
make things clearer.




u probably want me to change it to log_line_prefix = '%t%a%u%d%r%h%p% ' or
something like that?


That may be a bit much. %a probably does not do what you think it does:
it returns an application name that has to be set and may not be set by a
connecting client. %r and %h are somewhat redundant, with %r just adding the
port to %h.
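
For example, something along these lines is usually plenty (illustrative
values only, adjust to taste):

log_line_prefix = '%t %u@%d %p %h '    # timestamp, user@database, backend PID, client host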




--
View this message in context: 
http://postgresql.nabble.com/postgres-sometimes-returns-no-data-tp5873720p5873831.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] postgres sometimes returns no data

2015-11-13 Thread db042190
A couple of these say they can't be changed after session start. Is that the
same as a reload?  Can I change it back if I do a reload?



--
View this message in context: 
http://postgresql.nabble.com/postgres-sometimes-returns-no-data-tp5873720p5873832.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] postgres sometimes returns no data

2015-11-13 Thread db042190
currently, log line prefix is log_line_prefix = '%t '.

Not sure what the blank following the t does.

u probably want me to change it to log_line_prefix = '%t%a%u%d%r%h%p% ' or
something like that?



--
View this message in context: 
http://postgresql.nabble.com/postgres-sometimes-returns-no-data-tp5873720p5873831.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] postgres sometimes returns no data

2015-11-13 Thread Adrian Klaver

On 11/13/2015 07:56 AM, db042190 wrote:

pls remember that since the switches didn't run, this all (or mostly) could
be related to an unknown process.  My bad component appears to have started
around 54 seconds into nov 10.  The overall job appears to have ended around


Actually 54 minutes, which makes the below occur over a roughly 7 minute 
time frame.



01:01:44 which makes me wonder if a lot of the EOFs are related to some sort
of cleanup effort between pentaho and postgres.  I'm poking around for info


Well, without an indication of what the actual commands being run are, this
is going to be difficult to troubleshoot from this end.


What needs to be sorted out (besides increasing checkpoint_segments) is
the source of this:


unexpected EOF on client connection
could not receive data from client: No
connection could be made because the target machine actively refused it.

I would spend some time here:

http://www.postgresql.org/docs/9.4/interactive/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT

Enable log_connections and log_disconnections.

Look at log_line_prefix and add at least %u and %p

Look at log_statement. Though this can fill up a log in a hurry,
especially if you set it to 'all'.


You will need to reload (not restart) the server to have the settings
changes take effect.
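
As a rough sketch of the relevant postgresql.conf lines (values are
illustrative, not recommendations):

log_connections = on
log_disconnections = on
log_line_prefix = '%t %u %p '        # timestamp, user, backend PID
#log_statement = 'all'               # very verbose; enable with care
checkpoint_segments = 16             # the 9.x default of 3 is low for write-heavy loads

Then reload with "SELECT pg_reload_conf();" or "pg_ctl reload".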



on how to run perfmon unattended, how big its footprint would be if
monitoring for an extended period of time unattended , and how to look at
what the os might have told me that day at that time.  I took a peek at
available disk space, and while I don't know what postgres might be allowed
to take, there is a ton of available space.  I closed out about a handful of
idle queries on pg admin iii left by programmers.  I'll post more soon.

2015-11-10 00:54:03 EST LOG:  checkpoints are occurring too frequently (116
seconds apart)
2015-11-10 00:54:03 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:54:24 EST LOG:  checkpoints are occurring too frequently (21
seconds apart)
2015-11-10 00:54:24 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:54:46 EST LOG:  checkpoints are occurring too frequently (22
seconds apart)
2015-11-10 00:54:46 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:55:11 EST LOG:  checkpoints are occurring too frequently (25
seconds apart)
2015-11-10 00:55:11 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:55:36 EST LOG:  checkpoints are occurring too frequently (25
seconds apart)
2015-11-10 00:55:36 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:56:07 EST LOG:  checkpoints are occurring too frequently (31
seconds apart)
2015-11-10 00:56:07 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:56:38 EST LOG:  checkpoints are occurring too frequently (31
seconds apart)
2015-11-10 00:56:38 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:57:15 EST LOG:  checkpoints are occurring too frequently (37
seconds apart)
2015-11-10 00:57:15 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:57:53 EST LOG:  checkpoints are occurring too frequently (38
seconds apart)
2015-11-10 00:57:53 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:58:35 EST LOG:  checkpoints are occurring too frequently (42
seconds apart)
2015-11-10 00:58:35 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:59:12 EST LOG:  checkpoints are occurring too frequently (37
seconds apart)
2015-11-10 00:59:12 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:59:34 EST LOG:  checkpoints are occurring too frequently (22
seconds apart)
2015-11-10 00:59:34 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 01:00:15 EST LOG:  checkpoints are occurring too frequently (41
seconds apart)
2015-11-10 01:00:15 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 01:00:46 EST LOG:  checkpoints are occurring too frequently (31
seconds apart)
2015-11-10 01:00:46 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 01:01:15 EST LOG:  checkpoints are occurring too frequently (29
seconds apart)
2015-11-10 01:01:15 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 01:01:44 EST LOG:  could not receive data from client: No
connection could be made because the target machine actively refused it.


2015-11-10 01:01:44 EST LOG:  unexpected EOF on client connection
2015-11-10 01:01:44 EST LOG:  could not receive data from client: No
connection could be made because the target machine actively refu

Re: [GENERAL] postgres sometimes returns no data

2015-11-13 Thread db042190
it looks like a lot of the log attributes in the config are defaulting,
probably to NOTICE.  I see a # in front of many.  If the community can
advise on better choices and how bad the footprint would be for me to change
these, pls let me know.  I'm guessing the log excerpt I posted isn't telling
us much. 



--
View this message in context: 
http://postgresql.nabble.com/postgres-sometimes-returns-no-data-tp5873720p5873828.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] postgres sometimes returns no data

2015-11-13 Thread db042190
pls remember that since the switches didn't run, this all (or mostly) could
be related to an unknown process.  My bad component appears to have started
around 54 seconds into nov 10.  The overall job appears to have ended around
01:01:44 which makes me wonder if a lot of the EOFs are related to some sort
of cleanup effort between pentaho and postgres.  I'm poking around for info
on how to run perfmon unattended, how big its footprint would be if
monitoring for an extended period of time unattended , and how to look at
what the os might have told me that day at that time.  I took a peek at
available disk space, and while I don't know what postgres might be allowed
to take, there is a ton of available space.  I closed out about a handful of
idle queries on pg admin iii left by programmers.  I'll post more soon.

2015-11-10 00:54:03 EST LOG:  checkpoints are occurring too frequently (116
seconds apart)
2015-11-10 00:54:03 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:54:24 EST LOG:  checkpoints are occurring too frequently (21
seconds apart)
2015-11-10 00:54:24 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:54:46 EST LOG:  checkpoints are occurring too frequently (22
seconds apart)
2015-11-10 00:54:46 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:55:11 EST LOG:  checkpoints are occurring too frequently (25
seconds apart)
2015-11-10 00:55:11 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:55:36 EST LOG:  checkpoints are occurring too frequently (25
seconds apart)
2015-11-10 00:55:36 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:56:07 EST LOG:  checkpoints are occurring too frequently (31
seconds apart)
2015-11-10 00:56:07 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:56:38 EST LOG:  checkpoints are occurring too frequently (31
seconds apart)
2015-11-10 00:56:38 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:57:15 EST LOG:  checkpoints are occurring too frequently (37
seconds apart)
2015-11-10 00:57:15 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:57:53 EST LOG:  checkpoints are occurring too frequently (38
seconds apart)
2015-11-10 00:57:53 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:58:35 EST LOG:  checkpoints are occurring too frequently (42
seconds apart)
2015-11-10 00:58:35 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:59:12 EST LOG:  checkpoints are occurring too frequently (37
seconds apart)
2015-11-10 00:59:12 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 00:59:34 EST LOG:  checkpoints are occurring too frequently (22
seconds apart)
2015-11-10 00:59:34 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 01:00:15 EST LOG:  checkpoints are occurring too frequently (41
seconds apart)
2015-11-10 01:00:15 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 01:00:46 EST LOG:  checkpoints are occurring too frequently (31
seconds apart)
2015-11-10 01:00:46 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 01:01:15 EST LOG:  checkpoints are occurring too frequently (29
seconds apart)
2015-11-10 01:01:15 EST HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2015-11-10 01:01:44 EST LOG:  could not receive data from client: No
connection could be made because the target machine actively refused it.


2015-11-10 01:01:44 EST LOG:  unexpected EOF on client connection
2015-11-10 01:01:44 EST LOG:  could not receive data from client: No
connection could be made because the target machine actively refused it.


2015-11-10 01:01:44 EST LOG:  unexpected EOF on client connection
2015-11-10 01:01:44 EST LOG:  could not receive data from client: No
connection could be made because the target machine actively refused it.


2015-11-10 01:01:44 EST LOG:  unexpected EOF on client connection
2015-11-10 01:01:44 EST LOG:  could not receive data from client: No
connection could be made because the target machine actively refused it.


2015-11-10 01:01:44 EST LOG:  unexpected EOF on client connection
2015-11-10 01:01:44 EST LOG:  could not receive data from client: No
connection could be made because the target machine actively refused it.


2015-11-10 01:01:44 EST LOG:  unexpected EOF on client connection
2015-11-10 01:01:44 EST LOG:  could not receive data from client: No
connection could be made because the target machine actively refused it.


2015-11-10 01:01:44 EST LOG:  could not receive data f

Re: [GENERAL]

2015-11-13 Thread Giuseppe Sacco
Hello,

On Friday 13/11/2015 at 13.38 +0800, Alex Luya wrote:
> Hello,
>        I created a new database by 
>        create database icare;
>        then quit off psql and  run:
>         pg_restore --clean --create --exit-on-error --dbname=icare
> icare-test.tar
>          it complains:
>           pg_restore: [archiver (db)] Error while PROCESSING TOC:
>           pg_restore: [archiver (db)] Error from TOC entry 21; 2615
> 80924 SCHEMA icare icare
>           pg_restore: [archiver (db)] could not execute query: ERROR:
>  permission denied for database icare
>            Command was: CREATE SCHEMA icare;

From what I understand, it means that the PostgreSQL user that is
restoring the dump cannot create a schema in the "icare" database. So, is
that user the same one that issued the "create database" earlier? If it's
not, then you should grant all required privileges to that user.
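
For example, something like this run as a superuser or as the database owner
(the role name is just a placeholder):

GRANT CREATE ON DATABASE icare TO restore_user;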

Bye,
Giuseppe




Re: [GENERAL]

2015-11-13 Thread Adrian Klaver

On 11/13/2015 05:19 AM, Melvin Davidson wrote:

It is very simple. If you have already created database icare, then
DO NOT use the --create flag!


You can do that, you just cannot do that when the database you are 
connecting to, --dbname=icare, is the one you want to create. I use 
--dbname=postgres to connect to the system database postgres and have 
pg_restore then clean/create the intended database from there. In this 
case the database does not seem to be the issue. The problem is when 
pg_restore starts trying to create SCHEMA in the database.
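
In other words, roughly this (a sketch based on the command already posted;
add connection options as needed):

pg_restore --clean --create --exit-on-error --dbname=postgres icare-test.tar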




http://www.postgresql.org/docs/9.2/interactive/app-pgrestore.html

And as requested before, please provide the O/S and PostgreSQL version.

On Fri, Nov 13, 2015 at 12:38 AM, Alex Luya <alexander.l...@gmail.com> wrote:

Hello,
I created a new database by

create database icare;

then quit off psql and  run:
 pg_restore --clean --create --exit-on-error --dbname=icare
icare-test.tar

  it complains:

   pg_restore: [archiver (db)] Error while PROCESSING TOC:
   pg_restore: [archiver (db)] Error from TOC entry 21; 2615
80924 SCHEMA icare icare
   pg_restore: [archiver (db)] could not execute query:
ERROR:  permission denied for database icare
Command was: CREATE SCHEMA icare;




--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL]

2015-11-13 Thread Adrian Klaver

On 11/12/2015 09:38 PM, Alex Luya wrote:

Hello,
I created a new database by

create database icare;

then quit off psql and  run:
 pg_restore --clean --create --exit-on-error --dbname=icare
icare-test.tar


Please show the pg_backup script that created icare-test.tar

Also in the database you are dumping from, in psql, what is the output from:

\l
\dn



  it complains:

   pg_restore: [archiver (db)] Error while PROCESSING TOC:
   pg_restore: [archiver (db)] Error from TOC entry 21; 2615
80924 SCHEMA icare icare
   pg_restore: [archiver (db)] could not execute query: ERROR:
  permission denied for database icare
Command was: CREATE SCHEMA icare;



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Array_to_json remove brackets

2015-11-13 Thread Leonardo M . Ramé

On 13/11/15 at 10:49, Merlin Moncure wrote:

On Fri, Nov 13, 2015 at 7:20 AM, Leonardo M. Ramé  wrote:

Hi, is there a way to get an array converted to json without brackets?.

I'm getting, for example [{"field": "value"}, {"field": "value"}] and I want
to get this: {"field": "value"}, {"field": "value"}.


please supply an example.  Also,
{"field": "value"}, {"field": "value"}

is not valid json, so, basically, no.  But you could textually remove
them I guess but I'm thinking more context is needed to give a better
answer.

merlin




Thanks, I fixed this with a simple trim command:

select trim('[abcd]', '[]');
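
Applied to the array_to_json output, that is roughly (a sketch with a made-up
subquery t):

-- as noted above, the result is no longer valid JSON on its own
SELECT trim(array_to_json(array_agg(row_to_json(t)))::text, '[]')
FROM (SELECT 'value' AS field) AS t;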




Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-13 Thread Doiron, Daniel
Thanks for all your quick replies!

Turns out these are created by pg_repack and left in the database if
pg_repack encounters a fatal error.

In this case, the error was a hyphen in the database name which pg_repack
can't handle.





On 11/12/15, 5:38 PM, "Tom Lane"  wrote:

>Thomas Kellerer  writes:
>> Doiron, Daniel wrote on 12.11.2015 at 23:21:
>>> I'm troubleshooting a schema and found this:
>>> 
>>> Indexes:
>>> "pk_patient_diagnoses" PRIMARY KEY, btree (id)
>>> "index_4341548" UNIQUE, btree (id)
>>> "idx_patient_diagnoses_deleted" btree (deleted)
>>> "idx_patient_diagnoses_diagnosis_type_id" btree (diagnosis_type_id)
>>> "idx_patient_diagnoses_icd10" btree (icd10)
>>> "idx_patient_diagnoses_icd9" btree (diagnosis_code)
>>> "idx_patient_diagnoses_is_unknown" btree (is_unknown)
>>> "idx_patient_diagnoses_modified" btree (modified)
>>> "idx_patient_diagnoses_patient_id" btree (patient_id)
>>> "idx_patient_diagnoses_uuid" btree (uuid)
>>> "index_325532921" btree (modified)
>>> "index_4345603" btree (deleted)
>>> "index_4349516" btree (diagnosis_type_id)
>>> "index_4353417" btree (icd10)
>>> "index_4384754" btree (diagnosis_code)
>>> "index_4418849" btree (is_unknown)
>>> "index_4424101" btree (patient_id)
>>> "index_4428458" btree (uuid)
>
>> So from the list above, only pk_patient_diagnose has (most probably)
>>been created automatically. Everything else was created manually.
>
>Also, *none* of those index names match what Postgres would choose of its
>own accord.  The built-in naming schemes can be exhibited thus:
>
>regression=# create table foo (f1 int primary key, f2 int unique, f3 int);
>CREATE TABLE
>regression=# create index on foo(f3);
>CREATE INDEX
>regression=# \d foo
>  Table "public.foo"
> Column |  Type   | Modifiers
>+-+---
> f1 | integer | not null
> f2 | integer |
> f3 | integer |
>Indexes:
>"foo_pkey" PRIMARY KEY, btree (f1)
>"foo_f2_key" UNIQUE CONSTRAINT, btree (f2)
>"foo_f3_idx" btree (f3)
>
>There's some additional rules for abbreviating very long derived index
>names, and for dealing with index name collisions, but none of those would
>have come into play here.  The index names Daniel shows must all have been
>specified in DDL commands, either as the name of a constraint or as the
>name of an index.
>
>   regards, tom lane
>
>





Re: [GENERAL] Array_to_json remove brackets

2015-11-13 Thread Merlin Moncure
On Fri, Nov 13, 2015 at 7:20 AM, Leonardo M. Ramé  wrote:
> Hi, is there a way to get an array converted to json without brackets?.
>
> I'm getting, for example [{"field": "value"}, {"field": "value"}] and I want
> to get this: {"field": "value"}, {"field": "value"}.

please supply an example.  Also,
{"field": "value"}, {"field": "value"}

is not valid json, so, basically, no.  But you could textually remove
them I guess but I'm thinking more context is needed to give a better
answer.

merlin




[GENERAL] Array_to_json remove brackets

2015-11-13 Thread Leonardo M . Ramé

Hi, is there a way to get an array converted to json without brackets?.

I'm getting, for example [{"field": "value"}, {"field": "value"}] and I 
want to get this: {"field": "value"}, {"field": "value"}.


Regards,

--
Leonardo M. Ramé
http://leonardorame.blogspot.com




Re: [GENERAL]

2015-11-13 Thread Melvin Davidson
It is very simple. If you have already created database icare, then
DO NOT use the --create flag!

http://www.postgresql.org/docs/9.2/interactive/app-pgrestore.html

And as requested before, please provide the O/S and PostgreSQL version.

On Fri, Nov 13, 2015 at 12:38 AM, Alex Luya 
wrote:

> Hello,
>I created a new database by
>
>create database icare;
>
>then quit off psql and  run:
>
> pg_restore --clean --create --exit-on-error --dbname=icare
> icare-test.tar
>
>  it complains:
>
>   pg_restore: [archiver (db)] Error while PROCESSING TOC:
>   pg_restore: [archiver (db)] Error from TOC entry 21; 2615 80924
> SCHEMA icare icare
>   pg_restore: [archiver (db)] could not execute query: ERROR:
>  permission denied for database icare
>Command was: CREATE SCHEMA icare;
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL]

2015-11-13 Thread James Keener
Who were you logged in to psql as? Does the dump switch users?

On November 13, 2015 12:38:19 AM EST, Alex Luya  
wrote:
>Hello,
>   I created a new database by
>
>   create database icare;
>
>   then quit off psql and  run:
>
>pg_restore --clean --create --exit-on-error --dbname=icare
>icare-test.tar
>
> it complains:
>
>  pg_restore: [archiver (db)] Error while PROCESSING TOC:
>pg_restore: [archiver (db)] Error from TOC entry 21; 2615 80924
>SCHEMA icare icare
>  pg_restore: [archiver (db)] could not execute query: ERROR:
> permission denied for database icare
>   Command was: CREATE SCHEMA icare;

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-13 Thread Albe Laurenz
Jeremy Harris wrote:
> On 13/11/15 10:49, Thomas Kellerer wrote:
>>> These indexes were *not* created by PostgreSQL.
>>> We are not Oracle.
>>
>> Well, Oracle does not create indexes on its own either - it has the same 
>> strategy as Postgres:
>> Indexes are only created automatically for primary keys and unique 
>> constraints.

I know - but I couldn't help commenting on the strange names
it chooses for these, like "SYS43243247".
Sorry for being unclear.

> Given that indices are an implementation wart on the side of the
> relational model, it'd be nice if RDBMS' did create them for one.

That cannot be done without knowing what the queries are going to be.

However, I recently learned that MySQL automatically creates indexes
on columns with a foreign key, and you cannot even drop those.

Maybe that would be a good thing, guessing from the number of cases
where people suffer from the lack of such indexes, but on the other
hand it feels like too much DWIM (there are cases where you do not
need such an index).
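
For comparison, in PostgreSQL you would have to create such an index yourself,
e.g. (hypothetical tables):

CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child  (parent_id integer REFERENCES parent(id));
-- no index on child.parent_id is created automatically; add it explicitly:
CREATE INDEX child_parent_id_idx ON child (parent_id);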

Yours,
Laurenz Albe



Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-13 Thread Jeremy Harris
On 13/11/15 10:49, Thomas Kellerer wrote:
>> These indexes were *not* created by PostgreSQL.
>> We are not Oracle.
> 
> Well, Oracle does not create indexes on its own either - it has the same 
> strategy as Postgres:
> Indexes are only created automatically for primary keys and unique 
> constraints.

Given that indices are an implementation wart on the side of the
relational model, it'd be nice if RDBMS' did create them for one.
-- 
Cheers,
  Jeremy






Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-13 Thread Thomas Kellerer
Albe Laurenz wrote on 13.11.2015 at 11:23:
>> My questions is whether these “index_*” indexes could have been created by 
>> postgresql or whether I
>> have an errant developer using some kinda third-party tool?
> 
> These indexes were *not* created by PostgreSQL.
> We are not Oracle.

Well, Oracle does not create indexes on its own either - it has the same 
strategy as Postgres:
Indexes are only created automatically for primary keys and unique constraints.

Thomas





Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-13 Thread Albe Laurenz
John McKown wrote:
> All of the above makes me curious about using NFS for the data files, but
> having the WAL files on a local, perhaps SSD, device. I am not knowledgeable
> about WAL. Of course, I don't know why the OP wants to put the database files
> on an NFS.

If the data file storage does not keep the promise that synced files are
actually on disk, you'd get in trouble at checkpoint time.

So if you don't trust NFS, that wouldn't be an option.

Yours,
Laurenz Albe



Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-13 Thread Albe Laurenz
Doiron, Daniel wrote:
> I’m troubleshooting a schema and found this:
> 
> Indexes:
> "pk_patient_diagnoses" PRIMARY KEY, btree (id)
> "index_4341548" UNIQUE, btree (id)
> "idx_patient_diagnoses_deleted" btree (deleted)
> "idx_patient_diagnoses_diagnosis_type_id" btree (diagnosis_type_id)
> "idx_patient_diagnoses_icd10" btree (icd10)
> "idx_patient_diagnoses_icd9" btree (diagnosis_code)
> "idx_patient_diagnoses_is_unknown" btree (is_unknown)
> "idx_patient_diagnoses_modified" btree (modified)
> "idx_patient_diagnoses_patient_id" btree (patient_id)
> "idx_patient_diagnoses_uuid" btree (uuid)
> "index_325532921" btree (modified)
> "index_4345603" btree (deleted)
> "index_4349516" btree (diagnosis_type_id)
> "index_4353417" btree (icd10)
> "index_4384754" btree (diagnosis_code)
> "index_4418849" btree (is_unknown)
> "index_4424101" btree (patient_id)
> "index_4428458" btree (uuid)
> 
> My questions is whether these “index_*” indexes could have been created by 
> postgresql or whether I
> have an errant developer using some kinda third-party tool?

These indexes were *not* created by PostgreSQL.
We are not Oracle.

Yours,
Laurenz Albe
