[GENERAL] foreign keys to foreign tables

2015-06-22 Thread Rick Otten
Hello pgsql-general,

I'd like to set up a foreign key constraint to a foreign table from a local
table.

ie, I have a column in a local table that I'd like to ensure has a value in
the foreign table.

alter table mytable
add column  some_column_id   uuid references myforeigntable(some_column_id)
;

Unfortunately I get a "not a table" error when I try this:

ERROR:  referenced relation "myforeigntable" is not a table

I'm thinking I'll have to write a function that checks for the existence of the
ids in the foreign table, and then put a CHECK constraint on using that
function, but I thought I'd ask first if there was a better way.
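
A minimal sketch of that idea (the names are invented, and note that a CHECK
constraint calling a function which reads another table is only evaluated when
the local row is inserted or updated, never re-checked later):

create function fk_target_exists(id uuid) returns boolean
language sql stable as $$
    select exists (select 1 from myforeigntable f
                   where f.some_column_id = id);
$$;

alter table mytable
    add constraint some_column_id_exists
    check (fk_target_exists(some_column_id));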

Thanks.

--
Rick Otten
rottenwindf...@gmail.com


Re: [GENERAL] How to speed up pg_trgm / gin index scan

2015-06-22 Thread Christian Ramseyer
On 22/06/15 19:00, Jeff Janes wrote:

  
 
 A typical query on this table looks like this:
 
 explain analyze
 select log_date, host, msg
 from logs_01 as log   where  log.msg like '%192.23.33.177%'
 and log.log_date >= '2015-1-18 1:45:24'
 and log.log_date <= '2015-1-19 1:45:24'
 order by log_date asc offset 200 limit 50;
 
 
 I think that trigram indexes are not well-suited to searching IP addresses.
 
 If the typical query is always an IP address for the LIKE, I think you
 would want to build an index specifically tailored to that.  You could
 make a function to parse the IP address out of the msg, and then make a
 functional index, for example. It would require you to write the query
 differently.  Whether it would be a btree index or a gin index would
 depend on whether you can have more than one IP address in a msg.
  

Thanks Jeff, but the IP address was mostly an example... I should have
written this more clearly. Generally the searched string will be a
random short fragment from msg (ip, hostname, some part of an error
message etc.).

It must be matched exactly including all punctuation etc, so trigrams
look very suitable.

Cheers
Christian







Re: [GENERAL] Re: pg_dump 8.4.9 failing after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux

2015-06-22 Thread Tom Lane
Albe Laurenz laurenz.a...@wien.gv.at writes:
 Piotr Gackiewicz wrote:
 Douglas Stetner stet...@icloud.com writes:
 Looking for confirmation there is an issue with pg_dump failing after
 upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux.

 I have the same problem with fresh postgresql 9.2.13.
 Started after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64
 
 Since then pg_dump aborts after dumping circa 2GB:
 pg_dump: [archiver (db)] query failed: SSL error: unexpected message
 pg_dump: [archiver (db)] query was: FETCH 100 FROM _pg_dump_cursor

I've been able to reproduce this failure with Postgres HEAD, so whatever
it is, it's pretty much independent of our code version.  It was fine with
openssl-1.0.1e-30.el6_6.9.x86_64
but after updating to
openssl-1.0.1e-30.el6_6.11.x86_64
pg_dump fails after about 2GB worth of data transfer.

I find that setting ssl_renegotiation_limit to 0 in postgresql.conf allows
things to work, so it's got something to do with bad renegotiation.  But
curiously, the amount of data dumped before failing is the same whether
ssl_renegotiation_limit is 512MB (the default) or something much smaller
such as 10MB.  In either case we should have successfully completed
several renegotiations before the failure, so I don't think it's solely
a matter of "renegotiation is busted".

 Maybe it has something to do with this OpenSSL bug:
 http://rt.openssl.org/Ticket/Display.html?id=3712&user=guest&pass=guest

That link doesn't work for me :-(

I'm going to file this as a bug with Red Hat.  In the meantime it looks
like we can suggest ssl_renegotiation_limit = 0 as a temporary workaround.
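
Concretely, that is one line in postgresql.conf, followed by a reload:

    # temporary workaround for the broken renegotiation
    ssl_renegotiation_limit = 0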

regards, tom lane




Re: [GENERAL] foreign keys to foreign tables

2015-06-22 Thread Rick Otten
Thanks Will!

I had been considering setting up replication (using SymmetricDS - which we
already use between other databases in our environment), but decided for
this one check it was too much trouble.  I may change my mind on that point
again after all if I end up with a lot of dependencies like this or run
into performance issues.




On Mon, Jun 22, 2015 at 1:06 PM, William Dunn dunn...@gmail.com wrote:

 Hello Rick,

 As I understand it you are correct. Oracle/DB2/Postgres (and I think the
 SQL Standards) do not implement constraints against tables on foreign
 servers. Although it would be possible to develop a DBMS to handle such
 constraints in a heterogeneous distributed environment, it would be unwise
 because of the poor performance and reliability of data sent over networks,
 so DBMSs do not implement it. You would, as you suspected, have to use
 stored procedures to emulate some of the functionality of a foreign key but
 definitely think twice about the performance bottlenecks you would
 introduce. A more clever thing to do is use Slony, BDR, or triggers to
 replicate the foreign table and create the constraint against the local
 copy. In some other DBMSs the clever thing to do is create a materialized
 view and constraints against the materialized view (which achieves the
 same) but Postgres does not yet support such constraints against
 materialized views.

 *Will J. Dunn*
 *willjdunn.com http://willjdunn.com*

 On Mon, Jun 22, 2015 at 12:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Rick Otten rottenwindf...@gmail.com writes:
  Hello pgsql-general,
  I'd like to set up a foreign key constraint to a foreign table from a
 local
  table.

  ie, I have a column in a local table that I'd like to ensure has a
 value in
  the foreign table.

  alter table mytable
  add column  some_column_id   uuid references myforeigntable(some_column_id)
  ;

  Unfortunately I get a "not a table" error when I try this:

  ERROR:  referenced relation "myforeigntable" is not a table

  I'm thinking I'll have to write a function that checks for the existence of the
  ids in the foreign table, and then put a CHECK constraint on using that
  function, but I thought I'd ask first if there was a better way.

 What's going to happen when the foreign server decides to delete some rows
 from its table?

 regards, tom lane







Re: [GENERAL] Re: pg_dump 8.4.9 failing after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux

2015-06-22 Thread Tom Lane
I wrote:
 I'm going to file this as a bug with Red Hat.  In the meantime it looks
 like we can suggest ssl_renegotiation_limit = 0 as a temporary workaround.

Done at

https://bugzilla.redhat.com/show_bug.cgi?id=1234487

regards, tom lane




Re: [GENERAL] foreign keys to foreign tables

2015-06-22 Thread Tom Lane
Rick Otten rottenwindf...@gmail.com writes:
 Hello pgsql-general,
 I'd like to set up a foreign key constraint to a foreign table from a local
 table.

 ie, I have a column in a local table that I'd like to ensure has a value in
 the foreign table.

 alter table mytable
 add column  some_column_id   uuid references myforeigntable(some_column_id)
 ;

 Unfortunately I get a "not a table" error when I try this:

 ERROR:  referenced relation "myforeigntable" is not a table

 I'm thinking I'll have to write a function that checks for the existence of the
 ids in the foreign table, and then put a CHECK constraint on using that
 function, but I thought I'd ask first if there was a better way.

What's going to happen when the foreign server decides to delete some rows
from its table?

regards, tom lane




Re: [GENERAL] foreign keys to foreign tables

2015-06-22 Thread William Dunn
Hello Rick,

As I understand it you are correct. Oracle/DB2/Postgres (and I think the SQL
Standards) do not implement constraints against tables on foreign servers.
Although it would be possible to develop a DBMS to handle such
constraints in a heterogeneous distributed environment, it would be unwise
because of the poor performance and reliability of data sent over networks,
so DBMSs do not implement it. You would, as you suspected, have to use
stored procedures to emulate some of the functionality of a foreign key but
definitely think twice about the performance bottlenecks you would
introduce. A more clever thing to do is use Slony, BDR, or triggers to
replicate the foreign table and create the constraint against the local
copy. In some other DBMSs the clever thing to do is create a materialized
view and constraints against the materialized view (which achieves the
same) but Postgres does not yet support such constraints against
materialized views.
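
For illustration, the local-copy approach might look roughly like this (all
names invented; the sync itself is whatever Slony, BDR, or a custom trigger
provides):

create table myforeigntable_local (
    some_column_id uuid primary key
    -- kept in sync from the remote server by the replication tool
);

alter table mytable
    add foreign key (some_column_id)
    references myforeigntable_local (some_column_id);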

*Will J. Dunn*
*willjdunn.com http://willjdunn.com*

On Mon, Jun 22, 2015 at 12:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Rick Otten rottenwindf...@gmail.com writes:
  Hello pgsql-general,
  I'd like to set up a foreign key constraint to a foreign table from a
 local
  table.

  ie, I have a column in a local table that I'd like to ensure has a value
 in
  the foreign table.

  alter table mytable
  add column  some_column_id   uuid references myforeigntable(some_column_id)
  ;

  Unfortunately I get a "not a table" error when I try this:

  ERROR:  referenced relation "myforeigntable" is not a table

  I'm thinking I'll have to write a function that checks for the existence of the
  ids in the foreign table, and then put a CHECK constraint on using that
  function, but I thought I'd ask first if there was a better way.

 What's going to happen when the foreign server decides to delete some rows
 from its table?

 regards, tom lane





Re: [GENERAL] How to speed up pg_trgm / gin index scan

2015-06-22 Thread Jeff Janes
On Mon, Jun 22, 2015 at 4:51 AM, Christian Ramseyer r...@networkz.ch wrote:

 Hi

 I have a pretty large table with syslog messages.

 It is already partitioned by month, and for a single month I have e.g.


 DM=# \d+ logs_01

    Column  |            Type             |
  ----------+-----------------------------+
   host     | character varying(255)      |
   facility | character varying(10)       |
   priority | character varying(10)       |
   tag      | character varying(255)      |
   log_date | timestamp without time zone |
   program  | character varying(255)      |
   msg      | text                        |
   seq      | bigint                      |

 Indexes:
 logs_01_pkey PRIMARY KEY, btree (seq)
 idx_logs_01_lower_host btree (lower(host::text) varchar_pattern_ops)
 logs_01_date_index btree (log_date)
 tridx_logs_01_msg gin (msg gin_trgm_ops)


 DM=# select count(*) from logs_01;
   count
 ----------
  83052864


...


 A typical query on this table looks like this:

 explain analyze
 select log_date, host, msg
 from logs_01 as log   where  log.msg like '%192.23.33.177%'
 and log.log_date >= '2015-1-18 1:45:24'
 and log.log_date <= '2015-1-19 1:45:24'
 order by log_date asc offset 200 limit 50;


I think that trigram indexes are not well-suited to searching IP addresses.

If the typical query is always an IP address for the LIKE, I think you
would want to build an index specifically tailored to that.  You could make
a function to parse the IP address out of the msg, and then make a
functional index, for example. It would require you to write the query
differently.  Whether it would be a btree index or a gin index would depend
on whether you can have more than one IP address in a msg.
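
A rough sketch of that (the regexp and names are invented, and the btree
index assumes at most one address per msg):

create function first_ip(msg text) returns text
language sql immutable as $$
    select substring(msg from '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}');
$$;

create index idx_logs_01_first_ip on logs_01 (first_ip(msg));

-- the query then becomes an equality test against the functional index:
select log_date, host, msg
from logs_01
where first_ip(msg) = '192.23.33.177';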

Cheers,

Jeff


Re: [GENERAL] foreign keys to foreign tables

2015-06-22 Thread Rick Otten
Obviously the server will be able to delete those rows because it will be
completely unaware of this dependency.

So it is the implied reverse constraint (of sorts), which can't be enforced,
that makes an FK-based definition impossible.

For my particular use case, this shouldn't be a problem.  The foreign table
is a reference table which does not typically experience deletes.  I'll go
with a function for now.  Since this happens to be a PostgreSQL-to-PostgreSQL
mapping, I'll also consider mapping my table back the other way and then
putting a delete trigger on the foreign reference table to either cascade
or stop the delete, once I decide which I'd rather do.
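
A sketch of the stop-the-delete variant, to run on the server that owns the
reference table (names invented; "mytable" there would be my table mapped
back as a foreign table):

create function block_referenced_delete() returns trigger
language plpgsql as $$
begin
    if exists (select 1 from mytable m
               where m.some_column_id = old.some_column_id) then
        raise exception 'some_column_id % is still referenced', old.some_column_id;
    end if;
    return old;
end;
$$;

create trigger reference_delete_guard
    before delete on myforeigntable
    for each row execute procedure block_referenced_delete();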

Thanks for the help!





On Mon, Jun 22, 2015 at 12:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Rick Otten rottenwindf...@gmail.com writes:
  Hello pgsql-general,
  I'd like to set up a foreign key constraint to a foreign table from a
 local
  table.

  ie, I have a column in a local table that I'd like to ensure has a value
 in
  the foreign table.

  alter table mytable
  add column  some_column_id   uuid references myforeigntable(some_column_id)
  ;

  Unfortunately I get a "not a table" error when I try this:

  ERROR:  referenced relation "myforeigntable" is not a table

  I'm thinking I'll have to write a function that checks for the existence of the
  ids in the foreign table, and then put a CHECK constraint on using that
  function, but I thought I'd ask first if there was a better way.

 What's going to happen when the foreign server decides to delete some rows
 from its table?

 regards, tom lane



Re: [GENERAL] extracting PII data and transforming it across table.

2015-06-22 Thread Suresh Raja
Hi All:

Does PostgreSQL support a schema-level ANALYZE?  I could not find an
"analyze schema" command anywhere.  Can we create a function to run ANALYZE
and REINDEX on all objects in a schema?  Any suggestions or ideas?
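
One possible sketch (ANALYZE and REINDEX TABLE both work inside a DO block,
unlike VACUUM; 'myschema' is a placeholder):

do $$
declare
    t record;
begin
    for t in select schemaname, tablename from pg_tables
             where schemaname = 'myschema'
    loop
        execute format('analyze %I.%I', t.schemaname, t.tablename);
        execute format('reindex table %I.%I', t.schemaname, t.tablename);
    end loop;
end;
$$;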

Thanks,
-Suresh Raja


Re: [GENERAL] How to speed up pg_trgm / gin index scan

2015-06-22 Thread Jeff Janes
On Mon, Jun 22, 2015 at 10:39 AM, Christian Ramseyer r...@networkz.ch wrote:

 On 22/06/15 19:00, Jeff Janes wrote:

 
 
  A typical query on this table looks like this:
 
  explain analyze
  select log_date, host, msg
  from logs_01 as log   where  log.msg like '%192.23.33.177%'
  and log.log_date >= '2015-1-18 1:45:24'
  and log.log_date <= '2015-1-19 1:45:24'
  order by log_date asc offset 200 limit 50;
 
 
  I think that trigram indexes are not well-suited to searching IP
 addresses.
 
  If the typical query is always an IP address for the LIKE, I think you
  would want to build an index specifically tailored to that.  You could
  make a function to parse the IP address out of the msg, and then make a
  functional index, for example. It would require you to write the query
  differently.  Whether it would be a btree index or a gin index would
  depend on whether you can have more than one IP address in a msg.
 

 Thanks Jeff, but the IP address was mostly an example... I should have
 written this more clearly. Generally the searched string will be a
 random short fragment from msg (ip, hostname, some part of an error
 message etc.).


What kind of timings do you get if you search on a hostname or part of an
error message?  Is it slow in general, or just when the thing you search on
happens to be an IP address?

Certainly in my hands, trigram index searches for embedded IP addresses
are much worse than for embedded natural-language fragments.



 It must be matched exactly including all punctuation etc, so trigrams
 look very suitable.


I believe the default compilation of pg_trgm ignores all punctuation
(converts them to whitespace) in the index.  For a LIKE query, it catches
them when it rechecks the actual tuple in the heap so you still get the
right answer.  But if the query is mostly punctuation and short numbers, it
takes much longer to get that correct answer.
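
A quick way to see this (assuming pg_trgm is installed with its default
build options) is show_trgm(); the dots come back as word separators, so
only the digit groups contribute trigrams:

select show_trgm('192.23.33.177');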

Since the time range you query over is narrow and the rows are probably
well-clustered on it, maybe it would be faster to use just the
logs_01_date_index and then filter the fetched rows with the LIKE clause
(writing (log.msg||'') keeps the planner from choosing the trigram index):

explain (analyze, buffers)
select log_date, host, msg
from logs_01 as log   where  (log.msg||'') like '%192.23.33.177%'
and log.log_date >= '2015-1-18 1:45:24'
and log.log_date <= '2015-1-19 1:45:24'
order by log_date asc offset 200 limit 50;

Other options would be making your partitions smaller, or installing
btree_gist and making an index on (log_date, msg gist_trgm_ops).
Unfortunately such indexes can be awfully slow to build.
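
For reference, a sketch of that composite index (assumes the btree_gist and
pg_trgm extensions are both available):

create extension if not exists btree_gist;
create index tridx_logs_01_date_msg on logs_01
    using gist (log_date, msg gist_trgm_ops);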

Cheers,

Jeff


[GENERAL] How to speed up pg_trgm / gin index scan

2015-06-22 Thread Christian Ramseyer
Hi

I have a pretty large table with syslog messages.

It is already partitioned by month, and for a single month I have e.g.


DM=# \d+ logs_01
 
   Column  |            Type             |
 ----------+-----------------------------+
  host     | character varying(255)      |
  facility | character varying(10)       |
  priority | character varying(10)       |
  tag      | character varying(255)      |
  log_date | timestamp without time zone |
  program  | character varying(255)      |
  msg      | text                        |
  seq      | bigint                      |

Indexes:
logs_01_pkey PRIMARY KEY, btree (seq)
idx_logs_01_lower_host btree (lower(host::text) varchar_pattern_ops)
logs_01_date_index btree (log_date)
tridx_logs_01_msg gin (msg gin_trgm_ops)


DM=# select count(*) from logs_01;
  count   
----------
 83052864


I'd like to provide a fast LIKE '%x%' search on the msg column, hence I added a
trigram-based gin index on it. It is around 60 GB for the 35 GB table:


DM=# \dt+ logs_01
                     List of relations
 Schema |  Name   | Type  |  Owner   | Size  | Description 
--------+---------+-------+----------+-------+-------------
 public | logs_01 | table | postgres | 35 GB | 

DM=# \di+ tridx_logs_01_msg
                            List of relations
 Schema |       Name        | Type  |  Owner   |  Table  | Size  | Description 
--------+-------------------+-------+----------+---------+-------+-------------
 public | tridx_logs_01_msg | index | postgres | logs_01 | 58 GB | 


A typical query on this table looks like this:

explain analyze 
select log_date, host, msg
from logs_01 as log   where  log.msg like '%192.23.33.177%'
and log.log_date >= '2015-1-18 1:45:24'
and log.log_date <= '2015-1-19 1:45:24'
order by log_date asc offset 200 limit 50;


It yields a promising plan that shows the index being used, but
unfortunately the bitmap index scan on the GIN index takes quite long (40
seconds):


                                   QUERY PLAN
--------------------------------------------------------------------------------------------------
 Limit  (cost=34510.06..34510.06 rows=1 width=195) (actual time=42971.002..42971.015 rows=50 loops=1)
   ->  Sort  (cost=34509.75..34510.06 rows=124 width=195) (actual time=42970.960..42970.990 rows=250 loops=1)
         Sort Key: log_date
         Sort Method: top-N heapsort  Memory: 152kB
         ->  Bitmap Heap Scan on logs_01 log  (cost=34009.21..34505.44 rows=124 width=195) (actual time=42963.969..42969.725 rows=2472 loops=1)
               Recheck Cond: ((msg ~~ '%192.23.33.177%'::text) AND (log_date >= '2015-01-18 01:45:24'::timestamp without time zone) AND (log_date <= '2015-01-19 01:45:24'::timestamp without time zone))
               ->  BitmapAnd  (cost=34009.21..34009.21 rows=124 width=0) (actual time=42962.562..42962.562 rows=0 loops=1)
                     ->  Bitmap Index Scan on tridx_logs_01_msg  (cost=0.00..6992.15 rows=8020 width=0) (actual time=42731.145..42731.145 rows=168489 loops=1)
                           Index Cond: (msg ~~ '%192.23.33.177%'::text)
                     ->  Bitmap Index Scan on logs_01_date_index  (cost=0.00..27016.75 rows=1287939 width=0) (actual time=180.055..180.055 rows=1173048 loops=1)
                           Index Cond: ((log_date >= '2015-01-18 01:45:24'::timestamp without time zone) AND (log_date <= '2015-01-19 01:45:24'::timestamp without time zone))
 Total runtime: 42971.137 ms

(also on http://explain.depesz.com/s/KpaB)


Any good ideas on how I could speed this up a bit? 

I have already tried to throw quite a bunch of memory at the problem:

shared_buffers = 64GB 
work_mem = 16GB  

but it didn't improve over the 32GB shared_buffers / 2GB work_mem settings I
had before.

This is on Postgres 9.1.15 on Linux. 

Thanks
Christian




Re: [GENERAL] Re: pg_dump 8.4.9 failing after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux

2015-06-22 Thread Albe Laurenz
Piotr Gackiewicz wrote:
 Tom Lane t...@sss.pgh.pa.us wrote:
 Douglas Stetner stet...@icloud.com writes:
 Looking for confirmation there is an issue with pg_dump failing after
 upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux.

 Quick thought --- did you restart the Postgres service after upgrading
 openssl?  If not, your server is still using the old library version,
 while pg_dump would be running the new version on the client side.
 I don't know exactly what was done to openssl in the last round of
 revisions, but maybe there is some sort of version compatibility issue.

 Also, you really ought to be running something newer than PG 8.4.9.

 I have the same problem with fresh postgresql 9.2.13.
 Started after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64
 
 Since then pg_dump aborts after dumping circa 2GB:
 
 pg_dump: [archiver (db)] query failed: SSL error: unexpected message
 pg_dump: [archiver (db)] query was: FETCH 100 FROM _pg_dump_cursor
 
 openssl-1.0.1e-30.el6_6.11.x86_64 on both ends (connecting via localhost)
 
 pg_dump via unix socket, without -h localhost - there is no problem.
 
 Fetching 2.5 GB of such a text dump via https (apache + mod_ssl +
 openssl-1.0.1e-30.el6_6.11.x86_64) => wget +
 openssl-1.0.1e-30.el6_6.11.x86_64  - there is no problem
 
 Looks like a postgresql+ssl issue.
 
 postgres=#  select name,setting,unit from pg_settings where name ~ 'ssl' ;
            name           |              setting              | unit
  -------------------------+-----------------------------------+------
   ssl                     | on                                |
   ssl_ca_file             |                                   |
   ssl_cert_file           | server.crt                        |
   ssl_ciphers             | ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH |
   ssl_crl_file            |                                   |
   ssl_key_file            | server.key                        |
   ssl_renegotiation_limit | 524288                            | kB
 
 
 Any thoughts?

Maybe it has something to do with this OpenSSL bug:
http://rt.openssl.org/Ticket/Display.html?id=3712&user=guest&pass=guest

Basically, OpenSSL fails to handle application data messages during 
renegotiation.

I have only encountered that when using other SSL libraries together with
OpenSSL, but maybe it can also happen with only OpenSSL.

Just to make sure:
Do you have the same version of OpenSSL on both PostgreSQL client and server?

Yours,
Laurenz Albe



Re: [GENERAL] Postgresql 9.2 has standby server lost data?

2015-06-22 Thread Albe Laurenz
Paula Price wrote:
 I have Postgresql 9.2.10 streaming replication set up with log shipping in
 case the replication falls behind.  I discovered that the log-shipping had
 been disabled at some point in time.  I enabled the log shipping again.
 
 If at some point in time the streaming replication fell behind and the
 standby server was not able to retrieve the necessary WAL file(s) from the
 primary, would the standby server continue to function normally?  Do I need
 to rebuild the standby server?  I have restarted the standby server and it
 is up and running with no issues.  I need to know if the
 data integrity has been compromised.
 
 I have run this query to determine the lag time for the standby(in case
 this tells me anything):
 SELECT now(), now() - pg_last_xact_replay_timestamp()  AS time_lag;
 RESULT:
 2015-06-19 00:40:48.83701+00;00:00:01.078616

You were lucky and replication did not fall behind.

If it had, and replication had tried to resort to WAL archives,
replication would have got stuck there.

There is no way that recovery can omit a portion of WAL during replay.
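
If you want to double-check on the standby, one way (these functions exist
in 9.2) is to verify that the receive and replay positions advance and stay
close together:

SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();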

Yours,
Laurenz Albe



Re: [GENERAL] INSERT a number in a column based on other columns OLD INSERTs

2015-06-22 Thread Albe Laurenz
Adrian Klaver wrote:
 On 06/20/2015 12:41 PM, Charles Clavadetscher wrote:
 I just made a short test with the code provided. As Bill mentioned the
 moment when the trigger is fired is essential.
 I made a test with both before (worked) and after (did not work because
 the row was already inserted and the returned new row is ignored).

 The assignment (= or :=) does not seem to play a role, but the correct
 version is as mentioned :=
 
 Yea, I can't seem to remember this part of the docs:
 
  Equal (=) can be used instead of PL/SQL-compliant :=.

This was discussed on -hackers a while ago:
http://www.postgresql.org/message-id/flat/52ef20b2e3209443bc37736d00c3c1380876b...@exadv1.host.magwien.gv.at#52ef20b2e3209443bc37736d00c3c1380876b...@exadv1.host.magwien.gv.at

It's a rather long and rambling thread, but what I got from it
was that = for assignments is something that just works by accident,
is discouraged and left alive only to avoid breaking code that uses it.
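
A minimal illustration of the two spellings:

DO $$
DECLARE
    n integer;
BEGIN
    n := 1;  -- the documented assignment operator
    n = 2;   -- also accepted, but only by accident; best avoided
END;
$$;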

Yours,
Laurenz Albe



Re: [GENERAL] Re: pg_dump 8.4.9 failing after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux

2015-06-22 Thread Tom Lane
I wrote:
 I'm going to file this as a bug with Red Hat.  In the meantime it looks
 like we can suggest ssl_renegotiation_limit = 0 as a temporary workaround.

 Done at
 https://bugzilla.redhat.com/show_bug.cgi?id=1234487

BTW, we should not feel too awful, because it seems this same update has
also broken sendmail, mysql, and probably other services.  Not for the
same reason, but still ...

Red Hat fell down badly on QA'ing this.

regards, tom lane




[GENERAL] Less is More

2015-06-22 Thread Paul Ramsey
Hey all,
I have an odd performance quirk I wonder if anyone has any theories for… (not a 
one-time result, table all heated up, etc)


spain=# explain analyze select way from planet_osm_point;                       
                                  QUERY PLAN
-
 Seq Scan on planet_osm_point  (cost=0.00..37240.97 rows=1748797 width=32) 
(actual time=0.044..813.521 rows=1748797 loops=1)
 Total runtime: 902.256 ms
(2 rows)

Time: 902.690 ms

spain=# explain analyze select * from planet_osm_point;
                                                          QUERY PLAN
--
 Seq Scan on planet_osm_point  (cost=0.00..37240.97 rows=1748797 width=848) 
(actual time=0.051..241.413 rows=1748797 loops=1)
 Total runtime: 330.271 ms
(2 rows)

Time: 331.869 ms



Thanks,

-- 
http://postgis.net
http://cleverelephant.ca



Re: [GENERAL] Less is More

2015-06-22 Thread Tom Lane
Paul Ramsey pram...@cleverelephant.ca writes:
 I have an odd performance quirk I wonder if anyone has any theories for… 
 (not a one-time result, table all heated up, etc)


 spain=# explain analyze select way from planet_osm_point;              
                                            QUERY PLAN
 -
  Seq Scan on planet_osm_point  (cost=0.00..37240.97 rows=1748797 width=32) 
 (actual time=0.044..813.521 rows=1748797 loops=1)
  Total runtime: 902.256 ms
 (2 rows)

 Time: 902.690 ms

 spain=# explain analyze select * from planet_osm_point;
                                                     
       QUERY PLAN
 --
  Seq Scan on planet_osm_point  (cost=0.00..37240.97 rows=1748797 
 width=848) (actual time=0.051..241.413 rows=1748797 loops=1)
  Total runtime: 330.271 ms
 (2 rows)

 Time: 331.869 ms

SELECT * gets to skip the projection step that is usually needed to
remove unwanted columns from the query result.

Note that neither of these numbers have much to do with the real
time to execute the query and return results to the client.  EXPLAIN
ANALYZE doesn't bother to convert the query results to text, much
less do any network I/O.

regards, tom lane

