Re: [GENERAL] Stored procedure version control
On 07/01/2016 06:17 PM, Jim Nasby wrote:
> On 6/30/16 9:16 AM, Merlin Moncure wrote:
>> It's not really necessary to create version down scripts. In five years of managing complex database environments we've never had to roll a version back and likely never will; in the event of a disaster it's probably better to restore from backup anyway.
>
> I'm surprised no one has mentioned http://sqitch.org. It makes it very easy to manage migrations, as well as creating downgrade scripts (if you use rework, and put each object into its own file). I do agree that down scripts are pretty over-rated as long as you have good test practices (as in, database unit tests). In 9 years in an environment where downtime was six figures per hour I only had 1 or 2 deployments that had problems, and never bad enough to consider reverting.

I've found down scripts quite useful in development. One does have to know how to neuter them on the way to production, however.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Stored procedure version control
On 6/30/16 9:16 AM, Merlin Moncure wrote:
> It's not really necessary to create version down scripts. In five years of managing complex database environments we've never had to roll a version back and likely never will; in the event of a disaster it's probably better to restore from backup anyway.

I'm surprised no one has mentioned http://sqitch.org. It makes it very easy to manage migrations, as well as creating downgrade scripts (if you use rework, and put each object into its own file).

I do agree that down scripts are pretty over-rated as long as you have good test practices (as in, database unit tests). In 9 years in an environment where downtime was six figures per hour I only had 1 or 2 deployments that had problems, and never bad enough to consider reverting.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)  mobile: 512-569-9461
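[Editor's note: for readers unfamiliar with the sqitch model mentioned above, here is a minimal, hypothetical sketch of the "one object per file, deploy plus revert script" layout; the file and function names are illustrative, not taken from this thread.]

```sql
-- deploy/func_get_user.sql — hypothetical deploy ("up") script;
-- the whole object definition lives in this one file.
CREATE OR REPLACE FUNCTION get_user(p_id bigint)
RETURNS text LANGUAGE sql STABLE AS $$
    SELECT name FROM users WHERE id = p_id;
$$;

-- revert/func_get_user.sql — the matching revert ("down") script.
DROP FUNCTION IF EXISTS get_user(bigint);
```

Because each object sits alone in its deploy file, `sqitch rework func_get_user` can preserve the previous version of the scripts (tagged copies) so a later change to the same function remains revertible, which is the workflow Jim alludes to.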
Re: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins
OK. Will change our query generation code to not use the view. (I have tried the LEFT JOIN approach, but it just does not seem to perform.)

Thanks, Igor

PS: Here are the numbers for the real production query (will not provide details):
Original query: 300s
Query on a manually optimized view: 1ms
Using left joins: 200s
I would have gladly paid a few ms in additional planning time!

-----Original Message-----
From: Kevin Grittner [mailto:kgri...@gmail.com]
Sent: Friday, July 01, 2016 1:57 PM
To: Sfiligoi, Igor
Cc: pgsql-general@postgresql.org
Subject: Re: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins

On Fri, Jul 1, 2016 at 3:33 PM, Sfiligoi, Igor wrote:
> No, I don't want to use LEFT JOINS.
> I want to use regular joins.
>
> But (as mentioned in my other follow-up), all the fields are not null
> (was not in the original email, sorry), and are foreign keys, so it is
> guaranteed to always match.

In that case there is no difference between the inner join and the left join, except that the left join currently supports an optimization that makes your query faster if the optional table is not referenced. Whether you want to take advantage of that is up to you.

> The key part (in my mind) is that I am not filtering on any of the
> useless tables, and I am not returning any columns from those tables
> either.
> Both are known at planning time.

The fact that something can be determined at planning time doesn't mean that checking for it is free.

> Or is my logic still broken?

Your logic seems OK with the table definitions you are now showing. Whether we ever decide it is OK to omit tables which use an inner join, rather than only considering omitting them when the query specifies that the join is optional, is anybody's guess. If it is important enough to you, you could submit a patch or fund development of such a feature; but since it would add at least some small amount of planning time to every inner join just to avoid specifying that the join is an optional one when writing the query, it seems to me unlikely to be accepted.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [GENERAL] How safe is pg_basebackup + continuous archiving?
On 6/30/16 8:30 AM, Stephen Frost wrote:
> How can we check for backup corruption in this case? Thank you very much.

There has been some discussion about a specific tool for checking the checksums throughout the entire system. I don't know of anyone actively working on that, unfortunately. If someone did want that, though, it could probably be done as an extension. I believe you just have to pull all of each relation into shared buffers for the checksums to be verified.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
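[Editor's note: a rough sketch of the "pull every relation into shared buffers" idea above, using the stock pg_prewarm extension (9.4+). When data checksums are enabled, a bad checksum surfaces as a read error when the page is loaded. This is an illustration of the approach, not a dedicated verification tool.]

```sql
-- Assumes the cluster was initialized with data checksums enabled.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Read every page of every table, index, TOAST table, and matview in
-- the current database into shared buffers; a corrupted page raises
-- an error as it is read, which is the "verification" side effect.
SELECT c.oid::regclass AS relation,
       pg_prewarm(c.oid, 'buffer') AS blocks_read
FROM pg_class c
WHERE c.relkind IN ('r', 'i', 't', 'm');
```

Note that this only covers the current database; one would have to repeat it per database, and it says nothing about WAL or backup archives themselves.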
Re: [GENERAL] 9.6 beta2 win-x64 download links still point to beta1
On Fri, Jul 1, 2016 at 10:12:46PM +0200, Thomas Kellerer wrote:
> Bruce Momjian schrieb am 28.06.2016 um 05:36:
> > On Fri, Jun 24, 2016 at 07:44:17AM +0200, Thomas Kellerer wrote:
> > > the Beta2 downloads on
> > >
> > >   http://www.enterprisedb.com/products-services-training/pgdownload
> > >   http://www.enterprisedb.com/products-services-training/pgbindownload
> > >
> > > still lead to Beta1 for the Windows 64bit builds.
> > >
> > > All others properly link to beta1
> >
> > This looks fixed now.
>
> I still get the beta1 packages.
>
> This
>
>   http://www.enterprisedb.com/postgresql-960-binaries-win64?ls=Crossover=Crossover
>
> redirects to
>
>   http://get.enterprisedb.com/postgresql/postgresql-9.6.0-beta1-windows-x64-binaries.zip
>
> and
>
>   http://www.enterprisedb.com/postgresql-960-beta-installers-win64?ls=Crossover=Crossover
>
> redirects to
>
>   http://get.enterprisedb.com/postgresql/postgresql-9.6.0-beta1-windows-x64.exe
>
> Replacing beta1 with beta2 leads to a 404 (for both direct download links)

Yes, I confirmed in a later email that it is still broken. It was also reported by someone else today:

https://www.postgresql.org/message-id/flat/CAJF2B_1ryzDSMuSR23mjBnxGbbSYS-8aSAx-hYtwRR2P5VUTiw%40mail.gmail.com

I reported it to EDB a few days ago as well.

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
Re: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins
On Fri, Jul 1, 2016 at 3:33 PM, Sfiligoi, Igor wrote:
> No, I don't want to use LEFT JOINS.
> I want to use regular joins.
>
> But (as mentioned in my other follow-up), all the fields are not
> null (was not in the original email, sorry), and are foreign
> keys, so it is guaranteed to always match.

In that case there is no difference between the inner join and the left join, except that the left join currently supports an optimization that makes your query faster if the optional table is not referenced. Whether you want to take advantage of that is up to you.

> The key part (in my mind) is that I am not filtering on any of
> the useless tables, and I am not returning any columns from those
> tables either.
> Both are known at planning time.

The fact that something can be determined at planning time doesn't mean that checking for it is free.

> Or is my logic still broken?

Your logic seems OK with the table definitions you are now showing. Whether we ever decide it is OK to omit tables which use an inner join, rather than only considering omitting them when the query specifies that the join is optional, is anybody's guess. If it is important enough to you, you could submit a patch or fund development of such a feature; but since it would add at least some small amount of planning time to every inner join just to avoid specifying that the join is an optional one when writing the query, it seems to me unlikely to be accepted.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins
Nope, no difference however I express the joins:

create view v1 as
select c.id, c.a_id, c.b1_id, c.b2_id, c.b3_id,
       a.name a_name, b1.name b1_name, b2.name b2_name, b3.name b3_name
from c
join a    on (c.a_id  = a.id)
join b b1 on (c.b1_id = b1.id)
join b b2 on (c.b2_id = b2.id)
join b b3 on (c.b3_id = b3.id);

# explain select id, b1_name from v1;
                              QUERY PLAN
------------------------------------------------------------------------
 Nested Loop  (cost=1.02..5.47 rows=1 width=7)
   Join Filter: (c.b3_id = b3.id)
   ->  Nested Loop  (cost=1.02..4.34 rows=1 width=11)
         Join Filter: (c.a_id = a.id)
         ->  Nested Loop  (cost=1.02..3.25 rows=1 width=15)
               Join Filter: (c.b2_id = b2.id)
               ->  Hash Join  (cost=1.02..2.12 rows=1 width=19)
                     Hash Cond: (b1.id = c.b1_id)
                     ->  Seq Scan on b b1  (cost=0.00..1.06 rows=6 width=7)
                     ->  Hash  (cost=1.01..1.01 rows=1 width=20)
                           ->  Seq Scan on c  (cost=0.00..1.01 rows=1 width=20)
               ->  Seq Scan on b b2  (cost=0.00..1.06 rows=6 width=4)
         ->  Seq Scan on a  (cost=0.00..1.04 rows=4 width=4)
   ->  Seq Scan on b b3  (cost=0.00..1.06 rows=6 width=4)
(14 rows)

Igor

PS: Here are the updated table definitions:

create table a (id int not null primary key, name varchar(128));
create table b (id int not null primary key, name varchar(128));
create table c (id int not null primary key,
                a_id  int not null references a(id),
                b1_id int not null references b(id),
                b2_id int not null references b(id),
                b3_id int not null references b(id));

-----Original Message-----
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Friday, July 01, 2016 1:38 PM
To: Sfiligoi, Igor; Merlin Moncure
Cc: pgsql-general@postgresql.org
Subject: Re: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins

On 07/01/2016 01:28 PM, Sfiligoi, Igor wrote:
> Sorry... the example was incomplete.
>
> All the fields are defined as not-null.
> So it is guaranteed to always match the join.
>
> And PostgreSQL release notes claim that PGSQL can do at least partial join removal:
> https://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.0#Join_Removal

Those examples use explicit joins, so you might try that in your view definition.

> I was hoping this use case would fit in.
>
> Any suggestions?
>
> Igor
>
> -----Original Message-----
> From: Merlin Moncure [mailto:mmonc...@gmail.com]
> Sent: Friday, July 01, 2016 12:42 PM
> To: Sfiligoi, Igor
> Cc: pgsql-general@postgresql.org
> Subject: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins
>
> On Fri, Jul 1, 2016 at 12:17 PM, Sfiligoi, Igor wrote:
>> Hello.
>>
>> We have a view that is very generic, and we noticed that PostgreSQL
>> is not very good at removing useless joins, which makes our queries very slow.
>>
>> We could change our code to avoid the view and write ad-hoc queries
>> to the underlying tables, but would prefer not to, if there is a way around it.
>>
>> (BTW: We are currently using psql 9.4)
>>
>> Here is a simplified implementation:
>>
>> # create table a (id int primary key, name varchar(128));
>> # create table b (id int primary key, name varchar(128));
>> # create table c (id int primary key, a_id int references a(id),
>>   b1_id int references b(id), b2_id int references b(id),
>>   b3_id int references b(id));
>> # create view v as select c.id, c.a_id, c.b1_id, c.b2_id, c.b3_id,
>>   a.name a_name, b1.name b1_name, b2.name b2_name, b3.name b3_name
>>   from c, a, b b1, b b2, b b3 where c.a_id=a.id and c.b1_id=b1.id
>>   and c.b2_id=b2.id and c.b3_id=b3.id;
>>
>> When I try to get just info from tables c and b1:
>>
>> # select id, b1_name from v
>>
>> it still does all the joins (see below).
>>
>> I would expect just one join (due to the request of columns from the
>> two tables), since all joins are on foreign-key constraints referencing
>> primary keys, and there are no filters on the other tables, so it is
>> guaranteed that the useless joins will always return exactly one answer.
>
> I think what you're asking for is a lot more complex than it sounds, and incorrect. The precise state of the data influences how many records come back (in this case, either 1 or 0), for example if b3_id is null you get zero rows. More to the point, you *instructed* the server to make the join. There are strategies to make joins 'optional' at run time with respect to a query, but they are more complicated than simply withdrawing columns from the select list.
>
> Stepping back a bit, the query needs to be planned before peeking at the data in the tables. The planner is able to make assumptions against a statistical picture of the data but shouldn't be expected to actually inspect precise result data in order to generate a better plan.
Re: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins
On 07/01/2016 01:28 PM, Sfiligoi, Igor wrote:
> Sorry... the example was incomplete.
>
> All the fields are defined as not-null.
> So it is guaranteed to always match the join.
>
> And PostgreSQL release notes claim that PGSQL can do at least partial join removal:
> https://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.0#Join_Removal

Those examples use explicit joins, so you might try that in your view definition.

> I was hoping this use case would fit in.
>
> Any suggestions?
>
> Igor
>
> -----Original Message-----
> From: Merlin Moncure [mailto:mmonc...@gmail.com]
> Sent: Friday, July 01, 2016 12:42 PM
> To: Sfiligoi, Igor
> Cc: pgsql-general@postgresql.org
> Subject: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins
>
> On Fri, Jul 1, 2016 at 12:17 PM, Sfiligoi, Igor wrote:
>> Hello.
>>
>> We have a view that is very generic, and we noticed that PostgreSQL is not very good at removing useless joins, which makes our queries very slow.
>>
>> We could change our code to avoid the view and write ad-hoc queries to the underlying tables, but would prefer not to, if there is a way around it.
>>
>> (BTW: We are currently using psql 9.4)
>>
>> Here is a simplified implementation:
>>
>> # create table a (id int primary key, name varchar(128));
>> # create table b (id int primary key, name varchar(128));
>> # create table c (id int primary key, a_id int references a(id), b1_id int references b(id), b2_id int references b(id), b3_id int references b(id));
>> # create view v as select c.id, c.a_id, c.b1_id, c.b2_id, c.b3_id, a.name a_name, b1.name b1_name, b2.name b2_name, b3.name b3_name from c, a, b b1, b b2, b b3 where c.a_id=a.id and c.b1_id=b1.id and c.b2_id=b2.id and c.b3_id=b3.id;
>>
>> When I try to get just info from tables c and b1:
>>
>> # select id, b1_name from v
>>
>> it still does all the joins (see below).
>>
>> I would expect just one join (due to the request of columns from the two tables), since all joins are on foreign-key constraints referencing primary keys, and there are no filters on the other tables, so it is guaranteed that the useless joins will always return exactly one answer.
>
> I think what you're asking for is a lot more complex than it sounds, and incorrect. The precise state of the data influences how many records come back (in this case, either 1 or 0), for example if b3_id is null you get zero rows. More to the point, you *instructed* the server to make the join. There are strategies to make joins 'optional' at run time with respect to a query, but they are more complicated than simply withdrawing columns from the select list.
>
> Stepping back a bit, the query needs to be planned before peeking at the data in the tables. The planner is able to make assumptions against a statistical picture of the data but shouldn't be expected to actually inspect precise result data in order to generate a better plan.
>
> merlin

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins
No, I don't want to use LEFT JOINS.
I want to use regular joins.

But (as mentioned in my other follow-up), all the fields are not null (was not in the original email, sorry), and are foreign keys, so it is guaranteed to always match.

The key part (in my mind) is that I am not filtering on any of the useless tables, and I am not returning any columns from those tables either.
Both are known at planning time.

Or is my logic still broken?

Thanks, Igor

-----Original Message-----
From: Kevin Grittner [mailto:kgri...@gmail.com]
Sent: Friday, July 01, 2016 1:29 PM
To: Sfiligoi, Igor
Cc: pgsql-general@postgresql.org
Subject: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins

On Fri, Jul 1, 2016 at 12:17 PM, Sfiligoi, Igor wrote:
> We have a view that is very generic, and we noticed that PostgreSQL is
> not very good at removing useless joins, which makes our queries very slow.
>
> We could change our code to avoid the view and write ad-hoc queries to
> the underlying tables, but would prefer not to, if there is a way around it.

If it did not do the joins it could not provide the information you are actually asking to see. Of course, there is a very good chance that what you are asking to see is not what you *want* to see.

test=# create table a (id int primary key, name varchar(128));
CREATE TABLE
test=# create table b (id int primary key, name varchar(128));
CREATE TABLE
test=# create table c (id int primary key,
test(#   a_id int references a(id),
test(#   b1_id int references b(id),
test(#   b2_id int references b(id),
test(#   b3_id int references b(id));
CREATE TABLE
test=#
test=# create view v_broken as
test-# select
test-#   c.id, c.a_id, c.b1_id, c.b2_id, c.b3_id, a.name a_name,
test-#   b1.name b1_name, b2.name b2_name, b3.name b3_name
test-# from c, a, b b1, b b2, b b3
test-# where c.a_id=a.id and c.b1_id=b1.id and c.b2_id=b2.id and c.b3_id=b3.id;
CREATE VIEW
test=#
test=# create view v as
test-# select
test-#   c.id, c.a_id, c.b1_id, c.b2_id, c.b3_id, a.name a_name,
test-#   b1.name b1_name, b2.name b2_name, b3.name b3_name
test-# from c
test-# left join a    on a.id  = c.a_id
test-# left join b b1 on b1.id = c.b1_id
test-# left join b b2 on b2.id = c.b2_id
test-# left join b b3 on b3.id = c.b3_id;
CREATE VIEW
test=#
test=# insert into a values (1, 'a1');
INSERT 0 1
test=# insert into b values (1, 'b1'), (2, 'b2'), (3, 'b3');
INSERT 0 3
test=# insert into c values (1, 1, 1, 2, 3), (2, 1, 1, 2, null);
INSERT 0 2
test=#
test=# select id, b1_name from v_broken;
 id | b1_name
----+---------
  1 | b1
(1 row)

test=# explain analyze select id, b1_name from v_broken;
                              QUERY PLAN
------------------------------------------------------------------------
 Hash Join  (cost=63.40..183.90 rows=1700 width=278) (actual time=0.049..0.052 rows=1 loops=1)
   Hash Cond: (c.b3_id = b3.id)
   ->  Hash Join  (cost=47.55..144.68 rows=1700 width=282) (actual time=0.030..0.033 rows=2 loops=1)
         Hash Cond: (c.b2_id = b2.id)
         ->  Hash Join  (cost=31.70..105.45 rows=1700 width=286) (actual time=0.018..0.020 rows=2 loops=1)
               Hash Cond: (c.b1_id = b1.id)
               ->  Hash Join  (cost=15.85..66.22 rows=1700 width=16) (actual time=0.010..0.012 rows=2 loops=1)
                     Hash Cond: (c.a_id = a.id)
                     ->  Seq Scan on c  (cost=0.00..27.00 rows=1700 width=20) (actual time=0.001..0.001 rows=2 loops=1)
                     ->  Hash  (cost=12.60..12.60 rows=260 width=4) (actual time=0.003..0.003 rows=1 loops=1)
                           Buckets: 1024  Batches: 1  Memory Usage: 9kB
                           ->  Seq Scan on a  (cost=0.00..12.60 rows=260 width=4) (actual time=0.002..0.003 rows=1 loops=1)
               ->  Hash  (cost=12.60..12.60 rows=260 width=278) (actual time=0.005..0.005 rows=3 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 9kB
                     ->  Seq Scan on b b1  (cost=0.00..12.60 rows=260 width=278) (actual time=0.002..0.003 rows=3 loops=1)
         ->  Hash  (cost=12.60..12.60 rows=260 width=4) (actual time=0.006..0.006 rows=3 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 9kB
               ->  Seq Scan on b b2  (cost=0.00..12.60 rows=260 width=4) (actual time=0.004..0.004 rows=3 loops=1)
   ->  Hash  (cost=12.60..12.60 rows=260 width=4) (actual time=0.013..0.013 rows=3 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 9kB
         ->  Seq Scan on b b3  (cost=0.00..12.60 rows=260 width=4) (actual time=0.008..0.009 rows=3 loops=1)
 Planning time: 0.729 ms
 Execution time: 0.153 ms
(23 rows)

test=# select id, b1_name from v;
 id | b1_name
----+---------
  1 | b1
  2 | b1
(2 rows)

test=# explain analyze select
Re: [GENERAL] PSQL does not remove obvious useless joins
On Fri, Jul 1, 2016 at 12:17 PM, Sfiligoi, Igor wrote:
> We have a view that is very generic, and we noticed that PostgreSQL is not
> very good at removing useless joins, which makes our queries very slow.
>
> We could change our code to avoid the view and write ad-hoc queries to the
> underlying tables, but would prefer not to, if there is a way around it.

If it did not do the joins it could not provide the information you are actually asking to see. Of course, there is a very good chance that what you are asking to see is not what you *want* to see.

test=# create table a (id int primary key, name varchar(128));
CREATE TABLE
test=# create table b (id int primary key, name varchar(128));
CREATE TABLE
test=# create table c (id int primary key,
test(#   a_id int references a(id),
test(#   b1_id int references b(id),
test(#   b2_id int references b(id),
test(#   b3_id int references b(id));
CREATE TABLE
test=#
test=# create view v_broken as
test-# select
test-#   c.id, c.a_id, c.b1_id, c.b2_id, c.b3_id, a.name a_name,
test-#   b1.name b1_name, b2.name b2_name, b3.name b3_name
test-# from c, a, b b1, b b2, b b3
test-# where c.a_id=a.id and c.b1_id=b1.id and c.b2_id=b2.id and c.b3_id=b3.id;
CREATE VIEW
test=#
test=# create view v as
test-# select
test-#   c.id, c.a_id, c.b1_id, c.b2_id, c.b3_id, a.name a_name,
test-#   b1.name b1_name, b2.name b2_name, b3.name b3_name
test-# from c
test-# left join a    on a.id  = c.a_id
test-# left join b b1 on b1.id = c.b1_id
test-# left join b b2 on b2.id = c.b2_id
test-# left join b b3 on b3.id = c.b3_id;
CREATE VIEW
test=#
test=# insert into a values (1, 'a1');
INSERT 0 1
test=# insert into b values (1, 'b1'), (2, 'b2'), (3, 'b3');
INSERT 0 3
test=# insert into c values (1, 1, 1, 2, 3), (2, 1, 1, 2, null);
INSERT 0 2
test=#
test=# select id, b1_name from v_broken;
 id | b1_name
----+---------
  1 | b1
(1 row)

test=# explain analyze select id, b1_name from v_broken;
                              QUERY PLAN
------------------------------------------------------------------------
 Hash Join  (cost=63.40..183.90 rows=1700 width=278) (actual time=0.049..0.052 rows=1 loops=1)
   Hash Cond: (c.b3_id = b3.id)
   ->  Hash Join  (cost=47.55..144.68 rows=1700 width=282) (actual time=0.030..0.033 rows=2 loops=1)
         Hash Cond: (c.b2_id = b2.id)
         ->  Hash Join  (cost=31.70..105.45 rows=1700 width=286) (actual time=0.018..0.020 rows=2 loops=1)
               Hash Cond: (c.b1_id = b1.id)
               ->  Hash Join  (cost=15.85..66.22 rows=1700 width=16) (actual time=0.010..0.012 rows=2 loops=1)
                     Hash Cond: (c.a_id = a.id)
                     ->  Seq Scan on c  (cost=0.00..27.00 rows=1700 width=20) (actual time=0.001..0.001 rows=2 loops=1)
                     ->  Hash  (cost=12.60..12.60 rows=260 width=4) (actual time=0.003..0.003 rows=1 loops=1)
                           Buckets: 1024  Batches: 1  Memory Usage: 9kB
                           ->  Seq Scan on a  (cost=0.00..12.60 rows=260 width=4) (actual time=0.002..0.003 rows=1 loops=1)
               ->  Hash  (cost=12.60..12.60 rows=260 width=278) (actual time=0.005..0.005 rows=3 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 9kB
                     ->  Seq Scan on b b1  (cost=0.00..12.60 rows=260 width=278) (actual time=0.002..0.003 rows=3 loops=1)
         ->  Hash  (cost=12.60..12.60 rows=260 width=4) (actual time=0.006..0.006 rows=3 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 9kB
               ->  Seq Scan on b b2  (cost=0.00..12.60 rows=260 width=4) (actual time=0.004..0.004 rows=3 loops=1)
   ->  Hash  (cost=12.60..12.60 rows=260 width=4) (actual time=0.013..0.013 rows=3 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 9kB
         ->  Seq Scan on b b3  (cost=0.00..12.60 rows=260 width=4) (actual time=0.008..0.009 rows=3 loops=1)
 Planning time: 0.729 ms
 Execution time: 0.153 ms
(23 rows)

test=# select id, b1_name from v;
 id | b1_name
----+---------
  1 | b1
  2 | b1
(2 rows)

test=# explain analyze select id, b1_name from v;
                              QUERY PLAN
---------------------------------------------------------------
 Hash Left Join  (cost=15.85..66.22 rows=1700 width=278) (actual time=0.017..0.018 rows=2 loops=1)
   Hash Cond: (c.b1_id = b1.id)
   ->  Seq Scan on c  (cost=0.00..27.00 rows=1700 width=20) (actual time=0.005..0.005 rows=2 loops=1)
   ->  Hash  (cost=12.60..12.60 rows=260 width=278) (actual time=0.006..0.006 rows=3 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 9kB
         ->  Seq Scan on b b1  (cost=0.00..12.60 rows=260 width=278) (actual time=0.002..0.003 rows=3 loops=1)
 Planning time: 0.177 ms
 Execution time: 0.044 ms
(8 rows)
Re: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins
Sorry... the example was incomplete.

All the fields are defined as not-null.
So it is guaranteed to always match the join.

And PostgreSQL release notes claim that PGSQL can do at least partial join removal:
https://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.0#Join_Removal

I was hoping this use case would fit in.

Any suggestions?

Igor

-----Original Message-----
From: Merlin Moncure [mailto:mmonc...@gmail.com]
Sent: Friday, July 01, 2016 12:42 PM
To: Sfiligoi, Igor
Cc: pgsql-general@postgresql.org
Subject: --EXTERNAL--Re: [GENERAL] PSQL does not remove obvious useless joins

On Fri, Jul 1, 2016 at 12:17 PM, Sfiligoi, Igor wrote:
> Hello.
>
> We have a view that is very generic, and we noticed that PostgreSQL is
> not very good at removing useless joins, which makes our queries very slow.
>
> We could change our code to avoid the view and write ad-hoc queries to
> the underlying tables, but would prefer not to, if there is a way around it.
>
> (BTW: We are currently using psql 9.4)
>
> Here is a simplified implementation:
>
> # create table a (id int primary key, name varchar(128));
> # create table b (id int primary key, name varchar(128));
> # create table c (id int primary key, a_id int references a(id), b1_id
>   int references b(id), b2_id int references b(id), b3_id int references
>   b(id));
> # create view v as select c.id, c.a_id, c.b1_id, c.b2_id, c.b3_id,
>   a.name a_name, b1.name b1_name, b2.name b2_name, b3.name b3_name from
>   c, a, b b1, b b2, b b3 where c.a_id=a.id and c.b1_id=b1.id and
>   c.b2_id=b2.id and c.b3_id=b3.id;
>
> When I try to get just info from tables c and b1:
>
> # select id, b1_name from v
>
> it still does all the joins (see below).
>
> I would expect just one join (due to the request of columns from the
> two tables), since all joins are on foreign-key constraints referencing
> primary keys, and there are no filters on the other tables, so it is
> guaranteed that the useless joins will always return exactly one answer.
I think what you're asking for is a lot more complex than it sounds, and incorrect. The precise state of the data influences how many records come back (in this case, either 1 or 0), for example if b3_id is null you get zero rows. More to the point, you *instructed* the server to make the join. There are strategies to make joins 'optional' at run time with respect to a query, but they are more complicated than simply withdrawing columns from the select list.

Stepping back a bit, the query needs to be planned before peeking at the data in the tables. The planner is able to make assumptions against a statistical picture of the data but shouldn't be expected to actually inspect precise result data in order to generate a better plan.

merlin
Re: [GENERAL] 9.6 beta2 win-x64 download links still point to beta1
Bruce Momjian schrieb am 28.06.2016 um 05:36:
> On Fri, Jun 24, 2016 at 07:44:17AM +0200, Thomas Kellerer wrote:
>> the Beta2 downloads on
>>
>>   http://www.enterprisedb.com/products-services-training/pgdownload
>>   http://www.enterprisedb.com/products-services-training/pgbindownload
>>
>> still lead to Beta1 for the Windows 64bit builds.
>>
>> All others properly link to beta1
>
> This looks fixed now.

I still get the beta1 packages.

This

  http://www.enterprisedb.com/postgresql-960-binaries-win64?ls=Crossover=Crossover

redirects to

  http://get.enterprisedb.com/postgresql/postgresql-9.6.0-beta1-windows-x64-binaries.zip

and

  http://www.enterprisedb.com/postgresql-960-beta-installers-win64?ls=Crossover=Crossover

redirects to

  http://get.enterprisedb.com/postgresql/postgresql-9.6.0-beta1-windows-x64.exe

Replacing beta1 with beta2 leads to a 404 (for both direct download links)

Thomas
Re: [GENERAL] PSQL does not remove obvious useless joins
On Fri, Jul 1, 2016 at 12:17 PM, Sfiligoi, Igor wrote:
> Hello.
>
> We have a view that is very generic, and we noticed that PostgreSQL is not
> very good at removing useless joins, which makes our queries very slow.
>
> We could change our code to avoid the view and write ad-hoc queries to the
> underlying tables, but would prefer not to, if there is a way around it.
>
> (BTW: We are currently using psql 9.4)
>
> Here is a simplified implementation:
>
> # create table a (id int primary key, name varchar(128));
> # create table b (id int primary key, name varchar(128));
> # create table c (id int primary key, a_id int references a(id), b1_id int
>   references b(id), b2_id int references b(id), b3_id int references b(id));
> # create view v as select c.id, c.a_id, c.b1_id, c.b2_id, c.b3_id, a.name
>   a_name, b1.name b1_name, b2.name b2_name, b3.name b3_name from c, a,
>   b b1, b b2, b b3 where c.a_id=a.id and c.b1_id=b1.id and c.b2_id=b2.id
>   and c.b3_id=b3.id;
>
> When I try to get just info from tables c and b1:
>
> # select id, b1_name from v
>
> it still does all the joins (see below).
>
> I would expect just one join (due to the request of columns from the two
> tables), since all joins are on foreign-key constraints referencing primary
> keys, and there are no filters on the other tables, so it is guaranteed
> that the useless joins will always return exactly one answer.

I think what you're asking for is a lot more complex than it sounds, and incorrect. The precise state of the data influences how many records come back (in this case, either 1 or 0), for example if b3_id is null you get zero rows. More to the point, you *instructed* the server to make the join. There are strategies to make joins 'optional' at run time with respect to a query, but they are more complicated than simply withdrawing columns from the select list.

Stepping back a bit, the query needs to be planned before peeking at the data in the tables. The planner is able to make assumptions against a statistical picture of the data but shouldn't be expected to actually inspect precise result data in order to generate a better plan.

merlin
Re: [GENERAL] Update multiple rows in a table with different values
Hi Tom,

Thanks for your suggestion. I got it working:

CREATE OR REPLACE FUNCTION s.updatefunc1(BigInt[], BigInt[])
RETURNS void AS $$
BEGIN
  FOR i IN array_lower($1, 1) .. array_upper($1, 1) LOOP
    UPDATE s.t1 SET c3 = $2[i] WHERE c2 = $1[i];
  END LOOP;
END;
$$ LANGUAGE plpgsql;

Is there a better way to do it using unnest?

Thanks
Shankha Banerjee

On Fri, Jul 1, 2016 at 10:59 AM, Tom Lane wrote:
> shankha writes:
>> PREPARE updatearrayplan(BigInt[], BigInt[]) AS
>> for i in size($1)
>> DO
>>   update s.t1
>>   SET c3 = $2[$i]
>>   WHERE c2 = $1[$i]
>> END FOR
>>
>> In this prepared statement I am just trying to explain the algorithm.
>> I do not know the exact syntax.
>
> You would need to write a plpgsql function in order to have a loop like
> that; there's no loops in bare SQL.
>
> regards, tom lane
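[Editor's note: to answer the unnest question above, a set-based rewrite is possible by unnesting the two arrays in parallel and updating through a join; multi-argument unnest requires PostgreSQL 9.4+. This is a sketch against the thread's s.t1 table, with the function name chosen for illustration.]

```sql
-- Single-statement alternative to the per-element loop: unnest both
-- arrays in parallel (pairing keys[i] with vals[i]), then update by
-- joining s.t1 against the resulting row set.
CREATE OR REPLACE FUNCTION s.updatefunc2(keys bigint[], vals bigint[])
RETURNS void AS $$
    UPDATE s.t1 AS t
    SET c3 = u.val
    FROM unnest(keys, vals) AS u(key, val)
    WHERE t.c2 = u.key;
$$ LANGUAGE sql;
```

One UPDATE statement avoids the per-row statement overhead of the loop and lets the planner pick a hash or merge strategy for large arrays.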
[GENERAL] PSQL does not remove obvious useless joins
Hello. We have a view that is very generic, and we noticed that PostgreSQL is not very good at removing useless joins, which makes our queries very slow. We could change our code to avoid the view and write ad-hoc queries to the underlying tables, but would prefer not to, if there is a way around it. (BTW: We are currently using PostgreSQL 9.4)

Here is a simplified implementation:

# create table a (id int primary key, name varchar(128));
# create table b (id int primary key, name varchar(128));
# create table c (id int primary key, a_id int references a(id), b1_id int references b(id), b2_id int references b(id), b3_id int references b(id));
# create view v as select c.id, c.a_id, c.b1_id, c.b2_id, c.b3_id, a.name a_name, b1.name b1_name, b2.name b2_name, b3.name b3_name from c, a, b b1, b b2, b b3 where c.a_id=a.id and c.b1_id=b1.id and c.b2_id=b2.id and c.b3_id=b3.id;

When I try to get just info from tables c and b1:

# select id, b1_name from v

it still does all the joins (see below). I would expect just one join (due to the request of columns from the two tables), since all joins are on foreign key constraints referencing primary keys, and there are no filters on the other tables, so it is guaranteed that the useless joins will always return exactly one answer. Is there a way to tweak the PostgreSQL optimizer to do the proper join removal during planning? Perhaps by tweaking somehow either our schema or our queries (while still keeping a generic view)?
Thank you,
Igor Sfiligoi

# explain select id, b1_name from v;
                                   QUERY PLAN
---------------------------------------------------------------------------------
 Nested Loop  (cost=1.02..5.45 rows=1 width=6)
   Join Filter: (c.b3_id = b3.id)
   ->  Nested Loop  (cost=1.02..4.32 rows=1 width=10)
         Join Filter: (c.a_id = a.id)
         ->  Nested Loop  (cost=1.02..3.25 rows=1 width=14)
               Join Filter: (c.b2_id = b2.id)
               ->  Hash Join  (cost=1.02..2.12 rows=1 width=18)
                     Hash Cond: (b1.id = c.b1_id)
                     ->  Seq Scan on b b1  (cost=0.00..1.06 rows=6 width=6)
                     ->  Hash  (cost=1.01..1.01 rows=1 width=20)
                           ->  Seq Scan on c  (cost=0.00..1.01 rows=1 width=20)
               ->  Seq Scan on b b2  (cost=0.00..1.06 rows=6 width=4)
         ->  Seq Scan on a  (cost=0.00..1.03 rows=3 width=4)
   ->  Seq Scan on b b3  (cost=0.00..1.06 rows=6 width=4)
(14 rows)

PS: The tables were very small in this example, but are quite big in the production environment.
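[Editor's note: one workaround not spelled out in the thread, sketched here as an assumption. PostgreSQL's join-removal logic (9.0+) only fires for LEFT JOINs whose inner side is provably unique (e.g. joined on its primary key) and whose columns are not referenced, so rewriting the view with outer joins lets the planner drop the unused lookups.]

```sql
-- Sketch: the same view rewritten with LEFT JOINs.  Each joined table's
-- primary key makes the inner side unique, so the planner can remove any
-- of these joins when none of its columns appear in the outer query.
CREATE VIEW v2 AS
SELECT c.id, c.a_id, c.b1_id, c.b2_id, c.b3_id,
       a.name  AS a_name,
       b1.name AS b1_name,
       b2.name AS b2_name,
       b3.name AS b3_name
FROM c
LEFT JOIN a    ON c.a_id  = a.id
LEFT JOIN b b1 ON c.b1_id = b1.id
LEFT JOIN b b2 ON c.b2_id = b2.id
LEFT JOIN b b3 ON c.b3_id = b3.id;

-- EXPLAIN SELECT id, b1_name FROM v2;  -- expected to plan only the c/b1 join
```

Caveat: the semantics differ from the inner-join view when an FK column is NULL (the row is kept with a NULL name instead of being dropped), which is exactly the case Merlin points out above.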
Re: [GENERAL] Log archiving failing. Seems to be wrong timeline
Hi Jeff,

Done as you advised and now things are working again.

Many thanks
Chris

On 30/06/16 20:19, Jeff Janes wrote:

On Thu, Jun 30, 2016 at 3:53 AM, Chris Lewis wrote:

Hello, We have 2 postgresql servers (v 9.4.2), master and slave, in streaming replication. The overall cluster is controlled using pacemaker & corosync and the pgsql cluster agent, which handles failover to, and promotion of, the slave. Recently a failover occurred and I noticed that log archiving was failing on the master: ... Why are we trying to archive logs which belong to an old timeline?

Just because the timeline is old doesn't mean we want to destroy it. After all, the reason for having timelines in the first place is to preserve, not to destroy. It sounds like someone removed the old timeline's log files from pg_xlog, but did not remove the corresponding .ready files from pg_xlog/archive_status. If the old timeline's files are truly lost, then you will have to carefully remove those corresponding .ready files.

Cheers, Jeff

--
Chris Lewis
Systems Administrator
Inview Technology Ltd.
T: +44 (0) 1606 812500
M: +44 (0) 7980 446907
Re: [GENERAL] Update multiple rows in a table with different values
shankha writes:
> PREPARE updatearrayplan(BigInt[], BigInt[]) AS
> for i in size($1)
> DO
> update s.t1
> SET c3 = $2[$i]
> WHERE c2 = $1[$i]
> END FOR

> In this prepared statement I am just trying to explain the algorithm.
> I do not know the exact syntax.

You would need to write a plpgsql function in order to have a loop like that; there's no loops in bare SQL.

regards, tom lane
Re: [GENERAL] Update multiple rows in a table with different values
Hi Adrian,

I am using Postgres version 9.3.

PREPARE updatearrayplan(BigInt[], BigInt[]) AS
for i in size($1)
DO
update s.t1
SET c3 = $2[$i]
WHERE c2 = $1[$i]
END FOR

In this prepared statement I am just trying to explain the algorithm. I do not know the exact syntax. Sorry for the confusion.

Thanks
Shankha Banerjee

On Fri, Jul 1, 2016 at 10:48 AM, Adrian Klaver wrote:
> On 07/01/2016 07:26 AM, shankha wrote:
>>
>> Greetings,
>> I have the following schema:
>>
>> CREATE TABLE "s"."t1"
>> (
>> "c1" BigSerial PRIMARY KEY,
>> "c2" BigInt NOT NULL,
>> "c3" BigInt
>> )
>> WITH (OIDS=FALSE);
>
> Unless you have a very old version of Postgres, OIDS=FALSE is the default.
>
>> INSERT INTO s.t1 (c2, c3) VALUES (10, 100);
>> INSERT INTO s.t1 (c2, c3) VALUES (20, 200);
>> INSERT INTO s.t1 (c2, c3) VALUES (30, 300);
>> INSERT INTO s.t1 (c2, c3) VALUES (40, 400);
>>
>> PREPARE updateplan (BigInt, BigInt) AS
>> update s.t1
>> SET c3 = $2
>> WHERE c2 = $1;
>>
>> EXECUTE updateplan (20, 250);
>> ***
>> PREPARE updatearrayplan(BigInt[], BigInt[]) AS
>> for i in size($1)
>> DO
>> update s.t1
>> SET c3 = $2[$i]
>> WHERE c2 = $1[$i]
>> END FOR
>
> I am not familiar with the above syntax, are you using a Postgres version
> different from the community version?
>
>> EXECUTE updatearrayplan({20, 30}, {275, 375})
>> ***
>> /* 20, 200 -> 20, 275 */
>> /* 30, 300 -> 30, 375 */
>> ***
>>
>> After execution of updatearrayplan I am expecting the rows to have
>> these values 20 -> 275 , 30 -> 375
>>
>> Is there a way to update multiple rows with different column values
>> passed in as array. Also is there a guarantee that the order of the
>> arrays will be maintained.
>>
>> Thanks
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
Re: [GENERAL] Update multiple rows in a table with different values
On Fri, Jul 1, 2016 at 10:26 AM, shankha wrote:
>
> PREPARE updatearrayplan(BigInt[], BigInt[]) AS
> for i in size($1)
> DO
> update s.t1
> SET c3 = $2[$i]
> WHERE c2 = $1[$i]
> END FOR
>
> EXECUTE updatearrayplan({20, 30}, {275, 375})
>
> After execution of updatearrayplan I am expecting the rows to have
> these values 20 -> 275 , 30 -> 375

Have you looked at CREATE FUNCTION? I'd suggest the plpgsql language.

> Is there a way to update multiple rows with different column values
> passed in as array.

No. All rows identified by a single WHERE clause are updated using the same expression. Though I suppose you could try something like:

c3 = CASE WHEN c2 = 20 THEN 275
          WHEN c2 = 30 THEN 375
     END
WHERE c2 IN (20, 30)

> Also is there a guarantee that the order of the
> arrays will be maintained.

That question is too broad. Direct iteration of an array will be done in order. Whether, post-iteration, the resultant records remain in order is not promised.

David J.
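[Editor's note: David's CASE fragment written out as a complete, runnable statement, assuming the s.t1 table defined earlier in this thread.]

```sql
-- Sketch: one UPDATE covering both rows; the CASE picks the new c3
-- per matching c2, and the WHERE restricts the statement to those rows.
UPDATE s.t1
SET    c3 = CASE WHEN c2 = 20 THEN 275
                 WHEN c2 = 30 THEN 375
            END
WHERE  c2 IN (20, 30);
```

Note the WHERE clause matters: without it, every row not named in the CASE would have c3 set to NULL.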
Re: [GENERAL] Update multiple rows in a table with different values
On 07/01/2016 07:26 AM, shankha wrote:

> Greetings,
> I have the following schema:
>
> CREATE TABLE "s"."t1"
> (
> "c1" BigSerial PRIMARY KEY,
> "c2" BigInt NOT NULL,
> "c3" BigInt
> )
> WITH (OIDS=FALSE);

Unless you have a very old version of Postgres, OIDS=FALSE is the default.

> INSERT INTO s.t1 (c2, c3) VALUES (10, 100);
> INSERT INTO s.t1 (c2, c3) VALUES (20, 200);
> INSERT INTO s.t1 (c2, c3) VALUES (30, 300);
> INSERT INTO s.t1 (c2, c3) VALUES (40, 400);
>
> PREPARE updateplan (BigInt, BigInt) AS
> update s.t1
> SET c3 = $2
> WHERE c2 = $1;
>
> EXECUTE updateplan (20, 250);
> ***
> PREPARE updatearrayplan(BigInt[], BigInt[]) AS
> for i in size($1)
> DO
> update s.t1
> SET c3 = $2[$i]
> WHERE c2 = $1[$i]
> END FOR

I am not familiar with the above syntax, are you using a Postgres version different from the community version?

> EXECUTE updatearrayplan({20, 30}, {275, 375})
> ***
> /* 20, 200 -> 20, 275 */
> /* 30, 300 -> 30, 375 */
> ***
>
> After execution of updatearrayplan I am expecting the rows to have
> these values 20 -> 275 , 30 -> 375
>
> Is there a way to update multiple rows with different column values
> passed in as array. Also is there a guarantee that the order of the
> arrays will be maintained.
>
> Thanks

--
Adrian Klaver
adrian.kla...@aklaver.com
[GENERAL] Update multiple rows in a table with different values
Greetings,
I have the following schema:

CREATE TABLE "s"."t1"
(
  "c1" BigSerial PRIMARY KEY,
  "c2" BigInt NOT NULL,
  "c3" BigInt
)
WITH (OIDS=FALSE);

INSERT INTO s.t1 (c2, c3) VALUES (10, 100);
INSERT INTO s.t1 (c2, c3) VALUES (20, 200);
INSERT INTO s.t1 (c2, c3) VALUES (30, 300);
INSERT INTO s.t1 (c2, c3) VALUES (40, 400);

PREPARE updateplan (BigInt, BigInt) AS
update s.t1
SET c3 = $2
WHERE c2 = $1;

EXECUTE updateplan (20, 250);
***
PREPARE updatearrayplan(BigInt[], BigInt[]) AS
for i in size($1)
DO
update s.t1
SET c3 = $2[$i]
WHERE c2 = $1[$i]
END FOR

EXECUTE updatearrayplan({20, 30}, {275, 375})
***
/* 20, 200 -> 20, 275 */
/* 30, 300 -> 30, 375 */
***

After execution of updatearrayplan I am expecting the rows to have these values: 20 -> 275, 30 -> 375.

Is there a way to update multiple rows with different column values passed in as an array? Also, is there a guarantee that the order of the arrays will be maintained?

Thanks
Re: [GENERAL] table name size
Sridhar N Bamandlapally writes:
> Is there any specific reason not releasing any version with "NAMEDATALEN 255"
> in file src/include/pg_config_manual.h ?

It would bloat the system catalogs by a rather substantial amount, as well as in-memory images of those catalogs (catcaches). Seeing that the field demand for this is next to nil, we're not very likely to do it just for nominal standards compliance. But you're free to build your own copy that way if you like --- that's why it's configurable.

BTW, you might as well set it to 256 not 255: alignment considerations mean that the odd byte would just be wasted in most or all of the catalogs.

regards, tom lane
Re: [GENERAL] Sub-query having NULL row returning FALSE result
Hi

Just for info. Actual query in Oracle (below) --

MERGE INTO relDocumentTypeMetaDataName t
USING (SELECT ? as DocumentTypeID, ? as DocumentContextID, ? as MetaDataNameID,
              ? as DocumentAbstractionIndexID FROM DUAL) s
ON (t.MetaDataNameID = s.MetaDataNameID AND t.DocumentTypeID = s.DocumentTypeID)
WHEN NOT MATCHED THEN
  INSERT (DocumentTypeID, DocumentContextID, MetaDataNameID, FlatColumnNo)
  VALUES (s.DocumentTypeID, s.DocumentContextID, s.MetaDataNameID,
          (select nvl(max(FlatColumnNo), 0) + 1 FROM relDocumentTypeMetaDataName
           WHERE DocumentTypeID = ?));

Migrated query in PG (phase 1) -

WITH s AS (SELECT ? as DocumentTypeID, ? as DocumentContextID, ? as MetaDataNameID,
                  ? as DocumentAbstractionIndexID)
INSERT INTO relDocumentTypeMetaDataName
       (DocumentTypeID, DocumentContextID, MetaDataNameID, FlatColumnNo,
        DocumentAbstractionIndexID)
SELECT s.DocumentTypeID, s.DocumentContextID, s.MetaDataNameID,
       (SELECT coalesce(MAX(FlatColumnNo), 0) + 1 FROM relDocumentTypeMetaDataName
        WHERE DocumentTypeID = ?),
       s.DocumentAbstractionIndexID
FROM s
WHERE s.DocumentTypeID NOT IN (SELECT DocumentTypeID FROM relDocumentTypeMetaDataName);

Migrated query in PG (phase 2) - after Tom Lane's reply

WITH s AS (SELECT ? as DocumentTypeID, ? as DocumentContextID, ? as MetaDataNameID,
                  ? as DocumentAbstractionIndexID)
INSERT INTO relDocumentTypeMetaDataName
       (DocumentTypeID, DocumentContextID, MetaDataNameID, FlatColumnNo,
        DocumentAbstractionIndexID)
SELECT s.DocumentTypeID, s.DocumentContextID, s.MetaDataNameID,
       (SELECT coalesce(MAX(FlatColumnNo), 0) + 1 FROM relDocumentTypeMetaDataName
        WHERE DocumentTypeID = ?),
       s.DocumentAbstractionIndexID
FROM s
WHERE s.DocumentTypeID NOT IN (SELECT DocumentTypeID FROM relDocumentTypeMetaDataName
                               WHERE DocumentTypeID IS NOT NULL);

Thanks
Sridhar
OpenText

On Wed, Jun 29, 2016 at 6:04 PM, David G.
Johnston < david.g.johns...@gmail.com> wrote:
> On Wed, Jun 29, 2016 at 2:07 AM, Sridhar N Bamandlapally < sridhar@gmail.com> wrote:
>
>> Hi
>>
>> Please go through below case
>>
>> postgres=# CREATE TABLE emp (id INTEGER unique, ename VARCHAR);
>> CREATE TABLE
>> postgres=# INSERT INTO emp VALUES (null, 'aaa');
>> INSERT 0 1
>> postgres=# INSERT INTO emp VALUES (null, 'bbb');
>> INSERT 0 1
>> postgres=# INSERT INTO emp VALUES (3, 'ccc');
>> INSERT 0 1
>> postgres=# INSERT INTO emp VALUES (4, 'ddd');
>> INSERT 0 1
>> postgres=# SELECT * FROM emp ;
>>  id | ename
>> ----+-------
>>     | aaa
>>     | bbb
>>   3 | ccc
>>   4 | ddd
>> (4 rows)
>>
>> postgres=# SELECT * FROM (SELECT 5::integer id, 'eee'::varchar ename) nr;
>>  id | ename
>> ----+-------
>>   5 | eee
>> (1 row)
>>
>> postgres=# INSERT INTO emp SELECT * FROM (SELECT 5::integer id,
>> 'eee'::varchar ename) nr WHERE id NOT IN (SELECT id FROM emp);
>> INSERT 0 0
>> postgres=# SELECT * FROM emp ;
>>  id | ename
>> ----+-------
>>     | aaa
>>     | bbb
>>   3 | ccc
>>   4 | ddd
>> (4 rows)
>>
>> postgres=#
>>
>> The application is generating SQL-Statement to avoid exception while
>> inserting
>>
>> The expected behavior is to INSERT row if the NEW id is not existing in
>> table, but this is doing FALSE
>>
>> Please advise me if am doing something wrong here or any alternate
>
> Subjectively, you are allowing an ID field to be NULL. That, for me, is
> wrong.
>
> Given this, as a follow-up to what Tom said, you need to decide what you
> wish to happen for your NULL IDs. Until you explain that behavior it is
> not possible to provide valid alternatives.
>
> Usually you want to use "EXISTS", not "IN"
>
> https://www.postgresql.org/docs/9.6/static/functions-subquery.html#FUNCTIONS-SUBQUERY-EXISTS
>
> Oh, and try adding "WHERE id IS NOT NULL"
>
> David J.
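[Editor's note: a minimal sketch of why the NOT IN form inserted nothing, using the emp table from the quoted example, and the NOT EXISTS rewrite David points to.]

```sql
-- NOT IN against a list containing NULL can never be TRUE:
--   5 NOT IN (NULL, 3, 4)
--   = (5 <> NULL) AND (5 <> 3) AND (5 <> 4)
--   = NULL AND TRUE AND TRUE
--   = NULL              -- not TRUE, so the row is filtered out
--
-- NOT EXISTS sidesteps the NULL problem entirely:
INSERT INTO emp
SELECT *
FROM  (SELECT 5::integer AS id, 'eee'::varchar AS ename) nr
WHERE NOT EXISTS (SELECT 1 FROM emp WHERE emp.id = nr.id);
```

Here the correlated subquery only compares against non-NULL matches of nr.id, so the NULL ids already in emp no longer block the insert.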
Re: [GENERAL] Replication with non-read-only standby.
On 01/07/2016 05:21, Venkata Balaji N wrote:

On Thu, Jun 30, 2016 at 11:15 PM, Nick Babadzhanian wrote:

Setup: 2 PostgreSQL servers are geographically spread. The first one is used for an application that gathers data. It is connected to the second database that is used to process the said data. Connection is not very stable nor is it fast, so using Bidirectional replication is not an option. It is OK if data is shipped in batches rather than streamed. Question: Is there a way to make the standby server non-read-only, so that it can keep getting updates (mostly inserts) from the 'master', but users are able to edit the data stored on 'slave'? Is there some alternative solution to this?

You can consider rubyrep replication for such a requirement, though I believe there has not been much development happening around it for a long time. It can be used in a production environment. http://www.rubyrep.org/

Regards,
Venkata B N
Fujitsu Australia

I've been using rubyrep actively for the last 5 years, and here is my experience: the replicator is very good and stable, easy as 1-2-3 to configure (if you don't need special features), but the project is almost dead (I've seen no updates since 2009 and no responses in forums since 2011). I've tried many times to email the author about the PG 9.1 changes in bytea handling that caused BLOB corruption while replicating, but never got a response, so I ended up hiring a Ruby developer to fix things.

One more thing: rubyrep is OK if you want to replicate ONE database; I've never seen it working on more than 1 database or a whole cluster. Of course you can run more than one instance, but that will be harder to manage. If replicating on *nix I'd prefer Bucardo or Slony.

Remember, when using async replication over an unreliable network, that your replication can fall far behind the "actual" data, and this can lead to conflicts that must be resolved. Not to mention the huge memory consumption when working with large data types and when replication queues get quite big (>300k rows). In these cases, if the JVM memory cap is not large enough (I reached 2 GB), rubyrep is likely to stop with an OutOfMemoryException.

My 50 cents.

Cheers, Moreno.
Re: [GENERAL] table name size
Hi

Is there any specific reason for not releasing any version with "NAMEDATALEN 255" in file src/include/pg_config_manual.h? It would be really nice if a release shipped with "NAMEDATALEN 255" in src/include/pg_config_manual.h.

Please

Thanks
Sridhar
OpenText

On Mon, Jun 13, 2016 at 5:40 PM, David G. Johnston < david.g.johns...@gmail.com> wrote:
> On Mon, Jun 13, 2016 at 7:21 AM, Sridhar N Bamandlapally < sridhar@gmail.com> wrote:
>
>> Hi
>>
>> how to create table name with size, the limitation we are facing is 63
>> length
>>
>> these are dynamic tables created from application
>>
>> issue is:
>> we cannot suggest/tell client to change NAMEDATALEN constant in
>> src/include/pg_config_manual.h
>>
>> do we have any other option,
>
> Ensure that the application's algorithm for generating names doesn't
> generate names that exceed 63 characters.
>
> If you're asking if there a runtime setting to control this the answer is
> no.
>
> David J.