Re: [GENERAL] What is the best thing to do with PUBLIC schema in Postgresql database

2016-11-04 Thread bto...@computer.org


- Original Message -
> From: "Patricia Hu" 
> Sent: Friday, November 4, 2016 9:58:10 AM
> 
> Since it could potentially be a security loophole, the actions taken
> to address it so far fall into these two categories:
> 
> 1. drop the PUBLIC schema altogether. ...
> 2. keep the PUBLIC schema but revoke all privileges on it from the public
> role, then grant privileges as the need comes up.
> 
> Any feedback and lessons from those who have implemented this?
> 

Admittedly, this may be TMI (...or maybe not enough...), but FWIW (and YMMV), I 
use the PUBLIC schema, along with the PUBLIC role, to expose a very limited 
view into the database for the purpose of anonymous login and creation of user 
accounts.

There is one view in the PUBLIC schema (and it has appropriate triggers and 
permissions to make the view writeable):

fairwinds=# set search_path to public;
fairwinds=# \d
        List of relations
 Schema |  Name   | Type |  Owner
--------+---------+------+----------
 public | fairian | view | postgres
(1 row)


fairwinds=# \dp public.fairian
                           Access privileges
 Schema |  Name   | Type | Access privileges | Column privileges | Policies
--------+---------+------+-------------------+-------------------+----------
 public | fairian | view | =ar/postgres      |                   |
(1 row)
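
The view itself is made writable with an INSTEAD OF trigger along these 
lines (a minimal sketch only -- the backing table and trigger body here are 
hypothetical, and INSTEAD OF triggers require 9.1 or later):

CREATE FUNCTION public.fairian_ins() RETURNS trigger AS $$
BEGIN
    -- private.account is an assumed backing table for new user accounts
    INSERT INTO private.account (login, email)
        VALUES (NEW.login, NEW.email);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER fairian_ins INSTEAD OF INSERT ON public.fairian
    FOR EACH ROW EXECUTE PROCEDURE public.fairian_ins();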


Then revoke the unneeded privileges on the PUBLIC schema, and grant read and 
write privileges on that one view:

REVOKE CREATE ON SCHEMA public FROM public;
GRANT SELECT, INSERT ON TABLE fairian TO PUBLIC;


The special user role "fairwinds" is allowed trusted login in pg_hba.conf:


# TYPE  DATABASE    USER        ADDRESS     METHOD
host    fairwinds   fairwinds   all         trust


In summary, then, new users connect the first time with the "fairwinds" user 
and no password, and then create an account by inserting a row in the "fairian" 
view. Newly-created users subsequently log in with a password and then have an 
expanded view into the database via GRANT USAGE on a different schema that 
contains more database objects.
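
So a first-time connection might look something like this (a sketch; the 
host and the view's column names here are illustrative):

$ psql -h fairwinds.btober.net -U fairwinds fairwinds
fairwinds=> INSERT INTO fairian (login, email)
            VALUES ('newuser', 'newuser@example.org');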


If that write-up is not clear enough, there is a test server where you can try 
it at http://fairwinds.btober.net and see what I'm talking about.

--B



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] CachedPlan logs until full disk

2016-11-04 Thread Tom Lane
Job  writes:
> it is the second time (in two weeks) that I have seen a very strange 
> PostgreSQL problem in an 8.4.22 installation (still 32-bit).

You realize, of course, that 8.4.x has been out of support for a couple of
years now.

> The logfile grows (in a few hours) until it fills the whole disk.
> I can read an endless series of these messages:

> CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
> CachedPlanSource: 1024 total in 1 blocks; 336 free (0 chunks); 688 used
> SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
> CachedPlan: 1024 total in 1 blocks; 200 free (0 chunks); 824 used
> CachedPlanSource: 1024 total in 1 blocks; 96 free (0 chunks); 928 used
> SPI Plan: 1024 total in 1 blocks; 928 free (0 chunks); 96 used
> CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used

This appears to be a fragment of a memory map that would be produced
in conjunction with an "out of memory" error.  It's difficult to say
much more than that with only this much information, but clearly you
need to do something to prevent recurrent out-of-memory errors.

If looking at the map as a whole makes it clear that it's zillions
of CachedPlans that are chewing up most of the memory, then I would
guess that they are getting leaked as a result of constantly replacing
plpgsql functions --- does your application do a lot of
CREATE OR REPLACE FUNCTION commands?  I don't think plpgsql coped
with that very well before 9.1.
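
For illustration, the suspected leak pattern would be application code that 
keeps re-issuing something like this (the function here is hypothetical):

CREATE OR REPLACE FUNCTION app_refresh() RETURNS void AS $$
BEGIN
    PERFORM 1;   -- placeholder body
END;
$$ LANGUAGE plpgsql;

Run thousands of times inside long-lived sessions on a pre-9.1 server, each 
replacement could leave orphaned CachedPlan/CachedPlanSource entries behind.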

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] CachedPlan logs until full disk

2016-11-04 Thread Job
Hello guys,

it is the second time (in two weeks) that I have seen a very strange 
PostgreSQL problem in an 8.4.22 installation (still 32-bit).

The logfile grows (in a few hours) until it fills the whole disk.
I can read an endless series of these messages:

CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
CachedPlanSource: 1024 total in 1 blocks; 336 free (0 chunks); 688 used
SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
CachedPlan: 1024 total in 1 blocks; 200 free (0 chunks); 824 used
CachedPlanSource: 1024 total in 1 blocks; 96 free (0 chunks); 928 used
SPI Plan: 1024 total in 1 blocks; 928 free (0 chunks); 96 used
CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
CachedPlanSource: 1024 total in 1 blocks; 336 free (0 chunks); 688 used
SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
CachedPlan: 1024 total in 1 blocks; 200 free (0 chunks); 824 used

I had to stop the server, delete the logs, and restart, and the problem was 
solved. But I did not understand why this problem occurred.

Thank you for any help!
Francesco

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] replication setup: advice needed

2016-11-04 Thread Dmitry Karasik
> You need to look at a replication solution like Slony, which is a trigger
> based replication solution. If you are using PostgreSQL version 9.4 or
> higher, then, you can explore "pglogical" which is WAL based and uses
> logical decoding capability.

I'm using 9.4, and I'm looking at pglogical as well -- thank you! I'll
experiment to see how well it fits my needs.


-- 
Sincerely,
Dmitry Karasik



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is the best thing to do with PUBLIC schema in Postgresql database

2016-11-04 Thread David Steele

On 11/4/16 3:58 PM, Hu, Patricia wrote:

Since it could potentially be a security loophole, the actions taken to 
address it so far fall into these two categories:

1. Drop the PUBLIC schema altogether. One concern is that some system 
objects previously exposed through the PUBLIC schema (e.g. 
pg_stat_statements) will then need explicit grants to be accessible to 
users.
2. Keep the PUBLIC schema but revoke all privileges on it from the public 
role, then grant privileges as the need comes up.

Any feedback and lessons from those who have implemented this?


I always drop the public schema as the first step of any build and have 
never seen any ill effects.


Nothing is exposed by default in the public schema unless you install 
extensions into it.
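
For reference, the two approaches from the original post boil down to 
something like this (app_role is a hypothetical role name):

-- option 1: drop it outright
DROP SCHEMA public;

-- option 2: keep it, lock it down, and grant back selectively
REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT USAGE ON SCHEMA public TO app_role;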


--
-David
da...@pgmasters.net


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recover from corrupted database due to failing disk

2016-11-04 Thread Alban Hertroys
On 4 November 2016 at 11:20, Gionatan Danti  wrote:
> Unfortunately I am working under incredible constraints on the customer
> side; even buying two SAS disks seems to be a problem. Moreover, as an
> external consultant, I have basically no decision/buying power :|
> What I can do (and I did) is to raise a very big red flag and let others
> decide what to do.

It seems to me that your customer doesn't realise how expensive it
would be if their server were unavailable for any length of time,
or if they actually lost the data it contains. That, or your
customer's data isn't valuable enough to be worth your time.

We've been fighting a somewhat similar fight internally here, where
management wasn't prepared to spend € 30,000 once on a server plus
software licenses, while they pay that to one of our new managers
monthly.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to hint 2 coulms IS NOT DISTINCT FROM each other

2016-11-04 Thread Alban Hertroys
On 4 November 2016 at 14:41, Merlin Moncure  wrote:
> On Fri, Nov 4, 2016 at 8:08 AM, Kim Rose Carlsen  wrote:
>> The nulls are generated by something like this:
>> SELECT c.circuit_id,
>>        cc.customer_id
>>   FROM circuit AS c
>>   LEFT JOIN circuit_customer AS cc
>>     ON c.circuit_id = cc.circuit_id
>>
>> To make a magic '0' customer we would be required to use
>>   COALESCE(cc.customer_id, '0')
>> I don't think the optimizer will do anything clever with the '0' we have
>> computed from null.
>
> It would if you explicitly indexed it as such:
> CREATE INDEX ON circuit_customer((COALESCE(customer_id, '0')));

Merlin, it's a LEFT JOIN. There probably are no NULLs in the
circuit_customer.customer_id column, so that COALESCE isn't going to
achieve anything at all.

I haven't been following this particular discussion in detail, so
unfortunately I can't contribute more than that remark at the moment.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recover from corrupted database due to failing disk

2016-11-04 Thread Adrian Klaver

On 11/04/2016 03:20 AM, Gionatan Danti wrote:



On 03/11/2016 14:20, Adrian Klaver wrote:


The above does not make sense. You are having to recover because there
was no backup and now you want to go forward without doing a backup?



Hi Adrian, no, I don't want to go forward without backups ;)
Actually, the *first* thing I did after the vacuum completed was a full
cluster backup (via pg_dumpall), and I scheduled nightly backups as well.

Problem is this customer does not have another server where backups can
be restored and the entire production database migrated. In short, the
two possibilities I have are:

1) execute the vacuum (done), schedule regular dumps (done) and, if
something goes wrong, recover from backups;

2) execute the vacuum (done), do a manual backup (done), reinit
(remove/recreate) the entire cluster (not done) and restore from backups
(not done).

I strongly prefer to execute option 2 on another machine, so that production
is not impacted while the recovered backup can be thoroughly tested.
If/when the backups are validated, I want to migrate all clients to the
new server (with RAID1 in place) and dismiss the old one.

Unfortunately I am working under incredible constraints on the customer
side; even buying two SAS disks seems to be a problem. Moreover, as an
external consultant, I have basically no decision/buying power :|
What I can do (and I did) is to raise a very big red flag and let others
decide what to do.


Ouch, understood. Good luck!



The good thing is that zero_damaged_pages and vacuum did their work, as
now the database can be dumped and vacuumed with no (apparent) problems.

Thanks.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] What is the best thing to do with PUBLIC schema in Postgresql database

2016-11-04 Thread Hu, Patricia
Since it could potentially be a security loophole, the actions taken to 
address it so far fall into these two categories:

1. Drop the PUBLIC schema altogether. One concern is that some system 
objects previously exposed through the PUBLIC schema (e.g. 
pg_stat_statements) will then need explicit grants to be accessible to 
users.
2. Keep the PUBLIC schema but revoke all privileges on it from the public 
role, then grant privileges as the need comes up.

Any feedback and lessons from those who have implemented this? 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to hint 2 coulms IS NOT DISTINCT FROM each other

2016-11-04 Thread Merlin Moncure
On Fri, Nov 4, 2016 at 8:08 AM, Kim Rose Carlsen  wrote:
>>> It might raise another problem, that the nulls are generated through LEFT
>>> JOINS where no rows are defined. Then the 0 or -1 value needs to be
>>> a computed value. Won't this throw off index lookups? (I might be
>>> more confused in this area.)
>>
>>Not following this.
>
> The nulls are generated by something like this:
> SELECT c.circuit_id,
>        cc.customer_id
>   FROM circuit AS c
>   LEFT JOIN circuit_customer AS cc
>     ON c.circuit_id = cc.circuit_id
>
> To make a magic '0' customer we would be required to use
>   COALESCE(cc.customer_id, '0')
> I don't think the optimizer will do anything clever with the '0' we have
> computed from null.

It would if you explicitly indexed it as such:
CREATE INDEX ON circuit_customer((COALESCE(customer_id, '0')));
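
Note that for the planner to use such an expression index, the query has to
match the indexed expression exactly, e.g.:

SELECT *
  FROM circuit_customer
 WHERE COALESCE(customer_id, '0') = '0';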

> I could of course assign all unassigned circuits to '0' in
> circuit_customer by default. I'm not a fan, though.

hm, why not?  null generally means 'unknown' and that's why it fails
any equality test.
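
A quick demonstration of the difference:

SELECT NULL = NULL;                     -- yields NULL ('unknown'), not true
SELECT NULL IS NOT DISTINCT FROM NULL;  -- yields true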

>>BTW, if you want a fast plan over the current
>>data without consideration of aesthetics, try this:
>>
>>CREATE VIEW view_circuit_with_status AS (
>>SELECT r.*,
>>  s.circuit_status,
>>  s.customer_id AS s_customer_id,
>>  p.line_speed,
>>  p.customer_id AS p_customer_id
>> FROM view_circuit r
>> JOIN view_circuit_product_main s
>>  ON r.circuit_id = s.circuit_id
>>  AND r.customer_id = s.customer_id
>> JOIN view_circuit_product p
>>   ON r.circuit_id = p.circuit_id
>>  AND r.customer_id = p.customer_id
>>  UNION ALL SELECT r.*,
>>  s.circuit_status,
>>  s.customer_id AS s_customer_id,
>>  p.line_speed,
>>  p.customer_id AS p_customer_id
>> FROM view_circuit r
>> JOIN view_circuit_product_main s
>>   ON r.circuit_id = s.circuit_id
>>  AND r.customer_id IS NULL
>>  AND  s.customer_id IS NULL
>> JOIN view_circuit_product p
>>   ON r.circuit_id = p.circuit_id
>
> I will have to figure something out, but this specific case is still
> problematic since we would like to filter this view using different
> criteria, like circuit_no, products or customers.

the above is logically equivalent to IS NOT DISTINCT FROM; you should
be able to query it just as you would the original view.

> But with all these detours, I assume that a change to IS NOT DISTINCT FROM
> is difficult or not wanted?

Well, not exactly.  In your case you are trying to treat null as a
specific value and pass it through join operations.

TBH, this is a pretty dubious approach: null is not supposed to be
equal to anything and any join vs null should come up empty --
logically at least.  INDF works around this of course but it's not a
recommended approach (my usage is generally restricted to "has this
value changed since yesterday?", etc).
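
A typical safe use is change detection between two snapshots, along these
lines (table and column names hypothetical):

SELECT t.id
  FROM snapshot_today t
  JOIN snapshot_yesterday y USING (id)
 WHERE t.val IS DISTINCT FROM y.val;  -- also catches NULL <-> non-NULL flips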

I'm not an expert on the backend structures for indexing and optimization,
but I know enough to suspect that optimizing INDF might cause
implementation headaches in various places, as do other irregular
syntactical approaches in SQL.  I think minimally, optimizing INDF
would require converting it to an operator on par with '=', which is a
pretty large infrastructure change for an edge optimization case.
The fact that there are solid optimization strategies already on the
table (UNION ALL, expression index on COALESCE()) does not help.

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to hint 2 coulms IS NOT DISTINCT FROM each other

2016-11-04 Thread Kim Rose Carlsen
>> It might raise another problem, that the nulls are generated through LEFT
>> JOINS where no rows are defined. Then the 0 or -1 value needs to be
>> a computed value. Won't this throw off index lookups? (I might be
>> more confused in this area.)
>
> Not following this.

The nulls are generated by something like this:

SELECT c.circuit_id,
       cc.customer_id
  FROM circuit AS c
  LEFT JOIN circuit_customer AS cc
    ON c.circuit_id = cc.circuit_id

To make a magic '0' customer we would be required to use
  COALESCE(cc.customer_id, '0')
I don't think the optimizer will do anything clever with the '0' we have
computed from null.

I could of course assign all unassigned circuits to '0' in
circuit_customer by default. I'm not a fan, though.

>BTW, if you want a fast plan over the current
>data without consideration of aesthetics, try this:
>
>CREATE VIEW view_circuit_with_status AS (
>SELECT r.*,
>  s.circuit_status,
>  s.customer_id AS s_customer_id,
>  p.line_speed,
>  p.customer_id AS p_customer_id
> FROM view_circuit r
> JOIN view_circuit_product_main s
>  ON r.circuit_id = s.circuit_id
>  AND r.customer_id = s.customer_id
> JOIN view_circuit_product p
>   ON r.circuit_id = p.circuit_id
>  AND r.customer_id = p.customer_id
>  UNION ALL SELECT r.*,
>  s.circuit_status,
>  s.customer_id AS s_customer_id,
>  p.line_speed,
>  p.customer_id AS p_customer_id
> FROM view_circuit r
> JOIN view_circuit_product_main s
>   ON r.circuit_id = s.circuit_id
>  AND r.customer_id IS NULL
>  AND  s.customer_id IS NULL
> JOIN view_circuit_product p
>   ON r.circuit_id = p.circuit_id

I will have to figure something out, but this specific case is still
problematic since we would like to filter this view using different
criteria, like circuit_no, products or customers.

But with all these detours, I assume that a change to IS NOT DISTINCT FROM
is difficult or not wanted?


Re: [GENERAL] Recover from corrupted database due to failing disk

2016-11-04 Thread Gionatan Danti



On 03/11/2016 14:20, Adrian Klaver wrote:


The above does not make sense. You are having to recover because there
was no backup and now you want to go forward without doing a backup?



Hi Adrian, no, I don't want to go forward without backups ;)
Actually, the *first* thing I did after the vacuum completed was a full 
cluster backup (via pg_dumpall), and I scheduled nightly backups as well.


Problem is this customer does not have another server where backups can 
be restored and the entire production database migrated. In short, the 
two possibilities I have are:


1) execute the vacuum (done), schedule regular dumps (done) and, if 
something goes wrong, recover from backups;


2) execute the vacuum (done), do a manual backup (done), reinit 
(remove/recreate) the entire cluster (not done) and restore from backups 
(not done).


I strongly prefer to execute option 2 on another machine, so that production 
is not impacted while the recovered backup can be thoroughly tested. 
If/when the backups are validated, I want to migrate all clients to the 
new server (with RAID1 in place) and dismiss the old one.


Unfortunately I am working under incredible constraints on the customer 
side; even buying two SAS disks seems to be a problem. Moreover, as an 
external consultant, I have basically no decision/buying power :|
What I can do (and I did) is to raise a very big red flag and let others 
decide what to do.


The good thing is that zero_damaged_pages and vacuum did their work, as 
now the database can be dumped and vacuumed with no (apparent) problems.


Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] replication setup: advice needed

2016-11-04 Thread Venkata B Nagothi
On Thu, Nov 3, 2016 at 8:17 PM, Dmitry Karasik 
wrote:

> Dear all,
>
> I'd like to ask for help or advice with choosing the best replication
> setup for my task.
>
> I need to listen to continuous inserts/deletes/updates over a set of
> tables, and serve them over http, so I would like to off-load this
> procedure to a separate slave machine.  I thought that logical
> master-slave replication could be the best match here, but I couldn't
> find enough detail in the documentation about which implementation would
> match my needs best.
>

Which version of PostgreSQL are you using ?


>
> Basically, I need to:
>
> a) replicate selected tables to a hot standby slave
> b) on the slave, listen for the insert/update/delete events (either
> through triggers or logical decoder plugin)
>
> While I see that a) should be feasible, I can't see if it's possible to do
> b) at all.
> Also, with so many replication solutions, I don't want to test them all
> one by one, but would rather ask for help choosing the one that fits best
> here -- and if there's none, an alternative setup.
>

You need to look at a replication solution like Slony, which is
trigger-based. If you are using PostgreSQL version 9.4 or higher, then you
can explore "pglogical", which is WAL-based and uses the logical decoding
capability.

If you are just looking at replicating specific tables, then either of the
above solutions would work fine.
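
A rough sketch of what a pglogical setup for selected tables might look 
like (the node names, DSNs and table here are hypothetical):

-- on the provider:
SELECT pglogical.create_node(node_name := 'provider1',
    dsn := 'host=master dbname=app');
SELECT pglogical.replication_set_add_table(set_name := 'default',
    relation := 'public.events', synchronize_data := true);

-- on the subscriber:
SELECT pglogical.create_node(node_name := 'subscriber1',
    dsn := 'host=standby dbname=app');
SELECT pglogical.create_subscription(subscription_name := 'sub1',
    provider_dsn := 'host=master dbname=app');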

Regards,

Venkata B N
Database Consultant

Fujitsu Australia


Re: [GENERAL] High load average every 105 minutes

2016-11-04 Thread John R Pierce

On 11/4/2016 1:45 AM, Nhan Nguyen wrote:


The load average of the instance increases every 105 minutes even 
without any database. I've checked the scheduled jobs but couldn't 
find anything suspicious. When the load average was at its peak, I 
couldn't see any process consuming resources. This happens on all my 
servers that have PostgreSQL installed, even after I purge PostgreSQL. 
Has anyone seen this before?


load average doesn't mean much other than the number of processes 
waiting for a resource.   does the IO latency spike at this time, too?


with AWS, your system is sharing the vendor's virtual machine environment 
with other customers, and performance is pretty much out of your control.


--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] High load average every 105 minutes

2016-11-04 Thread Nhan Nguyen
I have PostgreSQL 9.3 installed on an AWS instance running Ubuntu 14.04.

The load average of the instance increases every 105 minutes even without any 
database. I've checked the scheduled jobs but couldn't find anything 
suspicious. When the load average was at its peak, I couldn't see any process 
consuming resources. This happens on all my servers that have PostgreSQL 
installed, even after I purge PostgreSQL. Has anyone seen this before?

Here’s my config:
postgresql.conf

data_directory = '/var/lib/postgresql/9.3/main' 
hba_file = '/etc/postgresql/9.3/main/pg_hba.conf'   
ident_file = '/etc/postgresql/9.3/main/pg_ident.conf'   
external_pid_file = '/var/run/postgresql/9.3-main.pid'  
listen_addresses = '*'  
port = 5432 
max_connections = 100   
unix_socket_directories = '/var/run/postgresql' 
ssl = true  
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'  
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key' 
shared_buffers = 128MB  
shared_preload_libraries = 'pg_stat_statements'
wal_level = hot_standby 
checkpoint_segments = 8 
max_wal_senders = 3 
wal_keep_segments = 8   
log_min_duration_statement = 300
log_line_prefix = '%m ' 
log_timezone = 'UTC'
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.UTF-8' 
lc_monetary = 'en_US.UTF-8' 
lc_numeric = 'en_US.UTF-8'  
lc_time = 'en_US.UTF-8' 
default_text_search_config = 'pg_catalog.english'

Nhan Nguyen
System Engineer

MB: (+84) 934 008 031
Skype: live:ducnhan813



Re: [GENERAL] Exclude pg_largeobject form pg_dump

2016-11-04 Thread Guillaume Lelarge
Hi Amul,

2016-11-04 7:52 GMT+01:00 amul sul :

> Hi Guillaume,
>
> I found the following issues with this patch; sorry I missed them in my
> previous post:
>
>
You don't have to be sorry for me doing shitty things :)


> #1 :
>  43 @@ -392,6 +393,10 @@ main(int argc, char **argv)
>  44 dopt.outputBlobs = true;
>  45 break;
>  46
>  47 +   case 'B':   /* Don't dump blobs */
>  48 +   dopt.include_everything = false;
>  49 +   break;
>  50 +
>
> Touching the dopt.include_everything flag does not seem to be a good idea
> for the '--no-blobs' option; our intention is to exclude blobs only, but
> this excludes other dumped objects too (e.g. COMMENT ON DATABASE, CREATE
> EXTENSION, COMMENT ON EXTENSION, etc.), which is what we don't want,
> right?
>
>
Agreed. I was afraid of that, but for some reason, didn't find that. I'll
fix this.


> #2 :
> We should add a note about the default behaviour if --no-blobs & --blobs
> are both specified.
>
>
Right. I don't know how I will handle this, but you're right that the
behaviour should be specified. I'll also fix this.

I'll try to work on this today, but as I'm at pgconf.eu 2016, it may only
happen tomorrow.

Thank you.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [GENERAL] Exclude pg_largeobject form pg_dump

2016-11-04 Thread amul sul
Hi Guillaume,

I found the following issues with this patch; sorry I missed them in my
previous post:

#1 :
 43 @@ -392,6 +393,10 @@ main(int argc, char **argv)
 44 dopt.outputBlobs = true;
 45 break;
 46
 47 +   case 'B':   /* Don't dump blobs */
 48 +   dopt.include_everything = false;
 49 +   break;
 50 +

Touching the dopt.include_everything flag does not seem to be a good idea
for the '--no-blobs' option; our intention is to exclude blobs only, but
this excludes other dumped objects too (e.g. COMMENT ON DATABASE, CREATE
EXTENSION, COMMENT ON EXTENSION, etc.), which is what we don't want,
right?

#2 :
We should add a note about the default behaviour if --no-blobs & --blobs
are both specified.

Regards,
Amul Sul


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general