Re: difference between checkpoint_segments and wal_keep_segments in postgres configuration file

2018-04-20 Thread Raghavendra Rao J S V
Thanks a lot.

On Wed 11 Apr, 2018, 9:07 AM Michael Paquier,  wrote:

> On Tue, Apr 10, 2018 at 11:06:54PM +0530, Raghavendra Rao J S V wrote:
> > I am not clear on the difference between checkpoint_segments and
> > wal_keep_segments.
> >
> > I would like to know the things below. Please explain. Thanks in advance.
> >
> >- Difference between the *checkpoint_segments* and *wal_keep_segments*
> >values
> >- Role of *checkpoint_segments* and *wal_keep_segments*
> >- Which one should have a higher value.
>
> Documentation is king here.  For checkpoint_segments:
>
> https://www.postgresql.org/docs/9.4/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-CHECKPOINTS
>
>   Maximum number of log file segments between automatic WAL
>   checkpoints (each segment is normally 16 megabytes). The default is
>   three segments. Increasing this parameter can increase the amount of
>   time needed for crash recovery. This parameter can only be set in
>   the postgresql.conf file or on the server command line.
>
> For wal_keep_segments:
>
> https://www.postgresql.org/docs/9.4/static/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SENDER
>
>   Specifies the minimum number of past log file segments kept in the
>   pg_xlog directory, in case a standby server needs to fetch them for
>   streaming replication. Each segment is normally 16 megabytes. If a
>   standby server connected to the sending server falls behind by more than
>   wal_keep_segments segments, the sending server might remove a WAL
>   segment still needed by the standby, in which case the replication
>   connection will be terminated. Downstream connections will also
>   eventually fail as a result. (However, the standby server can recover by
>   fetching the segment from archive, if WAL archiving is in use.)
>
> Mentioning checkpoint_segments implies that you are using PostgreSQL 9.4
> or older versions as this has been removed and replaced by max_wal_size
> in 9.5.  You should consider upgrading to a newer version.
>
> Hence the first is used in the context of normal operations to decide
> the frequency of checkpoints when those are triggered by volume.  The
> second is used with streaming replication to give a standby a larger
> catch-up window.  What value to give each depends on the context, and
> the two are used in completely different circumstances.
> --
> Michael
>
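
As a quick reference, both settings can be inspected from psql (a
minimal sketch; on 9.5 and later checkpoint_segments no longer exists
and max_wal_size is the row to look for instead):

  -- list the checkpoint/replication settings discussed in this thread;
  -- only parameters that exist in your server version are returned
  SELECT name, setting, unit, context
  FROM pg_settings
  WHERE name IN ('checkpoint_segments', 'wal_keep_segments', 'max_wal_size');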


Re: Strange error in Windows 10 Pro

2018-04-20 Thread Dale Seaburg
Oops, my mistake.  I'll let this serve the list with what I've tried so 
far.  Thanks, Adrian for the reminder.


I hope tomorrow to visit the customer and try the Admin user method of 
installing.


Dale


On 4/20/2018 11:03 PM, Adrian Klaver wrote:

On 04/20/2018 07:52 PM, Dale Seaburg wrote:

Please also reply to list.
Ccing list to put it front of more eyes.


Thanks, Adrian, for suggestion(s).


On 4/20/2018 9:35 PM, Adrian Klaver wrote:

On 04/20/2018 07:16 PM, Dale Seaburg wrote:
I am attempting to install a fresh copy of 
postgresql-9.6.8-2-windows-x86 on a new DELL PC with Windows 10 
Pro.  It 


This was downloaded from where?
downloaded from https://www.postgresql.org/download/windows/, 
selecting to use the installer pointed to near the beginning of that 
page.  The actual website that contained the installer file was: 
"https://www.enterprisedb.com/thank-you-downloading-postgresql?anid=209611";. 



gets near the end of the install when the message says it is
attempting to start the database server.  There's a long
pause, followed by an error message: "Failed to load SQL modules
into the database cluster". Using File Explorer, I notice the
9.6\base\ folder is empty.  Has anyone else seen this before?  I
have no clue where to look for the issue.


You are running as Admin user?
I can't say that I was.  Will check this next time (maybe tomorrow)
when I am at the customer's site.  I didn't even think about right-clicking
on the installer and selecting run-as-admin.  I know I did not
deliberately use the Admin user on the test PC mentioned below, and it
installed with no problems.





As a double-check on a different PC with Windows 10 Pro, I get no 
error message, and the database is installed correctly.


Dale Seaburg



Re: Strange error in Windows 10 Pro

2018-04-20 Thread Adrian Klaver

On 04/20/2018 07:52 PM, Dale Seaburg wrote:

Please also reply to list.
Ccing list to put it front of more eyes.


Thanks, Adrian, for suggestion(s).


On 4/20/2018 9:35 PM, Adrian Klaver wrote:

On 04/20/2018 07:16 PM, Dale Seaburg wrote:
I am attempting to install a fresh copy of 
postgresql-9.6.8-2-windows-x86 on a new DELL PC with Windows 10 Pro.  It 


This was downloaded from where?
downloaded from https://www.postgresql.org/download/windows/, selecting 
to use the installer pointed to near the beginning of that page.  The 
actual website that contained the installer file was: 
"https://www.enterprisedb.com/thank-you-downloading-postgresql?anid=209611";. 



gets near the end of the install when the message says it is
attempting to start the database server.  There's a long pause,
followed by an error message: "Failed to load SQL modules into the
database cluster". Using File Explorer, I notice the 9.6\base\ folder
is empty.  Has anyone else seen this before?  I have no clue where to
look for the issue.


You are running as Admin user?
I can't say that I was.  Will check this next time (maybe tomorrow) when
I am at the customer's site.  I didn't even think about right-clicking on the
installer and selecting run-as-admin.  I know I did not deliberately use
the Admin user on the test PC mentioned below, and it installed with no
problems.





As a double-check on a different PC with Windows 10 Pro, I get no 
error message, and the database is installed correctly.


Dale Seaburg

--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Strange error in Windows 10 Pro

2018-04-20 Thread Adrian Klaver

On 04/20/2018 07:16 PM, Dale Seaburg wrote:
I am attempting to install a fresh copy of 
postgresql-9.6.8-2-windows-x86 on a new DELL PC with Windows 10 Pro.  It 


This was downloaded from where?

gets near the end of the install when the message says it is attempting
to start the database server.  There's a long pause, followed by an
error message: "Failed to load SQL modules into the database cluster".
Using File Explorer, I notice the 9.6\base\ folder is empty.  Has anyone
else seen this before?  I have no clue where to look for the issue.


You are running as Admin user?




As a double-check on a different PC with Windows 10 Pro, I get no error 
message, and the database is installed correctly.


Dale Seaburg

--
Adrian Klaver
adrian.kla...@aklaver.com



Strange error in Windows 10 Pro

2018-04-20 Thread Dale Seaburg
I am attempting to install a fresh copy of 
postgresql-9.6.8-2-windows-x86 on a new DELL PC with Windows 10 Pro.  It 
gets near the end of the install when the message says it is attempting
to start the database server.  There's a long pause, followed by an
error message: "Failed to load SQL modules into the database cluster".
Using File Explorer, I notice the 9.6\base\ folder is empty.  Has anyone
else seen this before?  I have no clue where to look for the issue.



As a double-check on a different PC with Windows 10 Pro, I get no error 
message, and the database is installed correctly.


Dale Seaburg





Re: Postgresql database encryption

2018-04-20 Thread Ozz Nixon
Thanks Ron, I was trying to find that -- memory had it down as "Persona"
and I could not find that, haha.

On Fri, Apr 20, 2018 at 8:39 PM Ron  wrote:

>
> Also, Percona (a MySQL fork) 5.7.
>
> On 04/20/2018 07:31 PM, Ozz Nixon wrote:
>
> PS. the following database servers do offer internal encryption on a
> page/block oriented read/write (for encrypted data at rest security
> requirements)
>
> PremierSQL TDE
> MariaDB 10.1.3+
> *MySQL* 5.7.11+
> Microsoft uses TDE
> Oracle AdvSec uses TDE
> DB2 v7.2 UDB
> MongoDB uses AES-256
> PostgreSQL does - but the key is public (dumb)
> https://www.postgresql.org/message-id/ca%2bcsw_tb3bk5i7if6inzfc3yyf%2b9hevnty51qfboeuk7ue_v%...@mail.gmail.com
>
> Just because you do not see the reason for it, does not make the reason a
> bad idea.
>
> On Fri, Apr 20, 2018 at 8:19 PM Ozz Nixon  wrote:
>
>> Well, actually since 2003, this has been a standard requirement from the
>> Credit Card industry. And it does make sense in the field of "while at
>> rest" the data still cannot be accessed.
>>
>> Requirement 1. No NPI data should be displayed without controls - e.g.
>> reports, PDF, etc.
>> Requirement 2. Same data, must be secured during transmission - fetching
>> to client screen etc.
>> Requirement 3. NPI data should not be logged nor stored on a physical
>> device in non-encrypted mode.
>>
>> There are more steps to this, but, to chalk it off as another half-assed
>> requirement is typical. Hashing is a useful one-way technique; however,
>> capturing the hash makes using a hash useless! When I worked for the credit
>> bureaus we ran encrypted drive arrays, DB/2 encrypted, SSL/TLS encryption
>> over P2P VPN connections, and masked output fields when the data would go
>> to reports or screens to PCs outside our control.
>>
>> Anyone with Linux can use LUKS encryption on an LVM partition to achieve
>> security where the database may not, or where logs or something may exist
>> where NPI might be seen. Oh yeah, NPI (Non-Public Information, like your
>> social, your bank account, your paycheck information, etc., things that
>> should not exist outside of controls)...
>>
>> PS. You cannot simply take a drive from one machine to another, when
>> doing proper RAID and LUKS encryption.
>>
>> Ozz
>> 15 years experience with federal data security requirements.
>>
>> On Fri, Apr 20, 2018 at 7:55 PM Tim Cross  wrote:
>>
>>>
>>> Vikas Sharma  writes:
>>>
>>> > Hello Guys,
>>> >
>>> > Could someone throw light on the postgresql instance wide or database
>>> wide
>>> > encryption please? Is this possible in postgresql and been in use in
>>> > production?.
>>> >
>>> > This is a requirement in our production implementation.
>>> >
>>>
>>> This sounds like a lazy management requirement specified for 'security'
>>> purposes by people with little understanding of either technology or
>>> security. I suspect it comes from a conversation that went along the
>>> lines of 
>>>
>>> "There has been lots in the news about cyber threats"
>>>
>>> "Yes, we need our system to be secure"
>>>
>>> "I know, lets make one of the requirements that everything must be
>>> encrypted, that will stop them"
>>>
>>> "Great idea, I'll add it as requirement 14".
>>>
>>> This is a very poor requirement because it is not adequately specified,
>>> but more critically, because it is specifying a 'solution' rather than
>>> articulating the requirement in a way which would allow those with the
>>> necessary expertise to derive an appropriate solution - one which may or
>>> may not involve encryption or hashing of data and which may or may not
>>> be at the database level.
>>>
>>> What you really need to do is go back to your stakeholders and ask them
>>> a lot of questions to extract what the real requirement is. Try to find
>>> out what risk they are trying to mitigate with encryption. Once this is
>>> understood, then look at what the technology can do and work out the
>>> design/implementation from there.
>>>
>>> It is extremely unlikely you just want all the data in the database
>>> encrypted. When you think about it, such an approach really doesn't make
>>> sense. In basic terms, if the data is encrypted, the database engine
>>> will need to be able to decrypt it in order to operate (consider how a
>>> where clause needs to be able to interpret actions etc). If the db can
>>> read the data, the keys must be in the database. If the keys are in the
>>> database and your database is compromised, then your keys are
>>> compromised. So provided you protect your database from compromise, you
>>> achieve the same level of security as you do with full data encryption
>>> EXCEPT for access to the underlying data files outside of the database
>>> system. For this, you will tend to use some sort of file system
>>> encryption, which is typically managed at the operating system
>>> level. Again, for the operating system to be able to read the file
>>> system, the OS must have access to the decryption keys, so if your OS is
>>> compromised, then that level of protection is lost as well.

Re: Postgresql database encryption

2018-04-20 Thread Ron


Also, Percona (a MySQL fork) 5.7.

On 04/20/2018 07:31 PM, Ozz Nixon wrote:
PS. the following database servers do offer internal encryption on a 
page/block oriented read/write (for encrypted data at rest security 
requirements)


PremierSQL TDE
MariaDB 10.1.3+
*MySQL* 5.7.11+
Microsoft uses TDE
Oracle AdvSec uses TDE
DB2 v7.2 UDB
MongoDB uses AES-256
PostgreSQL does - but the key is public (dumb) 
https://www.postgresql.org/message-id/ca%2bcsw_tb3bk5i7if6inzfc3yyf%2b9hevnty51qfboeuk7ue_v%...@mail.gmail.com


Just because you do not see the reason for it, does not make the reason a 
bad idea.


On Fri, Apr 20, 2018 at 8:19 PM Ozz Nixon wrote:


Well, actually since 2003, this has been a standard requirement from
the Credit Card industry. And it does make sense in the field of
"while at rest" the data still cannot be accessed.

Requirement 1. No NPI data should be displayed without controls - e.g.
reports, PDF, etc.
Requirement 2. Same data, must be secured during transmission -
fetching to client screen etc.
Requirement 3. NPI data should not be logged nor stored on a physical
device in non-encrypted mode.

There are more steps to this, but, to chalk it off as another
half-assed requirement is typical. Hashing is a useful one-way technique;
however, capturing the hash makes using a hash useless! When I worked
for the credit bureaus we ran encrypted drive arrays, DB/2 encrypted,
SSL/TLS encryption over P2P VPN connections, and masked output fields
when the data would go to reports or screens to PCs outside our control.

Anyone with Linux can use LUKS encryption on an LVM partition to
achieve security where the database may not, or where logs or something
may exist where NPI might be seen. Oh yeah, NPI (Non-Public Information,
like your social, your bank account, your paycheck information, etc.,
things that should not exist outside of controls)...

PS. You cannot simply take a drive from one machine to another, when
doing proper RAID and LUKS encryption.

Ozz
15 years experience with federal data security requirements.

On Fri, Apr 20, 2018 at 7:55 PM Tim Cross wrote:


Vikas Sharma writes:

> Hello Guys,
>
> Could someone throw light on the postgresql instance wide or
database wide
> encryption please? Is this possible in postgresql and been in use in
> production?.
>
> This is a requirement in our production implementation.
>

This sounds like a lazy management requirement specified for
'security'
purposes by people with little understanding of either technology or
security. I suspect it comes from a conversation that went along the
lines of 

"There has been lots in the news about cyber threats"

"Yes, we need our system to be secure"

"I know, lets make one of the requirements that everything must be
encrypted, that will stop them"

"Great idea, I'll add it as requirement 14".

This is a very poor requirement because it is not adequately
specified,
but more critically, because it is specifying a 'solution' rather than
articulating the requirement in a way which would allow those with the
necessary expertise to derive an appropriate solution - one which
may or
may not involve encryption or hashing of data and which may or may not
be at the database level.

What you really need to do is go back to your stakeholders and ask
them
a lot of questions to extract what the real requirement is. Try to
find
out what risk they are trying to mitigate with encryption. Once
this is
understood, then look at what the technology can do and work out the
design/implementation from there.

It is extremely unlikely you just want all the data in the database
encrypted. When you think about it, such an approach really
doesn't make
sense. In basic terms, if the data is encrypted, the database engine
will need to be able to decrypt it in order to operate (consider how a
where clause needs to be able to interpret actions etc). If the db can
read the data, the keys must be in the database. If the keys are
in the
database and your database is compromised, then your keys are
compromised. So provided you protect your database from
compromise, you
achieve the same level of security as you do with full data encryption
EXCEPT for access to the underlying data files outside of the database
system. For this, you will tend to use some sort of file system
encryption, which is typically managed at the operating system
level. Again, for the operating system to be able to read the file
system, the OS must have access to the decryption keys.

Re: Postgresql database encryption

2018-04-20 Thread Ozz Nixon
PS. the following database servers do offer internal encryption on a
page/block oriented read/write (for encrypted data at rest security
requirements)

PremierSQL TDE
MariaDB 10.1.3+
*MySQL* 5.7.11+
Microsoft uses TDE
Oracle AdvSec uses TDE
DB2 v7.2 UDB
MongoDB uses AES-256
PostgreSQL does - but the key is public (dumb)
https://www.postgresql.org/message-id/ca%2bcsw_tb3bk5i7if6inzfc3yyf%2b9hevnty51qfboeuk7ue_v%...@mail.gmail.com

Just because you do not see the reason for it, does not make the reason a
bad idea.

On Fri, Apr 20, 2018 at 8:19 PM Ozz Nixon  wrote:

> Well, actually since 2003, this has been a standard requirement from the
> Credit Card industry. And it does make sense in the field of "while at
> rest" the data still cannot be accessed.
>
> Requirement 1. No NPI data should be displayed without controls - e.g.
> reports, PDF, etc.
> Requirement 2. Same data, must be secured during transmission - fetching
> to client screen etc.
> Requirement 3. NPI data should not be logged nor stored on a physical
> device in non-encrypted mode.
>
> There are more steps to this, but, to chalk it off as another half-assed
> requirement is typical. Hashing is a useful one-way technique; however,
> capturing the hash makes using a hash useless! When I worked for the credit
> bureaus we ran encrypted drive arrays, DB/2 encrypted, SSL/TLS encryption
> over P2P VPN connections, and masked output fields when the data would go
> to reports or screens to PCs outside our control.
>
> Anyone with Linux can use LUKS encryption on an LVM partition to achieve
> security where the database may not, or where logs or something may exist
> where NPI might be seen. Oh yeah, NPI (Non-Public Information, like your
> social, your bank account, your paycheck information, etc., things that
> should not exist outside of controls)...
>
> PS. You cannot simply take a drive from one machine to another, when doing
> proper RAID and LUKS encryption.
>
> Ozz
> 15 years experience with federal data security requirements.
>
> On Fri, Apr 20, 2018 at 7:55 PM Tim Cross  wrote:
>
>>
>> Vikas Sharma  writes:
>>
>> > Hello Guys,
>> >
>> > Could someone throw light on the postgresql instance wide or database
>> wide
>> > encryption please? Is this possible in postgresql and been in use in
>> > production?.
>> >
>> > This is a requirement in our production implementation.
>> >
>>
>> This sounds like a lazy management requirement specified for 'security'
>> purposes by people with little understanding of either technology or
>> security. I suspect it comes from a conversation that went along the
>> lines of 
>>
>> "There has been lots in the news about cyber threats"
>>
>> "Yes, we need our system to be secure"
>>
>> "I know, lets make one of the requirements that everything must be
>> encrypted, that will stop them"
>>
>> "Great idea, I'll add it as requirement 14".
>>
>> This is a very poor requirement because it is not adequately specified,
>> but more critically, because it is specifying a 'solution' rather than
>> articulating the requirement in a way which would allow those with the
>> necessary expertise to derive an appropriate solution - one which may or
>> may not involve encryption or hashing of data and which may or may not
>> be at the database level.
>>
>> What you really need to do is go back to your stakeholders and ask them
>> a lot of questions to extract what the real requirement is. Try to find
>> out what risk they are trying to mitigate with encryption. Once this is
>> understood, then look at what the technology can do and work out the
>> design/implementation from there.
>>
>> It is extremely unlikely you just want all the data in the database
>> encrypted. When you think about it, such an approach really doesn't make
>> sense. In basic terms, if the data is encrypted, the database engine
>> will need to be able to decrypt it in order to operate (consider how a
>> where clause needs to be able to interpret actions etc). If the db can
>> read the data, the keys must be in the database. If the keys are in the
>> database and your database is compromised, then your keys are
>> compromised. So provided you protect your database from compromise, you
>> achieve the same level of security as you do with full data encryption
>> EXCEPT for access to the underlying data files outside of the database
>> system. For this, you will tend to use some sort of file system
>> encryption, which is typically managed at the operating system
>> level. Again, for the operating system to be able to read the file
>> system, the OS must have access to the decryption keys, so if your OS is
>> compromised, then that level of protection is lost as well (well, that
>> is over simplified, but you get the idea). What this level of protection
>> does give you is data at rest protection - if someone is able to access
>> your disks through some other means, they cannot read the data. This is
>> the same principle most people should be using with their
>> laptops. Protect the OS with a password and have the data on disk encrypted.

Re: Postgresql database encryption

2018-04-20 Thread Ozz Nixon
Well, actually since 2003, this has been a standard requirement from the
Credit Card industry. And it does make sense in the field of "while at
rest" the data still cannot be accessed.

Requirement 1. No NPI data should be displayed without controls - e.g.
reports, PDF, etc.
Requirement 2. Same data, must be secured during transmission - fetching to
client screen etc.
Requirement 3. NPI data should not be logged nor stored on a physical
device in non-encrypted mode.

There are more steps to this, but, to chalk it off as another half-assed
requirement is typical. Hashing is a useful one-way technique; however,
capturing the hash makes using a hash useless! When I worked for the credit
bureaus we ran encrypted drive arrays, DB/2 encrypted, SSL/TLS encryption
over P2P VPN connections, and masked output fields when the data would go
to reports or screens to PCs outside our control.

Anyone with Linux can use LUKS encryption on an LVM partition to achieve
security where the database may not, or where logs or something may exist
where NPI might be seen. Oh yeah, NPI (Non-Public Information, like your
social, your bank account, your paycheck information, etc., things that
should not exist outside of controls)...
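
For readers who want to try that, a minimal sketch of the LUKS side
(the device, mapping, and mount point names are placeholders; run as
root):

  # one-time setup: encrypt the logical volume, open it, add a filesystem
  cryptsetup luksFormat /dev/vg0/pgdata      # prompts for a passphrase
  cryptsetup luksOpen /dev/vg0/pgdata pgdata_crypt
  mkfs.ext4 /dev/mapper/pgdata_crypt
  mount /dev/mapper/pgdata_crypt /var/lib/pgsql

Without the passphrase the volume is unreadable, which is the
data-at-rest protection being described.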

PS. You cannot simply take a drive from one machine to another, when doing
proper RAID and LUKS encryption.

Ozz
15 years experience with federal data security requirements.

On Fri, Apr 20, 2018 at 7:55 PM Tim Cross  wrote:

>
> Vikas Sharma  writes:
>
> > Hello Guys,
> >
> > Could someone throw light on the postgresql instance wide or database
> wide
> > encryption please? Is this possible in postgresql and been in use in
> > production?.
> >
> > This is a requirement in our production implementation.
> >
>
> This sounds like a lazy management requirement specified for 'security'
> purposes by people with little understanding of either technology or
> security. I suspect it comes from a conversation that went along the
> lines of 
>
> "There has been lots in the news about cyber threats"
>
> "Yes, we need our system to be secure"
>
> "I know, lets make one of the requirements that everything must be
> encrypted, that will stop them"
>
> "Great idea, I'll add it as requirement 14".
>
> This is a very poor requirement because it is not adequately specified,
> but more critically, because it is specifying a 'solution' rather than
> articulating the requirement in a way which would allow those with the
> necessary expertise to derive an appropriate solution - one which may or
> may not involve encryption or hashing of data and which may or may not
> be at the database level.
>
> What you really need to do is go back to your stakeholders and ask them
> a lot of questions to extract what the real requirement is. Try to find
> out what risk they are trying to mitigate with encryption. Once this is
> understood, then look at what the technology can do and work out the
> design/implementation from there.
>
> It is extremely unlikely you just want all the data in the database
> encrypted. When you think about it, such an approach really doesn't make
> sense. In basic terms, if the data is encrypted, the database engine
> will need to be able to decrypt it in order to operate (consider how a
> where clause needs to be able to interpret actions etc). If the db can
> read the data, the keys must be in the database. If the keys are in the
> database and your database is compromised, then your keys are
> compromised. So provided you protect your database from compromise, you
> achieve the same level of security as you do with full data encryption
> EXCEPT for access to the underlying data files outside of the database
> system. For this, you will tend to use some sort of file system
> encryption, which is typically managed at the operating system
> level. Again, for the operating system to be able to read the file
> system, the OS must have access to the decryption keys, so if your OS is
> compromised, then that level of protection is lost as well (well, that
> is over simplified, but you get the idea). What this level of protection
> does give you is data at rest protection - if someone is able to access
> your disks through some other means, they cannot read the data. This is
> the same principle most people should be using with their
> laptops. Protect the OS with a password and have the data on disk
> encrypted. Provided nobody can login to your laptop, they cannot read
> your data. Without this encryption, you can just take the disk out of
> the laptop, mount it on another system and you have full access. With
> disk encryption, you cannot do that. Same basic principle with the
> server.
>
> At the database level, a more typical approach is to use one way hashing
> for some sensitive data (i.e. passwords) and possibly column level
> encryption on a specific column (much rarer) or just well structured
> security policies and user roles that restrict who has access to various
> tables/columns. To implement this successfully, you need details
> regarding the domain, sensitivity of various data elements and the
> threats you need to protect against.

Re: Postgresql database encryption

2018-04-20 Thread Ron

On 04/20/2018 10:24 AM, Vikas Sharma wrote:

Hello Guys,

Could someone throw light on the postgresql instance wide or database wide 
encryption please? Is this possible in postgresql and been in use in 
production?.


What about encrypted backups?

--
Angular momentum makes the world go 'round.



Re: Postgresql database encryption

2018-04-20 Thread Tim Cross

Ron  writes:

> On 04/20/2018 06:11 PM, Stephen Frost wrote:
>> Greetings,
>>
>> * Ron (ronljohnso...@gmail.com) wrote:
>>> On 04/20/2018 03:55 PM, Vick Khera wrote:
 On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma wrote:
 For anyone to offer a proper solution, you need to say what purpose your
 encryption will serve. Does the data need to be encrypted at rest? Does it
 need to be encrypted in memory? Does it need to be encrypted at the
 database level or at the application level? Do you need to be able to
 query the data? There are all sorts of scenarios and use cases, and you
 need to be more specific.

 For me, using whole-disk encryption solved my need, which was to ensure
 that the data on disk cannot be read once removed from the server.
>>> Someone really needs to explain that to me. My company-issued laptop has
>>> WDE, and that's great for when the machine is shut down and I'm carrying it
>>> from place to place, but when it's running, all the data is transparently
>>> decrypted for every process that wants to read the data, including malware,
>>> industrial spies,
>>>
>>> Thus, unless you move your DB server on a regular basis, I can't see the
>>> usefulness of WDE on a static machine.
>> The typical concern (aka, attack vector) isn't around moving the DB
>> server on a regular basis or about someone breaking into your data
>> center and stealing your drives, it's making sure that disposal of
>> equipment doesn't result in valuable data being retained on the
>> drives when they leave the data center for replacement or disposal.
>
> That makes some sense, but years of added CPU overhead to mitigate a problem 
> that could be solved by writing zeros to the disk as a step in the decomm 
> process seems more than a bit wasteful.

Problem is that the decomm process relies on someone actually following the
process. Too often, this part fails. The overhead with WDE these days is
minimal anyway. Good security is always about layers of protection and
should never just rely on a single control.

Tim
-- 
Tim Cross



Re: Postgresql database encryption

2018-04-20 Thread Tim Cross

Ron  writes:

> On 04/20/2018 03:55 PM, Vick Khera wrote:
>> On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma wrote:
>
> Someone really needs to explain that to me. My company-issued laptop has 
> WDE, and that's great for when the machine is shut down and I'm carrying it 
> from place to place, but when it's running, all the data is transparently 
> decrypted for every process that wants to read the data, including malware, 
> industrial spies,
>

It really depends on the architecture. In many server environments these
days, some sort of network storage is used. Having the 'disk' associated
with a specific server encrypted can provide some level of protection
from another machine, which also has access to the underlying
infrastructure, being able to access that data.

The other level of protection is for when disks are disposed of. There
have been many cases where data has been retrieved off disks which have
been sent for disposal.

Finally, the basic physical protection. Someone cannot just access your
data centre, remove a disk from the SAN and then access the data. 

Then of course there is the bureaucratic protection - "Yes boss, all our
data is encrypted on disk."

Tim
" 
-- 
Tim Cross



Re: Postgresql database encryption

2018-04-20 Thread Tim Cross

Vikas Sharma  writes:

> Hello Guys,
>
> Could someone throw light on the postgresql instance wide or database wide
> encryption please? Is this possible in postgresql and been in use in
> production?.
>
> This is a requirement in our production implementation.
>

This sounds like a lazy management requirement specified for 'security'
purposes by people with little understanding of either technology or
security. I suspect it comes from a conversation that went along the
lines of 

"There has been lots in the news about cyber threats"

"Yes, we need our system to be secure"

"I know, lets make one of the requirements that everything must be
encrypted, that will stop them"

"Great idea, I'll add it as requirement 14".

This is a very poor requirement because it is not adequately specified,
but more critically, because it is specifying a 'solution' rather than
articulating the requirement in a way which would allow those with the
necessary expertise to derive an appropriate solution - one which may or
may not involve encryption or hashing of data and which may or may not
be at the database level.

What you really need to do is go back to your stakeholders and ask them
a lot of questions to extract what the real requirement is. Try to find
out what risk they are trying to mitigate with encryption. Once this is
understood, then look at what the technology can do and work out the
design/implementation from there.

It is extremely unlikely you just want all the data in the database
encrypted. When you think about it, such an approach really doesn't make
sense. In basic terms, if the data is encrypted, the database engine
will need to be able to decrypt it in order to operate (consider how a
where clause needs to be able to interpret actions etc). If the db can
read the data, the keys must be in the database. If the keys are in the
database and your database is compromised, then your keys are
compromised. So provided you protect your database from compromise, you
achieve the same level of security as you do with full data encryption
EXCEPT for access to the underlying data files outside of the database
system. For this, you will tend to use some sort of file system
encryption, which is typically managed at the operating system
level. Again, for the operating system to be able to read the file
system, the OS must have access to the decryption keys, so if your OS is
compromised, then that level of protection is lost as well (well, that
is over simplified, but you get the idea). What this level of protection
does give you is data at rest protection - if someone is able to access
your disks through some other means, they cannot read the data. This is
the same principle most people should be using with their
laptops. Protect the OS with a password and have the data on disk
encrypted. Provided nobody can login to your laptop, they cannot read
your data. Without this encryption, you can just take the disk out of
the laptop, mount it on another system and you have full access. With
disk encryption, you cannot do that. Same basic principle with the
server.

At the database level, a more typical approach is to use one way hashing
for some sensitive data (i.e. passwords) and possibly column level
encryption on a specific column (much rarer) or just well structured
security policies and user roles that restrict who has access to various
tables/columns. To implement this successfully, you need details
regarding the domain, sensitivity of various data elements and the
threats you need to protect against. If you cannot get this information
because your stakeholders don't really know what their risks are and
have not done a proper assessment, and what you are really dealing with
is bureaucracy which just has a dumb "data must be encrypted" policy,
just use full disk encryption, state that "all data is encrypted on
disk", and you're done.

Tim 


-- 
Tim Cross



Re: Postgresql database encryption

2018-04-20 Thread Stephen Frost
Greetings,

* Ron (ronljohnso...@gmail.com) wrote:
> On 04/20/2018 06:11 PM, Stephen Frost wrote:
> >* Ron (ronljohnso...@gmail.com) wrote:
> >>On 04/20/2018 03:55 PM, Vick Khera wrote:
> >>>On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma wrote:
> >>>For anyone to offer a proper solution, you need to say what purpose your
> >>>encryption will serve. Does the data need to be encrypted at rest? Does it
> >>>need to be encrypted in memory? Does it need to be encrypted at the
> >>>database level or at the application level? Do you need to be able to
> >>>query the data? There are all sorts of scenarios and use cases, and you
> >>>need to be more specific.
> >>>
> >>>For me, using whole-disk encryption solved my need, which was to ensure
> >>>that the data on disk cannot be read once removed from the server.
> >>Someone really needs to explain that to me. My company-issued laptop has
> >>WDE, and that's great for when the machine is shut down and I'm carrying it
> >>from place to place, but when it's running, all the data is transparently
> >>decrypted for every process that wants to read the data, including malware,
> >>industrial spies,
> >>
> >>Thus, unless you move your DB server on a regular basis, I can't see the
> >>usefulness of WDE on a static machine.
> >The typical concern (aka, attack vector) isn't around moving the DB
> >server on a regular basis or about someone breaking into your data
> >center and stealing your drives, it's making sure that disposal of
> >equipment doesn't result in valuable data being retained on the
> >drives when they leave the data center for replacement or disposal.
> 
> That makes some sense, but years of added CPU overhead to mitigate a problem
> that could be solved by writing zeros to the disk as a step in the decomm
> process seems more than a bit wasteful.

This presumes that the drive is still functional enough to be able to
overwrite it with zeros, and that overwriting it with zeros would be
sufficient.  Neither is, necessarily, accurate.

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: Postgresql database encryption

2018-04-20 Thread Rob Sargent



On 04/20/2018 05:43 PM, Ron wrote:



On 04/20/2018 06:11 PM, Stephen Frost wrote:

Greetings,

* Ron (ronljohnso...@gmail.com) wrote:

On 04/20/2018 03:55 PM, Vick Khera wrote:

On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma wrote:
For anyone to offer a proper solution, you need to say what purpose your
encryption will serve. Does the data need to be encrypted at rest? Does it
need to be encrypted in memory? Does it need to be encrypted at the
database level or at the application level? Do you need to be able to
query the data? There are all sorts of scenarios and use cases, and you
need to be more specific.

For me, using whole-disk encryption solved my need, which was to ensure
that the data on disk cannot be read once removed from the server.
Someone really needs to explain that to me. My company-issued laptop has
WDE, and that's great for when the machine is shut down and I'm carrying it
from place to place, but when it's running, all the data is transparently
decrypted for every process that wants to read the data, including malware,
industrial spies,

Thus, unless you move your DB server on a regular basis, I can't see the
usefulness of WDE on a static machine.

The typical concern (aka, attack vector) isn't around moving the DB
server on a regular basis or about someone breaking into your data
center and stealing your drives, it's making sure that disposal of
equipment doesn't result in valuable data being retained on the
drives when they leave the data center for replacement or disposal.


That makes some sense, but years of added CPU overhead to mitigate a 
problem that could be solved by writing zeros to the disk as a step in 
the decomm process seems more than a bit wasteful.



Well, you probably need to drive a nail through the drive, but that's a
technical detail :)





Re: Postgresql database encryption

2018-04-20 Thread Ron



On 04/20/2018 06:11 PM, Stephen Frost wrote:

Greetings,

* Ron (ronljohnso...@gmail.com) wrote:

On 04/20/2018 03:55 PM, Vick Khera wrote:

On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma wrote:
Someone really needs to explain that to me. My company-issued laptop has
WDE, and that's great for when the machine is shut down and I'm carrying it
from place to place, but when it's running, all the data is transparently
decrypted for every process that wants to read the data, including malware,
industrial spies,

Thus, unless you move your DB server on a regular basis, I can't see the
usefulness of WDE on a static machine.

The typical concern (aka, attack vector) isn't around moving the DB
server on a regular basis or about someone breaking into your data
center and stealing your drives, it's making sure that disposal of
equipment doesn't result in valuable data being retained on the
drives when they leave the data center for replacement or disposal.


That makes some sense, but years of added CPU overhead to mitigate a problem 
that could be solved by writing zeros to the disk as a step in the decomm 
process seems more than a bit wasteful.



--
Angular momentum makes the world go 'round.



Re: Postgresql database encryption

2018-04-20 Thread Stephen Frost
Greetings,

* Ron (ronljohnso...@gmail.com) wrote:
> On 04/20/2018 03:55 PM, Vick Khera wrote:
> >On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma wrote:
> >For anyone to offer a proper solution, you need to say what purpose your
> >encryption will serve. Does the data need to be encrypted at rest? Does it
> >need to be encrypted in memory? Does it need to be encrypted at the
> >database level or at the application level? Do you need to be able to
> >query the data? There are all sorts of scenarios and use cases, and you
> >need to be more specific.
> >
> >For me, using whole-disk encryption solved my need, which was to ensure
> >that the data on disk cannot be read once removed from the server.
> 
> Someone really needs to explain that to me. My company-issued laptop has
> WDE, and that's great for when the machine is shut down and I'm carrying it
> from place to place, but when it's running, all the data is transparently
> decrypted for every process that wants to read the data, including malware,
> industrial spies,
> 
> Thus, unless you move your DB server on a regular basis, I can't see the
> usefulness of WDE on a static machine.

The typical concern (aka, attack vector) isn't around moving the DB
server on a regular basis or about someone breaking into your data
center and stealing your drives, it's making sure that disposal of
equipment doesn't result in valuable data being retained on the
drives when they leave the data center for replacement or disposal.

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: Postgresql database encryption

2018-04-20 Thread Ron

On 04/20/2018 03:55 PM, Vick Khera wrote:
On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma wrote:


Hello Guys,

Could someone throw light on the postgresql instance wide or database
wide encryption please? Is this possible in postgresql and been in use
in production?.


For anyone to offer a proper solution, you need to say what purpose your 
encryption will serve. Does the data need to be encrypted at rest? Does it 
need to be encrypted in memory? Does it need to be encrypted at the 
database level or at the application level? Do you need to be able to 
query the data? There are all sorts of scenarios and use cases, and you 
need to be more specific.


For me, using whole-disk encryption solved my need, which was to ensure 
that the data on disk cannot be read once removed from the server.


Someone really needs to explain that to me. My company-issued laptop has 
WDE, and that's great for when the machine is shut down and I'm carrying it 
from place to place, but when it's running, all the data is transparently 
decrypted for every process that wants to read the data, including malware, 
industrial spies,


Thus, unless you move your DB server on a regular basis, I can't see the 
usefulness of WDE on a static machine.


For certain fields in one table, I use application level encryption so 
only the application itself can see the original data. Anyone else 
querying that table sees the encrypted blob, and it is not searchable.


--
Angular momentum makes the world go 'round.


Re: New website

2018-04-20 Thread Adrian Klaver

On 04/20/2018 02:10 PM, Jonathan S. Katz wrote:





While we are on the subject of Mailing Lists, it would help new users if they 
could see the list choices and what each list covers. Basically what you see 
when you go to the Archive section. As it is now going here:

https://lists.postgresql.org/

is a leap into the unknown:)


Yeah…this may come as a surprise, but we’ve had some off-list discussions
about that ;-)

This may be something to move to -www but we now have a situation where we


I can take a hint:)


have information about mailing lists in 3 separate apps:

pgweb
pgarchives
pglister

In which all 3 serve different purposes and we need to iron out the workflow
a bit more between all 3.

Jonathan





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Doubts about replication..

2018-04-20 Thread Laurentius Purba
I use Barman, and I am happy with it.

Just in case you want to try different tools,
https://github.com/dhamaniasad/awesome-postgres.
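
For the Barman route, the day-to-day loop looks roughly like this (a
sketch; 'main' stands for whatever server name is defined in your
barman.conf):

  barman backup main                  # take a new base backup
  barman list-backup main             # see which backups are available
  barman recover main latest /var/lib/pgsql/9.6/data   # restore one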

On Thu, Apr 19, 2018 at 2:48 PM, Edmundo Robles 
wrote:

> Yes, you are right, the replication is not a backup ;).  Actually I back
> up the database daily at 3:00 am, but if the database crashes, the amount
> of data is big!  That is the reason I want to replicate: to reduce the
> data loss.  By the way, a few days ago a job partner did a DELETE with no
> WHERE.
>
> On Thu, Apr 19, 2018 at 1:33 PM, Andreas Kretschmer <
> andr...@a-kretschmer.de> wrote:
>
>>
>>
>> Am 19.04.2018 um 19:57 schrieb Edmundo Robles:
>>
>>> I will use replication as simple backup.
>>>
>>
>> please keep in mind, replication is not a backup. All logical errors on
>> the master (delete from table and forgot the where-condition) will
>> replicated to the standby.
>>
>>
>> Andreas
>>
>> --
>> 2ndQuadrant - The PostgreSQL Support Company.
>> www.2ndQuadrant.com
>>
>>
>>
>
>
> --
>
>


Re: Problem with trigger makes Detail record be invalid

2018-04-20 Thread Adrian Klaver

On 04/20/2018 01:30 PM, PegoraroF10 wrote:

Well, talking about responsibilities, I think one of the responsibilities of
a mature database is that it can only accept data it was configured for. If
you try to store a timestamp in an integer field or a huge numeric value in a


Actually there have been examples on this list where folks have stored a 
timestamp as seconds from an epoch in an integer field. Of course then 
someone has to know what that field really represents. This is not nit 
picking on my part so much as an example of end user inventiveness. To 
that end Postgres has many ways of coming to a solution for a problem. 
Unfortunately, there are paths to a solution that can trip you up. This 
means there is often no simple answer to a problem. Basically, more choices 
means more pre-thinking, testing, re-thinking, repeat as needed.



smallint field, Postgres will block you because that operation is not
acceptable.
So, it's not acceptable to break referential integrity, is it?


That is a maybe:

https://www.postgresql.org/docs/10/static/sql-altertable.html

"
DISABLE TRIGGER [ trigger_name | ALL | USER ]

ALL

Disable or enable all triggers belonging to the table. (This 
requires superuser privilege if any of the triggers are internally 
generated constraint triggers such as those that are used to implement 
foreign key constraints or deferrable uniqueness and exclusion constraints.)



ADD table_constraint [ NOT VALID ]

This form adds a new constraint to a table using the same syntax as 
CREATE TABLE, plus the option NOT VALID, which is currently only allowed 
for foreign key and CHECK constraints. If the constraint is marked NOT 
VALID, the potentially-lengthy initial check to verify that all rows in 
the table satisfy the constraint is skipped. The constraint will still 
be enforced against subsequent inserts or updates (that is, they'll fail 
unless there is a matching row in the referenced table, in the case of 
foreign keys; and they'll fail unless the new row matches the specified 
check constraints). But the database will not assume that the constraint 
holds for all rows in the table, until it is validated by using the 
VALIDATE CONSTRAINT option.

"

https://www.postgresql.org/docs/10/static/sql-createtrigger.html

"SQL specifies that BEFORE DELETE triggers on cascaded deletes fire 
after the cascaded DELETE completes. The PostgreSQL behavior is for 
BEFORE DELETE to always fire before the delete action, even a cascading 
one. This is considered more consistent. There is also nonstandard 
behavior if BEFORE triggers modify rows or prevent updates during an 
update that is caused by a referential action. This can lead to 
constraint violations or stored data that does not honor the referential 
constraint."


https://www.postgresql.org/docs/10/static/plpgsql-trigger.html

"Row-level triggers fired BEFORE can return null to signal the trigger 
manager to skip the rest of the operation for this row (i.e., subsequent 
triggers are not fired, and the INSERT/UPDATE/DELETE does not occur for 
this row). "


A certain amount of this came about because folks are dealing with messy 
data and want to get it into the database first, do the clean up there 
and then apply the RI constraints. Other folks have different ways of 
doing it. It comes down to personal choice. That means though you have 
to know that dangerous paths exist and how to avoid them. A lot of this 
is ingrained in the code and in use in the wild, so I would not expect 
there to be major changes in how things work. Instead, as has already 
been indicated, there may be better documentation on the way detailing 
all the above.
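
To make that messy-data workflow concrete, a minimal sketch (table and
constraint names invented):

  -- load the data first, enforce referential integrity afterwards
  ALTER TABLE detail
      ADD CONSTRAINT detail_master_fk
      FOREIGN KEY (master_id) REFERENCES master (id)
      NOT VALID;

  -- new inserts/updates are checked immediately;
  -- the pre-existing rows are only checked here
  ALTER TABLE detail VALIDATE CONSTRAINT detail_master_fk;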






--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: New website

2018-04-20 Thread Jonathan S. Katz

> On Apr 20, 2018, at 2:57 PM, Adrian Klaver  wrote:
> 
> On 04/20/2018 11:33 AM, Jonathan S. Katz wrote:
>>> On Apr 18, 2018, at 8:38 PM, Adrian Klaver wrote:
>>> 
>>> Clicking on the Mailing List link on front page should take you to page 
>>> that includes the archives.
>> We’ve added a button that brings you to the archives from the homepage.
> 
> Thanks.
> 
> While we are on the subject of Mailing Lists, it would help new users if they 
> could see the list choices and what each list covers. Basically what you see 
> when you go to the Archive section. As it is now going here:
> 
> https://lists.postgresql.org/
> 
> is a leap into the unknown:)

Yeah…this may come as a surprise, but we’ve had some off-list discussions
about that ;-)

This may be something to move to -www but we now have a situation where we
have information about mailing lists in 3 separate apps:

pgweb
pgarchives
pglister

In which all 3 serve different purposes and we need to iron out the workflow
a bit more between all 3.

Jonathan




Re: Postgresql database encryption

2018-04-20 Thread Vick Khera
On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma  wrote:

> Hello Guys,
>
> Could someone throw light on the postgresql instance wide or database wide
> encryption please? Is this possible in postgresql and been in use in
> production?.
>

For anyone to offer a proper solution, you need to say what purpose your
encryption will serve. Does the data need to be encrypted at rest? Does it
need to be encrypted in memory? Does it need to be encrypted at the
database level or at the application level? Do you need to be able to query
the data? There are all sorts of scenarios and use cases, and you need to
be more specific.

For me, using whole-disk encryption solved my need, which was to ensure
that the data on disk cannot be read once removed from the server. For
certain fields in one table, I use application level encryption so only the
application itself can see the original data. Anyone else querying that
table sees the encrypted blob, and it is not searchable.
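
For the column-level cousin of that idea (encrypting in the database
rather than in the application), pgcrypto offers symmetric encryption.
A sketch, with all names invented and the key inline only for brevity;
in practice the key should come from the application so it is never
stored server-side:

  CREATE EXTENSION IF NOT EXISTS pgcrypto;

  CREATE TABLE secrets (
      id      serial PRIMARY KEY,
      payload bytea                -- pgp_sym_encrypt returns bytea
  );

  INSERT INTO secrets (payload)
  VALUES (pgp_sym_encrypt('sensitive text', 'key-from-app'));

  -- without the key only the blob is visible; with it the text comes back
  SELECT pgp_sym_decrypt(payload, 'key-from-app') FROM secrets;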


Re: Problem with trigger makes Detail record be invalid

2018-04-20 Thread PegoraroF10
Well, talking about responsibilities, I think one of the responsibilities of
a mature database is that it can only accept data it was configured for. If
you try to store a timestamp in an integer field or a huge numeric value in a
smallint field, Postgres will block you because that operation is not
acceptable.
So, it's not acceptable to break referential integrity, is it?



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: New website

2018-04-20 Thread Adrian Klaver

On 04/20/2018 11:33 AM, Jonathan S. Katz wrote:


On Apr 18, 2018, at 8:38 PM, Adrian Klaver wrote:


Clicking on the Mailing List link on front page should take you to 
page that includes the archives.


We’ve added a button that brings you to the archives from the homepage.


Thanks.

While we are on the subject of Mailing Lists, it would help new users if 
they could see the list choices and what each list covers. Basically 
what you see when you go to the Archive section. As it is now going here:


https://lists.postgresql.org/

is a leap into the unknown:)



Thanks,

Jonathan





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: New website

2018-04-20 Thread Jonathan S. Katz

> On Apr 18, 2018, at 8:38 PM, Adrian Klaver  wrote:
> 
> Clicking on the Mailing List link on front page should take you to page that 
> includes the archives.

We’ve added a button that brings you to the archives from the homepage.

Thanks,

Jonathan




Re: Dynamically filtering a CTE?

2018-04-20 Thread W. Trevor King
On Fri, Apr 20, 2018 at 09:33:22AM -0700, David G. Johnston wrote:
> On Fri, Apr 20, 2018 at 9:22 AM, W. Trevor King wrote:
> > format('
> > WITH RECURSIVE _ancestors(id, ancestors) AS (
> > SELECT
> >   item.id AS id,
> >   ARRAY[item.ancestor_id] AS ancestors
> > FROM items AS item
> > %s
> > [...]
> >
> > ', condition);
> >
> >   SELECT * FROM ancestors('WHERE item.id = 62324721');
> 
> ​Just keep in mind that this opens up a huge SQL-injection hole in
> your database.  Depending on how its called you might want to
> validation the input text for both whitelist and blacklist items
> before executing it.

I'm not calling it on user-supplied conditions, but yeah, if I were it
would certainly need some guards.  Unfortunately, neither format [1]
nor USING [2,3] seem to have auto-quoting for “make sure this is just
a WHERE condition [4] without side-effects” ;).  I think we'd need a
WHERE-condition data type to support that, just like we'd need a
WHERE-condition data type (or a function data type) to support my
initial idea [5]:

  CREATE OR REPLACE FUNCTION ancestors(condition WHERE-condition-type)
  RETURNS TABLE(id integer, ancestors integer[]) AS
$$
  WITH RECURSIVE _ancestors(id, ancestors) AS (
  SELECT
item.id AS id,
ARRAY[item.ancestor_id] AS ancestors
  FROM items AS item
  WHERE condition -- or, with a function type, condition(item)
UNION ALL
  …
  )
  …
$$ LANGUAGE SQL;

And even if you had a WHERE-condition data type, enforcing the
no-side-effects constraint would be tricky.

Things like blacklisting condition text with semicolons, etc. might
help against unintentional typos, although they seem too easily
avoided to be relied on against potentially malicious user input.
Parsing the condition text as WHERE-clause SQL to look for dangerous
constructs might be strong enough, but seems like a lot of work and
something I'm likely to get wrong if I tried ;).  For now, I'm just
making sure I'm not allowing untrusted users to provide condition
text.
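
For the common case where the "condition" is really just a value, one
injection-safe variant (a sketch under that assumption) keeps the SQL
text fixed and binds the value with USING:

  CREATE OR REPLACE FUNCTION ancestors_of(root_id integer)
    RETURNS TABLE(id integer, ancestors integer[]) AS
  $$
  BEGIN
  RETURN QUERY EXECUTE '
    WITH RECURSIVE _ancestors(id, ancestors) AS (
        SELECT item.id, ARRAY[item.ancestor_id]
        FROM items AS item
        WHERE item.id = $1   -- bound parameter, never spliced text
      UNION ALL
        SELECT descendant.id,
               descendant.ancestors || ancestor.ancestor_id
        FROM _ancestors AS descendant
        JOIN items AS ancestor
          ON descendant.ancestors[array_length(descendant.ancestors, 1)] = ancestor.id
    )
    SELECT id, ancestors[1:array_length(ancestors, 1) - 1]
    FROM _ancestors
    WHERE ancestors[array_length(ancestors, 1)] IS NULL'
  USING root_id;
  END
  $$ LANGUAGE plpgsql STABLE;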

Thanks,
Trevor

[1]: 
https://www.postgresql.org/docs/10/static/functions-string.html#FUNCTIONS-STRING-FORMAT
[2]: 
https://www.postgresql.org/docs/10/static/plpgsql-control-structures.html#id-1.8.8.8.3.4
[3]: 
https://www.postgresql.org/docs/10/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
[4]: https://www.postgresql.org/docs/10/static/sql-select.html#SQL-WHERE
[5]: https://www.postgresql.org/message-id/2018042055.GL27577%40valgrind.us

-- 
This email may be signed or encrypted with GnuPG (http://www.gnupg.org).
For more information, see http://en.wikipedia.org/wiki/Pretty_Good_Privacy


signature.asc
Description: OpenPGP digital signature


Re: Doubts about replication..

2018-04-20 Thread Francisco Olarte
On Thu, Apr 19, 2018 at 8:48 PM, Edmundo Robles  wrote:
> Yes, you are right, the replication is not a backup ;).  Actually I back
> up the database daily at 3:00 am, but if the database crashes, the amount
> of data is big!  That is the reason I want to replicate: to reduce the
> data loss.  By the way, a few days ago a job partner did a DELETE with no
> WHERE.

Replication will happily replicate a DELETE with no WHERE. A backup will
at least give you the day's starting point.

For this kind of thing you need to do log-archiving + base backups.
With this you can use PITR to roll forward from the last full backup
to just before the delete.

Francisco Olarte.
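
A minimal sketch of that setup for the 9.x era (paths and the target
time are placeholders):

  # postgresql.conf -- continuous WAL archiving
  wal_level = replica          # 'archive' or 'hot_standby' before 9.6
  archive_mode = on
  archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'

  # recovery.conf, placed in a restored base backup (pre-v12 layout),
  # to roll forward to just before the stray DELETE
  restore_command = 'cp /mnt/archive/%f %p'
  recovery_target_time = '2018-04-20 10:00:00'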



Re: Postgresql database encryption

2018-04-20 Thread James Keener
What requirements do you have? Would enabling full disk encryption suit your
needs?

On April 20, 2018 11:14:30 AM CDT, Tatsuo Ishii  wrote:
>> Could someone throw light on the postgresql instance wide or database
>wide
>> encryption please? Is this possible in postgresql and been in use in
>> production?.
>
>As far as I know there's no open source solution for instance or
>database wide encryption.
>
>If commercial solutions are ok for you, there are some. Please ask me
>in a private email (I don't want to spam the list).
>
>Best regards,
>--
>Tatsuo Ishii
>SRA OSS, Inc. Japan
>English: http://www.sraoss.co.jp/index_en.php
>Japanese:http://www.sraoss.co.jp

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: Dynamically filtering a CTE?

2018-04-20 Thread David G. Johnston
On Fri, Apr 20, 2018 at 9:22 AM, W. Trevor King  wrote:

> format('
> WITH RECURSIVE _ancestors(id, ancestors) AS (
> SELECT
>   item.id AS id,
>   ARRAY[item.ancestor_id] AS ancestors
> FROM items AS item
> %s
> [...]
>
> ', condition);
>
>   SELECT * FROM ancestors('WHERE item.id = 62324721');
>

Just keep in mind that this opens up a huge SQL-injection hole in your
database.  Depending on how it's called you might want to validate the
input text for both whitelist and blacklist items before executing it.

David J.


Re: Dynamically filtering a CTE?

2018-04-20 Thread W. Trevor King
On Thu, Apr 19, 2018 at 05:28:00PM -0700, David G. Johnston wrote:
> On Thursday, April 19, 2018, W. Trevor King wrote:
> > Is there an idiomatic way to approach this problem?
>
> I would use pl/pgsql as the language and build a query using a
> combination of text literals and the format() function - invoking
> via pl/pgsql's EXECUTE command.

That works.  I've ended up with:

  CREATE OR REPLACE FUNCTION ancestors(condition text)
RETURNS TABLE(id integer, ancestors integer[]) AS
  $$
  BEGIN
  RETURN QUERY EXECUTE format('
WITH RECURSIVE _ancestors(id, ancestors) AS (
SELECT
  item.id AS id,
  ARRAY[item.ancestor_id] AS ancestors
FROM items AS item
%s
  UNION ALL
SELECT
  descendant.id AS id,
  descendant.ancestors || ancestor.ancestor_id AS ancestors
FROM _ancestors AS descendant
JOIN items as ancestor
  ON descendant.ancestors[array_length(descendant.ancestors, 1)] = ancestor.id
)
SELECT
  id,
  ancestors[1:array_length(ancestors, 1) - 1] AS ancestors -- drop the trailing NULL
FROM _ancestors
WHERE ancestors[array_length(ancestors, 1)] IS NULL -- remove non-terminal recursion
', condition);
  END
  $$ LANGUAGE plpgsql STABLE;

which you can use like:

  SELECT * FROM ancestors('WHERE item.id = 62324721');

or (without filtering, for the full, slow CTE):

  SELECT * FROM ancestors('');

Thanks,
Trevor

-- 
This email may be signed or encrypted with GnuPG (http://www.gnupg.org).
For more information, see http://en.wikipedia.org/wiki/Pretty_Good_Privacy




Re: Postgresql database encryption

2018-04-20 Thread Tatsuo Ishii
> Could someone throw light on the postgresql instance-wide or database-wide
> encryption please? Is this possible in PostgreSQL and has it been used in
> production?

As far as I know there's no open source solution for instance or
database wide encryption.

If commercial solutions are ok for you, there are some. Please ask me
in a private email (I don't want to spam the list).

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp



Re: Postgresql database encryption

2018-04-20 Thread Igor Korot
Hi, Vikas,

On Fri, Apr 20, 2018 at 10:24 AM, Vikas Sharma  wrote:
> Hello Guys,
>
> Could someone throw light on the postgresql instance-wide or database-wide
> encryption please? Is this possible in PostgreSQL and has it been used in
> production?
>
> This is a requirement in our production implementation.

Yes, it is possible.
We have a PostgreSQL DB encrypted in our project.

I'm not sure what was used though - OS or DB implementation. We use RHEL6.

Thank you.

>
> Many Thanks
> Vikas Sharma



Postgresql database encryption

2018-04-20 Thread Vikas Sharma
Hello Guys,

Could someone throw light on the postgresql instance-wide or database-wide
encryption please? Is this possible in PostgreSQL and has it been used in
production?

This is a requirement in our production implementation.

Many Thanks
Vikas Sharma


Re: Problem with trigger makes Detail record be invalid

2018-04-20 Thread Adrian Klaver

On 04/20/2018 07:21 AM, David G. Johnston wrote:

On 04/19/2018 06:49 PM, PegoraroF10 wrote:
On Fri, Apr 20, 2018 at 6:55 AM, Adrian Klaver wrote:


I know I did that trigger incorrectly but referential integrity is
obligatory.


I would agree if the FK relationship was entirely driven by the
system trigger e.g:

alter table Detail add constraint FKMasterDetail foreign key (Master_ID)
references Master(ID) on update cascade on delete cascade;

As soon as you added your UPDATE/DELETE trigger you took on
responsibility for how the data was passed around.


Such responsibility is an artifact of our specific implementation and 
not an inherent property of writing triggers in the presence of FK 
constraints.


https://en.wikipedia.org/wiki/Foreign_key#Triggers



We've left a foot-gun lying around and should not be surprised when 
less experienced users pick it up and shoot themselves in the foot.


IOW, I do agree with the OP - it's just an unfortunate reality that this 
isn't how things work today.  Whether one can accept and work within 
this reality is a personal decision.


This does reinforce that testing the restoration of one's backups is 
important.


David J.




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: A couple of pg_dump questions

2018-04-20 Thread Vick Khera
On Thu, Apr 19, 2018 at 6:39 PM, Ron  wrote:

>
> $ pg_dump --host=farawaysrvr -Fc $REMOTEDB > /local/disk/backups/$REMOTEDB.
> dump
>
> Is the data compressed on the remote server (thus minimizing traffic on
> the wire), or locally?  (I'd test this myself, but the company has really
> strict firewall rules in place.)
>
>
The compression is done locally. If your network is slower than
compression, then you could tunnel it through an SSH connection with
compression enabled.

As for your parallel dump, you cannot do that with stdout and the "c" format.
You have to tell pg_dump a directory name to write to, as the directory
format is the only one that supports parallel dumps.
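
For example (job count and local paths are illustrative):

  # a parallel dump needs the directory format
  pg_dump --host=farawaysrvr -Fd -j 4 -f /local/disk/backups/$REMOTEDB.dir $REMOTEDB

  # or keep -Fc and compress the wire traffic through an SSH tunnel
  ssh -C -N -L 5433:localhost:5432 farawaysrvr &
  pg_dump --host=localhost --port=5433 -Fc $REMOTEDB > /local/disk/backups/$REMOTEDB.dump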


Re: Problem with trigger makes Detail record be invalid

2018-04-20 Thread David G. Johnston
On 04/19/2018 06:49 PM, PegoraroF10 wrote:
On Fri, Apr 20, 2018 at 6:55 AM, Adrian Klaver wrote:

> I know I did that trigger incorrectly but referential integrity is
>> obligatory.
>>
>
> I would agree if the FK relationship was entirely driven by the system
> trigger e.g:
>
> alter table Detail add constraint FKMasterDetail foreign key (Master_ID)
> references Master(ID) on update cascade on delete cascade;
>
> As soon as you added your UPDATE/DELETE trigger you took on responsibility
> for how the data was passed around.


Such responsibility is an artifact of our specific implementation and not
an inherent property of writing triggers in the presence of FK constraints.

We've left a foot-gun lying around and should not be surprised when less
experienced users pick it up and shoot themselves in the foot.

IOW, I do agree with the OP - it's just an unfortunate reality that this
isn't how things work today.  Whether one can accept and work within this
reality is a personal decision.

This does reinforce that testing the restoration of one's backups is
important.

David J.


Re: Problem with trigger makes Detail record be invalid

2018-04-20 Thread Adrian Klaver

On 04/19/2018 06:49 PM, PegoraroF10 wrote:

Correct, that delete did a partial commit. And this is absolutely
unacceptable.


Yet a known possible outcome. See the section on Triggers towards bottom 
of page:


https://en.wikipedia.org/wiki/Foreign_key


I know I did that trigger incorrectly but referential integrity is
obligatory.


I would agree if the FK relationship was entirely driven by the system 
trigger e.g:


alter table Detail add constraint FKMasterDetail foreign key (Master_ID)
references Master(ID) on update cascade on delete cascade;

As soon as you added your UPDATE/DELETE trigger you took on 
responsibility for how the data was passed around. I understand that 
this was not communicated as well as it should be in the docs.
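
For anyone following along, here is a minimal sketch of the foot-gun 
(table names are taken from the thread; the trigger body is deliberately 
dumb):

CREATE FUNCTION detail_guard() RETURNS trigger AS
$$
BEGIN
    -- Returning NULL from a BEFORE row trigger skips the operation on
    -- this row, including the delete issued by ON DELETE CASCADE.
    RETURN NULL;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER detail_guard BEFORE DELETE ON Detail
    FOR EACH ROW EXECUTE PROCEDURE detail_guard();

DELETE FROM Master WHERE id = 1;
-- The Master row is gone, but its Detail rows survive as orphans.

Returning OLD instead of NULL lets the cascaded delete through and keeps 
the constraint honest.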



Imagine if I have a database crash and need to restore as soon as possible.
How much time I'll spend removing those records from a backup to get the
entire database restored properly.


Myself, having written more than my fair share of poorly thought out 
trigger functions, I test new ones extensively before I release them 
into the wild.




Well, I'll review all my triggers. And I have hundreds of them.







--
Adrian Klaver
adrian.kla...@aklaver.com



Re: postgres on physical replica crashes

2018-04-20 Thread Hannes Erven

Hi Greig,


just last week I experienced the same situation as you on a 10.3 
physical replica (it even has checksums activated), and a few months ago 
on 9.6.
We used the same resolution as you did, and so far we haven't noticed any 
problems with data integrity on the replicas.




The logs were as follows:
2018-04-13 06:31:16.947 CEST [15603] FATAL:  terminating walreceiver 
process due to timeout
2018-04-13 06:31:16.948 CEST [15213] FATAL:  invalid memory alloc 
request size 4280303616
2018-04-13 06:31:16.959 CEST [15212] LOG:  startup process (PID 15213) 
exited with exit code 1
2018-04-13 06:31:16.959 CEST [15212] LOG:  terminating any other active 
server processes
2018-04-13 06:31:16.959 CEST [19838] user@db WARNING:  terminating 
connection because of crash of another server process
2018-04-13 06:31:16.959 CEST [19838] user@db DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and 
exit, because another server process exited abnormally and possibly 
corrupted shared memory.
2018-04-13 06:31:16.959 CEST [19838] user@db HINT:  In a moment you 
should be able to reconnect to the database and repeat your command.



This replica then refused to start up:
2018-04-13 09:25:15.941 CEST [1957] LOG:  entering standby mode
2018-04-13 09:25:15.947 CEST [1957] LOG:  redo starts at 1C/69C0FF30
2018-04-13 09:25:15.951 CEST [1957] LOG:  consistent recovery state 
reached at 1C/69D9A9C0
2018-04-13 09:25:15.952 CEST [1956] LOG:  database system is ready to 
accept read only connections
2018-04-13 09:25:15.953 CEST [1957] FATAL:  invalid memory alloc request 
size 4280303616
2018-04-13 09:25:15.954 CEST [1956] LOG:  startup process (PID 1957) 
exited with exit code 1



... until the WAL files from the hot standby's pg_wal were manually 
removed and re-downloaded from the primary.


Unfortunately I did not collect hard evidence, but I think I saw the 
primary's replication slot's restart point was set to a position /after/ 
the standby's actual restart location. This time, the error was noticed 
immediately and the required WAL was still present on the master.



A few months ago I experienced the same situation on a 9.6 cluster, but 
that was not noticed for a long time, and - despite using a replication 
slot! - the primary had already removed required segments. Fortunately I 
could get them from a tape backup...




Best regards,

-hannes




On 2018-04-18 at 18:16, greigwise wrote:

Hello.  I've had several instances where postgres on my physical replica
under version 9.6.6 is crashing with messages like the following in the
logs:

2018-04-18 05:43:26 UTC dbname 5acf5e4a.6918 dbuser DETAIL:  The postmaster
has commanded this server process to roll back the current transaction and
exit, because another server process exited abnormally and possibly
corrupted shared memory.
2018-04-18 05:43:26 UTC dbname 5acf5e4a.6918 dbuser HINT:  In a moment you
should be able to reconnect to the database and repeat your command.
2018-04-18 05:43:26 UTC dbname 5acf5e39.68e5 dbuser WARNING:  terminating
connection because of crash of another server process
2018-04-18 05:43:26 UTC dbname 5acf5e39.68e5 dbuser DETAIL:  The postmaster
has commanded this server process to roll back the current transaction and
exit, because another server process exited abnormally and possibly
corrupted shared memory.
2018-04-18 05:43:26 UTC dbname 5acf5e39.68e5 dbuser HINT:  In a moment you
should be able to reconnect to the database and repeat your command.
2018-04-18 05:43:27 UTC  5acf5e12.6819  LOG:  database system is shut down

When this happens, what I've found is that I can go into the pg_xlog
directory on the replica and remove all the log files; then postgres will
restart and things seem to come back up normally.

So, the question is what's going on here... is the log maybe getting corrupt
in transmission somehow?  Should I be concerned about the viability of my
replica after having restarted in the described fashion?

Thanks,
Greig Wise



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html






Re: pg_upgrade help

2018-04-20 Thread Fabio Pardi
Hi Akshay,

I'm glad it worked.

* Your new data folder will be /var/ericsson/esm-data/postgresql-data-9.4/, 
so you should set PGDATA accordingly.

* analyze_new_cluster.sh runs on the new cluster, 9.4. Indeed you should start 
the db first, as mentioned in the upgrade message.


If you are happy with your upgrade, you can clean up the leftovers by running:

 delete_old_cluster.sh
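
Putting it together, something like this (paths taken from your earlier
mails):

 export PGDATA=/var/ericsson/esm-data/postgresql-data-9.4
 /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl start
 ./analyze_new_cluster.sh   # regenerate optimizer statistics
 ./delete_old_cluster.sh    # only once you are sure the 8.4 data can go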



regards,

fabio pardi

On 04/20/2018 11:24 AM, Akshay Ballarpure wrote:
> Hi Fabio,
> *Thanks so much for figuring out the issue..!!! Much appreciated.*
> I have stopped both postgres versions (8.4 and 9.4).
> 
> -bash-4.2$ export PGDATA=/var/ericsson/esm-data/postgresql-data   - 
> postgresql 8.4
> -bash-4.2$ pg_ctl stop -mfast
> waiting for server to shut down done
> server stopped
> 
> 
> -bash-4.2$ export PGDATA=/var/ericsson/esm-data/postgresql-data-9.4/   - 
> postgresql 9.4
> -bash-4.2$ ps -eaf | grep postgre^C
> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl stop -mfast
> waiting for server to shut down done
> server stopped
> 
> 
> And I set the below environment variables on the terminal where I ran 
> pg_upgrade, and *it's working fine. Thanks so much for figuring out the 
> issue..!!! Much appreciated.*
> 
> -bash-4.2$ echo $PGDATA
> /var/ericsson/esm-data/postgresql-data  - postgresql 8.4
> -bash-4.2$ echo $PGHOST
> /var/run/postgresql
> 
> 
> -bash-4.2$ env | grep PG
> PGHOST=/var/run/postgresql
> PGDATA=/var/ericsson/esm-data/postgresql-data
> 
> 
> /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade --old-bindir=/usr/bin 
> --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin 
> --old-datadir=/var/ericsson/esm-data/postgresql-data 
> --new-datadir=/var/ericsson/esm-data/postgresql-data-9.4
> 
> 
> 
> Performing Consistency Checks
> -
> Checking cluster versions                                   ok
> Checking database user is a superuser                       ok
> Checking database connection settings                       ok
> Checking for prepared transactions                          ok
> Checking for reg* system OID user data types                ok
> Checking for contrib/isn with bigint-passing mismatch       ok
> Checking for invalid "line" user columns                    ok
> Checking for large objects                                  ok
> Creating dump of global objects                             ok
> Creating dump of database schemas
>                                                             ok
> Checking for presence of required libraries                 ok
> Checking database user is a superuser                       ok
> Checking for prepared transactions                          ok
> 
> If pg_upgrade fails after this point, you must re-initdb the
> new cluster before continuing.
> 
> Performing Upgrade
> --
> Analyzing all rows in the new cluster                       ok
> Freezing all rows on the new cluster                        ok
> Deleting files from new pg_clog                             ok
> Copying old pg_clog to new server                           ok
> Setting next transaction ID and epoch for new cluster       ok
> Deleting files from new pg_multixact/offsets                ok
> Setting oldest multixact ID on new cluster                  ok
> Resetting WAL archives                                      ok
> Setting frozenxid and minmxid counters in new cluster       ok
> Restoring global objects in the new cluster                 ok
> Adding support functions to new cluster                     ok
> Restoring database schemas in the new cluster
>                                                             ok
> Setting minmxid counter in new cluster                      ok
> Removing support functions from new cluster                 ok
> Copying user relation files
>                                                             ok
> Setting next OID for new cluster                            ok
> Sync data directory to disk                                 ok
> Creating script to analyze new cluster                      ok
> Creating script to delete old cluster                       ok
> Checking for large objects                                  ok
> 
> Upgrade Complete
> 
> Optimizer statistics are not transferred by pg_upgrade so,
> once you start the new server, consider running:
>     analyze_new_cluster.sh
> 
> Running this script will delete the old cluster's data files:
>     delete_old_cluster.sh
> 
> 
> 
> Now a few more questions..
> 
> I migrated  export PGDATA=/var/ericsson/esm-data/postgresql-data - postgresql 
> 8.4
> I can start 9.4 with the above PGDATA, right?
> analyze_new_cluster.sh -- will this script be from 9.4?
> 
> 
> 
> 
> 
> 
> 
> With Best Regards
> Akshay
> Ericsson OSS MON
> Tata Consultancy Services
> Mailto: akshay.ballarp...@tcs.com
> Website: http://www.tcs.com 
> 
> Experience certainty.        IT Services
>                        

Re: pg_upgrade help

2018-04-20 Thread Akshay Ballarpure
Hi Fabio,
Thanks so much for figuring out the issue..!!! Much appreciated.
I have stopped both postgres versions (8.4 and 9.4). 

-bash-4.2$ export PGDATA=/var/ericsson/esm-data/postgresql-data   - 
postgresql 8.4
-bash-4.2$ pg_ctl stop -mfast
waiting for server to shut down done
server stopped


-bash-4.2$ export PGDATA=/var/ericsson/esm-data/postgresql-data-9.4/   - 
postgresql 9.4
-bash-4.2$ ps -eaf | grep postgre^C
-bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl stop -mfast
waiting for server to shut down done
server stopped


And I set the below environment variables on the terminal where I ran 
pg_upgrade, and it's working fine. Thanks so much for figuring out the 
issue..!!! Much appreciated.

-bash-4.2$ echo $PGDATA
/var/ericsson/esm-data/postgresql-data  - postgresql 8.4
-bash-4.2$ echo $PGHOST
/var/run/postgresql


-bash-4.2$ env | grep PG
PGHOST=/var/run/postgresql
PGDATA=/var/ericsson/esm-data/postgresql-data


/opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade --old-bindir=/usr/bin 
--new-bindir=/opt/rh/rh-postgresql94/root/usr/bin 
--old-datadir=/var/ericsson/esm-data/postgresql-data 
--new-datadir=/var/ericsson/esm-data/postgresql-data-9.4



Performing Consistency Checks
-
Checking cluster versions   ok
Checking database user is a superuser   ok
Checking database connection settings   ok
Checking for prepared transactions  ok
Checking for reg* system OID user data typesok
Checking for contrib/isn with bigint-passing mismatch   ok
Checking for invalid "line" user columnsok
Checking for large objects  ok
Creating dump of global objects ok
Creating dump of database schemas
ok
Checking for presence of required libraries ok
Checking database user is a superuser   ok
Checking for prepared transactions  ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
--
Analyzing all rows in the new cluster   ok
Freezing all rows on the new clusterok
Deleting files from new pg_clog ok
Copying old pg_clog to new server   ok
Setting next transaction ID and epoch for new cluster   ok
Deleting files from new pg_multixact/offsetsok
Setting oldest multixact ID on new cluster  ok
Resetting WAL archives  ok
Setting frozenxid and minmxid counters in new cluster   ok
Restoring global objects in the new cluster ok
Adding support functions to new cluster ok
Restoring database schemas in the new cluster
ok
Setting minmxid counter in new cluster  ok
Removing support functions from new cluster ok
Copying user relation files
ok
Setting next OID for new clusterok
Sync data directory to disk ok
Creating script to analyze new cluster  ok
Creating script to delete old cluster   ok
Checking for large objects  ok

Upgrade Complete

Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
delete_old_cluster.sh



Now a few more questions..

I migrated  export PGDATA=/var/ericsson/esm-data/postgresql-data - 
postgresql 8.4 
I can start 9.4 with the above PGDATA, right?
analyze_new_cluster.sh -- will this script be from 9.4?







With Best Regards
Akshay
Ericsson OSS MON
Tata Consultancy Services
Mailto: akshay.ballarp...@tcs.com
Website: http://www.tcs.com

Experience certainty.   IT Services
Business Solutions
Consulting





From:   Akshay Ballarpure/HYD/TCS
To: Fabio Pardi 
Cc: pgsql-general@lists.postgresql.org
Date:   04/19/2018 06:24 PM
Subject:Re: pg_upgrade help


Hi Fabio,
I think you have found the problem. Please find o/p below.


-bash-4.2$ ps -aef | grep postgres
postgres   478 1  0 13:40 ?00:00:00 /usr/bin/postgres -p 50432 
-D /var/ericsson/esm-data/postgresql-data/
postgres   490   478  0 13:40 ?00:00:00 postgres: logger process
postgres   492   478  0 13:40 ?00:00:00 postgres: writer process
postgres   493   478  0 13:40 ?00:00:00 postgres: wal writer 
process
postgres   494

Re: Doubts about replication..

2018-04-20 Thread Stefan Fercot
Hi,

If you want to reduce the amount of data lost, you might want to
consider another type of backup:
https://www.postgresql.org/docs/current/static/continuous-archiving.html

This awesome tool can also help you with it: https://pgbackrest.org/
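
As a rough sketch (stanza name is illustrative; see the pgBackRest docs
for the full configuration):

  pgbackrest --stanza=main stanza-create
  pgbackrest --stanza=main --type=full backup

  # and in postgresql.conf
  archive_command = 'pgbackrest --stanza=main archive-push %p'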


On 04/19/2018 08:48 PM, Edmundo Robles wrote:
> Yes, you are right, the replication is not a backup ;).  Actually I
> back up the database daily at 3:00 am, but if the database crashes the
> amount of data lost is big!  That is the reason I want to replicate: to
> reduce the data loss.  By the way, a few days ago a job partner did a
> DELETE with no WHERE.
>
> On Thu, Apr 19, 2018 at 1:33 PM, Andreas Kretschmer wrote:
>
>
>
> Am 19.04.2018 um 19:57 schrieb Edmundo Robles:
>
> I will use replication as simple backup.
>
>
> please keep in mind, replication is not a backup. All logical
> errors on the master (delete from table and forgot the
> where-condition) will replicated to the standby.
>
>
> Andreas
>
> -- 
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com 
>
>
>
>
>
> -- 
>

Kind regards,

-- 
Stefan FERCOT
http://dalibo.com - http://dalibo.org


