RLS and Table Inheritance

2024-09-12 Thread Sanjay Minni
Hi,

Do RLS policies defined on a parent table apply to its children (with table
inheritance)?
On the parent I have a column 'site_id' with an RLS policy so that only rows
with site_id = current_setting('curr_site_id') are accessible.
However, the policy defined on the parent does not take effect on the
inherited tables. Is it intended to work that way?
Is there a way to propagate the RLS policy down the inheritance hierarchy?
Otherwise I have to redefine the RLS policy on each inherited table
(a significant maintenance effort).

In another post I see conflicting views on this:
https://www.postgresql.org/message-id/d094a87d-9d63-46c9-8c27-631f881b8...@supportex.net
While the original poster seemed to get a prototype working, the response
says that RLS policies are not inherited and need to be re-created on
each table.

thanks
Sanjay
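For what it's worth, policies are attached per table, so one workaround is to generate the same policy for every child from the catalog. The sketch below is illustrative only: the table name (parent_t), column, and setting name are assumptions, not taken from the thread.

```sql
-- Sketch only: parent_t, site_id, and curr_site_id are illustrative names.
-- RLS policies are per table, so this generates the same policy for every
-- table inheriting from parent_t.
ALTER TABLE parent_t ENABLE ROW LEVEL SECURITY;

CREATE POLICY site_isolation ON parent_t
    USING (site_id = current_setting('curr_site_id')::int);

DO $$
DECLARE
    child regclass;
BEGIN
    FOR child IN
        SELECT inhrelid::regclass
        FROM pg_inherits
        WHERE inhparent = 'parent_t'::regclass
    LOOP
        EXECUTE format('ALTER TABLE %s ENABLE ROW LEVEL SECURITY', child);
        EXECUTE format(
            'CREATE POLICY site_isolation ON %s
               USING (site_id = current_setting(''curr_site_id'')::int)',
            child);
    END LOOP;
END $$;
```

As far as I can read the documentation, in an inheritance query that goes through the parent only the policies of the table named in the query (the parent) are applied; the per-child copies above matter when the children are queried directly.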


Re: efficiency random values / sequential ID values in indexes

2024-04-16 Thread Sanjay Minni
Thanks

On Mon, Apr 15, 2024 at 6:19 PM Ron Johnson  wrote:

> On Mon, Apr 15, 2024 at 6:05 AM Sanjay Minni 
> wrote:
>
>> Hi
>>
>> Is there any appreciable difference between using random values and
>> sequential values in indexes?
>>
>> In a multi-tenanted application there is a choice between making the
>> single-field ID's value totally random (a UUID) or generating the numbers
>> with a tenant prefix. Since all access will be within a single tenant,
>> will there be any performance difference between purely random values and
>> tenant-prefix+sequence values?
>>
>
> Two benefits of tenant-prefix+sequence:
>
> 1. In a non-partitioned table, it gives your index "locality of data":
> all of customer X's record pointers are in *this* subtree.  Makes buffers
> more efficient when a customer runs reports.  Bonus points if you then
> regularly CLUSTER the table using that index.
> 2. Makes table partitioning by tenant_id much easier.  That also
> enhances locality of data.
>
> Just make sure that the ID field is BIGINT...
>
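As a sketch of the tenant-prefix idea (the 40-bit split, table, and sequence names are all illustrative assumptions, not from the thread): pack the tenant into the high bits of a BIGINT and partition by tenant.

```sql
-- Illustrative sketch: pack the tenant into the high bits of a BIGINT id
-- and partition by tenant. The bit split and all names are assumptions.
CREATE TABLE orders (
    tenant_id int    NOT NULL,
    id        bigint NOT NULL,   -- (tenant_id << 40) | per-tenant sequence
    payload   text,
    PRIMARY KEY (tenant_id, id)
) PARTITION BY LIST (tenant_id);

CREATE TABLE orders_t42 PARTITION OF orders FOR VALUES IN (42);
CREATE SEQUENCE orders_t42_seq;

-- Composing the key keeps each tenant's ids adjacent in the index:
INSERT INTO orders (tenant_id, id, payload)
VALUES (42, (42::bigint << 40) | nextval('orders_t42_seq'), 'first order');
```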


efficiency random values / sequential ID values in indexes

2024-04-15 Thread Sanjay Minni
Hi

Is there any appreciable difference between using random values and
sequential values in indexes?

In a multi-tenanted application there is a choice between making the
single-field ID's value totally random (a UUID) or generating the numbers
with a tenant prefix. Since all access will be within a single tenant, will
there be any performance difference between purely random values and
tenant-prefix+sequence values?

thanks
Sanjay


Re: Multiple connections over VPN password fail error

2024-02-09 Thread Sanjay Minni
Hi,

For the second user/role over VPN,
even with md5 I get the error:
P3Error PostgreSQL password authentication failed for user ""
I am only able to connect if the method is "trust". Why is this so?

  # IPv4 external connections thru VPN
  #TYPE   DATABASE  USER   ADDRESS  METHOD
  host   all   all scram-sha-256   (or "md5" --
the second user is only able to connect if this is "trust")
regards
Sanjay
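One common cause of this symptom, offered here only as a guess since the thread never resolves it, is a password verifier stored as md5 while pg_hba.conf requires scram-sha-256. A sketch of how to check and fix (role names are the placeholders from the thread):

```sql
-- Guesswork sketch: check whether each role's stored password verifier
-- matches the auth method required in pg_hba.conf. Run as a superuser;
-- user1/user2 are the thread's placeholder names.
SELECT rolname,
       CASE WHEN rolpassword LIKE 'SCRAM-SHA-256%' THEN 'scram-sha-256'
            WHEN rolpassword LIKE 'md5%'           THEN 'md5'
            ELSE 'no password' END AS stored_format
FROM pg_authid
WHERE rolname IN ('user1', 'user2');

-- If the failing role still has an md5 verifier, re-store it under scram:
SET password_encryption = 'scram-sha-256';
ALTER ROLE user2 PASSWORD 'user2pwd';
```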




On Fri, Feb 9, 2024 at 2:44 PM Sanjay Minni  wrote:

> So why do I get a password error when I try to connect two users over VPN
> from the same machine to the same host with the following settings in
> pg_hba.conf? How do I find the issue?
> ( user1:user1pwd@ & user2:user2pwd@ )
>
>   # IPv4 external connections thru VPN
>   #TYPE   DATABASE  USER   ADDRESS  METHOD
>   host   all   all scram-sha-256
> And what's the best option, keeping security in mind?
>
> regards
> Sanjay
>
>
> On Fri, Feb 9, 2024 at 1:26 PM Daniel Gustafsson  wrote:
>
>> > On 9 Feb 2024, at 08:41, Sanjay Minni  wrote:
>>
>> > While trying to make multiple connections with different role names to a
>> single database over VPN, I faced a password error when trying to
>> connect a second user.
>> > It seems I had to change this line in pg_hba.conf and it worked:
>> >
>> >`# IPv4 external connections thru VPN
>> > #TYPE   DATABASE  USER   ADDRESS  METHOD
>> > host   all   all  trust `<=(from the
>> earlier scram-sha-256)
>> >
>> > Is this the way to do it, and is it correct from a security point of view?
>>
>> While correctness and security always needs to be evaluated from the
>> specific
>> needs of an installation, the odds are pretty good that "No" is the
>> correct
>> answer here.  To quote the documentation on the "trust" setting:
>>
>> "Allow the connection unconditionally.  This method allows anyone
>> that
>> can connect to the PostgreSQL database server to login as any
>> PostgreSQL user they wish, without the need for a password or any
>> other
>> authentication."
>>
>> I would recommend immediately reverting back to the scram-sha-256 setting
>> and
>> figuring out why you were unable to login.
>>
>> --
>> Daniel Gustafsson
>>
>>


Re: Multiple connections over VPN password fail error

2024-02-09 Thread Sanjay Minni
So why do I get a password error when I try to connect two users over VPN
from the same machine to the same host with the following settings in
pg_hba.conf? How do I find the issue?
( user1:user1pwd@ & user2:user2pwd@ )

  # IPv4 external connections thru VPN
  #TYPE   DATABASE  USER   ADDRESS  METHOD
  host   all   all scram-sha-256
And what's the best option, keeping security in mind?

regards
Sanjay


On Fri, Feb 9, 2024 at 1:26 PM Daniel Gustafsson  wrote:

> > On 9 Feb 2024, at 08:41, Sanjay Minni  wrote:
>
> > While trying to make multiple connections with different role names to a
> single database over VPN, I faced a password error when trying to
> connect a second user.
> > It seems I had to change this line in pg_hba.conf and it worked:
> >
> >`# IPv4 external connections thru VPN
> > #TYPE   DATABASE  USER   ADDRESS  METHOD
> > host   all   all  trust `<=(from the earlier
> scram-sha-256)
> >
> > Is this the way to do it, and is it correct from a security point of view?
>
> While correctness and security always needs to be evaluated from the
> specific
> needs of an installation, the odds are pretty good that "No" is the correct
> answer here.  To quote the documentation on the "trust" setting:
>
> "Allow the connection unconditionally.  This method allows anyone
> that
> can connect to the PostgreSQL database server to login as any
> PostgreSQL user they wish, without the need for a password or any
> other
> authentication."
>
> I would recommend immediately reverting back to the scram-sha-256 setting
> and
> figuring out why you were unable to login.
>
> --
> Daniel Gustafsson
>
>


Multiple connections over VPN password fail error

2024-02-08 Thread Sanjay Minni
Hi All

While trying to make multiple connections with different role names to a
single database over VPN, I faced a password error when trying to
connect a second user.
It seems I had to change this line in pg_hba.conf and it worked:

   `# IPv4 external connections thru VPN
#TYPE   DATABASE  USER   ADDRESS  METHOD
host   all   all  trust `<=(from the earlier
scram-sha-256)

Is this the way to do it, and is it correct from a security point of view?

regards
Sanjay Minni


Re: timestamp default current_timestamp not working

2021-12-14 Thread Sanjay Minni
It's an INSERT, and my INSERT SQL lists the timestamp column with a NULL
value.
Will it work if the SQL lists the timestamp column even though its value is
NULL?

On Tue, 14 Dec, 2021, 9:20 pm Adrian Klaver, 
wrote:

> On 12/14/21 07:36, Sanjay Minni wrote:
> > Hi
> >
> > I am not getting the timestamp value that I am expecting to be
> > automatically inserted into the column defined as:
> > ...
> > "timestamp" timestamp with time zone DEFAULT CURRENT_TIMESTAMP,
> > ...
> > (I see the above definition in pgadmin4 in the tab SQL.
> > I have set it thru pgadmin4 table->properties->columns by entering
> > CURRENT_TIMESTAMP in the default column)
> >
> > why is the timestamp value not being automatically inserted by the system?
>
> 1) It is a DEFAULT value so if a value is provided in the INSERT that
> will be used.
>
> 2) DEFAULT values only apply to INSERTs not UPDATEs.
>
> >
> > with warm regards
> > Sanjay Minni
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
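Adrian's two points can be illustrated with a minimal sketch (the table name is made up):

```sql
-- Minimal sketch: DEFAULT fires only when the column is omitted or given
-- the DEFAULT keyword; an explicit NULL overrides it.
CREATE TABLE events (
    id          serial PRIMARY KEY,
    "timestamp" timestamptz DEFAULT CURRENT_TIMESTAMP
);

INSERT INTO events DEFAULT VALUES;                  -- default applies
INSERT INTO events ("timestamp") VALUES (DEFAULT);  -- default applies
INSERT INTO events ("timestamp") VALUES (NULL);     -- stores NULL instead
```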


timestamp default current_timestamp not working

2021-12-14 Thread Sanjay Minni
Hi

I am not getting the timestamp value that I am expecting to be
automatically inserted into the column defined as:
...
"timestamp" timestamp with time zone DEFAULT CURRENT_TIMESTAMP,
...
(I see the above definition in pgadmin4 in the tab SQL.
I have set it thru pgadmin4 table->properties->columns by entering
CURRENT_TIMESTAMP in the default column)

why is the timestamp value not being automatically inserted by the system?

with warm regards
Sanjay Minni


any default columns for tracking / auditing purpose

2021-11-20 Thread Sanjay Minni
In Postgres 12 and up, are there any built-in columns available that could
be useful for tracking / auditing purposes?
e.g.
- the real user who entered / updated the data
- the timestamp when the data was entered / updated
Where can I get a list of such default columns (if any) that could be
accessed easily in user applications?

Also, what is the usual practice for creating user IDs in large
multi-tenanted applications with a highly dynamic user population (e.g.
students who enroll and leave after some time), with all users having the
same access:
is a separate user ID created in the system for each user, or is a common
Postgres ID used by the application, with the application responsible for
tracking individual users with its own user IDs? (The latter would reduce
system overhead.)

with warm regards
Sanjay
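PostgreSQL has no built-in audit columns; a common hand-rolled pattern, sketched here with illustrative names, is explicit columns maintained by defaults and a trigger:

```sql
-- Common hand-rolled pattern (names illustrative); PostgreSQL itself has
-- no automatic audit columns.
CREATE TABLE docs (
    id         serial PRIMARY KEY,
    body       text,
    created_by text        NOT NULL DEFAULT current_user,
    created_at timestamptz NOT NULL DEFAULT now(),
    updated_by text,
    updated_at timestamptz
);

CREATE FUNCTION touch_audit_cols() RETURNS trigger AS $$
BEGIN
    NEW.updated_by := current_user;   -- or session_user, as appropriate
    NEW.updated_at := now();
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER docs_touch BEFORE UPDATE ON docs
    FOR EACH ROW EXECUTE FUNCTION touch_audit_cols();
```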


Re: historical log of data records

2021-11-16 Thread Sanjay Minni
Alban,

It's a simple financial transaction processing application; the application
permits editing / updating / deleting of entered data, even multiple times,
but an audit trail tracing the data through all versions to its original
must be preserved.
(As outlined, programmatically I could approach it by keeping a parallel
set of tables and copying each replaced row into the parallel table set,
or by keeping all record versions in a single table with a flag to
indicate the final / current version.)
I am asking whether there are better ways to do it.


with warm regards
Sanjay Minni
+91-9900-902902



On Tue, 16 Nov 2021 at 15:57, Alban Hertroys  wrote:

>
> > On 16 Nov 2021, at 10:20, Laurenz Albe  wrote:
> >
> > On Tue, 2021-11-16 at 13:56 +0530, Sanjay Minni wrote:
> >> I need to keep a copy of old data as the rows are changed.
> >>
> >> For a general RDBMS I could think of keeping all the data in the same
> table with a flag
> >> to indicate older copies of updated /  deleted rows or keep a parallel
> table and copy
> >> these rows into the parallel data under program / trigger control. Each
> has its pros and cons.
> >>
> >> In Postgres, would I have to follow the same methods or are there any
> features / packages available?
> >
> > Yes, I would use one of these methods.
> >
> > The only feature I can think of that may help is partitioning: if you
> have one partition
> > for the current data and one for the deleted data, then updating the
> flag would
> > automatically move the row between partitions, so you don't need a
> trigger.
>
> Are you building (something like) a data-vault? If so, keep in mind that
> you will have a row for every update, not just a single deleted row.
> Enriching the data can be really useful in such cases.
>
> For a data-vault at a previous employer, we determined how to treat new
> rows by comparing a (md5) hash of the new and old rows, adding the hash and
> a validity interval to the stored rows. Historic data went to a separate
> table for each respective current table.
>
> The current tables “inherited” the PK’s from the tables on the source
> systems (this was a data-warehouse DB). Obviously that same PK can not be
> applied to the historic tables where there _will_ be duplicates, although
> they should be at non-overlapping validity intervals.
>
> Alternatively, since this is time-series data, it would probably be a good
> idea to store that in a way optimised for that. TimescaleDB comes to mind,
> or arrays as per Pavel’s suggestion at
> https://stackoverflow.com/questions/68440130/time-series-data-on-postgresql
> .
>
> Regards,
>
> Alban Hertroys
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
>
>
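Alban's hash-plus-validity-interval idea might look roughly like this (the schema is entirely illustrative, not from the thread):

```sql
-- Illustrative sketch of a validity-interval history table with a row hash.
CREATE TABLE txn_hist (
    id         int         NOT NULL,
    payload    text,
    row_hash   text        NOT NULL,   -- md5 of the payload
    valid_from timestamptz NOT NULL DEFAULT now(),
    valid_to   timestamptz             -- NULL = current version
);

-- Close the current version only when the content actually changed:
UPDATE txn_hist
   SET valid_to = now()
 WHERE id = 1
   AND valid_to IS NULL
   AND row_hash <> md5('new payload text');
```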


Re: historical log of data records

2021-11-16 Thread Sanjay Minni
Are you referring to Table Partitioning?
https://www.postgresql.org/docs/14/ddl-partitioning.html

with warm regards
Sanjay Minni
+91-9900-902902
http://planage.com
https://www.linkedin.com/in/sanjayminni/



On Tue, 16 Nov 2021 at 14:50, Laurenz Albe  wrote:

> On Tue, 2021-11-16 at 13:56 +0530, Sanjay Minni wrote:
> > I need to keep a copy of old data as the rows are changed.
> >
> > For a general RDBMS I could think of keeping all the data in the same
> table with a flag
> > to indicate older copies of updated /  deleted rows or keep a parallel
> table and copy
> > these rows into the parallel data under program / trigger control. Each
> has its pros and cons.
> >
> > In Postgres, would I have to follow the same methods or are there any
> features / packages available?
>
> Yes, I would use one of these methods.
>
> The only feature I can think of that may help is partitioning: if you have
> one partition
> for the current data and one for the deleted data, then updating the flag
> would
> automatically move the row between partitions, so you don't need a trigger.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>
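Laurenz's partition-by-flag suggestion could be sketched like this (names illustrative); since PostgreSQL 11, an UPDATE that changes the partition key moves the row between partitions automatically:

```sql
-- Illustrative sketch: partition on the current/historic flag so that
-- flipping the flag moves the row, with no trigger needed.
CREATE TABLE ledger (
    id         int,
    amount     numeric,
    is_current boolean NOT NULL DEFAULT true
) PARTITION BY LIST (is_current);

CREATE TABLE ledger_current PARTITION OF ledger FOR VALUES IN (true);
CREATE TABLE ledger_history PARTITION OF ledger FOR VALUES IN (false);

-- "Deleting" a row is just an update; the row lands in ledger_history:
UPDATE ledger SET is_current = false WHERE id = 1;
```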


reading this group other than thru mails

2021-11-16 Thread Sanjay Minni
Is there any way to read this group other than through email, or to search
old mails other than through the interface at
https://www.postgresql.org/list/pgsql-general? I don't find these very
user-friendly.

Sanjay Minni
+91-9900-902902


historical log of data records

2021-11-16 Thread Sanjay Minni
I need to keep a copy of old data as rows are changed.

For a general RDBMS, I can think of keeping all the data in the same table
with a flag to indicate older copies of updated / deleted rows, or keeping
a parallel table and copying these rows into it under program / trigger
control. Each has its pros and cons.

In Postgres, would I have to follow the same methods, or are there
features / packages available?

regards
Sanjay Minni
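The parallel-table option described in the post can be sketched with a trigger (names illustrative; a real version would copy all columns):

```sql
-- Sketch of the parallel-table approach: archive the old row version
-- before every UPDATE or DELETE.
CREATE TABLE txn (
    id     int PRIMARY KEY,
    amount numeric
);

CREATE TABLE txn_history (LIKE txn);
ALTER TABLE txn_history
    ADD COLUMN archived_at timestamptz NOT NULL DEFAULT now();

CREATE FUNCTION archive_txn() RETURNS trigger AS $$
BEGIN
    INSERT INTO txn_history (id, amount) VALUES (OLD.id, OLD.amount);
    RETURN COALESCE(NEW, OLD);  -- NEW for UPDATE, OLD for DELETE
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER txn_archive BEFORE UPDATE OR DELETE ON txn
    FOR EACH ROW EXECUTE FUNCTION archive_txn();
```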


Approach to creating users in Database

2021-04-27 Thread Sanjay Minni
Hi,

What is the usual approach to creating users / roles in a PostgreSQL
database serving as the data repository for a hosted, multi-tenanted
application with a large number of users?

The two approaches I can think of are:
A. The user exists and is controlled only at the application level; the
application in turn connects to the PostgreSQL database as a single user.
Any user ID to be stored as part of the data is injected by the
application.
B. Each user is also created in the PostgreSQL database, with grants of the
required rights on the tables.

What is the usual practice?
I am unable to see any benefit in (B) above, i.e. recreating each user in
the Postgres database.

So what am I missing? A change in approach later could mean huge rework.

with warm regards
Sanjay Minni
+91-9900-902902
http://planage.com
https://www.linkedin.com/in/sanjayminni/
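A middle ground between (A) and (B), sketched here with made-up names: the application connects as one role but records the end user in a custom session setting, which server-side defaults (or RLS policies) can then read:

```sql
-- Sketch: one database role for the app; the end user travels in a
-- custom session setting (app.current_user is a hypothetical name).
SET app.current_user = 'student_4711';

CREATE TABLE notes (
    id         serial PRIMARY KEY,
    body       text,
    entered_by text NOT NULL DEFAULT current_setting('app.current_user')
);

INSERT INTO notes (body) VALUES ('hello');
-- entered_by is filled from the session setting, not supplied by the app.
```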