Re: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]

2016-12-30 Thread Amitabh Kant
On Fri, Dec 30, 2016 at 12:06 PM, ajmcello  wrote:

> Reducing work_mem shaved about 12 minutes off the query time. Thanks
> for the suggestion. I lowered it to 10MB instead of 100MB.
>
> [SNIP]
>
> >>> [postgresql.conf]
> >>> max_connections = 10
> >>> max_files_per_process = 100
> >>> shared_buffers = 24GB
> >>> max_locks_per_transaction  = 1000
> >>> effective_cache_size = 50GB
> >>> work_mem = 100MB
> >>> maintenance_work_mem = 2GB
> >>> log_min_duration_statement = 1
> >>> checkpoint_completion_target = 0.9
> >>> wal_buffers = 32MB
> >>> default_statistics_target = 100
> >>> listen_addresses = '*'
> >>> port = 5432
> >>> ssl = off
> >>> wal_sync_method = fdatasync
> >>> synchronous_commit = on
> >>> fsync = off
> >>> wal_level = minimal
> >>> #client_min_messages = fatal
> >>> #log_min_messages = fatal
> >>> #log_min_error_statement = fatal
> >>> datestyle = 'iso, mdy'
> >>> debug_pretty_print = off
> >>> debug_print_parse = off
> >>> debug_print_plan = off
> >>> debug_print_rewritten = off
> >>> default_text_search_config = 'pg_catalog.english'
> >>> enable_bitmapscan = on
> >>> enable_hashagg = on
> >>> enable_hashjoin = on
> >>> enable_indexonlyscan = on
> >>> enable_indexscan = on
> >>> enable_material = on
> >>> enable_mergejoin = on
> >>> enable_nestloop = on
> >>> enable_seqscan = on
> >>> enable_sort = on
> >>> enable_tidscan = on
> >>> from_collapse_limit = 8
> >>> geqo = on
> >>> geqo_threshold = 12
> >>> log_checkpoints = off
> >>>
> >>> log_connections = off
> >>> log_disconnections = off
> >>> log_duration = off
> >>> log_executor_stats = off
> >>> log_hostname = off
> >>> log_parser_stats = off
> >>> log_planner_stats = off
> >>> log_replication_commands = off
> >>> log_statement_stats = off
> >>> log_timezone = 'UTC'
> >>> max_wal_size = 1GB
> >>> min_wal_size = 80MB
> >>> shared_preload_libraries = '$libdir/dbms_pipe,$libdir/edb_gen'
> >>> stats_temp_directory = 'pg_stat_tmp'
> >>> timezone = 'US/Pacific'
> >>> track_activities = on
> >>> track_counts = on
> >>> track_io_timing = off
> >>>
> >>>
> >>> Thanks in advance.
> >>>
> >>>
> >>>
> >>>
> >>
> >>
>


The number of connections that you are attempting from Postgres is way too
high. You should be using a connection pooler like pgbouncer, and reduce
the number of connections at the Postgres level.

Amitabh


Re: [GENERAL] Default column value [ANSWERED]

2016-12-30 Thread Adrian Klaver

On 12/30/2016 08:51 AM, Rich Shepard wrote:

On Fri, 30 Dec 2016, David G. Johnston wrote:


"The CHECK clause specifies an expression producing a Boolean result
which
new or updated rows must satisfy for an insert or update operation to
succeed. Expressions evaluating to TRUE or UNKNOWN succeed."

NULL == "UNKNOWN"


David,

  I forgot about that. Thanks for pointing it out to me.


What it comes down to is if you do not want NULL values in a column then 
specify NOT NULL on the column.
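
For example, a quick sketch (hypothetical table name):

test=> create table not_null_test(id int, fld_1 varchar NOT NULL DEFAULT 'test');
CREATE TABLE
test=> insert into not_null_test values (1, NULL);
ERROR:  null value in column "fld_1" violates not-null constraint
DETAIL:  Failing row contains (1, null).
test=> insert into not_null_test values (2);
INSERT 0 1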




Regards,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Default column value

2016-12-30 Thread Joshua D. Drake

On 12/30/2016 06:46 AM, Adrian Klaver wrote:

On 12/30/2016 06:38 AM, Rich Shepard wrote:



test=> \d default_test
  Table "public.default_test"
 Column |   Type| Modifiers
+---+---
 id | integer   |
 fld_1  | character varying |





To further illustrate this, NULL means UNKNOWN, not DEFAULT. Using 
Adrian's example:


postgres=# create table default_test(id int, fld_1 varchar DEFAULT NULL);
CREATE TABLE
postgres=# INSERT into default_test VALUES(1,NULL);
INSERT 0 1
postgres=# INSERT into default_test VALUES(1,DEFAULT);
INSERT 0 1
postgres=# select * from default_test ;
 id | fld_1
+---
  1 |
  1 |
(2 rows)

postgres=# alter table default_test alter column fld_1 set default now();
ALTER TABLE
postgres=# INSERT into default_test VALUES(1,DEFAULT);
INSERT 0 1
postgres=# INSERT into default_test VALUES(1,NULL);
INSERT 0 1
postgres=# select * from default_test ;
 id | fld_1
+---
  1 |
  1 |
  1 | 2016-12-30 09:11:11.170948-08
  1 |
(4 rows)

Sincerely,

JD

--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.




Re: [GENERAL] LYDB: Feasible to use PG roles instead of application-level security?

2016-12-30 Thread Karsten Hilbert
On Fri, Dec 30, 2016 at 10:23:44AM -0500, Stephen Frost wrote:

> One area that isn't fully addressed with the PG auth model today is
> partial access to a certain column.  Consider a table where you want
> users to have access to all of the rows and all of the columns *except*
> for column X for rows where ID is > 1000.  The PG auth model today can
> be used to say "you can't access column X" or to say "you can't access
> rows where ID > 1000" but you can't combine those, yet.

Do you mean that there is currently no way to say:

if special_column is NOT in the SELECT list:
show all rows
if special_column IS in the SELECT list:
show only those rows where special_column > 1000

?

Thanks,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Default column value [ANSWERED]

2016-12-30 Thread Rich Shepard

On Fri, 30 Dec 2016, David G. Johnston wrote:


"The CHECK clause specifies an expression producing a Boolean result which
new or updated rows must satisfy for an insert or update operation to
succeed. Expressions evaluating to TRUE or UNKNOWN succeed."

NULL == "UNKNOWN"


David,

  I forgot about that. Thanks for pointing it out to me.

Regards,

Rich




Re: [GENERAL] Default column value

2016-12-30 Thread David G. Johnston
On Fri, Dec 30, 2016 at 9:19 AM, Rich Shepard 
wrote:

> On Fri, 30 Dec 2016, Adrian Klaver wrote:
>
> DEFAULT is what the column is set to if the user does not specify a
>> value. As shown above a user can supply a NULL value. To guard against
>> that the NOT NULL constraint is required.
>>
>
>   One more case I'd appreciate having clarified: when the column's value has
> a check constraint with acceptable values in a list. For example,
>
> param_units VARCHAR(8) DEFAULT 'mg/L'
> CONSTRAINT param_units
>   CHECK (param_units IN ('ppm', 'mg/L', 'ug/L', 'umho/cm', 'percent',
> 'cfm', 'gpm')),
>
>   Seems to me that if values are constrained by a list a NULL cannot be
> entered by the user. Is this correct?
>
>
https://www.postgresql.org/docs/9.6/static/sql-createtable.html

"The CHECK clause specifies an expression producing a Boolean result which
new or updated rows must satisfy for an insert or update operation to
succeed. Expressions evaluating to TRUE or UNKNOWN succeed."

NULL == "UNKNOWN"
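
A quick sketch of that behavior (hypothetical table, shortened value list):

postgres=# create table check_test(param_units varchar(8) CHECK (param_units IN ('ppm', 'mg/L', 'ug/L')));
CREATE TABLE
postgres=# insert into check_test values (NULL);   -- CHECK evaluates to UNKNOWN, so the row is accepted
INSERT 0 1
postgres=# insert into check_test values ('foo');  -- CHECK evaluates to FALSE, so the row is rejected
ERROR:  new row for relation "check_test" violates check constraint "check_test_param_units_check"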

David J.


Re: [GENERAL] Default column value

2016-12-30 Thread Rich Shepard

On Fri, 30 Dec 2016, Adrian Klaver wrote:


DEFAULT is what the column is set to if the user does not specify a
value. As shown above a user can supply a NULL value. To guard against
that the NOT NULL constraint is required.


  One more case I'd appreciate having clarified: when the column's value has
a check constraint with acceptable values in a list. For example,

param_units VARCHAR(8) DEFAULT 'mg/L'
CONSTRAINT param_units
  CHECK (param_units IN ('ppm', 'mg/L', 'ug/L', 'umho/cm', 'percent', 
'cfm', 'gpm')),

  Seems to me that if values are constrained by a list a NULL cannot be
entered by the user. Is this correct?

Rich




Re: [GENERAL] LYDB: Feasible to use PG roles instead of application-level security?

2016-12-30 Thread Melvin Davidson
On Fri, Dec 30, 2016 at 10:23 AM, Stephen Frost  wrote:

> Greetings,
>
> * Guyren Howe (guy...@gmail.com) wrote:
> > it occurs to me to wonder whether it is practical to use PG’s own roles
> and security model in lieu of using an application-level one.
>
> The short answer is yes.
>
> > It seems that the role system in PG is sufficient for most general
> purposes. One could presumably also have a table with role names and
> associated metainformation (email address etc) as needed.
>
> Yup.  That can get a bit awkward if you have multiple databases inside
> of a single cluster, as you would have to pick which database to put
> that metainformation in, but that isn't a very big issue.
>
> > If I have a system with many thousands of users, is it practical to
> manage these users’ authentication and authorization using *just* Postgres?
>
> For this, it really depends on if the PG authorization model matches the
> requirements you have.  The PG auth model, particularly with RLS, is
> extremely flexible but you would really need to evaluate what the exact
> requirements are and how you would handle that with the PG auth model.
> Of course, if there are just a few exceptions or complicated cases that
> can't be satisfied directly with PG today, you could use security
> definer functions.
>
> One area that isn't fully addressed with the PG auth model today is
> partial access to a certain column.  Consider a table where you want
> users to have access to all of the rows and all of the columns *except*
> for column X for rows where ID is > 1000.  The PG auth model today can
> be used to say "you can't access column X" or to say "you can't access
> rows where ID > 1000" but you can't combine those, yet.
>
> I'm hopeful that we'll get there as there are definitely use-cases for
> that kind of access control, but it's unlikely to happen for PG10.
>
> > It occurs to me that some client frameworks might have issues with their
> connection pools if those connections keep switching users, assuming they
> even can, but let’s set that aside for now. Or perhaps every connection
> could immediately do a SET USER before executing its connection?
>
> Again, yes, connection poolers can be an issue, but it's possible to use
> the role system and do a 'set role X' after having connected as some
> user that has very little access.  The issue here is controlling that
> role change- there's no direct way in PG today to require a password to
> be provided when doing the role change, which is unfortunate.  One
> approach to solving that with RLS is to use a security definer function
> to change a record in a table that is then used in all RLS policies.
> It's a bit complicated and doesn't involve doing 'set role' though, so
> there are some trade-offs there.
>
> If you really want connection pooling and independent users in PG's role
> system then you'll end up having to have the app code do the
> authentication (or maybe auth to PG as the user and, if successful,
> reconnect as the regular user and set role...  that's pretty awkward
> though) and then connect and do the 'set role'.
>
> One big question here, however, is if you're going to have thousands of
> *concurrently connected* users.  Thousands of users shouldn't be too
> much of an issue, but if they're all connected using PG's main auth
> system then you'll have thousands of backend processes running.  That'll
> end up causing some amount of overhead even if they're mostly idle.  If
> your application can handle connecting/disconnecting pretty easily and
> you have a relatively short timeout (though, ideally, not too short) then
> perhaps your number of concurrent connections won't be too bad.
>
> > This seems an attractive proposition from a security standpoint: if I
> use row-level security pervasively, I can have a security system that’s
> nestled nice and close to the data and presumably tricky to work around
> from a hacker given direct access only to the client application.
>
> If that's the threat model you want to address then you'll have to work
> out the concurrent connections question.  One thing which can help is to
> use a common user for 'read-only/public-access (or at least low-value)'
> queries from the app, if there are such.
>
> > Is this practical? Has anyone here done it? What might the caveats be?
>
> Yes, yes, see above.
>
> Thanks!
>
> Stephen
>


> Postgres roles are global to the cluster,

Well, that is true by default; however, you can make roles database
specific:
https://www.postgresql.org/docs/9.4/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SECURITY

db_user_namespace = on

That being said, there is a trade-off between managing multiple users &
passwords vs. simple access roles.
-- 
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Book or other resource on Postgres-local code?

2016-12-30 Thread Melvin Davidson
On Fri, Dec 30, 2016 at 2:50 AM, Pavel Stehule 
wrote:

>
>
> 2016-12-30 8:04 GMT+01:00 Guyren Howe :
>
>>
>> > On Dec 29, 2016, at 23:01 , Regina Obe  wrote:
>> >
>> >
>> >> As an aside from my last question about my LYDB effort:
>> >
>> >> https://medium.com/@gisborne/love-your-database-lydb-23c69f4
>> 80a1d#.4jngp2rcb
>> >
>> >> I would like to find a book or other resource about SQL server-side
>> programming (stored procedures etc) best practices in general and for
>> Postgres in particular.
>> >
>> > Shameless plug
>> >
>> > Have you checked out our book?  The 2nd edition covered PostgreSQL 9.2
>> - 9.4.
>> > http://shop.oreilly.com/product/0636920052715.do
>> >
>> > We are working on the 3rd edition which is a bit fatter (probably will
>> be about 50 pages fatter when we are done) than the 2nd edition.
>> >
>> > http://shop.oreilly.com/product/0636920052715.do
>> >
>> > The 3rd focuses on PostgreSQL 9.5-9.6 (and is in prerelease sale at
>> moment).  By the time of release, we'll probably have some PostgreSQL 10
>> content in there as well.
>> >
>> > It covers fancy SQL constructs and data types (both ANSI ones and ones
>> unique to PostgreSQL), general administration, and writing stored functions
>> with SQL, PLPGSQL and PL/V8.
>> >
>> > In the 3rd we are adding an additional PL/V8 example how to build a
>> window function in PL/V8 (aka PL/JavaScript)
>>
>> I’m sure the book is great. But it looks like much of the material I can
>> find about Postgres: how to write a function, how to write a query, etc.
>>
>> What I’m more looking for is “System Design with Postgres”: *when* to
>> write a function, *when* to use a stored procedure over a client-side
>> function.
>>
>
> Lot of Oracle's books related to this topic is valid for PostgreSQL too.
> The design of stored procedures in PostgreSQL is conceptually similar to
> Oracle.
>
> The theme "stored procedures" is strongly controversial -  from "stored
> procedures are evil" to "do all in procedures".
>
> I like the strategy - what you can do easy in database, do it there - the
> client should to get a results. But don't do communication server from
> database. PostgreSQL is ACID, stored procedures are ACID. Outer world is
> not ACID - and interface ACID/NOACID is better to implement outside
> database.
>
> Regards
>
> Pavel
>
>
>
>>
>
>

> I would like to find a book or other resource about SQL server-side
> programming (stored procedures etc) best practices in general and for
> Postgres in particular.

I'll start off with: Enumerated types are evil. You are much better off with
Foreign Key Constraints.

That being said, there are four excellent books that I always recommend to my
clients:

PostgreSQL Development Essentials:
https://www.amazon.com/PostgreSQL-Development-Essentials-Manpreet-Kaur/dp/1783989009/ref=sr_1_2?s=books&ie=UTF8&qid=1483111853&sr=1-2&keywords=PostgreSQL#reader_B01LFAN8B6

PostgreSQL 9 Administration Cookbook:
https://www.packtpub.com/big-data-and-business-intelligence/postgresql-9-administration-cookbook-second-edition

PostgreSQL Server Programming:
https://www.packtpub.com/big-data-and-business-intelligence/postgresql-server-programming-second-edition

PostgreSQL 9.0 High Performance:
https://www.packtpub.com/big-data-and-business-intelligence/postgresql-90-high-performance

In addition, I've attached my own controversial PostgreSQL Developer Best
Practices.
-- 
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


PostgreSQL Developer Best Practices.doc
Description: MS-Word document



Re: [GENERAL] LYDB: Feasible to use PG roles instead of application-level security?

2016-12-30 Thread Stephen Frost
Greetings,

* Guyren Howe (guy...@gmail.com) wrote:
> it occurs to me to wonder whether it is practical to use PG’s own roles and 
> security model in lieu of using an application-level one.

The short answer is yes.

> It seems that the role system in PG is sufficient for most general purposes. 
> One could presumably also have a table with role names and associated 
> metainformation (email address etc) as needed.

Yup.  That can get a bit awkward if you have multiple databases inside
of a single cluster, as you would have to pick which database to put
that metainformation in, but that isn't a very big issue.

> If I have a system with many thousands of users, is it practical to manage 
> these users’ authentication and authorization using *just* Postgres?

For this, it really depends on if the PG authorization model matches the
requirements you have.  The PG auth model, particularly with RLS, is
extremely flexible but you would really need to evaluate what the exact
requirements are and how you would handle that with the PG auth model.
Of course, if there are just a few exceptions or complicated cases that
can't be satisfied directly with PG today, you could use security
definer functions.

One area that isn't fully addressed with the PG auth model today is
partial access to a certain column.  Consider a table where you want
users to have access to all of the rows and all of the columns *except*
for column X for rows where ID is > 1000.  The PG auth model today can
be used to say "you can't access column X" or to say "you can't access
rows where ID > 1000" but you can't combine those, yet.
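
The two pieces that do exist today, shown separately (table, column and role
names below are made up):

-- column-level: app_user never sees column x
GRANT SELECT (id, y) ON t TO app_user;

-- row-level: app_user never sees rows with id > 1000
ALTER TABLE t ENABLE ROW LEVEL SECURITY;
CREATE POLICY visible_rows ON t FOR SELECT TO app_user USING (id <= 1000);

What can't be expressed yet is the combination: "app_user may read x, but only
for rows where id <= 1000".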

I'm hopeful that we'll get there as there are definitely use-cases for
that kind of access control, but it's unlikely to happen for PG10.

> It occurs to me that some client frameworks might have issues with their 
> connection pools if those connections keep switching users, assuming they 
> even can, but let’s set that aside for now. Or perhaps every connection could 
> immediately do a SET USER before executing its connection?

Again, yes, connection poolers can be an issue, but it's possible to use
the role system and do a 'set role X' after having connected as some
user that has very little access.  The issue here is controlling that
role change- there's no direct way in PG today to require a password to
be provided when doing the role change, which is unfortunate.  One
approach to solving that with RLS is to use a security definer function
to change a record in a table that is then used in all RLS policies.
It's a bit complicated and doesn't involve doing 'set role' though, so
there are some trade-offs there.

If you really want connection pooling and independent users in PG's role
system then you'll end up having to have the app code do the
authentication (or maybe auth to PG as the user and, if successful,
reconnect as the regular user and set role...  that's pretty awkward
though) and then connect and do the 'set role'.
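
A rough sketch of that flow (role names invented):

-- one low-privilege login role shared by the pool, plus per-user NOLOGIN roles
CREATE ROLE app_login LOGIN PASSWORD 'secret';
CREATE ROLE alice NOLOGIN;
GRANT alice TO app_login;        -- permits: SET ROLE alice

-- per request, after the application has authenticated the user itself:
SET ROLE alice;
-- ... run the user's queries with alice's privileges ...
RESET ROLE;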

One big question here, however, is if you're going to have thousands of
*concurrently connected* users.  Thousands of users shouldn't be too
much of an issue, but if they're all connected using PG's main auth
system then you'll have thousands of backend processes running.  That'll
end up causing some amount of overhead even if they're mostly idle.  If
your application can handle connecting/disconnecting pretty easily and
you have a relatively short timeout (though, ideally, not too short) then
perhaps your number of concurrent connections won't be too bad.

> This seems an attractive proposition from a security standpoint: if I use 
> row-level security pervasively, I can have a security system that’s nestled 
> nice and close to the data and presumably tricky to work around from a hacker 
> given direct access only to the client application.

If that's the threat model you want to address then you'll have to work
out the concurrent connections question.  One thing which can help is to
use a common user for 'read-only/public-access (or at least low-value)'
queries from the app, if there are such.

> Is this practical? Has anyone here done it? What might the caveats be?

Yes, yes, see above.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] Default column value

2016-12-30 Thread Rich Shepard

On Fri, 30 Dec 2016, Adrian Klaver wrote:


DEFAULT is what the column is set to if the user does not specify a
value. As shown above a user can supply a NULL value. To guard against
that the NOT NULL constraint is required.


  Thanks, Adrian. This was not clear to me when I read the manual.

Happy New Year,

Rich




Re: [GENERAL] Default column value

2016-12-30 Thread Rich Shepard

On Fri, 30 Dec 2016, Tom Lane wrote:


No, because you can explicitly insert a null. DEFAULT only controls what
happens when you omit the column in an INSERT command.


Tom,

  Thanks for clarifying. I did not pick this up from reading the manual; I only
knew that NULL could be an explicitly-defined default value.

Much appreciated,

Rich




Re: [GENERAL] Default column value

2016-12-30 Thread Adrian Klaver
On 12/30/2016 06:38 AM, Rich Shepard wrote:
>   Reading the 9.6 docs suggests an answer to my question, but does not
> explicitly answer it, so I ask here.
> 
>   If a column has a default value specified does this mean the column
> cannot
> contain a NULL value? In other words, is DEFAULT  NOT NULL
> redundant?

Another way of looking at it:

test=> create table default_test_2(id int, fld_1 varchar DEFAULT 'test');
CREATE TABLE

test=> insert into default_test_2 values (1);
INSERT 0 1

test=> insert into default_test_2 values (2, NULL);
INSERT 0 1

test=> \pset null 'NULL'

test=> select * from default_test_2 ;
 id | fld_1 
+---
  1 | test
  2 | NULL
(2 rows)


DEFAULT is what the column is set to if the user does not specify a value.
As shown above a user can supply a NULL value. To guard against that, the
NOT NULL constraint is required.

> 
> TIA,
> 
> Rich
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Default column value

2016-12-30 Thread Tom Lane
Rich Shepard  writes:
>If a column has a default value specified does this mean the column cannot
> contain a NULL value? In other words, is DEFAULT  NOT NULL
> redundant?

No, because you can explicitly insert a null.  DEFAULT only controls
what happens when you omit the column in an INSERT command.

regards, tom lane




Re: [GENERAL] LYDB: Feasible to use PG roles instead of application-level security?

2016-12-30 Thread bto...@computer.org


- Original Message -
> From: "Alban Hertroys" 
> To: "Guyren Howe" 
> Cc: "pgsql-general" 
> Sent: Friday, December 30, 2016 6:23:27 AM
> Subject: Re: [GENERAL] LYDB: Feasible to use PG roles instead of 
> application-level security?
> 
> 
> > On 30 Dec 2016, at 4:19, Guyren Howe  wrote:
> > 
> > 
> > ... wonder whether it is practical to use PG’s own roles and
> > security model in lieu of using an application-level one.
> > 
> > It seems that the role system in PG is sufficient for most general
> > purposes. One could presumably also have a table with role names and
> > associated metainformation (email address etc) as needed.
> > 
> > If I have a system with many thousands of users, is it practical to manage
> > these users’ authentication and authorization using *just* Postgres?
> 
> Postgres roles are global to the cluster, so you would end up with multiple
> thousands of roles if you have multiple databases in your cluster with
> different users on each. Which roles each user is allowed to have becomes
> quite the nightmare for the administrators, I suspect.
> 
> For a web-application facing the internet, I'd say no, don't do that. You're
> dealing with far too many users to be maintainable.
> 
> For an intranet database in a not-too-large company with a fixed set of
> users, it could be a good solution, especially if those roles can be linked
> to the company's LDAP server (assuming that's possible, I don't know).
> Multiple intranet applications on that same database can use the same users
> and roles.
> 
> Someone needs to do the administration though; with volumes (of users) like
> that and the database knowledge level of the average system administrator, a
> GUI seems preferable. IMHO, pgadmin provides too many features to be
> practical for someone like that, you would probably prefer something that
> only does user administration. I don't know of anything that does that
> though (not a GUI user myself)...
> 
> > It occurs to me that some client frameworks might have issues with their
> > connection pools if those connections keep switching users, assuming they
> > even can, but let’s set that aside for now. Or perhaps every connection
> > could immediately do a SET USER before executing its connection?
> > 
> > This seems an attractive proposition from a security standpoint: if I use
> > row-level security pervasively, I can have a security system that’s
> > nestled nice and close to the data and presumably tricky to work around
> > from a hacker given direct access only to the client application.
> 
> With a few changes, that could work very well.
> 
> First, create roles for the different types of users that you expect. In a
> company, that could be by division, distinguishing division-heads, interns,
> etc.
> 
> Secondly, have a table with the users and their attributes like you describe.
> Include an attribute for their database role there. Only administrator users
> should have access to that table.
> 
> Finally, create a stored procedure that looks up a user name in that table
> and sets the accompanying role. If a user is not found, set the role to some
> default 'unprivileged' user.
> Make that procedure a SECURITY DEFINER with according permissions. That role
> stays active the entire session, so unless you close the connection, create
> a new one or change the user's role, this procedure doesn't need calling
> again.
> 
> > Is this practical? Has anyone here done it? What might the caveats be?
> 
> It's a fairly common practice, the ML archives should contain plenty of
> examples.



I cannot speak to the thousands of users scenario in practice, but in principle 
it seems workable.

As a practical matter, I have implemented something along the lines of what 
Hertroys describes, and taken it a step further, engaging schema privileges as 
well.

The PUBLIC schema contains a single relation: a view with three
columns: username, password, and email_address. You could expose more, but for
my purposes minimal is best. This view serves the express purpose of creating 
new user accounts and is, by means of the rule system, writeable. Technically 
it is readable, too, but a select statement provides no information about 
underlying data ... so stalkers can't get a user list.

pg_hba.conf is configured to allow anonymous login which has usage permission 
on, and only on, the PUBLIC schema. So anyone can log in and see that one view 
and do an insert on that view.

The rule system pushes down the INSERT onto a table in a private schema that 
has a trigger which creates the data base role corresponding to the new user 
row inserted into the view. Thus, prospective users "self administer" and 
create their own accounts as needed. The trigger also grants the new role 
membership in a particular group role.
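
A condensed sketch of that shape (every name below is invented, and details
such as password handling are omitted):

CREATE SCHEMA private;
CREATE ROLE app_users NOLOGIN;   -- the group role new accounts are added to

CREATE TABLE private.accounts (username text PRIMARY KEY, password text, email_address text);

-- the only object visible to anonymous logins: contentless to read, writeable via a rule
CREATE VIEW public.signup AS
    SELECT ''::text AS username, ''::text AS password, ''::text AS email_address WHERE false;

CREATE RULE signup_insert AS ON INSERT TO public.signup DO INSTEAD
    INSERT INTO private.accounts (username, password, email_address)
    VALUES (NEW.username, NEW.password, NEW.email_address);

CREATE FUNCTION private.create_account_role() RETURNS trigger
LANGUAGE plpgsql SECURITY DEFINER AS $$
BEGIN
    EXECUTE format('CREATE ROLE %I LOGIN PASSWORD %L', NEW.username, NEW.password);
    EXECUTE format('GRANT app_users TO %I', NEW.username);
    RETURN NEW;
END;
$$;

CREATE TRIGGER accounts_create_role AFTER INSERT ON private.accounts
    FOR EACH ROW EXECUTE PROCEDURE private.create_account_role();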

The group role has usage privileges on a second schema that exposes broader 
visibility into the data base as well as additional insert and update 
privileges on some of

Re: [GENERAL] Default column value

2016-12-30 Thread Adrian Klaver

On 12/30/2016 06:38 AM, Rich Shepard wrote:

  Reading the 9.6 docs suggests an answer to my question, but does not
explicitly answer it, so I ask here.

  If a column has a default value specified does this mean the column
cannot
contain a NULL value? In other words, is DEFAULT  NOT NULL
redundant?


No:

test=> create table default_test(id int, fld_1 varchar DEFAULT NULL);
CREATE TABLE

test=> \d default_test
  Table "public.default_test"
 Column |   Type| Modifiers
+---+---
 id | integer   |
 fld_1  | character varying |



TIA,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Default column value

2016-12-30 Thread Rich Shepard

  Reading the 9.6 docs suggests an answer to my question, but does not
explicitly answer it, so I ask here.

  If a column has a default value specified does this mean the column cannot
contain a NULL value? In other words, is DEFAULT  NOT NULL
redundant?

TIA,

Rich




Re: [GENERAL] Performance PLV8 vs PLPGSQL

2016-12-30 Thread Michael Sheaver
The reason that you cannot use git for that is that it was never intended for 
live data and stuff that resides in a database.

That said, I agree with you that all table DDLs and stored procedures should be 
kept under version control. And in fact I do keep them under VC on my database. 
How do I do this? I have a directory in which I keep all my SQL scripts, and 
this directory is a git repo. In this directory/project I keep:
1. For every table that is created on the database, I have an SQL script file 
with the DDL that creates it
2. For every stored proc that I create, I put it in its own SQL script as well
3. For any (relatively) static lookup tables (i.e. state-region associations, 
etc.), they are put in a SQL script

This workflow gives me several important benefits, including:
1. Everything needed to recreate the database is kept under version control, 
including the DDL, stored procedures and lookup tables
2. This script folder does not need to reside on the DB server, can be kept 
anywhere, even on my local laptop; I pull them up in DataGrip and run them on 
the server (my scripts are on my laptop)
3. All these scripts are VC'd under git
4. They are easily and quickly pushed to my remote repo on Github and Bitbucket 
for backup
5. When I need to create a new fresh, empty copy of the database somewhere, I 
just run these scripts and it is done quickly and easily

One more little trick I have is to use Gitkraken for my git GUI. It is free,
and is absolutely the best git GUI available. The devs made it an absolute joy 
to use, and I never need to use the command line anymore.

Hope this helps!

> On Dec 29, 2016, at 10:43 PM, Tim Uckun  wrote:
> 
> I have datagrip and it's OK but it doesn't really do everything I want.
> 
> I don't understand why it doesn't fetch all objects from the database and 
> then put them into the disk in a directory so I can put it all under git and 
> then let me work on them syncing the files back as they change.  For example 
> today I just renamed a function. It didn't refactor properly by identifying 
> stored procs that reference it. If I was using another jetbrains IDE it would 
> have built an index of the project files and did a proper refactor.
> 
> This would also allow you to make wholesale disk changes and then sync them 
> up properly to get around postgres dependency issues.
> 
> On Fri, Dec 30, 2016 at 1:40 PM, Michael Sheaver wrote:
> If you want an IDE, Jetbrains, the makers of great IDEs like IntelliJ, 
> PyCharm. and AppCode, among others, have recently come out with what is 
> arguably the BEST IDE for DBAs, DataGrip. It runs on most major platforms, 
> and is so good that I have bitten the bullet and paid the yearly subscription 
> for it.
> 
> Leave the Postgres core alone focus on what they do best, and that is making 
> the very BEST database environment that can be had at any price, period. Is 
> Postgres perfect? No, not at all. But no other group is so focused on 
> performance, scalability and security as these folks are. And the rate of 
> development, enhancement and continual improvement is, quite honestly, 
> astounding. 
> 
> So here is my hat tip to the Postgres team for an awesome job they are doing!
> 
>> On Dec 29, 2016, at 7:19 PM, Tim Uckun wrote:
>> 
>> I am not saying the postgres core people should work on an IDE, just that an 
>> IDE like thing would be nice.
>> 
>> On Fri, Dec 30, 2016 at 12:51 PM, Rob Sargent wrote:
>> I would hope Postgres core folk take no more than a nanosecond to reject the 
>> idea that they work on an IDE. Focus on reading and writing faster and 
>> faster ACID all the while. 
>> 
>> On Dec 29, 2016, at 5:32 PM, Tim Uckun wrote:
>> 
>>> Honestly I don't even like JS. Having said that I am not too crazy about 
>>> PL-PGSQL either. I am willing to put up with either given that they are 
>>> supported widely in default installs of postgres in AWS, Linux and MacOSX.
>>> 
>>> As I said before, I think postgres gives a unique and underutilized language 
>>> platform. You can code in different languages, it has a good variety of 
>>> built-in types, and of course you get persistence and caching built in!
>>> Using DBLINK you might even be able to separate out your code from the bulk 
>>> of your data in another database. Postgres all the way down!
>>> 
>>> It's fun to play around with.  There are a lot of missing pieces though. A 
>>> good IDE-like thing would be good, version control would be nice, deeper 
>>> namespacing (hierarchical schemas?), easier testing etc would go a long 
>>> way. 
>>> 
>>> Thanks for all the input guys! 
>>> 
>>> On Fri, Dec 30, 2016 at 12:14 AM, Ivan Sergio Borgonovo <m...@webthatworks.it> wrote:
>>> On 12/29/2016 10:35 AM, Pavel Stehule wrote:
>>> 
>>> 2016-12-29 10:03 GMT+01:00 Tim Uckun
>>> 

Re: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Francisco Olarte
Job:

On Fri, Dec 30, 2016 at 1:01 PM, Alban Hertroys  wrote:
>> On 30 Dec 2016, at 11:42, Job  wrote:
...
>> The index applied on the timestamp field is a btree("timestamp")
...
>> select domain, sum(accessi) as c_count from TABLE where action='1' AND 
>> profile IN ('PROFILE_CODE') AND timestamp::date  BETWEEN '2016-12-27' AND 
>> '2016-12-30' AND timestamp::time BETWEEN '00:00:00' AND '23:59:59' GROUP BY 
>> domain ORDER BY c_count DESC  LIMIT 101
...
>> Last question: the table is partitioned. I need to manually create index for 
>> every sub-tables or there is a way to create on every sub-tables once?

I think Alban missed this ( or I missed his response ). Yes, you need
to create the indexes for the partitions. When in doubt, create a
partition and \d+ it.

IIRC you can do 'create partition LIKE master INCLUDING indexes
INHERITS(master)', but you'll have to test. Anyway, this is normally
not too useful as the master table is normally indexless and kept
empty. I normally script the partition creation, and I will recommend
doing that too.
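
For example, a hand-rolled sketch of what such a script might emit (the names
and the CHECK range are invented; verify against your own partitioning scheme):

CREATE TABLE webtraffic_archive_day_2017_01_01
    (LIKE webtraffic_archive INCLUDING DEFAULTS INCLUDING INDEXES,
     CHECK ("timestamp" >= '2017-01-01' AND "timestamp" < '2017-01-02'))
    INHERITS (webtraffic_archive);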


Now onto the BETWEEN PROBLEM:

> It's usually more efficient to cast the constants you're comparing to, than 
> to cast a field value for each record in the set. The exception to that is 
> when you have an index on the casted field.
> In your case, since you're casting to date and time separately, and whole 
> days even, it's probably more efficient to combine that into:
> … AND timestamp BETWEEN '2016-12-27 00:00:00'::timestamp with time zone AND 
> '2016-12-30 23:59:59'::timestamp with time zone ...

Alban is correct here, in both counts.

- DO NOT CAST THE COLUMNS, cast the constants ( unless you are
building a specialized index, you can build an index on cast(timestamp
as date), and it would be useful if you did a lot of queries ONLY ON
DATES ).

- DO NOT USE CLOSED INTERVALS for real number-like columns ( remember
I told you timestamps are a point in the time line, so real-like ).
The main problem stems from the fact that you cannot cover the real
line with non-overlapping CLOSED intervals, BETWEEN uses closed
intervals and subtle problems permeate from this fact. Math is a harsh
mistress.

( Even when working with integer-like numbers half-open intervals are
normally the best way to go in the not so short term, but between
seems so nice and natural and reads so well that even I use it where I
should not )
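
Spelled out, the half-open version of the BETWEEN filter quoted above would be
roughly:

… AND "timestamp" >= '2016-12-27 00:00:00'::timestamp with time zone
    AND "timestamp" <  '2016-12-31 00:00:00'::timestamp with time zone ...

which also covers the final second that 23:59:59 leaves out, and still lets a
plain btree index on "timestamp" be used.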

> But even then, you're excluding items that fall in the second between the end 
> date and the next day. The new range types are useful there, for example:
>
> … AND timestamp <@ '[2016-12-27 00:00:00, 2016-12-31 00:00:00)'::tsrange

And this is how it is solved with those new-fangled interval thingies
( I've been keying (ts>=xxx and ts<… ).

> The above isn't entirely correct, as tsrange uses timestamp without time
> zone, but you get the gist.

Or use >=, <; those work.

> However, if those time ranges can have other values than '[00:00. 23:59]', 
> then you probably need 2 indexes on that timestamp column; one cast to date 
> and one to time. Otherwise, you end up creating timestamp range filters for 
> each day in the range in the query (which could still be the better approach).

Even if they have other values, a single index on the timestamp column is
the way to go if you only select single intervals. I mean, Xmas
morning ( date between 25 and 25 and time between 8:00 and 12:59 ) can
easily be selected by the interval [20161225T08,
20161225T13), but all the mornings in December cannot ( although
a query with ts>='20160101' and ts <'20170101' and ts::time >='08:00'
and ts::time<'13:00' should work quite well: the first two conditions
guide an index scan and the rest is done with filtering ).

Francisco Olarte.




R: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Job
Hi Alban,

I was wrong: I have only one column: tsrange.

Which index can I create to make this statement fast:
 ... AND timestamp <@ '[2016-12-27 00:00:00, 2016-12-31 00:00:00)'::tsrange

Thank you again!

/F



R: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Job
Dear Alban,

Regarding:

>>... AND timestamp BETWEEN '2016-12-27 00:00:00'::timestamp with time zone AND 
>>'2016-12-30 23:59:59'::timestamp with time zone ... 

I think it is a very good approach, and i would like to try.
My table has two different fields for "starting" and "ending" timestamp
values.

Shall I create an index covering the two fields (starting and ending) together,
using "tsrange"?

Thank you again!

/F



Re: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Alban Hertroys

> On 30 Dec 2016, at 11:42, Job  wrote:
> 
>>> And, basically, if you need help with some queries you could try
>>> posting them whole, even redacted, along with the table defs, this way
>>> people can see the problem and not invent one based on a partial
>>> description
> 
> Thank you very much, very kind from you.
> 
> The index applied on the timestamp field is a btree("timestamp")
> 
> The query is:
> 
> select domain, sum(accessi) as c_count from TABLE where action='1' AND 
> profile IN ('PROFILE_CODE') AND timestamp::date  BETWEEN '2016-12-27' AND 
> '2016-12-30' AND timestamp::time BETWEEN '00:00:00' AND '23:59:59' GROUP BY 
> domain ORDER BY c_count DESC  LIMIT 101
> 
> The table format is:
>  Column   |   Type   |   
> Modifiers
> ---+--+
> id| numeric(1000,1)  | not null default 
> function_get_next_sequence('webtraffic_archive_id_seq'::text)
> timestamp | timestamp with time zone |
> domain| character varying(255)   |
> action| character varying(5) |
> profile   | character varying|
> accessi   | bigint   |
> url   | text |
> 
> Indexes:
>"webtraffic_archive_day_2016_04_15_action_wbidx" btree (action)
>"webtraffic_archive_day_2016_04_15_domain_wbidx" btree (domain)
>"webtraffic_archive_day_2016_04_15_profile_wbidx" btree (profile) CLUSTER
>"webtraffic_archive_day_2016_04_15_timestamp_wbidx" btree ("timestamp")
> 
> Last question: the table is partitioned. I need to manually create index for 
> every sub-tables or there is a way to create on every sub-tables once?

It's usually more efficient to cast the constants you're comparing to, than to 
cast a field value for each record in the set. The exception to that is when 
you have an index on the casted field.

In your case, since you're casting to date and time separately, and whole days 
even, it's probably more efficient to combine that into:

… AND timestamp BETWEEN '2016-12-27 00:00:00'::timestamp with time zone AND 
'2016-12-30 23:59:59'::timestamp with time zone ...

But even then, you're excluding items that fall in the second between the end 
date and the next day. The new range types are useful there, for example:

… AND timestamp <@ '[2016-12-27 00:00:00, 2016-12-31 00:00:00)'::tsrange

The above isn't entirely correct, as tsrange uses timestamp without time zone, 
but you get the gist.

However, if those time ranges can have other values than '[00:00. 23:59]', then 
you probably need 2 indexes on that timestamp column; one cast to date and one 
to time. Otherwise, you end up creating timestamp range filters for each day in 
the range in the query (which could still be the better approach).


Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





Re: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Charles Clavadetscher
Hello

> -Original Message-
> From: pgsql-general-ow...@postgresql.org 
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Job
> Sent: Freitag, 30. Dezember 2016 11:42
> To: Francisco Olarte 
> Cc: David G. Johnston ; 
> pgsql-general@postgresql.org
> Subject: R: [GENERAL] Special index for "like"-based query
> 
> >>And, basically, if you need help with some queries you could try
> >>posting them whole, even redacted, along with the table defs, this way
> >>people can see the problem and not invent one based on a partial
> >>description
> 
> Thank you very much, very kind from you.
> 
> The index applied on the timestamp field is a btree("timestamp")
> 
> The query is:
> 
> select domain, sum(accessi) as c_count from TABLE where action='1' AND 
> profile IN ('PROFILE_CODE') AND
> timestamp::date  BETWEEN '2016-12-27' AND '2016-12-30' AND timestamp::time 
> BETWEEN '00:00:00' AND '23:59:59' GROUP
> BY domain ORDER BY c_count DESC  LIMIT 101
> 
> The table format is:
>   Column   |   Type   |   
> Modifiers
> ---+--+-
> ---
>  id| numeric(1000,1)  | not null default
> function_get_next_sequence('webtraffic_archive_id_seq'::text)
>  timestamp | timestamp with time zone |
>  domain| character varying(255)   |
>  action| character varying(5) |
>  profile   | character varying|
>  accessi   | bigint   |
>  url   | text |
> 
> Indexes:
> "webtraffic_archive_day_2016_04_15_action_wbidx" btree (action)
> "webtraffic_archive_day_2016_04_15_domain_wbidx" btree (domain)
> "webtraffic_archive_day_2016_04_15_profile_wbidx" btree (profile) CLUSTER
> "webtraffic_archive_day_2016_04_15_timestamp_wbidx" btree ("timestamp")

Is the index on timestamp used at all? The index on timestamp is built on 
timestamp, but you query using timestamp::date.
You can check this using EXPLAIN.
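
For example (the table name here is only a guess based on the index names;
substitute the real one):

EXPLAIN SELECT domain, sum(accessi) AS c_count FROM webtraffic_archive
  WHERE action='1' AND "timestamp"::date BETWEEN '2016-12-27' AND '2016-12-30'
  GROUP BY domain;

-- versus the form that the existing btree index on "timestamp" can use:
EXPLAIN SELECT domain, sum(accessi) AS c_count FROM webtraffic_archive
  WHERE action='1' AND "timestamp" >= '2016-12-27' AND "timestamp" < '2016-12-31'
  GROUP BY domain;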

Bye
Charles

> 
> Last question: the table is partitioned. I need to manually create index for 
> every sub-tables or there is a way to
> create on every sub-tables once?
> 
> THANK YOU!
> /F
> 





Re: [GENERAL] LYDB: Feasible to use PG roles instead of application-level security?

2016-12-30 Thread Alban Hertroys

> On 30 Dec 2016, at 4:19, Guyren Howe  wrote:
> 
> Further to my attempts to enlighten application developers about what they 
> might better do in the database:
> 
> https://medium.com/@gisborne/love-your-database-lydb-23c69f480a1d#.4jngp2rcb
> 
> it occurs to me to wonder whether it is practical to use PG’s own roles and 
> security model in lieu of using an application-level one.
> 
> It seems that the role system in PG is sufficient for most general purposes. 
> One could presumably also have a table with role names and associated 
> metainformation (email address etc) as needed.
> 
> If I have a system with many thousands of users, is it practical to manage 
> these users’ authentication and authorization using *just* Postgres?

Postgres roles are global to the cluster, so you would end up with multiple 
thousands of roles if you have multiple databases in your cluster with 
different users on each. Which roles each user is allowed to have becomes quite 
the nightmare for the administrators, I suspect.

For a web-application facing the internet, I'd say no, don't do that. You're 
dealing with far too many users to be maintainable.

For an intranet database in a not-too-large company with a fixed set of users, 
it could be a good solution, especially if those roles can be linked to the 
company's LDAP server (assuming that's possible, I don't know). Multiple 
intranet applications on that same database can use the same users and roles.

Someone needs to do the administration though; with volumes (of users) like 
that and the database knowledge level of the average system administrator, a 
GUI seems preferable. IMHO, pgadmin provides too many features to be practical 
for someone like that, you would probably prefer something that only does user 
administration. I don't know of anything that does that though (not a GUI user 
myself)...

> It occurs to me that some client frameworks might have issues with their 
> connection pools if those connections keep switching users, assuming they 
> even can, but let’s set that aside for now. Or perhaps every connection could 
> immediately do a SET USER before executing its connection?
> 
> This seems an attractive proposition from a security standpoint: if I use 
> row-level security pervasively, I can have a security system that’s nestled 
> nice and close to the data and presumably tricky to work around from a hacker 
> given direct access only to the client application.

With a few changes, that could work very well.

First, create roles for the different types of users that you expect. In a 
company, that could be by division, distinguishing division-heads, interns, etc.

Secondly, have a table with the users and their attributes like you describe. 
Include an attribute for their database role there. Only administrator users 
should have access to that table.

Finally, create a stored procedure that looks up a user name in that table and 
sets the accompanying role. If a user is not found, set the role to some 
default 'unprivileged' user.
Make that procedure a SECURITY DEFINER with according permissions. That role 
stays active the entire session, so unless you close the connection, create a 
new one or change the user's role, this procedure doesn't need calling again.
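
A loose sketch of the lookup piece (table, column and role names are all
invented; the caller then applies the returned role with SET ROLE):

CREATE FUNCTION lookup_role(p_username text) RETURNS text
LANGUAGE plpgsql SECURITY DEFINER AS $$
DECLARE
    v_role text;
BEGIN
    SELECT db_role INTO v_role FROM app_users WHERE username = p_username;
    RETURN coalesce(v_role, 'unprivileged');   -- fall back to the default role
END;
$$;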

> Is this practical? Has anyone here done it? What might the caveats be?

It's a fairly common practice, the ML archives should contain plenty of 
examples.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





R: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Job
>>And, basically, if you need help with some queries you could try
>>posting them whole, even redacted, along with the table defs, this way
>>people can see the problem and not invent one based on a partial
>>description

Thank you very much, very kind from you.

The index applied on the timestamp field is a btree("timestamp")

The query is:

select domain, sum(accessi) as c_count from TABLE where action='1' AND profile 
IN ('PROFILE_CODE') AND timestamp::date  BETWEEN '2016-12-27' AND '2016-12-30' 
AND timestamp::time BETWEEN '00:00:00' AND '23:59:59' GROUP BY domain ORDER BY 
c_count DESC  LIMIT 101

The table format is:
  Column   |           Type           |                            Modifiers
-----------+--------------------------+---------------------------------------------------------------------------------
 id        | numeric(1000,1)          | not null default function_get_next_sequence('webtraffic_archive_id_seq'::text)
 timestamp | timestamp with time zone |
 domain    | character varying(255)   |
 action    | character varying(5)     |
 profile   | character varying        |
 accessi   | bigint                   |
 url       | text                     |

Indexes:
"webtraffic_archive_day_2016_04_15_action_wbidx" btree (action)
"webtraffic_archive_day_2016_04_15_domain_wbidx" btree (domain)
"webtraffic_archive_day_2016_04_15_profile_wbidx" btree (profile) CLUSTER
"webtraffic_archive_day_2016_04_15_timestamp_wbidx" btree ("timestamp")

Last question: the table is partitioned. I need to manually create index for 
every sub-tables or there is a way to create on every sub-tables once?

THANK YOU!
/F



Re: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Francisco Olarte
On Fri, Dec 30, 2016 at 11:00 AM, Job  wrote:
> I tried to create a GIST/GIN index on a timestamp without time zone field
> but it does not work.
> Are there alternative index types, or timezone settings, that could speed the query up?

Remember a timestamp is just a real number ( a point on the time line
) with some fancy formatting for I/O ( or you will suffer ). This
means when you have a ts column and want to query for a date it is
usually better to do [ts>='2016-12-29' and ts<'2016-12-30'] than doing
[cast(ts as date) = '2016-12-29'] ( similar to how a real number is
better queried as [r>=1.0 and r<2.0] than [int(r)=1] ). Normally you
get good results with btree indexes.

And, basically, if you need help with some queries you could try
posting them whole, even redacted, along with the table defs, this way
people can see the problem and not invent one based on a partial
description. I do not see anything in common between 'like'-based
queries and timestamp columns.

Francisco Olarte.




R: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Job
I tried to create a GIST/GIN index on a timestamp without time zone field but 
it does not work.
Are there alternative index types, or timezone settings, that could speed the query up?

Thank you
/F


From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org] on
behalf of Job [j...@colliniconsulting.it]
Sent: Friday, 30 December 2016 10:55
To: David G. Johnston
Cc: pgsql-general@postgresql.org
Subject: R: [GENERAL] Special index for "like"-based query

>>GIST
>>https://www.postgresql.org/docs/9.6/static/pgtrgm.html
>>https://www.postgresql.org/docs/9.6/static/btree-gist.html

I tried with a GIST-like index and queries improves a lot, thank you!

Furthermore, is this type of index also suitable for a timestamp query, where
we can mix date and time parameters?

Thank you again!
/F



From: David G. Johnston [david.g.johns...@gmail.com]
Sent: Friday, 30 December 2016 00:33
To: Job
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Special index for "like"-based query

On Thu, Dec 29, 2016 at 4:21 PM, Job <j...@colliniconsulting.it> wrote:
Hello,

in Postgresql 9.6 we have a query running on a very large table based, in some 
cases, on a like statement:

... WHERE FIELD='CODE' OR FIELD LIKE 'CODE_%'

Which type of index can I create to speed up the search when the "like" case
happens?

GIST

https://www.postgresql.org/docs/9.6/static/pgtrgm.html

https://www.postgresql.org/docs/9.6/static/btree-gist.html
David J.
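
A minimal sketch of the trigram route ("my_table" and "field" stand in for the
redacted names in the question):

CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX field_trgm_idx  ON my_table USING gin (field gin_trgm_ops);  -- serves the LIKE arm
CREATE INDEX field_btree_idx ON my_table (field);                         -- serves the equality arm

-- with both in place the planner can answer the OR with a bitmap scan:
SELECT * FROM my_table WHERE field = 'CODE' OR field LIKE 'CODE_%';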




R: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Job
>>GIST
>>https://www.postgresql.org/docs/9.6/static/pgtrgm.html
>>https://www.postgresql.org/docs/9.6/static/btree-gist.html

I tried with a GIST-like index and queries improves a lot, thank you!

Furthermore, is this type of index also suitable for a timestamp query, where
we can mix date and time parameters?

Thank you again!
/F



From: David G. Johnston [david.g.johns...@gmail.com]
Sent: Friday, 30 December 2016 00:33
To: Job
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Special index for "like"-based query

On Thu, Dec 29, 2016 at 4:21 PM, Job <j...@colliniconsulting.it> wrote:
Hello,

in Postgresql 9.6 we have a query running on a very large table based, in some 
cases, on a like statement:

... WHERE FIELD='CODE' OR FIELD LIKE 'CODE_%'

Which type of index can I create to speed up the search when the "like" case
happens?

GIST

https://www.postgresql.org/docs/9.6/static/pgtrgm.html

https://www.postgresql.org/docs/9.6/static/btree-gist.html
David J.