Re: [GENERAL] Setup pgpool-II with streaming replication

2016-09-07 Thread John R Pierce

On 9/6/2016 2:10 PM, Ashish Chauhan wrote:

...Cannot afford downtime on db servers...


That's an impossible target in general, unless you expect computer systems 
to run perfectly forever without ever needing hardware or software 
maintenance. There are strategies to /minimize/ downtime, such as 
having active/standby failover, but the simple act of failing over 
requires a short outage in which your applications are forcibly 
disconnected from the formerly active database and have to reconnect to 
the newly activated master, restarting any transactions that were in 
progress at the time of the interruption.   Then there are the issues of 
building fully redundant network infrastructure, as well as redundant 
pooling. In my experience, the more pieces you add to a system in an 
attempt to make it more robust, the more fragile and delicate it becomes.
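The application-side reconnect described above is typically handled with a retry loop around the driver's connect call. A minimal sketch (the `flaky_connect` stand-in simulates a failover window; in real code you would wrap something like `psycopg2.connect` and catch the driver's `OperationalError`):

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=0.5):
    """Call `connect()` until it succeeds, backing off exponentially.

    `connect` is any zero-argument callable that returns a connection
    or raises on failure.
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return connect()
        except Exception as exc:  # real code: catch the driver's error class
            last_exc = exc
            time.sleep(base_delay * (2 ** attempt))
    raise last_exc

# Simulated failover: the first two attempts fail, the third succeeds.
state = {"calls": 0}

def flaky_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("server not yet promoted")
    return "connection-object"

conn = connect_with_retry(flaky_connect, attempts=5, base_delay=0.0)
```

Applications still have to restart any in-flight transactions themselves; the loop only restores connectivity.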




--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Setup pgpool-II with streaming replication

2016-09-07 Thread Venkata B Nagothi
> I have gone through the articles below, but they all in one way or another
> require a DB server restart, which is not possible in our case. Could you
> please provide some documentation or steps on how to achieve connection
> pooling without any downtime?
>

A DB server restart? Why do you think you need to restart the database?
That can be confirmed if you help us understand your planned architecture
and your existing database configuration settings.

Yes, some downtime is definitely needed, as the application must be switched
to connect to the database through the pgPool port instead of the DB port,
which is a change on the application server side.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] What limits Postgres performance when the whole database lives in cache?

2016-09-07 Thread dandl
> > I don't think this is quite true. The mechanism he proposes has a
> > small window in which committed transactions can be lost, and this
> > should be addressed by replication or by a small amount of UPS (a few
> > seconds).
> 
> Except that's the entire point where all those kind of solutions
> *completely* depart ways from Postgres. Postgres is designed to *lose
> absolutely no data after a COMMIT*, potentially including requiring
> that data to be synchronized out to a second server. That is worlds
> apart from "we might lose a few seconds", and there's a lot of stuff
> Postgres has to worry about to accomplish that. Some of that stuff can
> be short-circuited if you don't care (that's what SET
> synchronous_commit = off does), but there's always going to be some
> amount of extra work to support synchronous_commit = local or
> remote_*.

I understand that. What I'm trying to get a handle on is the magnitude of that 
cost and how it influences other parts of the product, specifically for 
Postgres. If the overhead for perfect durability were (say) 10%, few people 
would care about the cost. But Stonebraker puts the figure at 2500%! His 
presentation says that a pure relational in-memory store can beat a row store 
with disk fully cached in memory by 10x to 25x. [Ditto column stores beat row 
stores by 10x for complex queries in non-updatable data.]

So my question is not to challenge the Postgres way. It's simply to ask whether 
there are any known figures that would directly support or refute his claims. 
Does Postgres really spend 96% of its time in thumb-twiddling once the entire 
database resides in memory?

> Presumably there's more improvements that could be made to Postgres in
> this area, but if you really don't care about losing seconds worth of
> data and you need absolutely the best performance possible then maybe
> Postgres isn't the right choice for you.

Achieving durability for an in-memory database requires either UPS or active 
replication or both, which is an additional cost that is not needed for every 
application. My question precedes that one: is there a big performance gain 
there for the taking, or is it smoke and mirrors?
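One way to get a rough feel for where such numbers could come from, independent of Postgres internals, is to compare per-record fsync (analogous to `synchronous_commit = on`) against plain buffered appends. This is only a sketch of the durability trade-off, not a benchmark of Postgres; timing the two calls on your own hardware shows the gap between "the OS has the bytes" and "the disk has the bytes":

```python
import os
import tempfile

def append_records(path, records, durable):
    """Append one record per 'commit'; fsync after each if durable."""
    with open(path, "ab") as f:
        for rec in records:
            f.write(rec)
            f.flush()
            if durable:
                # Analogous to synchronous_commit = on: wait until the
                # OS reports the bytes are on stable storage.
                os.fsync(f.fileno())

records = [("row-%d\n" % i).encode() for i in range(100)]

durable_path = tempfile.NamedTemporaryFile(delete=False).name
fast_path = tempfile.NamedTemporaryFile(delete=False).name

append_records(durable_path, records, durable=True)   # pays one fsync per record
append_records(fast_path, records, durable=False)     # leaves flushing to the OS

with open(durable_path, "rb") as f:
    durable_data = f.read()
with open(fast_path, "rb") as f:
    fast_data = f.read()
```

Both files end up with identical contents; only the crash-safety guarantee (and the elapsed time) differs, which is exactly the trade Stonebraker's in-memory designs make.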

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-07 Thread Patrick B
2016-09-08 11:49 GMT+12:00 Jim Nasby :

> Please include the mailing list in replies...
>
> On 9/7/16 6:10 PM, David Gibbons wrote:
>
>> That is NOT safe. The problem is it allows rsync to use mtime alone
>> to decide that a file is in sync, and that will fail if Postgres
>> writes to a file in the same second that the first rsync reads from
>> it (assuming Postgres writes after rsync reads). You need to add the
>> --checksum flag to rsync (which means it will still have to read
>> everything that's in /var/lib/pgsql).
>>
>>
>> The checksum flag, as you mention, is not performant,
>>
>
> Definitely not. :/
>
> If this is a concern, you're much better off using the *--modify-window* flag:
>> When comparing two timestamps, rsync treats the timestamps as being
>> equal if they differ by no more than the modify-window value. This is
>> normally 0 (for an exact match), but you may find it useful to set this
>> to a larger value in some situations.
>>
>> Hence, rsync -va --modify-window=1 would remove your concern about a
>> same second race condition without forcing the sync to read through all
>> the files.
>>
>
> Very interesting and useful!
> 
>

Cool! I'll use the rsync -va --modify-window=1 instead.

Thanks!
Patrick


Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-07 Thread Jim Nasby

Please include the mailing list in replies...

On 9/7/16 6:10 PM, David Gibbons wrote:

That is NOT safe. The problem is it allows rsync to use mtime alone
to decide that a file is in sync, and that will fail if Postgres
writes to a file in the same second that the first rsync reads from
it (assuming Postgres writes after rsync reads). You need to add the
--checksum flag to rsync (which means it will still have to read
everything that's in /var/lib/pgsql).


The checksum flag, as you mention, is not performant,


Definitely not. :/


If this is a concern, you're much better off using the *--modify-window* flag:
When comparing two timestamps, rsync treats the timestamps as being
equal if they differ by no more than the modify-window value. This is
normally 0 (for an exact match), but you may find it useful to set this
to a larger value in some situations.

Hence, rsync -va --modify-window=1 would remove your concern about a
same second race condition without forcing the sync to read through all
the files.


Very interesting and useful!
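The --modify-window comparison quoted above can be modeled as a small predicate. This is a sketch of the rule as the rsync man page states it, not rsync's actual implementation:

```python
def timestamps_match(mtime_a, mtime_b, modify_window=0):
    """Model rsync's quick-check timestamp rule: two mtimes count as
    equal if they differ by at most modify_window seconds.
    The default of 0 demands an exact match."""
    return abs(mtime_a - mtime_b) <= modify_window

# Default: a 1-second skew means "changed", so the file is re-copied.
default_match = timestamps_match(1000, 1001)                  # False

# With --modify-window=1 the same 1-second skew counts as "unchanged".
window_match = timestamps_match(1000, 1001, modify_window=1)  # True
```

Note that a larger window makes the quick check *more* lenient (more files treated as already in sync), so whether it helps for a given copy depends on which direction of mismatch you are worried about.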
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] What limits Postgres performance when the whole database lives in cache?

2016-09-07 Thread Jim Nasby

On 9/2/16 7:39 PM, dandl wrote:

I don't think this is quite true. The mechanism he proposes has a small window 
in which committed transactions can be lost, and this should be addressed by 
replication or by a small amount of UPS (a few seconds).


Except that's the entire point where all those kind of solutions 
*completely* depart ways from Postgres. Postgres is designed to *lose 
absolutely no data after a COMMIT*, potentially including requiring that 
data to be synchronized out to a second server. That is worlds apart 
from "we might lose a few seconds", and there's a lot of stuff Postgres 
has to worry about to accomplish that. Some of that stuff can be 
short-circuited if you don't care (that's what SET synchronous_commit = 
off does), but there's always going to be some amount of extra work to 
support synchronous_commit = local or remote_*.


Presumably there's more improvements that could be made to Postgres in 
this area, but if you really don't care about losing seconds worth of 
data and you need absolutely the best performance possible then maybe 
Postgres isn't the right choice for you.


"All databases suck, each one just sucks in a different way." - Me, 
circa 1999.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Adrian Klaver

On 09/07/2016 04:25 PM, Jim Nasby wrote:

On 9/7/16 6:07 PM, Ken Tanzer wrote:

ERROR:  PL/Python functions cannot accept type record


Ugh, yeah... that won't work. plperl might be able to do it, but I
suspect you're going to be stuck pulling the size info out of
info_schema or the catalog.

Actually, there is a way you could hack this via plpython; pass the row
in as text as well as the relation (regclass is good for that). You
could then do plpy.execute('SELECT ({}::{}).*'.format(row_text,
relation)); that should give you a dict just like Adrian's example did.

It would be nice if there was a function that accepted something with a
row descriptor and spit out the details of the descriptor.
http://pgxn.org/dist/colnames/doc/colnames.html comes close; if you know
much about C at all it shouldn't be hard to add a function to that
extension that returned the full details of the row. That and converting
the row to JSON would make it relatively easy to accomplish what you
want in a plpgsql (or maybe even plsql) function.


Getting closer:

CREATE OR REPLACE FUNCTION public.str_concat(r json)
 RETURNS text
 LANGUAGE plpythonu
AS $function$
import json
j = json.loads(r)
str_out = ""
plpy.notice(type(j))
for col in j:
    str_out += str(j[col])
return str_out
$function$

production=# select str_concat(row_to_json(t)) from str_test as t;
NOTICE:  
CONTEXT:  PL/Python function "str_concat"
  str_concat
---
 09/07/161234  1

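For views that mix char(n) columns with integers (which carry no implicit padding), the remaining piece is padding each value to its declared width. A sketch in ordinary Python; the `order` and `widths` values here are hand-written stand-ins for what a real function would read from information_schema.columns (column name and character_maximum_length):

```python
def pad_row(row, widths, order):
    """Space-pad each field to its declared width and concatenate."""
    parts = []
    for col in order:
        val = "" if row[col] is None else str(row[col])
        parts.append(val.ljust(widths[col]))  # pad with trailing spaces
    return "".join(parts)

# Stand-ins for metadata that would come from information_schema.columns.
order = ["reporting_unit_id", "case_id", "event_date"]
widths = {"reporting_unit_id": 3, "case_id": 10, "event_date": 8}

row = {"reporting_unit_id": "1", "case_id": "1234", "event_date": "09/07/16"}
fixed = pad_row(row, widths, order)
```

Iterating an explicit `order` list also sidesteps the unspecified key order of a parsed JSON dict.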


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread John R Pierce

On 9/7/2016 1:36 PM, Ken Tanzer wrote:
No transformation is needed, except for padding the fields out to 
their maximum lengths.


without accessing metadata, how would you know what those maximum 
lengths are??


and how would the calling program even know what the fields are if it's 
not aware of the field lengths?


what if a field is type 'text' ? or an array type ?


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] IDE for function/stored proc development.

2016-09-07 Thread Jim Nasby

On 9/3/16 7:49 AM, Tim Uckun wrote:

I was hoping there was some IDE which made that process seamless.
Something like PgAdmin but better editing features and features like
"find definition" or "find usages" and such.  The jetbrains products
come close but as I said they are buggy and don't work very well with
postgres.


Keep in mind that workflow doesn't work well if you need to deploy to 
production on a regular basis.


The workflow I generally use is sqitch[1] and a thin wrapper that runs 
my unit tests (you do write unit tests for your functions, right? :)). 
Something like:


revert_to=`sqitch tag|tail -n2|head -n1` # Get second-to-last deployed tag
sqitch rebase -y --onto $revert_to $DB && sqitch rebase -y --onto $revert_to $DB && db/run_test $DB


Normally you won't be re-deploying that much, so that would be pretty 
fast. Note that you'll want to create a separate sqitch migration for 
each object.


[1] http://sqitch.org/
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Jim Nasby

On 9/7/16 6:07 PM, Ken Tanzer wrote:

ERROR:  PL/Python functions cannot accept type record


Ugh, yeah... that won't work. plperl might be able to do it, but I 
suspect you're going to be stuck pulling the size info out of 
info_schema or the catalog.


Actually, there is a way you could hack this via plpython; pass the row 
in as text as well as the relation (regclass is good for that). You 
could then do plpy.execute('SELECT ({}::{}).*'.format(row_text, 
relation)); that should give you a dict just like Adrian's example did.


It would be nice if there was a function that accepted something with a 
row descriptor and spit out the details of the descriptor. 
http://pgxn.org/dist/colnames/doc/colnames.html comes close; if you know 
much about C at all it shouldn't be hard to add a function to that 
extension that returned the full details of the row. That and converting 
the row to JSON would make it relatively easy to accomplish what you 
want in a plpgsql (or maybe even plsql) function.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] Clustered index to preserve data locality in a multitenant application?

2016-09-07 Thread Jim Nasby

On 9/6/16 11:21 AM, Nicolas Grilly wrote:

It looks like I can move forward with our migration from MySQL to
PostgreSQL, without worrying about the lack of clustered indexes,
because there are better solutions to keep tenant data contiguous!


First rule of performance tuning: don't. :) There are lots of areas 
where Postgres can be expected to perform better than MySQL, so without 
testing your app you really don't know how it's going to fare.


There's also another option: use a logical replication system (such as 
pg_logical, BDR, londiste or Slony) to maintain at least one replica. 
You can take that replica down to perform maintenance (such as a 
database-wide CLUSTER) as needed, and let replication catch up once you 
bring it back online. That, combined with scripted failover makes a lot 
of database maintenance items far easier, at the cost of having to 
maintain the replication. Depending on your needs, a major benefit to 
this method is it makes major version upgrades very simple: you just 
stand up a new replica on the new version and then failover to it. If 
anything goes wrong, you can fail back to the old version without losing 
any data.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Adrian Klaver

On 09/07/2016 03:38 PM, Jim Nasby wrote:

On 9/7/16 5:32 PM, Ken Tanzer wrote:

SELECT my_cat(ebh_gain,'ebh_gain') FROM ebh_gain;

I know TCL and probably Python and others can work with a record as a
trigger function.  But TCL doesn't seem to accept a record as an
argument.  Can any of the other languages that could also accomplish
this function? Or some other way? Thanks.


A PL that can accept composite types (such as plpythonu) should be able
to do this.


But can they be anonymous types?

Ken wants this to be generic, so any table's record can be supplied as an 
argument. In plpythonu it seems you need to declare the table type when 
supplying the record.


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Ken Tanzer
On Wed, Sep 7, 2016 at 3:38 PM, Jim Nasby  wrote:

> On 9/7/16 5:32 PM, Ken Tanzer wrote:
>
>> SELECT my_cat(ebh_gain,'ebh_gain') FROM ebh_gain;
>>
>> I know TCL and probably Python and others can work with a record as a
>> trigger function.  But TCL doesn't seem to accept a record as an
>> argument.  Can any of the other languages that could also accomplish
>> this function? Or some other way? Thanks.
>>
>
> A PL that can accept composite types (such as plpythonu) should be able to
> do this.


OK, python is new to me, so I'm trying to dig into it.  I installed it, and
tried declaring a function:

CREATE OR REPLACE FUNCTION ebh_transaction_assemble( varchar, record )
RETURNS varchar AS $$...

but got the same error:

ERROR:  PL/Python functions cannot accept type record

Is there some other way I should be specifying this?  Or do I need a more
recent version of Postgres (I'm on 9.2) to do this?  Thanks.



>
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
> 855-TREBLE2 (855-873-2532)   mobile: 512-569-9461
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Ken Tanzer
On Wed, Sep 7, 2016 at 3:46 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 09/07/2016 03:32 PM, Ken Tanzer wrote:
>
>>
>>
>> On Wed, Sep 7, 2016 at 3:18 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>>
>> On 09/07/2016 01:36 PM, Ken Tanzer wrote:
>>
>>
>>
>> On Wed, Sep 7, 2016 at 1:22 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>>
>> On 09/07/2016 01:15 PM, Ken Tanzer wrote:
>>
>> Hi.  Using version 9.2.  I'm trying to create a function
>> that
>> will take
>> a record from any view and assemble it into a string,
>> for export to
>> another system.  For example, this view:
>>
>> \d ebh_gain
>>
>> View "public.ebh_gain"
>>   Column   | Type  | Modifiers
>> ---+---+---
>>  reporting_unit_id | character(3)  |
>>  case_id   | character(10) |
>>  event_date| character(8)  |
>>  ids_score | character(1)  |
>>  eds_score | character(1)  |
>>  sds_score | character(1)  |
>>  kc_auth_number| integer   |
>>  king_county_id| integer   |
>>
>> would get converted into a string with all the fields
>> concatenated
>> together, and space-padded to their full lengths.
>>
>>
>> I think an example is needed. I was thinking you wanted the
>> field
>> values transformed, but the below seems to indicate
>> something different.
>>
>> No transformation is needed, except for padding the fields out
>> to their
>> maximum lengths.  So for example with these values
>>
>> ('AA','1243','20160801','2','1','1',37,24)
>>
>> I need a string created that looks like this:
>>
>> 'AA 1243      201608012113724'
>>
>> I have a whole bunch of views that I need to do this for, and am
>> hoping
>> to avoid coding something specific for each one.
>>
>>
>> I can do it relatively easy in plpythonu:
>>
>> production=# \d str_test
>>Table "history.str_test"
>>   Column   | Type  | Modifiers
>> ---+---+---
>>  reporting_unit_id | character(3)  |
>>  case_id   | character(10) |
>>  event_date| character(8)  |
>>
>> production=# insert into str_test values ('1', '1234', '09/07/16');
>> INSERT 0 1
>>
>>
>> DO
>> $$
>> rs =  plpy.execute("SELECT * FROM str_test", 1)
>> cols = rs.colnames()
>> plpy.notice(rs.colnames())
>> str_out = ""
>> for col in cols:
>>     str_out += str(rs[0][col])
>> plpy.notice(str_out)
>> $$ LANGUAGE plpythonu;
>>
>> NOTICE:  ['reporting_unit_id', 'case_id', 'event_date']
>> CONTEXT:  PL/Python anonymous code block
>> NOTICE:  1  1234  09/07/16
>> CONTEXT:  PL/Python anonymous code block
>> DO
>>
>> Yeah, that and a trip to the information schema to pad out the fields
>> would get me the string I need.  But I was hoping to be able to do this
>>
>
> Well the above has the padding already there, though that assumes char(x)
> fields.

Oh, I didn't see how it would pick up the padding, but great!


>
>
> without having the function select the individual record.  Ideally:
>>
>> SELECT my_cat(ebh_gain) FROM ebh_gain;
>>
>
> So do you want one record to be converted at a time or many?
>
>
I would like one record converted per function call.  But of course to be
able to generate multiple ones in a select:

INSERT INTO holding_table SELECT my_cat(ebh_gain,'ebh_gain') FROM ebh_gain
WHERE needs_to_be_exported...

Thanks,
Ken




>
>> or, at least somewhat more realistically:
>>
>> SELECT my_cat(ebh_gain,'ebh_gain') FROM ebh_gain;
>>
>> I know TCL and probably Python and others can work with a record as a
>> trigger function.  But TCL doesn't seem to accept a record as an
>> argument.  Can any of the other languages that could also accomplish
>> this function? Or some other way? Thanks.
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-07 Thread Jim Nasby

On 9/2/16 11:44 AM, David Gibbons wrote:

rsync -va /var/lib/pgsql/ /var/lib/pgsql2/
service postgres stop
rsync -va /var/lib/pgsql/ /var/lib/pgsql2/

The second rsync will only copy the deltas from the first, it still has
to go in and determine what needs to be copied/what changed but the bulk
of it can be prepared/migrated before the actual downtime window.


That is NOT safe. The problem is it allows rsync to use mtime alone to 
decide that a file is in sync, and that will fail if Postgres writes to 
a file in the same second that the first rsync reads from it (assuming 
Postgres writes after rsync reads). You need to add the --checksum flag 
to rsync (which means it will still have to read everything that's in 
/var/lib/pgsql).

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] Privileges on public schema can't be revoked?

2016-09-07 Thread Jim Nasby

On 9/6/16 3:16 PM, Greg Fodor wrote:

It seems that functionality that lets a superuser quickly audit the
privileges for a user (including those granted via PUBLIC) would be
really helpful for diagnosing cases where that user can do something
they shouldn't be allowed to.


That's actually relatively easy to do today; see the has_*_privilege() 
functions.


You might also find http://pgxn.org/dist/pg_acl useful.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-07 Thread Patrick B
>
>
> Or if you want, upgrade on your existing provider first, then setup
> streaming replication. But no matter what, pg_upgrade will require some
> form of downtime.
>
> You could also use either Slony or londiste to directly migrate from
> Rackspace to an EC2 instance on 9.5; no need for DMS at all. That has the
> added benefit of allowing you to switch to a new version with virtually
> zero downtime (as in seconds if you script it) and allowing you to maintain
> the old version (along with any new data changes) in case you need to fail
> back (without data loss).
>
> Keep in mind that in this case you're really only using DMS as a form of
> replication, so you might be better off just sticking with Postgres tools.
>
>
I'll be using DMS to migrate from an EC2 instance to an RDS instance. I won't
be using EC2 in the production environment, so to migrate to RDS I need to
use DMS...

I have some colocated servers at Rackspace, which means I'm not using a
Rackspace-managed Postgres database service... so if I used Slony I'd have
to install and configure it on my own.

Thanks for the answers, guys... This migration must happen and it will, and
of course I'll keep you updated about how it went.


Patrick


Re: [GENERAL] Materialized view auto refresh

2016-09-07 Thread Jim Nasby

On 9/6/16 9:47 PM, Nguyễn Trần Quốc Vinh wrote:

I'm sorry. Please check it again at
http://it.ued.udn.vn/myprojects/pgTriggerGen/: 
http://it.ued.udn.vn/myprojects/pgTriggerGen/PgSQLMVTriggersGenerator_v2_src.rar.


FWIW, people would be much more likely to find this if you put it on 
GitHub (and it would be far easier for them to submit improvements).

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-07 Thread Jim Nasby

On 9/6/16 10:53 PM, John R Pierce wrote:

1) setup new streaming slave at new provider using 9.2, wait for
replication to complete and satisfy yourself that the database is
complete and intact.
2) shut down master at old provider (and any other slaves), promote new
provider 9.2 to master.
3) pg_upgrade using --link option to 9.4 on new provider.   if this
fails, restart old master and start over at step 1
4) bring up applications on new database.  if they fail, restart old
master, and start over at step 1
5) bring up new slave(s) on new database.


Or if you want, upgrade on your existing provider first, then setup 
streaming replication. But no matter what, pg_upgrade will require some 
form of downtime.


You could also use either Slony or londiste to directly migrate from 
Rackspace to an EC2 instance on 9.5; no need for DMS at all. That has 
the added benefit of allowing you to switch to a new version with 
virtually zero downtime (as in seconds if you script it) and allowing 
you to maintain the old version (along with any new data changes) in 
case you need to fail back (without data loss).


Keep in mind that in this case you're really only using DMS as a form of 
replication, so you might be better off just sticking with Postgres tools.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Adrian Klaver

On 09/07/2016 03:32 PM, Ken Tanzer wrote:



On Wed, Sep 7, 2016 at 3:18 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 09/07/2016 01:36 PM, Ken Tanzer wrote:



On Wed, Sep 7, 2016 at 1:22 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 09/07/2016 01:15 PM, Ken Tanzer wrote:

Hi.  Using version 9.2.  I'm trying to create a function
that
will take
a record from any view and assemble it into a string,
for export to
another system.  For example, this view:

\d ebh_gain

View "public.ebh_gain"
  Column   | Type  | Modifiers
---+---+---
 reporting_unit_id | character(3)  |
 case_id   | character(10) |
 event_date| character(8)  |
 ids_score | character(1)  |
 eds_score | character(1)  |
 sds_score | character(1)  |
 kc_auth_number| integer   |
 king_county_id| integer   |

would get converted into a string with all the fields
concatenated
together, and space-padded to their full lengths.


I think an example is needed. I was thinking you wanted the
field
values transformed, but the below seems to indicate
something different.

No transformation is needed, except for padding the fields out
to their
maximum lengths.  So for example with these values

('AA','1243','20160801','2','1','1',37,24)

I need a string created that looks like this:

'AA 1243      201608012113724'

I have a whole bunch of views that I need to do this for, and am
hoping
to avoid coding something specific for each one.


I can do it relatively easy in plpythonu:

production=# \d str_test
   Table "history.str_test"
  Column   | Type  | Modifiers
---+---+---
 reporting_unit_id | character(3)  |
 case_id   | character(10) |
 event_date| character(8)  |

production=# insert into str_test values ('1', '1234', '09/07/16');
INSERT 0 1


DO
$$
rs =  plpy.execute("SELECT * FROM str_test", 1)
cols = rs.colnames()
plpy.notice(rs.colnames())
str_out = ""
for col in cols:
    str_out += str(rs[0][col])
plpy.notice(str_out)
$$ LANGUAGE plpythonu;

NOTICE:  ['reporting_unit_id', 'case_id', 'event_date']
CONTEXT:  PL/Python anonymous code block
NOTICE:  1  1234  09/07/16
CONTEXT:  PL/Python anonymous code block
DO

Yeah, that and a trip to the information schema to pad out the fields
would get me the string I need.  But I was hoping to be able to do this


Well the above has the padding already there, though that assumes 
char(x) fields.



without having the function select the individual record.  Ideally:

SELECT my_cat(ebh_gain) FROM ebh_gain;


So do you want one record to be converted at a time or many?



or, at least somewhat more realistically:

SELECT my_cat(ebh_gain,'ebh_gain') FROM ebh_gain;

I know TCL and probably Python and others can work with a record as a
trigger function.  But TCL doesn't seem to accept a record as an
argument.  Can any of the other languages that could also accomplish
this function? Or some other way? Thanks.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Tom Lane
Jim Nasby  writes:
> On 9/7/16 5:32 PM, Ken Tanzer wrote:
>> I know TCL and probably Python and others can work with a record as a
>> trigger function.  But TCL doesn't seem to accept a record as an
>> argument.  Can any of the other languages that could also accomplish
>> this function? Or some other way? Thanks.

> A PL that can accept composite types (such as plpythonu) should be able 
> to do this.

I think pltcl is actually the only one of our standard PLs that can't take
"record".  There's no intrinsic reason for that, it just hasn't gotten the
love the other PLs have.

regards, tom lane




Re: [GENERAL] a column definition list is required for functions returning "record"

2016-09-07 Thread Tom Lane
Jim Nasby  writes:
> On 9/7/16 2:02 PM, Merlin Moncure wrote:
>> We have that already, it's named 'json_each_text'

> Apparently you haven't looked at json parse/deparse costs ;P

Well, a PL function is gonna be none too cheap either.

Using something like JSON definitely has lots to recommend it --- eg, it
probably won't break when you find out your initial spec for the transport
format was too simplistic.

regards, tom lane




Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Jim Nasby

On 9/7/16 5:32 PM, Ken Tanzer wrote:

SELECT my_cat(ebh_gain,'ebh_gain') FROM ebh_gain;

I know TCL and probably Python and others can work with a record as a
trigger function.  But TCL doesn't seem to accept a record as an
argument.  Can any of the other languages that could also accomplish
this function? Or some other way? Thanks.


A PL that can accept composite types (such as plpythonu) should be able 
to do this.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] PostgreSQL Database performance

2016-09-07 Thread Jim Nasby

On 9/6/16 2:08 PM, Scott Marlowe wrote:

checkpoint_completion_target = 0.9

Too high of a checkpoint completion target may cause buffers to get
written out more often than needed. but it varies based on load etc.


The odds on that don't seem to be terribly high. Even if that is a 
common occurrence if it's enough to make a difference then you're 
already close to the limits of your IO, and if that's true then you 
definitely want to spread the checkpoint out over a longer interval.
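To act on this, a hedged sketch of the settings under discussion. ALTER SYSTEM exists from 9.4 on; on older servers the same parameters go into postgresql.conf. The 15min value is purely illustrative.

```sql
-- Spread each checkpoint over ~90% of the checkpoint interval, and
-- lengthen the interval itself so writes are smoothed out further.
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
ALTER SYSTEM SET checkpoint_timeout = '15min';   -- illustrative; default is 5min
SELECT pg_reload_conf();
```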

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Ken Tanzer
On Wed, Sep 7, 2016 at 3:18 PM, Adrian Klaver wrote:

> On 09/07/2016 01:36 PM, Ken Tanzer wrote:
>
>>
>>
>> On Wed, Sep 7, 2016 at 1:22 PM, Adrian Klaver wrote:
>>
>> On 09/07/2016 01:15 PM, Ken Tanzer wrote:
>>
>> Hi.  Using version 9.2.  I'm trying to create a function that
>> will take
>> a record from any view and assemble it into a string, for export
>> to
>> another system.  For example, this view:
>>
>> \d ebh_gain
>>
>> View "public.ebh_gain"
>>   Column   | Type  | Modifiers
>> ---+---+---
>>  reporting_unit_id | character(3)  |
>>  case_id   | character(10) |
>>  event_date| character(8)  |
>>  ids_score | character(1)  |
>>  eds_score | character(1)  |
>>  sds_score | character(1)  |
>>  kc_auth_number| integer   |
>>  king_county_id| integer   |
>>
>> would get converted into a string with all the fields concatenated
>> together, and space-padded to their full lengths.
>>
>>
>> I think an example is needed. I was thinking you wanted the field
>> values transformed, but the below seems to indicate something
>> different.
>>
>> No transformation is needed, except for padding the fields out to their
>> maximum lengths.  So for example with these values
>>
>> ('AA','1243','20160801','2','1','1',37,24)
>>
>> I need a string created that looks like this:
>>
>> 'AA 1243  201608012113724'
>>
>> I have a whole bunch of views that I need to do this for, and am hoping
>> to avoid coding something specific for each one.
>>
>
> I can do it relatively easy in plpythonu:
>
> production=# \d str_test
>Table "history.str_test"
>   Column   | Type  | Modifiers
> ---+---+---
>  reporting_unit_id | character(3)  |
>  case_id   | character(10) |
>  event_date| character(8)  |
>
> production=# insert into str_test values ('1', '1234', '09/07/16');
> INSERT 0 1
>
>
> DO
> $$
> rs =  plpy.execute("SELECT * FROM str_test", 1)
> cols = rs.colnames()
> plpy.notice(rs.colnames())
> str_out = ""
> for col in cols:
>     str_out += str(rs[0][col])
> plpy.notice(str_out)
> $$ LANGUAGE plpythonu;
>
> NOTICE:  ['reporting_unit_id', 'case_id', 'event_date']
> CONTEXT:  PL/Python anonymous code block
> NOTICE:  1  1234  09/07/16
> CONTEXT:  PL/Python anonymous code block
> DO
>
Yeah, that and a trip to the information schema to pad out the fields would
get me the string I need.  But I was hoping to be able to do this without
having the function select the individual record.  Ideally:

SELECT my_cat(ebh_gain) FROM ebh_gain;

or, at least somewhat more realistically:

SELECT my_cat(ebh_gain,'ebh_gain') FROM ebh_gain;

I know TCL and probably Python and others can work with a record as a
trigger function.  But TCL doesn't seem to accept a record as an argument.
Could any of the other languages accomplish this, or is there some other
way? Thanks.
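For what it's worth, a generic my_cat along these lines can be sketched by shipping the row as JSON and padding from information_schema. Caveat: the json ->> operator used below arrived in 9.3, so on Ken's 9.2 something like hstore's each() would have to substitute; the function body is illustrative and untested against the thread's schema.

```sql
CREATE OR REPLACE FUNCTION my_cat(rec json, view_name text)
RETURNS text LANGUAGE plpgsql STABLE AS $$
DECLARE
    col record;
    out_str text := '';
BEGIN
    -- Walk the view's columns in order, padding char(n) values to
    -- their declared width and appending everything else as-is.
    FOR col IN
        SELECT column_name::text, data_type, character_maximum_length
        FROM information_schema.columns
        WHERE table_name = view_name
        ORDER BY ordinal_position
    LOOP
        out_str := out_str || CASE
            WHEN col.data_type = 'character'
                THEN rpad(coalesce(rec ->> col.column_name, ''),
                          col.character_maximum_length)
            ELSE coalesce(rec ->> col.column_name, '')
        END;
    END LOOP;
    RETURN out_str;
END;
$$;

-- One fixed-width string per row:
SELECT my_cat(row_to_json(ebh_gain), 'ebh_gain') FROM ebh_gain;
```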



Re: [GENERAL] a column definition list is required for functions returning "record"

2016-09-07 Thread Jim Nasby

On 9/7/16 2:02 PM, Merlin Moncure wrote:

>> Exactly.  You can pass anonymous record types around today, as long as you
>> don't do anything that requires knowing what their contents are, either in
>> the function or in the calling query:

>
> What I was thinking of is something (like a function) that has explicitly
> defined what the contents of the record are.

We have that already, it's named 'json_each_text'


Apparently you haven't looked at json parse/deparse costs ;P
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Adrian Klaver

On 09/07/2016 01:36 PM, Ken Tanzer wrote:



On Wed, Sep 7, 2016 at 1:22 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 09/07/2016 01:15 PM, Ken Tanzer wrote:

Hi.  Using version 9.2.  I'm trying to create a function that
will take
a record from any view and assemble it into a string, for export to
another system.  For example, this view:

\d ebh_gain

View "public.ebh_gain"
  Column   | Type  | Modifiers
---+---+---
 reporting_unit_id | character(3)  |
 case_id   | character(10) |
 event_date| character(8)  |
 ids_score | character(1)  |
 eds_score | character(1)  |
 sds_score | character(1)  |
 kc_auth_number| integer   |
 king_county_id| integer   |

would get converted into a string with all the fields concatenated
together, and space-padded to their full lengths.


I think an example is needed. I was thinking you wanted the field
values transformed, but the below seems to indicate something different.

No transformation is needed, except for padding the fields out to their
maximum lengths.  So for example with these values

('AA','1243','20160801','2','1','1',37,24)

I need a string created that looks like this:

'AA 1243  201608012113724'

I have a whole bunch of views that I need to do this for, and am hoping
to avoid coding something specific for each one.


I can do it relatively easy in plpythonu:

production=# \d str_test
   Table "history.str_test"
  Column   | Type  | Modifiers
---+---+---
 reporting_unit_id | character(3)  |
 case_id   | character(10) |
 event_date| character(8)  |

production=# insert into str_test values ('1', '1234', '09/07/16');
INSERT 0 1


DO
$$
rs =  plpy.execute("SELECT * FROM str_test", 1)
cols = rs.colnames()
plpy.notice(rs.colnames())
str_out = ""
for col in cols:
    str_out += str(rs[0][col])
plpy.notice(str_out)
$$ LANGUAGE plpythonu;

NOTICE:  ['reporting_unit_id', 'case_id', 'event_date']
CONTEXT:  PL/Python anonymous code block
NOTICE:  1  1234  09/07/16
CONTEXT:  PL/Python anonymous code block
DO









My original idea was to do this in TCL by passing a record and a
view
name.  The function would then look up the columns in the
information_schema, and use that info to assemble and return the
string.  But it looks like TCL functions won't accept a record as an
argument.

Any suggestions or advice most welcome.  Thanks!

Ken
--
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client/
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the discussion.



--
Adrian Klaver
adrian.kla...@aklaver.com 




--
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client/
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the discussion.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] joined tables with USING and GROUPBY on the USING() column

2016-09-07 Thread Geoff Winkless
Thanks for the suggestions. Turns out I was right: I _was_ missing
something obvious - results had token as char(4), tokens had token as
varchar(4). Because the columns aren't the same they aren't treated as
identical so the query builder won't accept the unqualified name.
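A quick way to confirm a mismatch like this (illustrative query; column and table names from the thread):

```sql
-- Compare the declared types of "token" on each server.
SELECT table_name, data_type, character_maximum_length
FROM information_schema.columns
WHERE column_name = 'token'
  AND table_name IN ('results', 'tokens');

-- Qualifying the GROUP BY column sidesteps the ambiguity either way:
SELECT results.token
FROM results
INNER JOIN tokens USING (token)
GROUP BY results.token;
```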

Next task is to work out how on earth the server ended up with its
results table different to all the others...

Geoff




Re: [GENERAL] pgAdmin 4 records limit of 2000

2016-09-07 Thread Melvin Davidson
On Wed, Sep 7, 2016 at 3:23 PM, Adrian Klaver wrote:

> On 09/06/2016 12:58 PM, dudedoe01 wrote:
>
>> Hi Adrian,
>>
>> The query tool toolbar is disabled in my pgAdmin 4. Is there anyway I can
>> enable it.
>>
>
> I don't know. I rarely use pgAdmin3 and have only looked at the new
> pgAdmin once or twice. I would say you will find a more informed audience
> here:
>
> https://www.postgresql.org/list/pgadmin-support/
>
>
>> 
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>

Per the developers:
> In Query Tool mode the control is disabled (as are the filtering options)
as you have complete control over the SQL query being executed.
Those controls are enabled in Edit Grid mode (when you right-click and View
Data on a table or view etc), where the SQL query is constructed for you.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Ken Tanzer
On Wed, Sep 7, 2016 at 1:22 PM, Adrian Klaver wrote:

> On 09/07/2016 01:15 PM, Ken Tanzer wrote:
>
>> Hi.  Using version 9.2.  I'm trying to create a function that will take
>> a record from any view and assemble it into a string, for export to
>> another system.  For example, this view:
>>
>> \d ebh_gain
>>>
>> View "public.ebh_gain"
>>   Column   | Type  | Modifiers
>> ---+---+---
>>  reporting_unit_id | character(3)  |
>>  case_id   | character(10) |
>>  event_date| character(8)  |
>>  ids_score | character(1)  |
>>  eds_score | character(1)  |
>>  sds_score | character(1)  |
>>  kc_auth_number| integer   |
>>  king_county_id| integer   |
>>
>> would get converted into a string with all the fields concatenated
>> together, and space-padded to their full lengths.
>>
>
> I think an example is needed. I was thinking you wanted the field values
> transformed, but the below seems to indicate something different.
>
> No transformation is needed, except for padding the fields out to their
maximum lengths.  So for example with these values

('AA','1243','20160801','2','1','1',37,24)

I need a string created that looks like this:

'AA 1243  201608012113724'

I have a whole bunch of views that I need to do this for, and am hoping to
avoid coding something specific for each one.




>
>> My original idea was to do this in TCL by passing a record and a view
>> name.  The function would then look up the columns in the
>> information_schema, and use that info to assemble and return the
>> string.  But it looks like TCL functions won't accept a record as an
>> argument.
>>
>> Any suggestions or advice most welcome.  Thanks!
>>
>> Ken
>> --
>> AGENCY Software
>> A Free Software data system
>> By and for non-profits
>> http://agency-software.org/
>> https://agency-software.org/demo/client/
>> ken.tan...@agency-software.org
>> (253) 245-3801
>>
>> Subscribe to the mailing list to learn more about AGENCY or follow the discussion.
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client/
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the discussion.


Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Adrian Klaver

On 09/07/2016 01:15 PM, Ken Tanzer wrote:

Hi.  Using version 9.2.  I'm trying to create a function that will take
a record from any view and assemble it into a string, for export to
another system.  For example, this view:


\d ebh_gain

View "public.ebh_gain"
  Column   | Type  | Modifiers
---+---+---
 reporting_unit_id | character(3)  |
 case_id   | character(10) |
 event_date| character(8)  |
 ids_score | character(1)  |
 eds_score | character(1)  |
 sds_score | character(1)  |
 kc_auth_number| integer   |
 king_county_id| integer   |

would get converted into a string with all the fields concatenated
together, and space-padded to their full lengths.


I think an example is needed. I was thinking you wanted the field values 
transformed, but the below seems to indicate something different.




My original idea was to do this in TCL by passing a record and a view
name.  The function would then look up the columns in the
information_schema, and use that info to assemble and return the
string.  But it looks like TCL functions won't accept a record as an
argument.

Any suggestions or advice most welcome.  Thanks!

Ken
--
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client/
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the discussion.



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Ken Tanzer
Hi.  Using version 9.2.  I'm trying to create a function that will take a
record from any view and assemble it into a string, for export to another
system.  For example, this view:

> \d ebh_gain
View "public.ebh_gain"
  Column   | Type  | Modifiers
---+---+---
 reporting_unit_id | character(3)  |
 case_id   | character(10) |
 event_date| character(8)  |
 ids_score | character(1)  |
 eds_score | character(1)  |
 sds_score | character(1)  |
 kc_auth_number| integer   |
 king_county_id| integer   |

would get converted into a string with all the fields concatenated
together, and space-padded to their full lengths.

My original idea was to do this in TCL by passing a record and a view
name.  The function would then look up the columns in the
information_schema, and use that info to assemble and return the string.
But it looks like TCL functions won't accept a record as an argument.

Any suggestions or advice most welcome.  Thanks!

Ken
-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client/
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the discussion.


Re: [GENERAL] joined tables with USING and GROUPBY on the USING() column

2016-09-07 Thread Adrian Klaver

On 09/07/2016 01:11 PM, Geoff Winkless wrote:

On 7 Sep 2016 9:01 p.m., "Adrian Klaver" <adrian.kla...@aklaver.com> wrote:

What happens if you table qualify all the references to token?


Oh it definitely fixes it; I was more confused why it works on one
server and not another. I thought perhaps there was a config option to
allow more lax naming in this way. If not I'll have to look more closely
at the table defs, although I thought I had checked they were the same.


I would check search_path to see if you have another table of same name 
in the path on the problem machine versus on the other machines.




Geoff




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] joined tables with USING and GROUPBY on the USING() column

2016-09-07 Thread Geoff Winkless
On 7 Sep 2016 9:01 p.m., "Adrian Klaver"  wrote:
> What happens if you table qualify all the references to token?

Oh it definitely fixes it; I was more confused why it works on one server
and not another. I thought perhaps there was a config option to allow more
lax naming in this way. If not I'll have to look more closely at the table
defs, although I thought I had checked they were the same.

Geoff


Re: [GENERAL] joined tables with USING and GROUPBY on the USING() column

2016-09-07 Thread John R Pierce

On 9/7/2016 12:53 PM, Geoff Winkless wrote:

I'll start by saying that I'm sure I'm missing something obvious...

I have a query that is working fine on all my servers except one. The
only obvious difference is that the failing one is running 9.5.3,
while most are running 9.5.4, but since the query works on a 9.5.1 box
I also have I can't imagine that's the cause.

I can simplify the query (and still retain the error) down to:

SELECT results.token FROM results INNER JOIN tokens USING(token) GROUP BY token;

ERROR: column "results.token" must appear in the GROUP BY clause or be
used in an aggregate function

All well and good, and I'm happy enough to change the query's GROUP BY
to include the table name, but it's confusing me how it works OK on
all servers except one.

Is there some configuration option I'm missing?


the only thing I can think of is that 'group by token' is ambiguous, if 
that was group by results.token, it would be specific.   yes, I know, 
your join conditions using(token) are equivalent to saying on 
results.token=tokens.token, so either token should be the same thing...



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] joined tables with USING and GROUPBY on the USING() column

2016-09-07 Thread Adrian Klaver

On 09/07/2016 12:53 PM, Geoff Winkless wrote:

I'll start by saying that I'm sure I'm missing something obvious...

I have a query that is working fine on all my servers except one. The
only obvious difference is that the failing one is running 9.5.3,
while most are running 9.5.4, but since the query works on a 9.5.1 box
I also have I can't imagine that's the cause.

I can simplify the query (and still retain the error) down to:

SELECT results.token FROM results INNER JOIN tokens USING(token) GROUP BY token;

ERROR: column "results.token" must appear in the GROUP BY clause or be
used in an aggregate function


Seems to me Postgres is not seeing results.token as the same as token.

What are schema for the tables results and tokens?

What happens if you table qualify all the references to token?



All well and good, and I'm happy enough to change the query's GROUP BY
to include the table name, but it's confusing me how it works OK on
all servers except one.

Is there some configuration option I'm missing?

Thanks!

Geoff





--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] joined tables with USING and GROUPBY on the USING() column

2016-09-07 Thread Geoff Winkless
I'll start by saying that I'm sure I'm missing something obvious...

I have a query that is working fine on all my servers except one. The
only obvious difference is that the failing one is running 9.5.3,
while most are running 9.5.4, but since the query works on a 9.5.1 box
I also have I can't imagine that's the cause.

I can simplify the query (and still retain the error) down to:

SELECT results.token FROM results INNER JOIN tokens USING(token) GROUP BY token;

ERROR: column "results.token" must appear in the GROUP BY clause or be
used in an aggregate function

All well and good, and I'm happy enough to change the query's GROUP BY
to include the table name, but it's confusing me how it works OK on
all servers except one.

Is there some configuration option I'm missing?

Thanks!

Geoff




Re: [GENERAL] PostgreSQL Database performance

2016-09-07 Thread John R Pierce

On 9/2/2016 8:38 PM, Pradeep wrote:

...  In task bar it is showing 2.7GB Utilization ...


odd, the task bar doesn't show any sort of memory utilization on any of 
my windows systems.  are you referring instead to the Task Manager?


Note the Windows Task Manager by default doesn't show shared memory 
resources either, you'd have to go to the 'details...' view, then add a 
column for 'Memory (shared working set)' to see this.


also note, the 'working set' is the memory thats actually being used, 
not potentially allocated.   so even if you've told PG it can have 10GB 
of shared buffers, if your database accesses since restarting the 
database server have only touched 2GB of actual data, thats all that can 
be 'working set'



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] PostgreSQL Database performance

2016-09-07 Thread Merlin Moncure
On Tue, Sep 6, 2016 at 11:12 PM, Pradeep  wrote:
> Dear Naveed,
>
> I am using PostgreSQL 9.3 version on Windows .After changing these
> parameters, I have not seen any resource management utilization.
>
> I have observed before and after changing  the parameter values ,it is not
> reflecting the memory level. Maximum utilization of RAM is 3GB only.
>
> So kindly let me ,whether it will impact the  RAM utilization or not?

Postgres reserves some memory for itself and relies on the operating
system to buffer the rest.  So this is not really unusual or
interesting.  What would be interesting is specific examples of things
that are not running as fast as you think they should be.

merlin




Re: [GENERAL] pgAdmin 4 records limit of 2000

2016-09-07 Thread Adrian Klaver

On 09/06/2016 12:58 PM, dudedoe01 wrote:

Hi Adrian,

The query tool toolbar is disabled in my pgAdmin 4. Is there anyway I can
enable it.


I don't know. I rarely use pgAdmin3 and have only looked at the new 
pgAdmin once or twice. I would say you will find a more informed 
audience here:


https://www.postgresql.org/list/pgadmin-support/










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] a column definition list is required for functions returning "record"

2016-09-07 Thread Merlin Moncure
On Fri, Sep 2, 2016 at 6:55 PM, Jim Nasby wrote:
> On 8/29/16 6:28 AM, Tom Lane wrote:
>>
>> Pavel Stehule  writes:
>>>
>>> > 2016-08-29 1:59 GMT+02:00 Jim Nasby :

 >> It would be nice if there was a way to pass dynamically formed
 >> records
 >> around, similar to how you can pass the results of row() around.
 >> Someone
 >> else has actually be asking about this at
 >> https://github.com/decibel/pg_
 >> lambda/issues/1.
>>>
>>> > Probably there is a space to be PLpgSQL more flexible - but there are
>>> > limits - PLpgSQL is black box for SQL engine, and when output is any
>>> > record
>>> > type, then SQL engine knows zero about returning data structure in
>>> > preprocessing time.
>>
>> Exactly.  You can pass anonymous record types around today, as long as you
>> don't do anything that requires knowing what their contents are, either in
>> the function or in the calling query:
>
> What I was thinking of is something (like a function) that has explicitly
> defined what the contents of the record are.

We have that already, it's named 'json_each_text' :-).

merlin
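For anyone following along, json_each_text (9.3+) does exactly this: it flattens a row's JSON representation into explicit key/value text pairs.

```sql
SELECT * FROM json_each_text('{"a": 1, "b": "x"}'::json);
--  key | value
-- -----+-------
--  a   | 1
--  b   | x
```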




Re: [GENERAL] PostgreSQL Database performance

2016-09-07 Thread Naveed Shaikh
Hello Pradeep,

After making the changes have you restarted the cluster?


---
Warm Regards,
--
Naveed Shaikh


On Wed, Sep 7, 2016 at 9:42 AM, Pradeep  wrote:

> Dear Naveed,
>
>
>
> I am using PostgreSQL 9.3 version on Windows .After changing these
> parameters, I have not seen any resource management utilization.
>
> I have observed before and after changing  the parameter values ,it is not
> reflecting the memory level. Maximum utilization of RAM is 3GB only.
>
> So kindly let me ,whether it will impact the  RAM utilization or not?
>
>
>
> Thanks & Regards
>
> Pradeep Kanth
>
> Ext : 3026
>
>
>
> *From:* Naveed Shaikh [mailto:naveed.sha...@enterprisedb.com]
> *Sent:* 06 September, 2016 11:22 PM
> *To:* Pradeep
> *Cc:* pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] PostgreSQL Database performance
>
>
>
> Which version of PostgreSQL are you using on your windows?
>
>
>
> Increasing work_mem can lead to far less disk-swapping, and therefore far
> quicker queries. However, it can cause problems if set too high, and should
> be constrained taking into account max_connections. The following
> calculation is what is typically recommended to determine a decent
> work_mem value:
>
> Total RAM * 0.25 / max_connections
>
> If there are large reporting queries that run on the database which
> require more work memory than a typical connection,work_mem can be set
> for those particular queries. If, for example, there is a reporting user
> that only runs infrequent but large reports, a specific work_mem setting
> can be applied to that particular role.
> e.g.
>
> ALTER ROLE reporting SET work_mem = '64MB';
>
>
> ---
>
> Warm Regards,
>
> --
>
> Naveed Shaikh
>
>
>
> On Sat, Sep 3, 2016 at 9:08 AM, Pradeep  wrote:
>
> Dear Team,
>
>
>
> Could you please help me, after changing the below parameters in
> PostgreSQL configuration file it was not reflecting in OS level and also
> Database performance is degrading.
>
>
>
> *Example*: I am using Windows 2008 R2 server .For PostgreSQL I have
> allocated 24GB RAM out of 32GB.
>
> However after changing the below parameters, In task bar it is showing
> 2.7GB Utilization even though my utilization is more.
>
> So kindly suggest us, whether it will impact or not in Open source
> PostgreSQL database
>
>
>
> max_connections = 100
>
> shared_buffers = 512MB
>
> effective_cache_size = 24GB
>
> work_mem = 110100kB
>
> maintenance_work_mem = 2GB
>
> checkpoint_segments = 64
>
> checkpoint_completion_target = 0.9
>
> wal_buffers = 16MB
>
> default_statistics_target = 100
>
>
>
> Thanks & Regards
>
> Pradeep Kanth
>
>
>
>
>
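Plugging the numbers quoted above into Naveed's formula (32GB total RAM, max_connections = 100) gives roughly 32 * 1024MB * 0.25 / 100, which is about 82MB. The statements below are illustrative, not a recommendation for this particular box:

```sql
-- 32GB * 0.25 / 100 connections ≈ 82MB per sort/hash operation
SET work_mem = '82MB';                        -- session-level override
ALTER ROLE reporting SET work_mem = '64MB';   -- per-role, as suggested above
```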


Re: [GENERAL] Restricted access on DataBases

2016-09-07 Thread Charles Clavadetscher

Hello

I overlooked the additional questions...

On 09/07/2016 06:45 PM, Charles Clavadetscher wrote:

Hello

On 09/07/2016 03:24 PM, Durumdara wrote:

Dear Everybody!

I read the documentation based on your example. First reactions.
2016-09-05 18:25 GMT+02:00 Charles Clavadetscher <clavadetsc...@swisspug.org>:


GRANT us_a, us_b, us_c TO main_admin;



Ah, it's good. I can merge the "owner" rights to one. :-)
It's like "doubling"! :-)

Here an example (obviously you will choose secure passwords and
initialize them using \password . This is just a very
simple example). I used 9.5 but it would work with earlier versions
as well.

-- Create roles and databases

CREATE ROLE main_admin LOGIN PASSWORD 'xxx';

CREATE ROLE us_a LOGIN PASSWORD 'xxx';
CREATE DATABASE db_a;
ALTER DATABASE db_a OWNER TO us_a;

CREATE ROLE us_b LOGIN PASSWORD 'xxx';
CREATE DATABASE db_b;
ALTER DATABASE db_b OWNER TO us_b;

-- Restrict access

REVOKE CONNECT ON DATABASE db_a FROM public;
GRANT CONNECT ON DATABASE db_a TO us_a;

REVOKE CONNECT ON DATABASE db_b FROM public;
GRANT CONNECT ON DATABASE db_b TO us_b;

-- Grant all user rights to main_admin:

GRANT us_a, us_b TO main_admin;


What could be the problem with the revoking only "connect" priv? What
are/would be the silent side-effects?


None.

Just before I go on answering your questions, a general statement from
my side. If I understood you correctly you have a set of customers that
each is owner of his database. Other customers are not supposed to look
into his data. But for maintenance reasons you have a main_admin user
that must have the same privileges as your various customers.

So we have distinct databases, not distinct schemas in a single database.


For example:
Ok, us_b can't connect to db_a, but...
He can connect to db_b and may he can start(?) a multidatabase query...


He can't, how should he? In order to access other databases you would
need to set up foreign data wrappers and adjust the privileges on it.
But for your use case there is no need for it. Even if the community
would implement something like multidatabase query natively I would
expect the privileges on the database to hold on.


He can set his role to bla, and he can insert the table db_a.X...


If you are speaking still of us_b, well he can't. A user can only set a
role he belongs to (see documentation). Now if you have a user bla that
has access to db_a and you granted that role to us_b, well, yes he can,
but this is your responsibility. PostgreSQL does not protect you from
doing security design errors.
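To make the membership rule concrete, a minimal sketch continuing the earlier example (roles us_a, us_b, and main_admin from this thread):

```sql
-- main_admin was granted us_a and us_b, so he may switch into either:
SET ROLE us_a;    -- OK when executed by main_admin
RESET ROLE;

-- us_b was never granted us_a, so as us_b this fails:
SET ROLE us_a;    -- ERROR:  permission denied to set role "us_a"
```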


He can read the temp tables of db_a...


Well, a temp table is usually created within a transaction, so no other
user has access to it anyway. Besides, temp tables are created in the
owner's database, so without CONNECT there is no way in.


He can read the structure of db_a


No. User us_b has access to the structure of db_b, not db_a. This is
defined at the database level and is not global (unlike, e.g., roles).


He can break out from his sandbox by...???


AFAIK he can't. But maybe some other specialist will be able to build an
attack vector against it.

So far, I would say that you are on pretty sound ground, and that is
due to a very clean implementation by the community.

Charles




---

Other question:
Can I imagine the GRANT as present, and the DEFAULT PRIVILEGES as future?


It depends. From the requirements that you submitted, you don't need to 
alter default privileges. Your single user, let's say us_a, can create 
objects, including schemas and then tables in that schema within db_a 
and your main_admin will be able to access them via the grant of role 
us_a to him.



Your two solutions seem to be better, along the lines of "revoke public
on everything, and grant all rights on all objects in the present (GRANT)
and in the future (DEFAULT PRIVILEGES)".


No. You need to change the default privileges in other scenarios. If the
explanations so far don't match your requirements, I may have
misunderstood what you are trying to achieve. In that case, please send
a clearer description of your use case.
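One such scenario, sketched below with a hypothetical read-only role named
reporting: GRANT only covers objects that already exist, while ALTER DEFAULT
PRIVILEGES covers objects that us_a will create later.

```sql
-- Present: objects that already exist in the schema.
GRANT SELECT ON ALL TABLES IN SCHEMA public TO reporting;

-- Future: tables that us_a creates in public from now on.
ALTER DEFAULT PRIVILEGES FOR ROLE us_a IN SCHEMA public
    GRANT SELECT ON TABLES TO reporting;
```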


Charles



Very-very thank you!

dd




--
Swiss PostgreSQL Users Group
c/o Charles Clavadetscher
Treasurer
Motorenstrasse 18
CH – 8005 Zürich

http://www.swisspug.org

+---+
|     __  ___   |
|  /)/  \/   \  |
| ( / ___\) |
|  \(/ o)  ( o)   ) |
|   \_  (_  )   \ ) _/  |
| \  /\_/\)/|
|  \/ |
|   _|  |   |
|   \|_/|
|   |
| PostgreSQL 1996-2016  |
|  20 Years of Success  |
|   |
+---+


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restricted access on DataBases

2016-09-07 Thread Charles Clavadetscher

Hello

On 09/07/2016 03:24 PM, Durumdara wrote:

Dear Everybody!

I read the documentation based on your example. First reactions.
2016-09-05 18:25 GMT+02:00 Charles Clavadetscher:


GRANT us_a, us_b, us_c TO main_admin;



Ah, it's good. I can merge the "owner" rights to one. :-)
It's like "doubling"! :-)

Here an example (obviously you will choose secure passwords and
initialize them using \password . This is just a very
simple example). I used 9.5 but it would work with earlier versions
as well.

-- Create roles and databases

CREATE ROLE main_admin LOGIN PASSWORD 'xxx';

CREATE ROLE us_a LOGIN PASSWORD 'xxx';
CREATE DATABASE db_a;
ALTER DATABASE db_a OWNER TO us_a;

CREATE ROLE us_b LOGIN PASSWORD 'xxx';
CREATE DATABASE db_b;
ALTER DATABASE db_b OWNER TO us_b;

-- Restrict access

REVOKE CONNECT ON DATABASE db_a FROM public;
GRANT CONNECT ON DATABASE db_a TO us_a;

REVOKE CONNECT ON DATABASE db_b FROM public;
GRANT CONNECT ON DATABASE db_b TO us_b;

-- Grant all user rights to main_admin:

GRANT us_a, us_b TO main_admin;
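The setup above can be sanity-checked without attempting any logins, using the
built-in privilege inspection function. A sketch; note that
has_database_privilege also accounts for privileges inherited through role
membership, which is why main_admin passes:

```sql
SELECT has_database_privilege('us_a', 'db_a', 'CONNECT');        -- t
SELECT has_database_privilege('us_b', 'db_a', 'CONNECT');        -- f
SELECT has_database_privilege('main_admin', 'db_a', 'CONNECT');  -- t, via us_a
```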


What could be the problem with the revoking only "connect" priv? What
are/would be the silent side-effects?


None.

Just before I go on answering your questions, a general statement from
my side. If I understood you correctly, you have a set of customers,
each of whom owns his own database. Other customers are not supposed to
look into his data. But for maintenance reasons you have a main_admin
user that must have the same privileges as your various customers.


So we have distinct databases, not distinct schemas in a single database.


For example:
Ok, us_b can't connect to db_a, but...
He can connect to db_b and maybe he can start(?) a multidatabase query...


He can't; how could he? In order to access other databases you would
need to set up foreign data wrappers and adjust the privileges on them.
But for your use case there is no need for that. Even if the community
implemented something like multidatabase queries natively, I would
expect the privileges on the database to still hold.



He can set his role to bla, and he can insert the table db_a.X...


If you are still speaking of us_b: well, he can't. A user can only set a
role he belongs to (see documentation). Now if you have a user bla that
has access to db_a and you granted that role to us_b, then yes, he can,
but that is your responsibility. PostgreSQL does not protect you from
making security design errors.



He can read the temp tables of db_a...


Well, a temp table is usually created within a transaction, so no other
user has access to it anyway. Besides, temp tables are created in the
owner's database, so without CONNECT there is no way in.



He can read the structure of db_a


No. User us_b has access to the structure of db_b, not db_a. This is
defined at the database level and is not global (unlike, e.g., roles).



He can break out from his sandbox by...???


AFAIK he can't. But maybe some other specialist will be able to build an
attack vector against it.


So far, I would say that you are on pretty sound ground, and that is
due to a very clean implementation by the community.


Charles




---

Other question:
Can I imagine the GRANT as present, and the DEFAULT PRIVILEGES as future?

Your two solutions seem to be better, along the lines of "revoke public
on everything, and grant all rights on all objects in the present (GRANT)
and in the future (DEFAULT PRIVILEGES)".

Very-very thank you!

dd




Re: [GENERAL] PostgreSQL Database performance

2016-09-07 Thread Pradeep
Dear Naveed,

 

I am using PostgreSQL 9.3 on Windows. After changing these parameters, I have
not seen any change in resource utilization.

I have observed the memory usage before and after changing the parameter
values, and the change is not reflected at the memory level. Maximum
utilization of RAM is 3 GB only.

So kindly let me know whether it will impact RAM utilization or not.

 

Thanks & Regards

Pradeep Kanth

Ext : 3026

 

From: Naveed Shaikh [mailto:naveed.sha...@enterprisedb.com] 
Sent: 06 September, 2016 11:22 PM
To: Pradeep
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] PostgreSQL Database performance

 

Which version of PostgreSQL are you using on your windows?

 

Increasing work_mem can lead to far less disk-swapping, and therefore far 
quicker queries. However, it can cause problems if set too high, and should be 
constrained taking into account max_connections. The following calculation is 
what is typically recommended to determine a decent work_mem value:

Total RAM * 0.25 / max_connections

If there are large reporting queries that run on the database and require 
more work memory than a typical connection, work_mem can be set for those 
particular queries. If, for example, there is a reporting user that only runs 
infrequent but large reports, a specific work_mem setting can be applied to 
that particular role.
e.g.

ALTER ROLE reporting SET work_mem = '64MB';
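Plugging in the figures from Pradeep's posts (32 GB of total RAM and
max_connections = 100), the rule of thumb works out to roughly 80 MB per
connection; the arithmetic can be checked from SQL:

```sql
-- 32 GB * 0.25 / 100 connections, expressed in bytes:
SELECT pg_size_pretty((32::bigint * 1024 * 1024 * 1024 * 0.25 / 100)::bigint);

-- A per-session override is handy for testing before editing postgresql.conf:
SET work_mem = '80MB';
```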




---

Warm Regards,

--

Naveed Shaikh

 

On Sat, Sep 3, 2016 at 9:08 AM, Pradeep wrote:

Dear Team,

 

Could you please help me? After changing the parameters below in the 
PostgreSQL configuration file, the change is not reflected at the OS level, 
and database performance is also degrading.

 

Example: I am using a Windows 2008 R2 server. For PostgreSQL I have allocated 
24 GB RAM out of 32 GB.

However, after changing the parameters below, the task bar shows only 2.7 GB 
utilization even though my actual utilization is higher.

So kindly advise us whether it will have an impact on the open-source 
PostgreSQL database.

 

max_connections = 100

shared_buffers = 512MB

effective_cache_size = 24GB

work_mem = 110100kB

maintenance_work_mem = 2GB

checkpoint_segments = 64

checkpoint_completion_target = 0.9

wal_buffers = 16MB

default_statistics_target = 100

 

Thanks & Regards

Pradeep Kanth

 

 



[GENERAL] Setup pgpool-II with streaming replication

2016-09-07 Thread Ashish Chauhan
Hi,

We have Postgres 9.4 streaming replication in our production environment. 
Currently we are having issues with our DB connections and are planning to 
implement pgpool-II for connection pooling with streaming replication.

Key Challenges:


* Cannot afford downtime on db servers.

* Add PgPool-II with current streaming replication.

* Not planning to use PgPoolAdmin tool.

I have gone through the articles below, but in one way or another they require 
a DB server restart, which is not possible in our case. Could you please 
provide some documentation or steps on how to achieve connection pooling 
without any downtime?

http://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/
http://www.wirabumisoftware.com/en/openbravo-developer-guide/postgresqldevtutorial/25-pgsqlid/82-install-pgpool2

Thanks,
Ashish Chauhan



Re: [GENERAL] pgAdmin 4 records limit of 2000

2016-09-07 Thread dudedoe01
Hi Adrian,

The query tool toolbar is disabled in my pgAdmin 4. Is there any way I can
enable it?

 



--
View this message in context: 
http://postgresql.nabble.com/pgAdmin-4-records-limit-of-2000-tp5919751p5919825.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] PGDATA / data_directory

2016-09-07 Thread Christoph Berg
Re: Jehan-Guillaume de Rorthais 2016-09-07 <20160907140816.3e13eaa3@firost>
> Indeed. I never noticed data_directory was set in postgresql.conf file...
> 
> But then, why starting PostgreSQL with the following command ?
> 
>   /usr/lib/postgresql/9.4/bin/postgres -D /var/lib/postgresql/9.4/main \
> -c config_file=/etc/postgresql/9.4/main/postgresql.conf
> 
> It seems the following one work as expected and seems more logical with this
> setup:
> 
>   /usr/lib/postgresql/9.4/bin/postgres -D /etc/postgresql/9.4/main/

TBH, I've been wondering about that myself, but never bothered to do
anything about it. Digging in the (now git) history, it's been like
that from the very beginning in 2005:

https://anonscm.debian.org/cgit/pkg-postgresql/postgresql-common.git/commit/?id=9fa563e78366db3b27d680607c202b6fbb00bef2

It got touched a bit when 8.0 support was added, but basically not
changed since then.

https://anonscm.debian.org/cgit/pkg-postgresql/postgresql-common.git/commit/?id=fecbaad06c3683452228d31c0baffb01ba626a9a

I'll see if I can simply remove it. Thanks for the suggestion.

Christoph




Re: [GENERAL] Passing varchar parameter to INTERVAL

2016-09-07 Thread Alexander Farber
Thank you, this works well now and comments about IN is appreciated too :-)


Re: [GENERAL] Passing varchar parameter to INTERVAL

2016-09-07 Thread Sándor Daku
On 7 September 2016 at 15:05, Alexander Farber 
wrote:

> Good afternoon,
>
> when trying to create a custom function to temporary ban a user:
>
> CREATE OR REPLACE FUNCTION words_ban_user(
> IN in_uid integer,
> IN in_until varchar,-- '1 week' OR '1 month' OR '1 year'
> IN in_reason varchar)
> RETURNS void AS
> $func$
> BEGIN
> 
> UPDATE words_users SET
> banned_until  = CURRENT_TIMESTAMP + INTERVAL in_until,
> banned_reason = in_reason,
> vip_until = vip_until + INTERVAL in_until,-- for
> paying user
> grand_until   = grand_until + INTERVAL in_until
> WHERE uid = in_uid;
>
> END
> $func$ LANGUAGE plpgsql;
>
> in 9.5.4 I unfortunately get the error:
>
> ERROR:  syntax error at or near "in_until"
> LINE 69: ...   banned_until = CURRENT_TIMESTAMP + INTERVAL in_until,
>^
> Is there please a better way here?
>
> Thank you
> Alex
>
>
First, there is no need to write IN explicitly. Every parameter is an IN
parameter by default unless it is explicitly declared as an OUT parameter.
And I think the :: casting operator is more straightforward.

If I wrote this function it would look like this:

CREATE OR REPLACE FUNCTION words_ban_user(
in_uid integer,
in_until varchar,-- '1 week' OR '1 month' OR '1 year'
in_reason varchar)
RETURNS void AS
$func$
BEGIN

UPDATE words_users SET
banned_until  = CURRENT_TIMESTAMP + in_until::interval,
banned_reason = in_reason,
vip_until = vip_until + in_until::interval,-- for
paying user
grand_until   = grand_until + in_until::interval
WHERE uid = in_uid;

END
$func$ LANGUAGE plpgsql;

And as the others pointed out, you could declare in_until as interval,
skip the casting entirely, and still call the function as
select words_ban_user(1, '1 day', 'attacking other users')
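For completeness, a sketch of that interval-typed variant (untested, but
assembled directly from the function posted above):

```sql
CREATE OR REPLACE FUNCTION words_ban_user(
    in_uid integer,
    in_until interval,   -- '1 week', '1 month', '1 year', ...
    in_reason varchar)
    RETURNS void AS
$func$
BEGIN
    UPDATE words_users SET
        banned_until  = CURRENT_TIMESTAMP + in_until,
        banned_reason = in_reason,
        vip_until     = vip_until + in_until,      -- for paying users
        grand_until   = grand_until + in_until
    WHERE uid = in_uid;
END
$func$ LANGUAGE plpgsql;
```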

Regards,
Sándor


Re: [GENERAL] Passing varchar parameter to INTERVAL

2016-09-07 Thread Adrian Klaver

On 09/07/2016 06:05 AM, Alexander Farber wrote:

Good afternoon,

when trying to create a custom function to temporary ban a user:

CREATE OR REPLACE FUNCTION words_ban_user(
IN in_uid integer,
IN in_until varchar,-- '1 week' OR '1 month' OR '1 year'
IN in_reason varchar)
RETURNS void AS
$func$
BEGIN

UPDATE words_users SET
banned_until  = CURRENT_TIMESTAMP + INTERVAL in_until,
banned_reason = in_reason,
vip_until = vip_until + INTERVAL in_until,-- for
paying user
grand_until   = grand_until + INTERVAL in_until
WHERE uid = in_uid;

END
$func$ LANGUAGE plpgsql;

in 9.5.4 I unfortunately get the error:

ERROR:  syntax error at or near "in_until"
LINE 69: ...   banned_until = CURRENT_TIMESTAMP + INTERVAL in_until,
   ^
Is there please a better way here?


DO
$$
DECLARE
in_until varchar := '1 day';
banned_until timestamp;
BEGIN
banned_until  = CURRENT_TIMESTAMP +  in_until::interval;
RAISE NOTICE '%', banned_until;
END$$;

NOTICE:  2016-09-08 06:50:14.051719

When I did it your way I got:

test=> DO
$$
DECLARE
in_until varchar := '1 day';
banned_until timestamp;
BEGIN
banned_until  = CURRENT_TIMESTAMP + INTERVAL in_until;
END$$;
ERROR:  column "interval" does not exist
LINE 1: SELECT CURRENT_TIMESTAMP + INTERVAL in_until
   ^
QUERY:  SELECT CURRENT_TIMESTAMP + INTERVAL in_until
CONTEXT:  PL/pgSQL function inline_code_block line 6 at assignment




Thank you
Alex




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Passing varchar parameter to INTERVAL

2016-09-07 Thread Tom Lane
Alexander Farber  writes:
> unfortunately, I can not call INTERVAL 'in_until', that wouldn't work.

No, because the syntax TYPENAME 'LITERAL' only works for simple string
literals.  What you want here is a run-time cast, either
CAST(in_until AS interval) or in_until::interval.
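The distinction can be seen side by side (a psql sketch):

```sql
SELECT now() + INTERVAL '1 week';           -- ok: TYPENAME 'LITERAL' form
SELECT now() + '1 week'::interval;          -- ok: run-time cast
SELECT now() + CAST('1 week' AS interval);  -- ok: equivalent run-time cast
-- With a variable or parameter, only the cast forms work;
-- "INTERVAL in_until" is rejected by the parser.
```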

> I would prefer to call my custom function as
> select words_ban_user(1, '1 day', 'attacking other users');

That is not a reason not to declare the argument as interval.

> and not as
> select words_ban_user(1, CURRENT_TIMESTAMP + '1 day', 'attacking other
> users');

Hm?  That would be passing a timestamp not an interval.

regards, tom lane




Re: [GENERAL] pgadmin4 rc1 query tool performance

2016-09-07 Thread Attila Soki

> On 07.09.2016 08:29, Martijn Tonies (Upscene Productions) wrote:
> 
 i testing the latest release of pgadmin4 (rc1) and noticed that the query 
 tool is significantly slower than the query tool in pgadmin3.
 i am not sure if this occurs only on my computer or only under os x 
 (10.10.5) or is this a known behavior.
>>> 
>>> I don't know. You might get an answer quicker at the link below though:
>>> 
>>> https://www.postgresql.org/list/pgadmin-hackers/
>> 
>> sorry for the noise, i posted this accidentally here instead of 
>> pgadmin-support. i got already an answer there.
> 
> What was the answer?

http://www.mail-archive.com/pgadmin-support@postgresql.org/msg15121.html

>Dave Page wrote on Tue, 06 Sep 2016 03:48:07 -0700:
>
>On Tue, Sep 6, 2016 at 11:41 AM, Michal Kozusznik
> wrote:
>> On 6.9.2016 11:51, Dave Page wrote:
>>>
>>> On Tue, Sep 6, 2016 at 10:47 AM, Attila Soki  wrote:

 Hi,

 i testing pgadmin4 (rc1) and noticed that the query tool is significantly
 slower than the query tool in pgadmin3.
 i am not sure if this occurs only on my computer or only under os x
 (10.10.5) or is this a known behavior.
>>>
>>> It's a natural consequence of it being a web application.
>>
>>
>> Indeed it is, but not to the extent shown by the OP.
>> Or in other words, it should be considered a show-stopper for a tool like
>> pgAdmin.
>>
>> I really cannot imagine daily work with such a slow tool. I still hope for
>> significant improvement in this area.
>
>Patches are welcome

Attila





Re: [GENERAL] Passing varchar parameter to INTERVAL

2016-09-07 Thread Alexander Farber
Hello Rob,

On Wed, Sep 7, 2016 at 3:24 PM, rob stone  wrote:

>
> I think the interval values need to be quoted.
> In any event I always use the P or T syntax. 'P1D' means add one day,
>

unfortunately, I can not call INTERVAL 'in_until', that wouldn't work.

Also 'P1D' vs. '1 day' seems to be just a matter of taste.

Finally, to provide more context to my question -

I would prefer to call my custom function as

select words_ban_user(1, '1 day', 'attacking other users');

and not as

select words_ban_user(1, CURRENT_TIMESTAMP + '1 day', 'attacking other
users');

because in addition to the temporary ban I would like to prolong
VIP periods for paying users (to avoid disputes), and that would be
more difficult in the latter case (it would require more date acrobatics)...

Regards
Alex


Re: [GENERAL] Passing varchar parameter to INTERVAL

2016-09-07 Thread hubert depesz lubaczewski
On Wed, Sep 07, 2016 at 03:05:38PM +0200, Alexander Farber wrote:
> ERROR:  syntax error at or near "in_until"
> LINE 69: ...   banned_until = CURRENT_TIMESTAMP + INTERVAL in_until,
>^
> Is there please a better way here?

Why don't you simply make the in_until parameter an interval? What is the
point of accepting a varchar if you're unconditionally casting it to interval
anyway?

Having said that, the error looks rather strange for this.

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] Passing varchar parameter to INTERVAL

2016-09-07 Thread rob stone
Hello,
On Wed, 2016-09-07 at 15:05 +0200, Alexander Farber wrote:
> Good afternoon,
> 
> when trying to create a custom function to temporary ban a user:
> 
> CREATE OR REPLACE FUNCTION words_ban_user(
>     IN in_uid integer,
>     IN in_until varchar,    -- '1 week' OR '1 month' OR '1 year'
>     IN in_reason varchar)
>     RETURNS void AS
> $func$
> BEGIN
> 
>     UPDATE words_users SET
>     banned_until  = CURRENT_TIMESTAMP + INTERVAL
> in_until,
>     banned_reason = in_reason,
>     vip_until = vip_until + INTERVAL in_until,    --
> for paying user
>     grand_until   = grand_until + INTERVAL in_until
>     WHERE uid = in_uid;
> 
> END
> $func$ LANGUAGE plpgsql;
> 
> in 9.5.4 I unfortunately get the error:
> 
> ERROR:  syntax error at or near "in_until"
> LINE 69: ...   banned_until = CURRENT_TIMESTAMP + INTERVAL
> in_until,
>    ^
> Is there please a better way here?
> 
> Thank you
> Alex
> 


I think the interval values need to be quoted.
In any event I always use the P or T syntax. 'P1D' means add one day,
etc.

HTH,
Rob




Re: [GENERAL] Restricted access on DataBases

2016-09-07 Thread Durumdara
Dear Everybody!

I read the documentation based on your example. First reactions.
2016-09-05 18:25 GMT+02:00 Charles Clavadetscher :

>
>> GRANT us_a, us_b, us_c TO main_admin;
>>
>

Ah, it's good. I can merge the "owner" rights to one. :-)
It's like "doubling"! :-)



>
>>
>
> Here an example (obviously you will choose secure passwords and initialize
> them using \password . This is just a very simple example). I
> used 9.5 but it would work with earlier versions as well.
>
> -- Create roles and databases
>
> CREATE ROLE main_admin LOGIN PASSWORD 'xxx';
>
> CREATE ROLE us_a LOGIN PASSWORD 'xxx';
> CREATE DATABASE db_a;
> ALTER DATABASE db_a OWNER TO us_a;
>
> CREATE ROLE us_b LOGIN PASSWORD 'xxx';
> CREATE DATABASE db_b;
> ALTER DATABASE db_b OWNER TO us_b;
>
> -- Restrict access
>
> REVOKE CONNECT ON DATABASE db_a FROM public;
> GRANT CONNECT ON DATABASE db_a TO us_a;
>
> REVOKE CONNECT ON DATABASE db_b FROM public;
> GRANT CONNECT ON DATABASE db_b TO us_b;
>
> -- Grant all user rights to main_admin:
>
> GRANT us_a, us_b TO main_admin;
>

What could be the problem with the revoking only "connect" priv? What
are/would be the silent side-effects?
For example:
Ok, us_b can't connect to db_a, but...
He can connect to db_b and maybe he can start(?) a multidatabase query...
He can set his role to bla, and he can insert the table db_a.X...
He can read the temp tables of db_a...
He can read the structure of db_a
He can break out from his sandbox by...???

---

Other question:
Can I imagine the GRANT as present, and the DEFAULT PRIVILEGES as future?

Your two solutions seem to be better, along the lines of "revoke public
on everything, and grant all rights on all objects in the present (GRANT)
and in the future (DEFAULT PRIVILEGES)".

Very-very thank you!

dd


Re: [GENERAL] pgadmin4 rc1 query tool performance

2016-09-07 Thread rob stone
Hello,
On Tue, 2016-09-06 at 18:40 +0200, Attila Soki wrote:
> > 
> > On 06.09.2016 15:23, Adrian Klaver wrote:
> > 
> > On 09/06/2016 02:35 AM, Attila Soki wrote:
> > > 
> > > Hi,
> > > 
> > > i testing the latest release of pgadmin4 (rc1) and noticed that
> > > the query tool is significantly slower than the query tool in
> > > pgadmin3.
> > > i am not sure if this occurs only on my computer or only under os
> > > x (10.10.5) or is this a known behavior.
> > 
> > I don't know. You might get an answer quicker at the link below
> > though:
> > 
> > https://www.postgresql.org/list/pgadmin-hackers/
> 
> sorry for the noise, i posted this accidentally here instead of
> pgadmin-support. i got already an answer there.
> 
> Attila
> 

IMO the answer is nonsensical.

I'd do the following:-

1) Alter your postgresql.conf to log every statement.
2) Restart the server.
3) Run your tests again.
4) Examine the postgresql.log file for the queries created and compare
them. Also, pgAdmin might be querying the catalogues in an inefficient
manner, thus leading to the increases in elapsed times.
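For step 1, a postgresql.conf fragment along these lines would do (a sketch;
'all' is very verbose and best enabled only for the duration of the
comparison):

```
# postgresql.conf -- temporary settings for comparing the two tools
log_statement = 'all'            # log the text of every statement
log_min_duration_statement = 0   # also log each statement's duration (ms)
```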

My 2 cents worth.

HTH,
Rob




[GENERAL] Passing varchar parameter to INTERVAL

2016-09-07 Thread Alexander Farber
Good afternoon,

when trying to create a custom function to temporary ban a user:

CREATE OR REPLACE FUNCTION words_ban_user(
IN in_uid integer,
IN in_until varchar,-- '1 week' OR '1 month' OR '1 year'
IN in_reason varchar)
RETURNS void AS
$func$
BEGIN

UPDATE words_users SET
banned_until  = CURRENT_TIMESTAMP + INTERVAL in_until,
banned_reason = in_reason,
vip_until = vip_until + INTERVAL in_until,-- for
paying user
grand_until   = grand_until + INTERVAL in_until
WHERE uid = in_uid;

END
$func$ LANGUAGE plpgsql;

in 9.5.4 I unfortunately get the error:

ERROR:  syntax error at or near "in_until"
LINE 69: ...   banned_until = CURRENT_TIMESTAMP + INTERVAL in_until,
   ^
Is there please a better way here?

Thank you
Alex


Re: [GENERAL] PGDATA / data_directory

2016-09-07 Thread Jehan-Guillaume de Rorthais
On Sun, 4 Sep 2016 11:40:38 +0200
Christoph Berg  wrote:

> Re: Benoit Lobréau 2016-08-31
> 
> > Hi,
> > 
> > My company is using PGDATA to store configuration files and the guc
> > data_directory to give the path to the instance directory.
> > 
> > They would use it like this:
> > 
> > pg_ctl start -D  -w
> > 
> > with this directory setup:
> > 
> > /CONFDIR => postgresql.conf pg_hba.conf pg_ident.conf
> > /SYSTEM => All the normal stuff in the postgres instance
> > directory + recovery.conf recovery.done etc...
> > 
> > Is it commonly used ?
> 
> That's the default cluster layout on Debian (and Debian-based
> distributions).
> 
> /etc/postgresql/9.5/main/
> /var/lib/postgresql/9.5/main/
> /var/run/postgresql/9.5-main.pg_stat_tmp/

Indeed. I never noticed data_directory was set in postgresql.conf file...

But then, why starting PostgreSQL with the following command ?

  /usr/lib/postgresql/9.4/bin/postgres -D /var/lib/postgresql/9.4/main \
-c config_file=/etc/postgresql/9.4/main/postgresql.conf

It seems the following one works as expected and seems more logical with this
setup:

  /usr/lib/postgresql/9.4/bin/postgres -D /etc/postgresql/9.4/main/

-- 
Jehan-Guillaume de Rorthais

