[GENERAL] ORDER with CASE and Random for each case

2017-10-16 Thread Alex Magnum
Hi,

I have a view that I would like to sort, where I divide the result into 3
different groups.
Each of these 3 groups should then have a random sort order.
As I am using it with an offset and limit, the random order needs to stay the
same across pages.

For example:
SELECT user_id, age  FROM view_users
ORDER BY CASE WHEN age < 20 THEN 1
  WHEN age < 50 THEN 2
  ELSE 3
 END
OFFSET 0  LIMIT 20;

Say I have 30 users in each age group. I want the rows within each of the 3
groups ordered randomly, but the order has to stay stable while paging.

The way I would do it now is to use setseed() and a union of 3 selects with
the 3 conditions and random() in each of the three.

Is there a better and more efficient way to do it in one query?
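A minimal sketch of one-query alternatives (untested against the original
view; the seed value is arbitrary): seed the session RNG so the per-row
random() values repeat across pages, or derive a deterministic pseudo-random
key from the row itself.

SELECT setseed(0.42);   -- run before each page's query, in the same session

SELECT user_id, age
FROM view_users
ORDER BY CASE WHEN age < 20 THEN 1
              WHEN age < 50 THEN 2
              ELSE 3
         END,
         random()                            -- stable only if the scan order is stable
         -- or: md5(user_id::text || 'salt')  -- deterministic regardless of seed
OFFSET 0 LIMIT 20;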

Thanks for any suggestions

A


[GENERAL] problem

2017-09-19 Thread Alex Samad
Hi

I setup a sync rep cluster 9.6

unfortunately I have made a bit of an issue for myself.

my backup site is full on the data partition, which I believe has led to
my write site filling up its pg_xlog partition - I believe it is holding
WAL it can no longer ship to the replica.

so now neither site will start

1) how / what steps do I need to take to fix this - without just adding
space!
2) how do I stop this from happening - I think the space is being used up by
the repmgr stats page - I think - I will not know until I can get the servers
back up

Alex


Re: [GENERAL] Begginers question

2017-08-17 Thread Alex Samad
On 16 August 2017 at 20:55, Achilleas Mantzios <ach...@matrix.gatewaynet.com
> wrote:

> On 16/08/2017 13:46, Alex Samad wrote:
>
>
>
> On 16 August 2017 at 16:16, Michael Paquier <michael.paqu...@gmail.com>
> wrote:
>
>> On Wed, Aug 16, 2017 at 2:32 PM, Alex Samad <a...@samad.com.au> wrote:
>> > 1) why did it fill up this time and not previously
>> > I add this
>> > archive_command = '/bin/true'
>> > wal_keep_segments = 1000 # <<< I'm guessing its this
>> >
>> > 2) how do I fix up, can I just remove the files from the pg_xlog
>> directory
>>
>
How can I work out how much space this is going to use?
I see in the comments that a segment is 16M, so 1000 would have been 16G. So I
thought I would set it at 100, i.e. 1.6G, but it ended up using about 4.5G. So
now I am confused.
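For reference, a rough back-of-the-envelope, assuming the default 16 MB
segment size: wal_keep_segments = 100 only pins about 100 * 16 MB = 1.6 GB,
but pg_xlog also holds the segments kept for checkpoints (up to max_wal_size,
1 GB by default on 9.6) plus anything not yet archived, which is roughly how
4.5 GB can accumulate. A quick way to see what is actually there (superuser;
9.6 directory layout assumed):

SELECT count(*) AS segments,
       pg_size_pretty(count(*) * 16 * 1024 * 1024) AS approx_size
FROM pg_ls_dir('pg_xlog') AS f
WHERE f ~ '^[0-9A-F]{24}$';    -- count only the 16 MB WAL segment files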


>
>> Don't do that. those files are managed by Postgres so you may finish
>> with a corrupted cluster. Instead you should lower the value of
>>
>
> Too late, its okay its a learning experience.
>
>
>> wal_keep_segments, reload the server parameters, and then enforce two
>> checkpoints to force WAL segments to be recycled. Note that this
>>
>
> how do I force check points
>
> checkpoint ;
> (the ; is not meant as a smiley or whatever )
>
>
>
>> depends also on the values of checkpoint_segments
>> (max_wal_size/min_wal_size in Postgres 9.5 and onwards).
>> --
>> Michael
>>
>
> thanks
>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>


Re: [GENERAL] cluster question

2017-08-16 Thread Alex Samad
On 17 August 2017 at 10:51, Ian Barwick <ian.barw...@2ndquadrant.com> wrote:

> On 08/16/2017 02:41 PM, Alex Samad wrote:
> (...)
> >
> > okay think I have it setup, but when i do a switch over it gets stuck
> here.
> >
> >
> >
> > NOTICE: STANDBY PROMOTE successful
> > NOTICE: Executing pg_rewind on old master server
> > NOTICE: 3 files copied to /var/lib/pgsql/9.6/data
> > NOTICE: restarting server using '/usr/pgsql-9.6/bin/pg_ctl  -w -D
> /var/lib/pgsql/9.6/data -m fast restart'
> > pg_ctl: PID file "/var/lib/pgsql/9.6/data/postmaster.pid" does not exist
> > Is server running?
> > starting server anyway
> > NOTICE: STANDBY FOLLOW successful
>
> From the repmgr README:
>
> >> You must ensure that following a server start using `pg_ctl`, log output
> >> is not send to STDERR (the default behaviour). If logging is not
> configured,
> >> we recommend setting `logging_collector=on` in `postgresql.conf` and
> >> providing an explicit `-l/--log` setting in `repmgr.conf`'s
> `pg_ctl_options`
> >> parameter.
>
> i.e. when the old primary is restarted with:
>
> /usr/pgsql-9.6/bin/pg_ctl  -w -D /var/lib/pgsql/9.6/data -m fast
> restart
>
> the calling process hangs, waiting for logging output from pg_ctl.
> In "repmgr.conf" set "pg_ctl_options" to something like:
>
> pg_ctl_options='-l /path/to/log'
>
>
> Regards


Thanks - simple when you know; too many new things to look at.



>
>
> Ian Barwick
>
> --
>  Ian Barwick   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [GENERAL] Begginers question

2017-08-16 Thread Alex Samad
Great I will add it to my notes.

Thanks

On 16 August 2017 at 20:55, Achilleas Mantzios <ach...@matrix.gatewaynet.com
> wrote:

> On 16/08/2017 13:46, Alex Samad wrote:
>
>
>
> On 16 August 2017 at 16:16, Michael Paquier <michael.paqu...@gmail.com>
> wrote:
>
>> On Wed, Aug 16, 2017 at 2:32 PM, Alex Samad <a...@samad.com.au> wrote:
>> > 1) why did it fill up this time and not previously
>> > I add this
>> > archive_command = '/bin/true'
>> > wal_keep_segments = 1000 # <<< I'm guessing its this
>> >
>> > 2) how do I fix up, can I just remove the files from the pg_xlog
>> directory
>>
>> Don't do that. those files are managed by Postgres so you may finish
>> with a corrupted cluster. Instead you should lower the value of
>>
>
> Too late, its okay its a learning experience.
>
>
>> wal_keep_segments, reload the server parameters, and then enforce two
>> checkpoints to force WAL segments to be recycled. Note that this
>>
>
> how do I force check points
>
> checkpoint ;
> (the ; is not meant as a smiley or whatever )
>
>
>
>> depends also on the values of checkpoint_segments
>> (max_wal_size/min_wal_size in Postgres 9.5 and onwards).
>> --
>> Michael
>>
>
> thanks
>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>


Re: [GENERAL] Begginers question

2017-08-16 Thread Alex Samad
On 16 August 2017 at 16:16, Michael Paquier <michael.paqu...@gmail.com>
wrote:

> On Wed, Aug 16, 2017 at 2:32 PM, Alex Samad <a...@samad.com.au> wrote:
> > 1) why did it fill up this time and not previously
> > I add this
> > archive_command = '/bin/true'
> > wal_keep_segments = 1000 # <<< I'm guessing its this
> >
> > 2) how do I fix up, can I just remove the files from the pg_xlog
> directory
>
> Don't do that. those files are managed by Postgres so you may finish
> with a corrupted cluster. Instead you should lower the value of
>

Too late - it's okay, it's a learning experience.


> wal_keep_segments, reload the server parameters, and then enforce two
> checkpoints to force WAL segments to be recycled. Note that this
>

how do I force checkpoints?


> depends also on the values of checkpoint_segments
> (max_wal_size/min_wal_size in Postgres 9.5 and onwards).
> --
> Michael
>

thanks


Re: [GENERAL] cluster question

2017-08-15 Thread Alex Samad
On 15 August 2017 at 16:35, Andreas Kretschmer <andr...@a-kretschmer.de>
wrote:

>
>
> Am 15.08.2017 um 05:15 schrieb Alex Samad:
>
>> Hi
>>
>> Quick question.  I have a 2 node cluster - each node has its own ip.
>>
>> But from reading this, I really need a 3rd ip, which potentially floats
>> between the nodes to which ever is the master / rw node.
>>
>> Is that right? Sort of makes sense I guess
>>
>>
> That's one way to solve that problem. If you are using repmgr/repmgrd you
> can call own commands on events like failover. And, if you are using
> pgbouncer or connection-pooling you can change the config for pgbouncer
> (redefine the databases and there connection strings) and reload pgbouncer.
> You can do that with the event-notification commands defined in your
> repmgr-config.
> Other solution: with some Java-drivers you can define several databases
> and checks (if the database read-only or rw), the next PostgreSQL 10 will
> offer a similar feature.
>
>
Okay, I think I have it set up, but when I do a switchover it gets stuck here.



NOTICE: STANDBY PROMOTE successful
NOTICE: Executing pg_rewind on old master server
NOTICE: 3 files copied to /var/lib/pgsql/9.6/data
NOTICE: restarting server using '/usr/pgsql-9.6/bin/pg_ctl  -w -D
/var/lib/pgsql/9.6/data -m fast restart'
pg_ctl: PID file "/var/lib/pgsql/9.6/data/postmaster.pid" does not exist
Is server running?
starting server anyway
NOTICE: STANDBY FOLLOW successful


doesn't look like the output from
https://github.com/2ndQuadrant/repmgr/blob/master/README.md

NOTICE: STANDBY PROMOTE successful
NOTICE: Executing pg_rewind on old master server
NOTICE: 5 files copied to /var/lib/postgresql/9.5/data
NOTICE: restarting server using '/usr/local/bin/pg_ctl -w -D
/var/lib/postgresql/9.5/node_1/data -m fast restart'
pg_ctl: PID file "/var/lib/postgresql/9.5/node_1/data/postmaster.pid"
does not exist
Is server running?
starting server anyway
NOTICE: node 1 is replicating in state "streaming"
NOTICE: switchover was successful






> PS.: please don't top-posting.
>
>
>



> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Begginers question

2017-08-15 Thread Alex Samad
Hi

So I have been playing with a streaming cluster. I have the replication
working, I believe.

But whilst attempting to do an import of my original DB, I filled up the disk
holding my pg_xlog directory.

Strangely, I have tried this before without filling it up.

so
1) why did it fill up this time and not previously?
I added this:
archive_command = '/bin/true'
wal_keep_segments = 1000 # <<< I'm guessing it's this

2) how do I fix it up - can I just remove the files from the pg_xlog directory?
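A sketch of the cleanup sequence suggested in the replies (do not delete files
from pg_xlog by hand); the value is illustrative:

-- 1. In postgresql.conf, lower the retention, e.g. wal_keep_segments = 100
-- 2. Reload the configuration and force a couple of checkpoints so old
--    segments can be recycled or removed:
SELECT pg_reload_conf();
CHECKPOINT;
CHECKPOINT;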

A


Re: [GENERAL] cluster question

2017-08-14 Thread Alex Samad
Hi

Quick question.  I have a 2 node cluster - each node has its own IP.

But from reading this, I really need a 3rd IP, which potentially floats
between the nodes to whichever is the master / rw node.

Is that right? Sort of makes sense I guess.

A

On 14 August 2017 at 16:47, Andreas Kretschmer <andr...@a-kretschmer.de>
wrote:

> On 14 August 2017 08:39:54 GMT+02:00, Alex Samad <a...@samad.com.au>
> wrote:
> >Hi
> >
> >I have setup a streaming replicating cluster, with a hot standby.
> >
> >Now I would like to change the RW to hot standby and change the hot
> >standby
> >to be the RW server.
> >
> >Is it just a matter of updating recover.conf file ?
> >
> >Alex
>
> I would suggest you repmgr, with this tool you can do "repmgr standby
> switchover" to perform such tasks.
>
> Regards, Andreas.
>
>
> --
> 2ndQuadrant - The PostgreSQL Support Company
>
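For reference, a minimal sketch of the repmgr-driven switchover mentioned
above, run on the standby that should become the new master (the config file
path is an assumption):

repmgr -f /etc/repmgr/repmgr.conf standby switchover --verbose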


[GENERAL] cluster question

2017-08-14 Thread Alex Samad
Hi

I have set up a streaming replication cluster with a hot standby.

Now I would like to turn the RW node into the hot standby and the hot standby
into the RW server.

Is it just a matter of updating the recovery.conf file?

Alex


Re: [GENERAL] Question about loading up a table

2017-08-02 Thread Alex Samad
Hi

I don't have an extra 4T of filespace. I could potentially move the
attached LUN from one server and attach it to the other.

Well, that was my question - how to check whether it's pg_dump that is the
bottleneck. I have checked network performance - 9.8Gb - and I can write more
data to disk than I am seeing.

I do have 1 index

A


On 3 August 2017 at 02:11, Scott Marlowe <scott.marl...@gmail.com> wrote:

> On Tue, Aug 1, 2017 at 4:27 PM, Alex Samad <a...@samad.com.au> wrote:
> > Hi
> >
> > So just to go over what i have
> >
> >
> > server A (this is the original pgsql server 9.2)
> >
> > Server X and Server Y ... PGSQL 9.6 in a cluster - streaming replication
> > with hot standby.
> >
> >
> > I have 2 tables about 2.5T of diskspace.
> >
> > I want to get the date from A into X and X will replicate into Y.
> >
> >
> > I am currently on X using this command
> >
> > pg_dump -U  -h  -t BIGTABLE -a  | sudo -u postgres
> -i
> > psql -q ;
> >
> > This is taking a long time, its been 2 days and I have xfered around 2T..
> > This is just a test to see how long and to populate my new UAT env. so I
> > will have to do it again.
> >
> > Problem is time.  the pg_dump process is single threaded.
> > I have 2 routers in between A and X but its 10G networking - but my
> network
> > graphs don't show much traffic.
> >
> > Server X is still in use, there are still records being inserted into the
> > tables.
> >
> > How can I make this faster.
> >
> > I could shutdown server A and present the disks to server X, could I load
> > this up in PGSQL and do a table to table copy - i presume this would be
> > faster ... is this possible ?  how do I get around the same DB name ?
> > What other solutions do I have ?
>
> Yes, but if it's taking days to transfer 2TB then you need to
> investigate where your performance is tanking.
>
> Have you tried resyncing / scping files across the network to see how
> fast your network connection is?
>
> Have you tried just pg_dumping / restoring locally to get an idea how
> fast you can dump / restore withoout doing it over a network
> connection?
>
> Are you IO bound? Network bound? CPU bound?
>
> Is the destination copying data, or building indexes? Do you insert
> into a schema that already has indexes in place? If so have you tried
> dropping the indexes first and rebuilding them?
>


Re: [GENERAL] Question about loading up a table

2017-08-01 Thread Alex Samad
Hi

So, just to go over what I have:


server A (this is the original pgsql server 9.2)

Server X and Server Y ... PGSQL 9.6 in a cluster - streaming replication
with hot standby.


I have 2 tables about 2.5T of diskspace.

I want to get the data from A into X, and X will replicate into Y.


I am currently on X using this command

pg_dump -U  -h  -t BIGTABLE -a  | sudo -u postgres -i
psql -q ;

This is taking a long time - it's been 2 days and I have transferred around
2T. This is just a test to see how long it takes and to populate my new UAT
env, so I will have to do it again.

The problem is time: the pg_dump process is single threaded.
I have 2 routers between A and X, but it's 10G networking - and my network
graphs don't show much traffic.

Server X is still in use, there are still records being inserted into the
tables.

How can I make this faster?

I could shut down server A and present the disks to server X. Could I load
them up in PGSQL and do a table-to-table copy? I presume this would be
faster... is this possible? How do I get around the same DB name?
What other solutions do I have?
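One possible approach (a sketch only; host names, the database name "mydb" and
the job count are assumptions): take a directory-format dump and restore it
with several jobs. Note that dumping a single huge table stays effectively
single-threaded, so splitting the table by key ranges and running several
COPYs in parallel is the other common option.

pg_dump   -h serverA -U postgres -Fd -j 4 -t bigtable -a -f /tmp/bigtable_dump mydb
pg_restore -h serverX -U postgres -d mydb -j 4 -a /tmp/bigtable_dump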

Alex




On 1 August 2017 at 23:24, Scott Marlowe <scott.marl...@gmail.com> wrote:

> On Mon, Jul 31, 2017 at 11:16 PM, Alex Samad <a...@samad.com.au> wrote:
> > Hi
> >
> > I double checked and there is data going over, thought I would correct
> that.
> >
> > But it seems to be very slow.   Having said that how do I / what tools
> do I
> > use to check through put
>
> Try the pg_current_xlog_location function on the slave?
>
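A small sketch for putting a number on the lag, run on the master (9.2-9.6
function names):

SELECT client_addr,
       pg_xlog_location_diff(pg_current_xlog_location(), replay_location) AS replay_lag_bytes
FROM pg_stat_replication;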


Re: [GENERAL] Question about loading up a table

2017-08-01 Thread Alex Samad
Hi

I double checked and there is data going over, thought I would correct that.

But it seems to be very slow. Having said that, how do I / what tools do I
use to check throughput?

A

On 1 August 2017 at 08:56, Alex Samad <a...@samad.com.au> wrote:

> Hi
>
> I'm using pg_dump 9.6 to do the dumps.
>
> I'm also pretty sure no data is being replicated until the end of the copy
> stdin as I was watching tcpdump output and I can see data from the orig
> master to the new master and no traffic between new master and the standby,
> pretty sure my replication is working as my other tables have replicated
> over.
>
>
> as for allow pg_dump to create copy stdin with specific number of rows -
> not sure what that is so hard / bad, if it was a option for somebody to
> use.  For my situation its sounds like a really good idea.
>
> I'm dumping a single table into a new single table so constraint shouldn't
> be a problem.
>
>
> Guess I have to just let it rung to completion
>
> Thanks
>
>
>
> On 1 August 2017 at 06:59, Scott Marlowe <scott.marl...@gmail.com> wrote:
>
>> On Mon, Jul 31, 2017 at 2:31 AM, vinny <vi...@xs4all.nl> wrote:
>> > On 2017-07-31 11:02, Alex Samad wrote:
>> >>
>> >> Hi
>> >>
>> >> I am using pg_dump | psql to transfer data from my old 9.2 psql into a
>> >> 9.6 psql.
>>
>> Note that you should be doing pg_dump with 9.6's pg_dump, as it's
>> possible for 9.2's pg_dump to not know about a 9.6 feature.
>>
>> >> The new DB server is setup as master replicating to a hot standby
>> >> server.
>> >>
>> >> What I have noticed is that the rows don't get replicated over until
>> >> the copy from stdin is finished...  hard to test when you have M+ lines
>> >> of rows.
>>
>> SNIP
>>
>> >> Is there a way to tell the master to replicate earlier
>> >
>> > I highly doubt it, because the master cannot know what to replicate
>> until
>> > your transaction is ended with a COMMIT. If you end with ROLLBACK,
>> > or your last query is DELETE FROM (your_table>;
>> > then there isn't even anything to replicate at all...
>>
>> This is actually a visibility issue. All the new changes are
>> replicated to the slave, but just like on the master, other
>> connections can't see the change because it's not visible. The slave,
>> except for some small delay (seconds etc) is an exact replica of the
>> master. So even a delete at the end gets replicated. You just don't
>> see anything but possible table bloat to show for it.
>>
>> To prove this to oneself, start the copy, and get into another session
>> to the master. You don't see any rows there either until the commit
>> after the copy.
>>
>
>


Re: [GENERAL] partitioning question

2017-07-31 Thread Alex Samad
The comparison would be:

an if/then/else/end if chain - about 8 of them, 2013 onwards - each with a
static INSERT INTO

v's

building a dynamic string and using EXECUTE; my presumption is that EXECUTE
would be expensive versus a plain INSERT command.
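A rough sketch of the two trigger shapes being compared; table and column
names ("measurement", "_received") are placeholders, not the real schema.

-- Static: one branch per year, newest first.
CREATE OR REPLACE FUNCTION measurement_insert_static() RETURNS trigger AS $$
BEGIN
    IF NEW._received >= '2017-01-01' THEN
        INSERT INTO measurement_2017 VALUES (NEW.*);
    ELSIF NEW._received >= '2016-01-01' THEN
        INSERT INTO measurement_2016 VALUES (NEW.*);
    ELSE
        INSERT INTO measurement_older VALUES (NEW.*);
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- Dynamic: build the target table name and EXECUTE the insert.
CREATE OR REPLACE FUNCTION measurement_insert_dynamic() RETURNS trigger AS $$
BEGIN
    EXECUTE format('INSERT INTO %I SELECT ($1).*',
                   'measurement_' || to_char(NEW._received, 'YYYY'))
    USING NEW;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;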

A

On 1 August 2017 at 07:04, Scott Marlowe <scott.marl...@gmail.com> wrote:

> On Sun, Jul 30, 2017 at 7:13 PM, Alex Samad <a...@samad.com.au> wrote:
> > How expensive is dynamic over static.  I'm looking at storing yearly
> now, so
> > I figure if my if then clause has the latest year at the top it should be
> > very quick.
>
> Assuming you're not doing anything particularly crazy it's minimal.
> But what is good performance for one application may not be acceptable
> for others. Generally the cost of inserting is MUCH higher than the
> cost of dynamically setting the target, esp if you stick to plpgsql
> and don't try to use rules to accomplish it.
>


Re: [GENERAL] Question about loading up a table

2017-07-31 Thread Alex Samad
Hi

I'm using pg_dump 9.6 to do the dumps.

I'm also pretty sure no data is being replicated until the end of the COPY
FROM STDIN, as I was watching tcpdump output and I can see data going from the
original master to the new master but no traffic between the new master and
the standby. I'm pretty sure my replication is working, as my other tables
have replicated over.


As for allowing pg_dump to emit COPY FROM STDIN in batches of a specific
number of rows - not sure why that would be so hard / bad if it were an option
for somebody to use.  For my situation it sounds like a really good idea.

I'm dumping a single table into a new single table, so constraints shouldn't
be a problem.


Guess I have to just let it run to completion.

Thanks



On 1 August 2017 at 06:59, Scott Marlowe <scott.marl...@gmail.com> wrote:

> On Mon, Jul 31, 2017 at 2:31 AM, vinny <vi...@xs4all.nl> wrote:
> > On 2017-07-31 11:02, Alex Samad wrote:
> >>
> >> Hi
> >>
> >> I am using pg_dump | psql to transfer data from my old 9.2 psql into a
> >> 9.6 psql.
>
> Note that you should be doing pg_dump with 9.6's pg_dump, as it's
> possible for 9.2's pg_dump to not know about a 9.6 feature.
>
> >> The new DB server is setup as master replicating to a hot standby
> >> server.
> >>
> >> What I have noticed is that the rows don't get replicated over until
> >> the copy from stdin is finished...  hard to test when you have M+ lines
> >> of rows.
>
> SNIP
>
> >> Is there a way to tell the master to replicate earlier
> >
> > I highly doubt it, because the master cannot know what to replicate until
> > your transaction is ended with a COMMIT. If you end with ROLLBACK,
> > or your last query is DELETE FROM (your_table>;
> > then there isn't even anything to replicate at all...
>
> This is actually a visibility issue. All the new changes are
> replicated to the slave, but just like on the master, other
> connections can't see the change because it's not visible. The slave,
> except for some small delay (seconds etc) is an exact replica of the
> master. So even a delete at the end gets replicated. You just don't
> see anything but possible table bloat to show for it.
>
> To prove this to oneself, start the copy, and get into another session
> to the master. You don't see any rows there either until the commit
> after the copy.
>


[GENERAL] Question about loading up a table

2017-07-31 Thread Alex Samad
Hi

I am using pg_dump | psql to transfer data from my old 9.2 psql into a 9.6
psql.

The new DB server is setup as master replicating to a hot standby server.

What I have noticed is that the rows don't get replicated over until the
copy from stdin is finished... hard to test when you have M+ lines of rows.

Is there a way to tell the master to replicate earlier or is there a way to
get pg_dump to bundle into say 100K rows at a time ?

Thanks
Alex


Re: [GENERAL] vacuum on streaming replication

2017-07-31 Thread Alex Samad
Thanks

On 31 July 2017 at 18:11, Chris Travers  wrote:

>
>
> On Mon, Jul 31, 2017 at 10:08 AM, Michael Paquier <
> michael.paqu...@gmail.com> wrote:
>
>> On Mon, Jul 31, 2017 at 7:28 AM, Andreas Kretschmer
>>  wrote:
>> > The standby is read only, vacuum runs on the master and replicated to
>> the standby. Analyse as well.
>>
>> Please note as well that if hot_standby_feedback is enabled, the
>> cleanup done by VACUUM on the primary is influenced as well so as
>> tuples that a standby may need to avoid conflicts for its transactions
>> are not removed. So VACUUM may result in less cleanup depending on the
>> read load on the standby.
>>
>
> Also that replication slots provide standby feedback and may further delay
> vacuuming when the standby is offline.
>
>> --
>> Michael
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>
>
> --
> Best Wishes,
> Chris Travers
>
> Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
> lock-in.
> http://www.efficito.com/learn_more
>


[GENERAL] vacuum on streaming replication

2017-07-30 Thread Alex Samad
Hi

I set up a cluster with streaming replication and a hot standby.

The idea is to use the standby for queries whilst the primary is doing
inserts.

But I noticed the stats on the standby server don't update, nor can I run
vacuum against it, as it's in recovery mode.

So how do I update the stats, and how will this affect my queries?


Re: [GENERAL] partitioning question

2017-07-30 Thread Alex Samad
How expensive is dynamic over static?  I'm looking at storing yearly now,
so I figure if my if/then clause has the latest year at the top it should
be very quick.




On 31 July 2017 at 11:07, Justin Pryzby <pry...@telsasoft.com> wrote:

> On Mon, Jul 31, 2017 at 10:25:54AM +1000, Alex Samad wrote:
> > I note that you link to P10 and I am currently looking at 9.6.  The
> changes
> > do look nice for partitioning for p10.
>
> Yes sorry, pg10 is beta - avoid using it except for testing purposes.
>
> > I will add currently we don't delete anything, we will keep adding to it.
> >
> > Also I am thinking my insert trigger becomes a lot smaller and easier if
> I
> > leave it at yearly.
> Note: the trigger function can either be a static function updated monthly
> (to
> handle the next month), preferably with the most recent months tested
> first (so
> a typical newly-inserted rows only goes through one if/case test).
>
> Alternately, the trigger function can dynamically compute the table into
> which
> to insert using plpgsql "format()" similar to here:
> https://www.postgresql.org/docs/9.1/static/plpgsql-
> statements.html#PLPGSQL-QUOTE-LITERAL-EXAMPLE
>
> Justin
>


Re: [GENERAL] partitioning question

2017-07-30 Thread Alex Samad
Hi

I note that you link to PG 10 and I am currently looking at 9.6.  The changes
do look nice for partitioning in PG 10.


Interesting that you suggest the YYYYMM partitioning isn't that bad.

I will add that currently we don't delete anything; we will keep adding to it.

Also, I am thinking my insert trigger becomes a lot smaller and easier if I
leave it at yearly.

Also, if PG 10 were the current recommended version right now I would probably
look at YYYYMM, because it looks like it makes partitioning easier.

Alex



On 31 July 2017 at 09:54, Justin Pryzby <pry...@telsasoft.com> wrote:

> On Mon, Jul 31, 2017 at 09:15:29AM +1000, Alex Samad wrote:
> > Hi
> >
> > I was about to partition a large (?) approx 3T of data 2B rows into
> > partition tables but broken up into MM ...
> >
> > Now I have been reading about limiting the number of partitions otherwise
> > it could slow down the parser.
> >
> > My reasoning for limiting to MM was that most of the request would be
> > monthly based.
> >
> > Should I be making the partitioning based on  instead and have lots
> > more indexs.
> >
> > If I have an index on the timestamp field will it help limiting to
> MM ?
>
> The major advantages of partitions are enumerated here:
> https://www.postgresql.org/docs/10/static/ddl-partitioning.html#ddl-
> partitioning-overview
>
> For your case, it might be that seq scans of an entire "monthly" partition
> turn
> out to be very advantageous, compared with index scan (or seq scan of
> entire
> 3TB data).
>
> Also DROPing the oldest partition every month is commonly very much more
> efficient than DELETEing it..
>
> There are warnings like these:
>
> |All constraints on all partitions of the master table are examined during
> |constraint exclusion, so large numbers of partitions are likely to
> increase
> |query planning time considerably. Partitioning using these techniques
> will work
> |well with up to perhaps a hundred partitions; don't try to use many
> thousands
> |of partitions.
>
> Unless you have 100s of years of data I don't think it would be a problem.
>
> For us, having hundreds of partitions hasn't been an issue (planning time
> is
> insignificant for our analytic report queries).  But there's an overhead to
> partitions and at some point the cost becomes significant.  (Actually, I
> think
> one cost which *did* hit us, while experimenting with *daily* partition
> granularity of every table, was probably due to very large pg_statistics
> and
> pg_attributes tables, which no longer fit in buffer cache).
>
> Justin
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] partitioning question

2017-07-30 Thread Alex Samad
Hi

I was about to partition a large (?) table - approx 3T of data, 2B rows - into
partition tables broken up by YYYYMM (monthly) ...

Now I have been reading about limiting the number of partitions, otherwise
it could slow down the planner.

My reasoning for limiting to YYYYMM was that most of the requests would be
monthly based.

Should I be making the partitioning based on YYYY (yearly) instead and have
lots more indexes?

If I have an index on the timestamp field, will it help if I limit to YYYYMM?


A


Re: [GENERAL] Question about paritioning

2017-07-27 Thread Alex Samad
So I have a plpgsql function that will create tables based on YYYYMM for x
years in advance, inheriting from the base table,
with this:
CHECK (_received >= '2013-01-01 00:00:00+11'::timestamp with time zone AND
_received < '2013-02-01 01:00:00+11'::timestamp with time zone)

Now, for the insert, do I create / update this monthly and have a 2- or
3-level if/then check before inserting,

or do I create a programmatic insert that works out the table name?

On 27 July 2017 at 18:36, John R Pierce <pie...@hogranch.com> wrote:

> On 7/27/2017 12:43 AM, Alex Samad wrote:
>
>>
>> ... as long as the queries stay on a small amount of parts that we should
>> be okay.
>>
>
> thats true as long as the planner can restrict the queries to the correct
> partition...  but there's lots of queries that end up having to hit all
> partitions because the planner can't guess correctly.
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Question about paritioning

2017-07-27 Thread Alex Samad
So is date_trunc better than to_char? I'm thinking it probably is.

As for the number of partitions, well, we don't plan on deleting anything,
but from my reading, as long as the queries touch a small number of
partitions we should be okay.
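A small sketch of the two expressions side by side (the column name _received
is an assumption): date_trunc keeps the value a real date/timestamp, so it can
be used directly in range CHECK constraints and comparisons, while to_char
only yields text, which is fine for building table names.

SELECT date_trunc('month', _received)::date AS month_start,  -- usable in range checks
       to_char(_received, 'YYYYMM')         AS month_label   -- text, good for table names
FROM (VALUES (now())) AS t(_received);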

A

On 27 July 2017 at 15:33, John R Pierce <pie...@hogranch.com> wrote:

> On 7/26/2017 10:08 PM, Alex Samad wrote:
>
>> I have a large table about 3B rows, that I would like to partition on a
>> column called _received which is  of type timestamp
>>
>>
> a good goal is to have no more than about 100 partitions max, and ideally
> more like 25.
>
> when we partition on time stamp, we typically do it by the week, as we're
> doing 6 month data retention.
>
> IIRC, we're using DATE_TRUNC('week', timestamp)::DATE  for use as the
> partition label and key.
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Question about paritioning

2017-07-26 Thread Alex Samad
Hi

I have a large table, about 3B rows, that I would like to partition on a
column called _received, which is of type timestamp.

I followed this
https://medium.com/@StreamBright/creating-partitions-automatically-in-postgresql-7006d68c0fbb


and came up with
CREATE OR REPLACE FUNCTION create_partition_and_insert() RETURNS trigger AS
  $BODY$
DECLARE
  partition_date TEXT;
  partition TEXT;
BEGIN
  -- e.g. '201707' for a July 2017 _received value
  partition_date := to_char(NEW._received, 'YYYYMM');
  partition := TG_TABLE_NAME || '_' || partition_date;
  IF NOT EXISTS(SELECT relname FROM pg_class WHERE relname = partition) THEN
    RAISE NOTICE 'A partition has been created %', partition;
    EXECUTE 'CREATE TABLE ' || partition
         || ' (check (to_char(_received, ''YYYYMM'') = ''' || partition_date
         || ''')) INHERITS (' || TG_TABLE_NAME || ');';
  END IF;
  EXECUTE 'INSERT INTO ' || partition || ' SELECT(' || TG_TABLE_NAME
       || ' ' || quote_literal(NEW) || ').* RETURNING patent_id;';
  RETURN NULL;
END;
  $BODY$
LANGUAGE plpgsql VOLATILE
COST 100;


Which I believe generates something like this:
CREATE TABLE feedintra_201707( check (to_char(_received,'YYYYMM') =
'201707' )) INHERITS (XX);

My concern is the cost of doing to_char on every row.

So:
Is this going to work?
Is this the right way to do it?

I was thinking I could change the check to be something like

check ( _received >= <first day of month> 00:00 and _received <= <last day of month> 23:59:59.999 )

but I am not sure how to build the above line from a timestamp.
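A sketch of a timestamp-based monthly check (the parent/partition names are
illustrative); a half-open range avoids having to spell out 23:59:59.999:

CREATE TABLE feedintra_201707 (
    CHECK (_received >= TIMESTAMPTZ '2017-07-01'
       AND _received <  TIMESTAMPTZ '2017-08-01')
) INHERITS (feedintra);

-- inside the trigger the two boundaries can be derived as:
--   lower_bound := date_trunc('month', NEW._received);
--   upper_bound := lower_bound + interval '1 month';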

Thanks
Alex


[GENERAL] Re: [HACKERS] Why restore_command is called for existing files in pg_xlog?

2017-06-13 Thread Alex Kliukin
Hi Jeff,

On Mon, Jun 12, 2017, at 06:42 PM, Jeff Janes wrote:
> On Mon, Jun 12, 2017 at 5:25 AM, Alex Kliukin
> <al...@hintbits.com> wrote:>> __
>> 
>> On Fri, Jun 2, 2017, at 11:51 AM, Alexander Kukushkin wrote:
>>> Hello hackers,
>>> There is one strange and awful thing I don't understand about
>>> restore_command: it is always being called for every single WAL
>>> segment postgres wants to apply (even if such segment already exists
>>> in pg_xlog) until replica start streaming from the master.>> 
>> 
>> The real problem this question is related to is being unable to bring
>> a former master, demoted after a crash, online, since the WAL
>> segments required to get it to the consistent state were not archived
>> while it was still a master, and local segments in pg_xlog are
>> ignored when a restore_command is defined. The other replicas
>> wouldn't be good candidates for promotion as well, as they were way
>> behind the master (because the last N WAL segments were not archived
>> and streaming replication had a few seconds delay).> 
> I don't really understand the problem.  If the other replicas are not
> candidates for promotion, than why was the master ever "demoted" in
> the first place?  It should just go through normal crash recovery,
> not PITR recovery, and therefore will read the files from pg_xlog
> just fine.
We run an automatic failover daemon, called "Patroni", that uses a
consistency layer (RAFT, implemented by Etcd) in order to decide on
which node should be the leader. In Patroni, only the node that has the
leader key  in Etcd is allowed to become a master.  When Patroni detects
that the PostgreSQL on the  node that holds the leader lock is not
running, it starts the instance in a "read-only" mode by writing a
recovery.conf without the "primary_conninfo". Once the former master
running as a read-only  recovers to a consistent state and is not behind
the last known master's position, it is promoted back unless a replica
takes over the master lock.
The reason we cannot just start the crashed master normally is a
possible split-brain scenario. If during the former master's crash
recovery another replica takes over the lock because it is close enough
to the last known master position and is deemed "healthy" to promote,
the former master starts as a master nevertheless (we have no control
over the PostgreSQL crash recovery process), despite the fact that it
has no lock, violating the rule of "first get the lock, then promote".

> 
> If you already promoted one of the replicas and accepted data changes
> into it, and now are thinking that was not a good idea, then there is
> no off the shelf automatic way to merge together the two systems.  You
> have do a manual inspection of the differences.  To do that, you would
> start by starting up (a copy of) the crashed master, using normal
> crash recovery, not PITR.
In our scenario, no replica is promoted. The master starts in a read-only
mode and is stuck there forever, since it cannot restore WAL segments stored
in its own WAL directory, and those segments were never archived. The
replicas cannot be promoted, because they are too far behind the master.
I don't really see any reason not to try restoring WAL segments from the
local WAL directory first. It would speed up the recovery in many cases,
since the segments are already there and do not need to be fetched from the
archive.
> Probably more appropriate for pgsql-general or pgsql-admin.

Thanks!

Sincerely,
Alex



Re: [GENERAL] PostgreSQL and Kubernetes

2017-04-03 Thread Alex Kliukin

> On 30. Mar 2017, at 12:10, Moreno Andreo <moreno.and...@evolu-s.it> wrote:

> as databases aren't built for type of dynamic scheduling that something like 
> kubernetes (or any other container management solution), due to how they 
> interact with the filesystem, network stack, and more.
> Further more, he suggests that because of this lack of integration, that if 
> you do run your database in kubernetes/docker, you are very likely to 
> encounter data corruption and data loss"
> 
> What's your advice?


I think dynamic scheduling just increases the chances of your database instance 
being killed. Your K8S cluster, if not configured properly, may decide to kill 
your master database pod when scaling down nodes in order to utilise resources 
efficiently.

If you run your PostgreSQL on K8S make sure you have a good and battle-tested 
HA setup, know your way around configuring both PostgreSQL streaming 
replication and Kubernetes cluster, and have a number of databases to manage, 
as it makes little sense to build an aircraft carrier to carry a single Cessna. 

 
Kind regards,
--
Oleksii “Alex" Kliukin



[GENERAL] Searching array for multiple items

2017-01-25 Thread Alex Magnum
Hi,
I can search an array with 1 = ANY('{1,3,4,7}'::int[])

I need to check for one or multiple items in the array.

e.g. checking '{1,7,3}' against '{1,3,4,7}'::int[]

I do need to check if
a) all items exist in the array
b) at least one item exists in the array

Is there an operator that allows me to do these two?
Does the order of left and right side matter?
Right now I have a small function but I guess there is a more efficient way.
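For what it's worth, the built-in array operators cover both cases (a sketch):

-- a) all items on the left exist in the right-hand array: "is contained by"
SELECT ARRAY[1,7,3] <@ ARRAY[1,3,4,7];    -- true

-- b) at least one item is shared: "overlaps"
SELECT ARRAY[1,7,5] && ARRAY[1,3,4,7];    -- true (1 and 7 are in both)

-- @> is the mirror of <@ (left contains right), so the sides matter for
-- containment but not for &&; element order and duplicates are ignored.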

Thanks for any help.

A


Re: [GENERAL] Drop user cascade

2016-10-19 Thread Alex Ignatov (postgrespro)
 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Melvin Davidson
Sent: Wednesday, October 19, 2016 5:35 PM
To: Alex Ignatov (postgrespro) <a.igna...@postgrespro.ru>
Cc: Tom Lane <t...@sss.pgh.pa.us>; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Drop user cascade

 

 

 

On Wed, Oct 19, 2016 at 10:03 AM, Alex Ignatov (postgrespro) 
<a.igna...@postgrespro.ru <mailto:a.igna...@postgrespro.ru> > wrote:


-Original Message-
From: pgsql-general-ow...@postgresql.org 
<mailto:pgsql-general-ow...@postgresql.org> 
[mailto:pgsql-general-ow...@postgresql.org 
<mailto:pgsql-general-ow...@postgresql.org> ] On Behalf Of Tom Lane
Sent: Wednesday, October 19, 2016 4:31 PM
To: Alex Ignatov (postgrespro) <a.igna...@postgrespro.ru>
Cc: pgsql-general@postgresql.org <mailto:pgsql-general@postgresql.org> 
Subject: Re: [GENERAL] Drop user cascade

"Alex Ignatov \(postgrespro\)" <a.igna...@postgrespro.ru 
<mailto:a.igna...@postgrespro.ru> > writes:
> Why do Postgres have no such functionality as DROP USER CASCADE? Is
> there any reasons in that absence?

The short answer is that DROP USER couldn't reach across databases to get
rid of owned objects in other databases.  See

https://www.postgresql.org/docs/9.6/static/role-removal.html

regards, tom lane


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org 
<mailto:pgsql-general@postgresql.org> ) To make
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



Some security consideration bear in mind that DROP OWNED cant delete  own
objects in other DBs? In general what stops  us  to do inter DBs connection
like MSSQL?

--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com The Russian Postgres
Company




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org 
<mailto:pgsql-general@postgresql.org> )
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


>In general what stops  us  to do inter DBs connection like MSSQL?


It currently is not generic to PostgreSQL, but you can do that with the dblink 
extension/functions.


-- 

Melvin Davidson
I reserve the right to fantasize.  Whether or not you 
wish to share my fantasy is entirely up to you.   

 

I know about dblink =)

The question was - are there any caveats that prevent cross-DB access? Maybe
some security considerations prevent implementing it.

We all know that in PG one instance rules multiple DBs, so why not allow
direct access to another DB? We have one transaction counter and so on -
where are the problems, if any?

 

--

Alex Ignatov 
Postgres Professional:  <http://www.postgrespro.com> http://www.postgrespro.com 
The Russian Postgres Company

 

 



Re: [GENERAL] Drop user cascade

2016-10-19 Thread Alex Ignatov (postgrespro)

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: Wednesday, October 19, 2016 4:31 PM
To: Alex Ignatov (postgrespro) <a.igna...@postgrespro.ru>
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Drop user cascade

"Alex Ignatov \(postgrespro\)" <a.igna...@postgrespro.ru> writes:
> Why do Postgres have no such functionality as DROP USER CASCADE? Is 
> there any reasons in that absence?

The short answer is that DROP USER couldn't reach across databases to get
rid of owned objects in other databases.  See

https://www.postgresql.org/docs/9.6/static/role-removal.html

regards, tom lane


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Some security considerations to bear in mind - DROP OWNED can't delete owned
objects in other DBs? In general, what stops us from doing inter-DB
connections like MSSQL does?

--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com The Russian Postgres
Company



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] journaled FS and and WAL

2016-10-19 Thread Alex Ignatov (postgrespro)

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of t.dalpo...@gmail.com
Sent: Wednesday, October 19, 2016 11:01 AM
To: Michael Paquier <michael.paqu...@gmail.com>
Cc: Albe Laurenz <laurenz.a...@wien.gv.at>; pgsql-general@postgresql.org
Subject: Re: [GENERAL] journaled FS and and WAL

So, as for the data content of the WAL file, I see that no more page will be 
allocated. I wonder if during a crash, strange things can still happen at disk 
level however, in particular in SSD devices; on these things we have no 
control, and perhaps journaling helps?
As for the metadata, if during a crash it's flushed (with fdatasync, only when 
the FS decides to do that), can anything bad happen without journaling?

Third, let's suppose that the WAL can't get corrupted. When the system flushes 
data pages to the disk according to the WAL content, if there is a crash, am I 
sure that tables files old pages and /or their metadata, inode can't get 
corrupted?
If that, there is no possibility to reconstruct the things, even through the 
WAL. Even in this case, perhaps journaling helps.

I don't mind about performance but I absolutely mind about reliability, so I 
was thinking about the safest setting of linux FS and postgresql I can use.
Thanks!
Pupillo






Il 15/10/2016 07:52, Michael Paquier ha scritto:
> On Fri, Oct 14, 2016 at 11:27 PM, Albe Laurenz <laurenz.a...@wien.gv.at> 
> wrote:
>> After a successful commit, the WAL file and its metadata are on disk.
>> Moreover, the file metadata won't change (except for the write and 
>> access
>> timestamps) because WAL files are created with their full size and 
>> never extended, so no WAL file should ever get "lost" because of 
>> partial metadata writes.
> This behavior depends as well on the value of wal_sync_method. For 
> example with fdatasync the metadata is not flushed. It does not matter 
> any for for WAL segments as Albe has already mentioned, but the choice 
> here impacts performance.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make 
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Hi!
PG can lose data file segments and nobody will know it. For PG - no file = no
data and no need to recover after a crash; there is no info about which data
files belong to PG.
After that, don't bother about WAL and anything else =)
Just use an FS with a journal, checksum your DB with initdb -k, keep fsync=on,
do regular backups and check them thoroughly with restores. Also don't forget
to praise the gods that so far the PG clog files are not corrupted while not
being protected by any checksums. You'll never know that the PG clog is
corrupted until "doomsday".

--
Alex Ignatov 
Postgres Professional: http://www.postgrespro.com 
The Russian Postgres Company



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Drop user cascade

2016-10-19 Thread Alex Ignatov (postgrespro)


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Thomas Kellerer
Sent: Wednesday, October 19, 2016 1:53 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Drop user cascade

Alex Ignatov (postgrespro) schrieb am 19.10.2016 um 12:26:
> Hello!
> 
> Why we can DROP TABLE CASCADE, DROP VIEW CASCADE, DROP SEQUENCE CASCADE but 
> we can’t DROP USER/ROLE CASCADE?
> 
> Why do Postgres have no such functionality as DROP USER CASCADE? Is there any 
> reasons in that absence?

You can do 

  drop owned by user_name;
  drop user user_name;

Thomas



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make 
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


I know these commands, thank you =).
To use them you need to invoke the command in every database, because roles
sit atop of the PG databases, or you need to find the DBs which have objects
that depend on the ROLE you want to delete. You can't do anything with objects
in a DB you are not connected to.
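A rough sketch of the usual per-database loop (the role name doomed_user and
the target owner postgres are assumptions):

for db in $(psql -At -c "SELECT datname FROM pg_database WHERE datallowconn"); do
    psql -d "$db" -c 'REASSIGN OWNED BY doomed_user TO postgres;'
    psql -d "$db" -c 'DROP OWNED BY doomed_user;'
done
psql -c 'DROP ROLE doomed_user;'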



--
Alex Ignatov 
Postgres Professional: http://www.postgrespro.com 
The Russian Postgres Company



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Drop user cascade

2016-10-19 Thread Alex Ignatov (postgrespro)
Hello!

Why can we DROP TABLE CASCADE, DROP VIEW CASCADE, DROP SEQUENCE CASCADE, but
we can't DROP USER/ROLE CASCADE?

Why does Postgres have no such functionality as DROP USER CASCADE? Are there
any reasons for that absence?

 

 

 

--

Alex Ignatov 
Postgres Professional:  <http://www.postgrespro.com>
http://www.postgrespro.com 
The Russian Postgres Company

 



Re: [GENERAL] Unable to create oracle_fdw (foreign data wrapper) extension

2016-09-16 Thread Alex Ignatov




On 16.09.2016 17:01, Arun Rangarajan wrote:

Hi Laurenz,

Thanks for your reply.

Sorry for the double posting here and StackOverflow
http://stackoverflow.com/questions/39518417/unable-to-create-oracle-fdw-extension-on-postgres
 .

I will update the details on StackOverflow since formatting and editing are 
easier there.

On Fri, Sep 16, 2016 at 3:14 AM, Albe Laurenz <laurenz.a...@wien.gv.at 
<mailto:laurenz.a...@wien.gv.at>> wrote:

Arun Rangarajan wrote:
> But when I try to create the extension, I get the following error:
>
> postgres=# create extension oracle_fdw;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.

> t=2016-09-15 16:16:08 PDT d= h= p=25327 a=LOG:  0: server process 
(PID 20397) was terminated by signal 11: Segmentation fault

Well, as I told you, get a stack trace with debugging symbols.

Yours,
Laurenz Albe




Hi!
Maybe this helps:

"Don't add oracle_fdw to shared_preload_libraries!" (c) Laurenz Albe



Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Test letter

2016-09-15 Thread Alex Sviridov
Hi all,

I have suspicions that my messages don't get to the pgsql-general mailing list.

Please, someone, answer this message if it gets to the mailing list.


Best regards, Alex


[GENERAL] pgAdmin3 backup over ssh tunnel

2016-09-14 Thread Alex Sviridov
Hi all

I have the same problem as is described here:
http://stackoverflow.com/questions/32398311/pgadmin3-backup-over-ssh-tunnel

I use pgAdmin 1.22.1, and when I try to make a backup it works with the
localhost database instead of the database over the ssh tunnel.
How can I fix it? Is this a known bug?

Best regards, Alex


Re: [GENERAL] Unable to log in current local time EST

2016-08-25 Thread Alex Lai
On 08/25/2016 11:36 AM, Alex Lai wrote:
> Dear All,
> 
> I have my log_line_prefix set to
> log_line_prefix = '[%d]%p %x %c[%l] %t %i'
> in postgresql.conf
> 
> psql -c 'show timezone'
>   TimeZone
> 
>  US/Eastern
> 
> tail -2 omi_acps.log ; date
> [sipsdb]20180 0 57ab7dcd.4ed4[11895717] 2016-08-25 15:32:45 GMT
> SELECTLOG:  duration: 0.326 ms
> [sipsdb]20180 0 57ab7dcd.4ed4[11895718] 2016-08-25 15:32:45 GMT idle in
> transactionLOG:  statement: fetch all in ""
> Thu Aug 25 11:32:45 EDT 2016
> My host is running EST time.
> 
> Postgres keeps log the GMT time.
> I want to log EST time.  How can I do that?  Your help would be greatly
> appreciated.
> 
> 
> 
Actually, I found the problem.  I had missed the parameter log_timezone,
which defaults to GMT.  I added log_timezone=US/Eastern and ran
pg_ctl reload.

It works.
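For reference, the whole change (a reload is enough, no restart needed):

# postgresql.conf
log_timezone = 'US/Eastern'

pg_ctl reload -D $PGDATA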

--
Best regards,

Alex Lai  (:-)
OMP SIPS DBA ADNET Systems, Inc.

7515 Mission Drive, Suite A100
Lanham, MD 20706
301 352-4657 (phone)
301 352-0437 (fax)
m...@sesda3.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Unable to log in current local time EST

2016-08-25 Thread Alex Lai
Dear All,

I have my log_line_prefix set to
log_line_prefix = '[%d]%p %x %c[%l] %t %i'
in postgresql.conf

psql -c 'show timezone'
  TimeZone

 US/Eastern

tail -2 omi_acps.log ; date
[sipsdb]20180 0 57ab7dcd.4ed4[11895717] 2016-08-25 15:32:45 GMT
SELECTLOG:  duration: 0.326 ms
[sipsdb]20180 0 57ab7dcd.4ed4[11895718] 2016-08-25 15:32:45 GMT idle in
transactionLOG:  statement: fetch all in ""
Thu Aug 25 11:32:45 EDT 2016
My host is running EST time.

Postgres keeps logging the GMT time.
I want to log EST time.  How can I do that?  Your help would be greatly
appreciated.



-- 
Best regards,

Alex Lai  (:-)
OMP SIPS DBA ADNET Systems, Inc.

7515 Mission Drive, Suite A100
Lanham, MD 20706
301 352-4657 (phone)
301 352-0437 (fax)
m...@sesda3.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Extract data from JSONB

2016-08-07 Thread Alex Magnum
Hi,
I need some help with extracting data from json.

I have the following jsonb field

modules
{
   "accounts": {"status": true},
   "admin":{"status": true},
   "calendar": {"status": false},
   "chat": {"status": true},
   "contacts": {"status": true},
   "dashboard":{"status": false},
   "help": {"status": true}
}

How can I convert that into one row per module based on status; for example
if I only want the active modules?

Only true

module| status
--+---
accounts  | true
admin | true
contacts  | true
help  | true
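A minimal sketch, assuming the jsonb column is called modules and lives in a
table t: jsonb_each() turns the object into one row per module, and the
status can then be filtered on.

SELECT m.key                              AS module,
       (m.value ->> 'status')::boolean    AS status
FROM   t,
       jsonb_each(t.modules) AS m(key, value)
WHERE  (m.value ->> 'status')::boolean;   -- keep only the active modules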

Thanks for any advice


Re: [GENERAL] Detecting if current transaction is modifying the database

2016-08-05 Thread Alex Ignatov
Hi! Make a trigger function.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




On Fri, Aug 5, 2016 at 10:25 PM +0300, "Christian Ohler" <oh...@shift.com> 
wrote:










Hi,
I'm trying to find a way to have Postgres tell me if the current transaction 
would modify the database if I committed it now.  I can live with a conservative 
approximation (sometimes – ideally, rarely – get a "yes" even though nothing 
would be modified, but never get a "no" even though there are pending 
modifications).  It's acceptable (probably even desirable) if a no-op write 
operation like "UPDATE foo SET bar = 1 WHERE bar = 1" is considered a 
modification.
(The use case is an audit log mechanism vaguely similar to pgMemento.)

This sentence from 
https://www.postgresql.org/docs/9.5/static/view-pg-locks.html :
> If a permanent ID is assigned to the transaction (which normally happens
> only if the transaction changes the state of the database), it also holds
> an exclusive lock on its permanent transaction ID until it ends.

makes me think that I can perhaps do it as follows:
SELECT count(*) FROM pg_locks WHERE pid=pg_backend_pid() AND 
locktype='transactionid' AND mode='ExclusiveLock' AND granted;

Is that right?  "Permanent transaction ID" refers to the XID, correct?  Are 
there other, better ways?  Are there ways to avoid false positives due to temp 
tables?
Thanks in advance, Christian.
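For reference, a sketch of the pg_locks probe from the question plus an
alternative that exists from 9.6 on: txid_current_if_assigned() returns NULL
until the transaction has been assigned a real transaction ID, which normally
only happens once it modifies the database.

SELECT count(*) > 0 AS holds_xid_lock
FROM pg_locks
WHERE pid = pg_backend_pid()
  AND locktype = 'transactionid'
  AND mode = 'ExclusiveLock'
  AND granted;

SELECT txid_current_if_assigned() IS NOT NULL AS has_xid;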








Re: [GENERAL] fun fact about temp tables

2016-08-05 Thread Alex Ignatov


On 05.08.2016 18:54, Tom Lane wrote:

Alex Ignatov <a.igna...@postgrespro.ru> writes:

On 05.08.2016 17:51, Tom Lane wrote:

Sure.  Just like it reserves space for ordinary tables right away,
long before there's any need to push the data out of shared_buffers.
Otherwise, you might find yourself having to throw an "out of disk
space" error after having already committed the relevant INSERTs.

How about out of space when we filling WAL files?

What about it?  That will be reported before committing, too.

What Grigory wants would imply committing and then sometime later
saying "oh, wait ... remember that data we told you we'd committed?
We lied."

Temp tables do indeed disappear at session end (and a fortiori after
a crash), but that doesn't create an excuse for them not to have
normal transactional behavior within the session.

regards, tom lane


If a temp table fits in temp_buffers, why do we have to reserve disk space
for that table?


If we commit after filling the temp table, fine => not enough temp_buffers
for the next temp table, write the first one to disk => not enough space for
the temp file, fine - our system can't keep working anyway.


I can't see any problem with writing temp table data to disk only when
temp_buffers is full.


Any arguments against that behavior?

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] fun fact about temp tables

2016-08-05 Thread Alex Ignatov


On 05.08.2016 17:51, Tom Lane wrote:

Grigory Smolkin <g.smol...@postgrespro.ru> writes:

I`ve noticed interesting aspect in temp tables working. It appears
postgres is trying to reserve space on disk for temp tables even before
temp_buffers overflow.

Sure.  Just like it reserves space for ordinary tables right away,
long before there's any need to push the data out of shared_buffers.
Otherwise, you might find yourself having to throw an "out of disk
space" error after having already committed the relevant INSERTs.

regards, tom lane


How about running out of space when we are filling WAL files? Just roll back
and nothing else. We can hit the same thing with space reservation for WAL
files.


In this situation with a temp table we reserve space long before we ever need
it, if the size of the temp table is < temp_buffers.
Why not just roll back the transaction in an "out of space" situation? With
this preliminary reservation we use HDD resources that in fact we don't need.
In a situation with a high rate of temp table creation, we saturate the HDD
with "just in case" reservations.


Do we have any other explanation except the "out of space" problem? Maybe
there are some fundamental things tied to that?


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about wal files / pg_xlogs

2016-08-05 Thread Alex Ignatov

Hello!

As far as I can see, the files are delivered not with a delay but with a time
shift.

1. Can you show me the restore_command on the slave?

2. Also, can you check the archived WAL creation time on the slaves in the
archive location after you copied them with archive_command? Is it near the
WAL creation time in pg_xlog? Or different?


3. How do you check timezone equivalence between master and slave? What


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

On 04.08.2016 05:21, Patrick B wrote:

Hi all,

I'm currently using PostgreSQL 9.2. I noticed that the wal_files are 
being generated by the master well, no problems. But on the slaves, it 
seems to be a delay to the delivery of those wal_files.


I got two slaves using streaming replication and wal files shipment 
from Master.


*On the master:*

ls -ltr /var/lib/pgsql/9.2/data/pg_xlogs/
Aug  4 02:18 0002159D00D1
Aug  4 02:18 0002159D00D2
Aug  4 02:18 0002159D00D3


select * from pg_current_xlog_location();
159D/D6C8DAF8


So, seems to be ok.


*On the slave:*


ls -ltr /var/lib/pgsql/9.2/wal_archive:
Aug  4 00:58 0002159C0071
Aug  4 00:58 0002159C0072
Aug  4 00:58 0002159C0073


See the time difference? 2 hours? It seems the files are being 
delivered with 2 hours delay.


The streaming replication is working fine... But if that goes down, 
I'll need the wal_files up to date to recover the database.


How can I see what's going on? What would be the steps? any tips?

Cheers
Patrick.




Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-28 Thread Alex Ignatov


On 28.07.2016 19:43, Scott Marlowe wrote:

On Thu, Jul 28, 2016 at 10:32 AM, Alex Ignatov <a.igna...@postgrespro.ru> wrote:

  Oh, so in contrast to "Oracle world"  "Postgres world" DBA in their right
to do major upgrade without complete and tested backup?
Ok,  I understand you. In Postgres world there always sky is blue and sun is
shining.

Of course we have backups. But we also have slony. So we CAN go back
and forth between latest and previous without a restore.




And? Oracle and MySQL don't have it but can downgrade right out of the 
box. Quick and easy.


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-28 Thread Alex Ignatov


On 28.07.2016 18:41, Igor Neyman wrote:

-Original Message-
From: Alex Ignatov [mailto:a.igna...@postgrespro.ru]
Sent: Thursday, July 28, 2016 11:26 AM
To: Igor Neyman <iney...@perceptron.com>; Rakesh Kumar 
<rakeshkumar46...@gmail.com>
Cc: PostgreSQL General <pgsql-general@postgresql.org>
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL


On 28.07.2016 18:09, Igor Neyman wrote:

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alex Ignatov
Sent: Thursday, July 28, 2016 10:59 AM
To: Rakesh Kumar <rakeshkumar46...@gmail.com>
Cc: PostgreSQL General <pgsql-general@postgresql.org>
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL


On 28.07.2016 17:50, Rakesh Kumar wrote:

On Thu, Jul 28, 2016 at 10:38 AM, Alex Ignatov <a.igna...@postgrespro.ru> wrote:


Sorry, what? You can rollback to previous version of software what you had.

https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm
#
UPGRD007

Not so fast. This requires the db to be set in compatibility mode of
the lower version. Once that is removed, no downgrade is possible.

Also starting from Oracle 12 this downgrade option has been removed.
That is, you can't upgrade in-place from Or 11 to 12 and then
downgrade back. The only supported way is the restore from an old
backup (which is true  for all products). I don't work in Oracle, but
got this clarified by someone who is an Oracle dba.



Not true -

https://docs.oracle.com/database/121/UPGRD/downgrade.htm#UPGRD007

About compatible params you are right but downgrade is possible from 12c.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com The Russian Postgres
Company

__
___

Alex, documentation you point to just proves what Rakesh said:

"For supported releases of Oracle Database, you can downgrade a database to the 
release from which you last upgraded. For example, if you recently upgraded from release 
11.2.0.4 to Oracle Database 12c, and you did not change the compatible initialization 
parameter to 12.1 or higher, then you can downgrade to release 11.2.0.4."

So, you can downgrade only if you didn't "activate" newer feature (didn't 
compatibility to higher version).
But then, what's the point in upgrading (I know, there are some like bug fixes, 
etc...), if you aren't going to use newer features.

Regards,
Igor

Not true again.

  From Oracle docs
https://docs.oracle.com/database/121/UPGRD/intro.htm#UPGRD001 Section "How the 
COMPATIBLE Initialization Parameter Operates in Oracle Database"
-
If you run an Oracle Database 12c database with the COMPATIBLE initialization 
parameter set to 11.0.0, then it generates database structures on disk that are 
compatible with Oracle Database 11g.
Therefore, the COMPATIBLE initialization parameter enables or disables the use 
of features. If you try to use any new features that make the database 
incompatible with the COMPATIBLE initialization parameter, then an error 
occurs. However, any new features that do not make incompatible changes on disk 
are enabled.

So if feature do not change disk in incompatible way this features is enable.

Also from this
https://docs.oracle.com/database/121/REFRN/GUID-6C57EE11-BD06-4BB8-A0F7-D6CDDD086FA9.htm#REFRN10019
:

"This parameter specifies the release with which Oracle must maintain compatibility. 
It enables you to take advantage of the maintenance improvements of a new release 
immediately in your production systems without testing the new functionality in your 
environment. Some features of the release may be restricted."

So this parameter is made with rollback in mind and you have some new feature 
from newer software release.

Alex Ignatov
Postgres Professional:http://www.postgrespro.com
The Russian Postgres Company



B.t.w., in Postgres a release which doesn't change "database structures" is called 
"minor" (just replacing binaries), and allows downgrade.

"Compatible" or not, in the "Oracle world" no DBA in their right mind will 
attempt a major upgrade without having a complete (and tested) backup that could be used in case of 
the upgrade going wrong.

Regards,
Igor

 Oh, so in contrast to the "Oracle world", a "Postgres world" DBA is within their 
rights to do a major upgrade without a complete and tested backup?
OK, I understand you. In the Postgres world the sky is always blue and 
the sun is shining.



Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-28 Thread Alex Ignatov


On 28.07.2016 18:09, Igor Neyman wrote:

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alex Ignatov
Sent: Thursday, July 28, 2016 10:59 AM
To: Rakesh Kumar <rakeshkumar46...@gmail.com>
Cc: PostgreSQL General <pgsql-general@postgresql.org>
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL


On 28.07.2016 17:50, Rakesh Kumar wrote:

On Thu, Jul 28, 2016 at 10:38 AM, Alex Ignatov <a.igna...@postgrespro.ru> wrote:


Sorry, what? You can rollback to previous version of software what you had.

https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm#
UPGRD007

Not so fast. This requires the db to be set in compatibility mode of
the lower version. Once that is removed, no downgrade is possible.

Also starting from Oracle 12 this downgrade option has been removed.
That is, you can't upgrade in-place from Or 11 to 12 and then
downgrade back. The only supported way is the restore from an old
backup (which is true  for all products). I don't work in Oracle, but
got this clarified by someone who is an Oracle dba.



Not true -

https://docs.oracle.com/database/121/UPGRD/downgrade.htm#UPGRD007

About compatible params you are right but downgrade is possible from 12c.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com The Russian Postgres Company

_

Alex, documentation you point to just proves what Rakesh said:

"For supported releases of Oracle Database, you can downgrade a database to the 
release from which you last upgraded. For example, if you recently upgraded from release 
11.2.0.4 to Oracle Database 12c, and you did not change the compatible initialization 
parameter to 12.1 or higher, then you can downgrade to release 11.2.0.4."

So, you can downgrade only if you didn't "activate" newer feature (didn't 
compatibility to higher version).
But then, what's the point in upgrading (I know, there are some like bug fixes, 
etc...), if you aren't going to use newer features.

Regards,
Igor

Not true again.

From Oracle docs 
https://docs.oracle.com/database/121/UPGRD/intro.htm#UPGRD001 Section 
"How the COMPATIBLE Initialization Parameter Operates in Oracle Database"

-
If you run an Oracle Database 12c database with the COMPATIBLE 
initialization parameter set to 11.0.0, then it generates database 
structures on disk that are compatible with Oracle Database 11g. 
Therefore, the COMPATIBLE initialization parameter enables or disables 
the use of features. If you try to use any new features that make the 
database incompatible with the COMPATIBLE initialization parameter, then 
an error occurs. However, any new features that do not make incompatible 
changes on disk are enabled.


So if a feature does not change the on-disk format in an incompatible way, that 
feature stays enabled.


Also from this 
https://docs.oracle.com/database/121/REFRN/GUID-6C57EE11-BD06-4BB8-A0F7-D6CDDD086FA9.htm#REFRN10019 
:


"This parameter specifies the release with which Oracle must maintain 
compatibility. It enables you to take advantage of the maintenance 
improvements of a new release immediately in your production systems 
without testing the new functionality in your environment. Some features 
of the release may be restricted."


So this parameter is made with rollback in mind, and you still get some new 
features from the newer software release.


Alex Ignatov
Postgres Professional:http://www.postgrespro.com
The Russian Postgres Company







Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-28 Thread Alex Ignatov


On 28.07.2016 17:50, Rakesh Kumar wrote:

On Thu, Jul 28, 2016 at 10:38 AM, Alex Ignatov <a.igna...@postgrespro.ru> wrote:


Sorry, what? You can rollback to previous version of software what you had.

https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm#UPGRD007

Not so fast. This requires the db to be set in compatibility mode of
the lower version. Once that is removed, no downgrade is possible.

Also starting from Oracle 12 this downgrade option has been removed.
That is, you can't upgrade in-place from Or 11 to 12 and then
downgrade back. The only supported way is the restore from an old
backup (which is true  for all products). I don't work in Oracle, but
got this clarified by someone who is an Oracle dba.




Not true -

https://docs.oracle.com/database/121/UPGRD/downgrade.htm#UPGRD007

About the compatible parameter you are right, but downgrade is possible from 12c.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-28 Thread Alex Ignatov


On 27.07.2016 19:51, Rakesh Kumar wrote:

On Wed, Jul 27, 2016 at 12:47 PM, Bruce Momjian <br...@momjian.us> wrote:


Yes.  I was saying I don't know how to improve pg_upgrade to address it.

This problem is there even in oracle/db2/sqlserver. None of them allow
rollback to the lower version
unless it is a minor version upgrade. Major version upgrade almost
definitely involves change in transaction log
(WAL) structure and hence no rollback.



Sorry, what? You can roll back to the previous version of the software you had.

https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm#UPGRD007


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company






Re: [GENERAL] How to stop script executions

2016-07-26 Thread Alex Ignatov

revoke usage on language plpythonu from username;

revoke usage on language plperl from username;
revoke usage on language plperlu from username;
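
To see which procedural languages are actually installed in a given database (and 
whether they are trusted), something like this works:

SELECT lanname, lanpltrusted
FROM pg_language
WHERE lanispl;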

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

On 26.07.2016 16:05, David G. Johnston wrote:
On Tue, Jul 26, 2016 at 8:53 AM, Dev Kumkar <devdas.kum...@gmail.com 
<mailto:devdas.kum...@gmail.com>>wrote:


Hello Experts,

I want to have my postgreSQL database to only execute SQLs and
avoid execution of perl and python script executions.

Can you please suggest ways to achieve this?


The way to go about prohibiting their usage altogether is specific to 
your installation method.  But by default those languages are not 
installed into newly created databases and only a superuser can 
"CREATE EXTENSION" so without superuser intervention uses in the 
database will be unable to create functions in those languages.  If 
that is an insufficient level of prevention you would have to somehow 
get your installation of PostgreSQL to not include those core 
extensions.  I'm pretty sure that during compilation you ​can do 
this.  For packaged distributions I do not know what all is involved 
(i.e., what you have to choose to not install).



David J.




Re: [GENERAL] pg_dump without any SET command in header of output plain text sql file

2016-07-22 Thread Alex Ignatov

Ok, thanks !

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

On 22.07.2016 06:49, Sameer Kumar wrote:



On Thu, Jul 21, 2016 at 7:14 PM Alex Ignatov <a.igna...@postgrespro.ru 
<mailto:a.igna...@postgrespro.ru>> wrote:


And what is the options you mentioned about?


I stand corrected.
I think the only ones you can avoid are OWNERSHIP and TABLESPACE 
commands.


I think sed would be the best tool here.

Alex Ignatov
Postgres Professional:http://www.postgrespro.com
The Russian Postgres Company

On 20.07.2016 21:02, Sameer Kumar wrote:



On Thu, 21 Jul 2016, 1:17 a.m. Alex Ignatov,
<a.igna...@postgrespro.ru <mailto:a.igna...@postgrespro.ru>> wrote:

Hello everyone!

Is there any way to make pg_dump(9.5) to dump table (with
data) without
any SET command in the header of output plain sql file?


Yeah you need to use specific switch/option to disable each set
command


P.S. Yeah I know about sedding =)


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




-- 
--

Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com



--
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com





Re: [GENERAL] pg_dump without any SET command in header of output plain text sql file

2016-07-21 Thread Alex Ignatov

And what are the options you mentioned?

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

On 20.07.2016 21:02, Sameer Kumar wrote:



On Thu, 21 Jul 2016, 1:17 a.m. Alex Ignatov, <a.igna...@postgrespro.ru 
<mailto:a.igna...@postgrespro.ru>> wrote:


Hello everyone!

Is there any way to make pg_dump(9.5) to dump table (with data)
without
any SET command in the header of output plain sql file?


Yeah you need to use specific switch/option to disable each set command


P.S. Yeah I know about sedding =)


    --
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




--
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com





[GENERAL] pg_dump without any SET command in header of output plain text sql file

2016-07-20 Thread Alex Ignatov

Hello everyone!

Is there any way to make pg_dump (9.5) dump a table (with data) without 
any SET commands in the header of the output plain SQL file?


P.S. Yeah I know about sedding =)


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] How safe is pg_basebackup + continuous archiving?

2016-06-30 Thread Alex Ignatov


On 29.06.2016 17:51, Kaixi Luo wrote:

Hello,

We use PostgreSQL at work and we do daily backups with pg_dump. After 
that we pg_restore the dump and check the database that there isn't 
any data corruption. As the database grows, the whole pg_dump / 
pg_restore cycle time is quickly approaching 24h, so we need to change 
strategies.


We've thought about using pg_basebackup + continuous archiving as an 
alternative backup process, but I have doubts regarding the safety of 
such procedure. As far as I know, pg_basebackup is done via rsync (and 
we also archive wals using rsync), so if by any chance  disk 
corruption occurs on the master server, the corruption would be 
carried over to our backup server.


How can we check for backup corruption in this case? Thanks you very much.

Kaixi Luo

Hello!
Only pg_dump plus data checksums turned on can make you "sure" about the absence of 
corruption in a backup. But! If you have somehow lost a data file belonging to a 
relation, or it was zeroed by, say, a power loss (there were some issues with XFS in 
the past), then even with pg_dump you will never know it.
And there is no other method in PG to check the database for 
corruption. Also, PG has no checksums on the clog files, so if any 
corruption happens in those files you will likewise never know it.
So for now pg_basebackup plus WAL archiving is like walking on a minefield 
with your eyes tightly closed. You never know when it will explode!



Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] Corrupted Dabatabase

2016-06-27 Thread Alex Ignatov


On 27.06.2016 16:28, Pau Marc Muñoz Torres wrote:

Hello Everydody,

 My database is corrupted, each time i try to list all the tables in 
the database  i have got the following message


hla2db=> \d
ERROR:  could not open relation 1663/16385/1255: Read-only file system

I read that my best option is to use my backup. Before using my backup 
i would like to remove all databases in postgresql and create 
everything again from scratch. Unfortunatlly i can't see the list of 
databases, althought i am able to create databse




 can you suggest what to do? is it a good idea to reinstall postgresql?


Pau Marc Muñoz Torres
skype: pau_marc
http://www.linkedin.com/in/paumarc
http://www.researchgate.net/profile/Pau_Marc_Torres3/info/



Maybe your filesystem is mounted read-only after a power loss or some OS 
crash?


What OS do you have - Windows or some kind of Unix (Linux, etc.)? Also, 
what file system is the PostgreSQL data directory on?


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: R: [GENERAL] Vacuum full: alternatives?

2016-06-20 Thread Alex Ignatov


On 20.06.2016 17:30, Vik Fearing wrote:

On 20/06/16 16:23, Martín Marqués wrote:

El 20/06/16 a las 09:50, Melvin Davidson escribió:



but it won't let it grow too (or am I missing something).

Yes, you are missing something. By partioning and {Vacuum Full only the
table with data no longer needed}, the rest of the data remains
available to the users
AND space is reclaimed by the O/S, so it's the best of both worlds.

That's not entirely true. Think about a SELECT which has to scan all
child tables.

Or any SELECT on the parent at all.  The planner needs to examine the
CHECK constraints on the children and can't do it if the child is locked
in ACCESS EXCLUSIVE mode.

+1


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company






[GENERAL] PostgresSQL and HIPAA compliance

2016-06-17 Thread Alex John
Hello, I have a few questions regarding the use of PostgreSQL and HIPAA
compliance. I work for a company that plans on storing protected health
information (PHI) on our servers. We have looked at various solutions for doing
so, and RDS is a prime candidate except for the fact that they have explicitly
stated that the Postgres engine is *not* HIPAA compliant.

Users on the IRC channel generally say that the guidelines are more catered
towards building better firewalls and a sane access policy, but I would like to
know if there is anything within the implementation of Postgres itself that
violates said compliance.

If anyone works at a similar company and utilizes postgresql to store PHI,
please let me know.

Thank you,
  Alex



[GENERAL] Retrieving comment of rules and triggers

2016-06-09 Thread Alex Magnum
Hi, is there a way to retrieve the comment of rules and triggers? I worked
it out for functions, tables, and views but am kind of stuck with rules and
triggers.
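
One possible starting point (a sketch; comments created with COMMENT ON TRIGGER and 
COMMENT ON RULE are stored in pg_description keyed by pg_trigger and pg_rewrite 
respectively):

-- Comments on triggers:
SELECT t.tgname,
       c.relname AS table_name,
       obj_description(t.oid, 'pg_trigger') AS comment
FROM pg_trigger t
JOIN pg_class c ON c.oid = t.tgrelid
WHERE NOT t.tgisinternal;

-- Comments on rules (the _RETURN rules belong to views and are usually skipped):
SELECT r.rulename,
       c.relname AS table_name,
       obj_description(r.oid, 'pg_rewrite') AS comment
FROM pg_rewrite r
JOIN pg_class c ON c.oid = r.ev_class
WHERE r.rulename <> '_RETURN';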

Any help is appreciated.
Thanks
Alex


[GENERAL] WAL's listing in pg_xlog by some sql query

2016-06-02 Thread Alex Ignatov

Hello!

Can I list all WAL files in pg_xlog  by using some sql query in Postgres?
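
One way that seems to work (a sketch; pg_ls_dir and pg_stat_file are superuser-only, 
and from version 10 on the directory is pg_wal rather than pg_xlog):

SELECT f AS wal_file,
       (pg_stat_file('pg_xlog/' || f)).size         AS bytes,
       (pg_stat_file('pg_xlog/' || f)).modification AS modified
FROM pg_ls_dir('pg_xlog') AS f
WHERE f ~ '^[0-9A-F]{24}$'   -- keep only WAL segment file names
ORDER BY f;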


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Silent data loss in its pure form

2016-05-30 Thread Alex Ignatov


_
From: David G. Johnston <david.g.johns...@gmail.com>
Sent: Monday, May 30, 2016 23:44
Subject: Re: [GENERAL] Silent data loss in its pure form
To: Alex Ignatov <a.igna...@postgrespro.ru>
Cc:  <pgsql-general@postgresql.org>, Scott Marlowe <scott.marl...@gmail.com>


On Mon, May 30, 2016 at 4:22 PM, Alex Ignatov <a.igna...@postgrespro.ru> wrote:

_
From: Scott Marlowe <scott.marl...@gmail.com>
Sent: Monday, May 30, 2016 20:14
Subject: Re: [GENERAL] Silent data loss in its pure form
To: Alex Ignatov <a.igna...@postgrespro.ru>
Cc:  <pgsql-general@postgresql.org>


On Mon, May 30, 2016 at 10:57 AM, Alex Ignatov <a.igna...@postgrespro.ru> wrote:
> Following this bug reports from redhat
> https://bugzilla.redhat.com/show_bug.cgi?id=845233
>
> it rising some dangerous issue:
>
> If on any reasons you data file is zeroed after some power loss(it is the
> most known issue on XFS in the past) when you do
> select count(*) from you_table you got zero if you table was in one
> 1GB(default) file or some other numbers !=count (*) from you_table before
> power loss
> No errors, nothing suspicious in logs. No any checksum errors. Nothing.
>
> Silent data loss is its pure form.
>
> And thanks to all gods that you notice it before backup recycling which
> contains good data.
> Keep in mind it while checking you "backups" in any forms (pg_dump or the
> more dangerous and short-spoken PITR file backup)
>
> You data is always in danger with "zeroed data file is normal file"
> paradigm.

That bug shows as having been fixed in 2012. Are there any modern,
supported distros that would still have it? It sounds really bad btw.



It is not about modern distros; it is about possible silent data loss on old 
distros. We have replication and some form of data checksumming, but we are 
powerless in front of this XFS bug, simply because "a zeroed file is your good friend 
in Postgres". With the "a zero-length file is a good file" paradigm and this noted XFS bug, PG 
as it is now is a "colossus with feet of clay": it can do many things but it cannot 
even tell us that we have trouble with our precious data. No need to prevent anything 
or apply some other AI magic when the zero doomsday has come. What we 
need now is some error report about a suspiciously zeroed file, to make us sure 
that something went wrong and we have to do a recovery. Today, PG "power loss" 
recovery combined with this XFS bug poisons our assurance that recovery went well. It 
went well even with a zeroed file. That is not healthy behavior; it is like a walk on 
a minefield with eyes closed. I think it is a very dangerous view of data to 
have data files without any header in them and without any file checking, at 
least at PG start. With this known XFS bug it can lead to undetected and 
unavoidable loss of data.

​For those not following -general this is basically an extension of the 
following thread.
"Deleting a table file does not raise an error when the table is touched 
afterwards, why?"
https://www.postgresql.org/message-id/flat/184509399.5590018.1464622534207.javamail.zim...@dbi-services.com#184509399.5590018.1464622534207.javamail.zim...@dbi-services.com
David J.
It is not an extension of that thread; it is about the XFS bug and how PG ignores a 
zeroed file even during power-loss recovery. That thread is just a topic starter on 
such an important theme as how to silently lose your data with broken XFS and PG. 
The key word is silently, without any human intervention, under the "a zero-length file is 
a good file" paradigm. It is not even like unlinking files by hand.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



  

Re: [GENERAL] Silent data loss in its pure form

2016-05-30 Thread Alex Ignatov


_
From: Scott Marlowe <scott.marl...@gmail.com>
Sent: Monday, May 30, 2016 20:14
Subject: Re: [GENERAL] Silent data loss in its pure form
To: Alex Ignatov <a.igna...@postgrespro.ru>
Cc:  <pgsql-general@postgresql.org>


On Mon, May 30, 2016 at 10:57 AM, Alex Ignatov <a.igna...@postgrespro.ru> wrote:
> Following this bug reports from redhat
> https://bugzilla.redhat.com/show_bug.cgi?id=845233
>
> it rising some dangerous issue:
>
> If on any reasons you data file is zeroed after some power loss(it is the
> most known issue on XFS in the past) when you do
> select count(*) from you_table you got zero if you table was in one
> 1GB(default) file or some other numbers !=count (*) from you_table before
> power loss
> No errors, nothing suspicious in logs. No any checksum errors. Nothing.
>
> Silent data loss is its pure form.
>
> And thanks to all gods that you notice it before backup recycling which
> contains good data.
> Keep in mind it while checking you "backups" in any forms (pg_dump or the
> more dangerous and short-spoken PITR file backup)
>
> You data is always in danger with "zeroed data file is normal file"
> paradigm.

That bug shows as having been fixed in 2012. Are there any modern,
supported distros that would still have it? It sounds really bad btw.



It is not about modern distros; it is about possible silent data loss on old 
distros. We have replication and some form of data checksumming, but we are 
powerless in front of this XFS bug, simply because "a zeroed file is your good friend 
in Postgres". With the "a zero-length file is a good file" paradigm and this noted XFS bug, PG 
as it is now is a "colossus with feet of clay": it can do many things but it cannot 
even tell us that we have trouble with our precious data. No need to prevent anything 
or apply some other AI magic when the zero doomsday has come. What we 
need now is some error report about a suspiciously zeroed file, to make us sure 
that something went wrong and we have to do a recovery. Today, PG "power loss" 
recovery combined with this XFS bug poisons our assurance that recovery went well. It 
went well even with a zeroed file. That is not healthy behavior; it is like a walk on 
a minefield with eyes closed. I think it is a very dangerous view of data to 
have data files without any header in them and without any file checking, at 
least at PG start. With this known XFS bug it can lead to undetected and 
unavoidable loss of data.
  

[GENERAL] Silent data loss in its pure form

2016-05-30 Thread Alex Ignatov
Following this bug reports from redhat 
https://bugzilla.redhat.com/show_bug.cgi?id=845233


it rising some dangerous issue:

If for any reason your data file is zeroed after a power loss (this was 
the best-known issue on XFS in the past), then when you do
select count(*) from your_table you get zero if your table fit in one 
1GB (default) file, or some other number != count(*) from your_table 
before the power loss.

No errors, nothing suspicious in the logs. No checksum errors. Nothing.

Silent data loss in its pure form.

And thank all the gods if you notice it before backup recycling discards the 
copies that still contain good data.
Keep this in mind while checking your "backups" in any form (pg_dump or 
the more dangerous and tight-lipped PITR file backup).


Your data is always in danger under the "a zeroed data file is a normal file" 
paradigm.
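
As a crude first-line check, one can at least look for relations whose main fork is 
suspiciously empty while the catalog still expects rows; a sketch (it only catches 
the zero-length case, and reltuples is just the planner's estimate):

SELECT c.oid::regclass             AS relation,
       pg_relation_filepath(c.oid) AS file_path,
       pg_relation_size(c.oid)     AS bytes,
       c.reltuples                 AS estimated_rows
FROM pg_class c
WHERE c.relkind = 'r'
  AND pg_relation_size(c.oid) = 0
  AND c.reltuples > 0;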



--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread Alex Ignatov


On 30.05.2016 18:35, Daniel Westermann wrote:

Hi,

I need to understand something: Lets assume I have a table t5 with 
1'000'000 rows:


(postgres@[local]:5432) [sample] > select count(*) from t5;
  count
---------
 1000000
(1 row)

Time: 2363.834 ms
(postgres@[local]:5432) [sample] >

I get the file for that table:

postgres@pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] oid2name 
-d sample -t t5

From database "sample":
  Filenode  Table Name
--
 32809  t5


Then I delete the file:

postgres@pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] rm 32809

When doing the count(*) on the table again:

(postgres@[local]:5432) [sample] > select count(*) from t5;
  count
---------
 1000000
(1 row)

No issue in the log. This is probably coming from the cache, isn't it? 
Is this intended and safe?


Then I restart the instance and do the select again:

2016-05-30 19:25:20.633 CEST - 9 - 2777 -  - @ FATAL:  could not open 
file "base/16422/32809": No such file or directory
2016-05-30 19:25:20.633 CEST - 10 - 2777 -  - @ CONTEXT:  writing 
block 8192 of relation base/16422/32809


(postgres@[local]:5432) [sample] > select count(*) from t5;
 count
--------
 437920
(1 row)

Can someone please tell me the intention behind that? From my point of 
view this is dangerous. If nobody is monitoring the log (which sadly 
is the case in reality) nobody will notice that only parts of the 
table are there. Wouldn't it be much more safe to raise an error as 
soon as the table is touched?


PostgreSQL version:

(postgres@[local]:5432) [sample] > select version();
-[ RECORD 1 ]---
version | PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by 
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit


Thanks in advance
Daniel



Hi, if you delete a file out from under an external process that has the file open, that 
external process will never notice it. Only after it closes the file 
handle will you run into issues such as "file does not exist" and the like.


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



[GENERAL] Subquery uses ungrouped column

2016-05-26 Thread Alex Ignatov

Hello!

Why the following query:

SELECT (select msc_id
  from collectors
  where id = substring(fileid from -1)
   ) msc_id
   from ip_data_records
   group by substring(fileid from -1)

gives me:

ERROR:  subquery uses ungrouped column "ip_data_records.fileid" from 
outer query

LINE 3: where id = substring(fileid from -1)

but  the following query:

SELECT (select msc_id
  from collectors
  where id = fileid
   ) msc_id
   from ip_data_records
   group by fileid

is working ok?
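
One workaround that avoids the error is to do the grouping in a derived table first 
and reference only its output column in the correlated subquery; a sketch (keeping 
the original expressions verbatim):

SELECT (SELECT msc_id
          FROM collectors
         WHERE id = g.file_key
       ) AS msc_id
  FROM (SELECT substring(fileid from -1) AS file_key
          FROM ip_data_records
         GROUP BY substring(fileid from -1)) AS g;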

--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



[GENERAL] No warnings or errors after same sequential revoke

2016-05-20 Thread Alex Ignatov

Hello!
Why do we get no warnings or errors about the fact that no such grant exists 
after the first revoke?

postgres=> grant select(i2) on table user1.t2 to user2;
GRANT
postgres=> revoke select(i2) on table user1.t2 from user2;
REVOKE
postgres=> revoke select(i2) on table user1.t2 from user2;
REVOKE
postgres=> revoke select(i2) on table user1.t2 from user2;
REVOKE
postgres=> revoke select(i2) on table user1.t2 from user2;
REVOKE

No warning that this grant does not exist.
It looks like revoking nonexistent grants is allowed???


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



[GENERAL] Keeping top N records of a group

2016-05-14 Thread Alex Magnum
Hi,
I want to archive data and am looking for a query to replace a rather slow
function I am currently using.

The idea is that for every sym and doc_key I want to keep the records of
the top 2 sources. E.g. in the table below I want to archive
sym 1022 of source 3000 but keep the 4 records from sources 4045 and 4081
(a possible query is sketched after the table).

Any help on this would be appreciated.


 source| sym  | doc_id  | doc_key
---+--+-+--
  3000 | 1022 |  551008 | 23834363  <-- Archive
  3000 | 1022 |  551008 | 23834363  <-- Archive
  4045 | 1022 | 3699290 | 23834363
  4045 | 1022 | 3699290 | 23834363
  4081 | 1022 | 3811196 | 23834363
  4081 | 1022 | 3811196 | 23834363
  3000 | 1028 |  550997 | 23834363
  3468 | 1085 | 1740526 | 23834363
  3000 | 1149 |  551017 | 23834363
  4045 | 1149 | 3699291 | 23834363
  3000 | 12   |  551015 | 23834363
  3000 | 12   |  551015 | 23834363
  3951 | 12   | 3147700 | 23834363
  3951 | 12   | 3147700 | 23834363
  3000 | 13   |  551000 | 23834363
  4045 | 13   | 3699283 | 23834363
  3000 | 1327 |  551010 | 23834363
  3971 | 1327 | 3394469 | 23834363
  3000 | 15   |  551001 | 23834363
  4045 | 15   | 3699284 | 23834363
  3000 | 16   |  551002 | 23834363
  4045 | 16   | 3699285 | 23834363
  3000 | 18   |  551013 | 23834363
  4045 | 18   | 3699286 | 23834363
  3000 | 257  |  551005 | 23834363
  3951 | 257  | 3147701 | 23834363
  3000 | 2795 |  551011 | 23834363 <-- Archive
  3459 | 2795 | 1710571 | 23834363 <-- Archive
  3905 | 2795 | 2994791 | 23834363
  4045 | 2795 | 3699292 | 23834363
  3060 | 2913 |  856199 | 23834363
  3000 | 2954 |  551012 | 23834363 <-- Archive
  3971 | 2954 | 3394470 | 23834363
  4212 | 2954 | 4650870 | 23834363
  3183 | 3427 | 1055492 | 23834363 <-- Archive
  3971 | 3427 | 3394471 | 23834363
  4248 | 3427 | 4763105 | 23834363
  3188 | 594  | 1062642 | 23834363 <-- Archive
  3188 | 594  | 1062642 | 23834363 <-- Archive
  3191 | 594  | 1067501 | 23834363 <-- Archive
  3191 | 594  | 1067501 | 23834363 <-- Archive
  3192 | 594  | 1068391 | 23834363 <-- Archive
  3192 | 594  | 1068391 | 23834363 <-- Archive
  3199 | 594  | 1096070 | 23834363 <-- Archive
  3199 | 594  | 1096070 | 23834363 <-- Archive
  3303 | 594  | 1305467 | 23834363
  3303 | 594  | 1305467 | 23834363
  4117 | 594  | 4000987 | 23834363
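
One way to flag the rows to archive is a window function; a sketch, assuming the 
table is called doc_sources (no name is given above) and that "top 2 sources" means 
the two highest source values per (sym, doc_key), which matches the markings in the 
sample data:

WITH ranked AS (
    SELECT source, sym, doc_id, doc_key,
           dense_rank() OVER (PARTITION BY sym, doc_key
                              ORDER BY source DESC) AS src_rank
    FROM doc_sources
)
SELECT source, sym, doc_id, doc_key
FROM ranked
WHERE src_rank > 2;   -- candidates to archive; use src_rank <= 2 for the rows to keep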

Thanks a lot for any ideas.
Alex


Re: [GENERAL] Does this perf output seem 'normal'?

2016-04-28 Thread Alex Ignatov


On 28.04.2016 17:50, Peter Devoy wrote:

Also do you  know that in perf report second column is the total cpu  time 
share and not the first

Thank you, I did notice they did not add up -- that makes more sense now.


Can you expand node from perf report

Yes, my pleasure.  I added "--call-graph fractal,2,callee" to the
command, hopefully it seems more useful:

https://cloud.githubusercontent.com/assets/4375326/14889937/a1378982-0d58-11e6-89b6-aac5e1620fbb.png

To my amateur eye it seems like there is suspicious amount of memory &
thread management going on...?


--
Peter

Oh, this report is much more informative, and it looks healthy =)!
At first glance I see that postgres is simply forking, i.e. spawning a 
child process for the new session. Creating a new PG connection is a slow 
process; even if your select is lightning fast, the total response time can be 
very long.


Have you ever heard about pgbouncer - the lightweight Postgres connection 
pooler? It is placed between your app server and PG and proxies app 
server queries to the PG server. pgbouncer keeps a constant number of 
connections to the PG server and maintains a queue for incoming 
connections from the app server.


TL;DR Use pgbouncer between the app server and Postgres =)

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] Does this perf output seem 'normal'?

2016-04-28 Thread Alex Ignatov



On 28.04.2016 1:11, Peter Devoy wrote:

I have now done a recording for 60 seconds during a batch of 1000
requests and posted the results on a new issue on the Mapnik repo.

Although Postgres still comes out on top in the perf results I
struggle to believe this is a Postgres issue.  But, if anyone is
curious, the issue is here:
https://github.com/mapnik/mapnik/issues/3414

I may come back here if I don't have luck with the Mapnik developers.

All the best


Peter
3XE
P: 01326 567155
M: 07770 693662
A: 3XE Ltd
Tremough Innovation Centre
PENRYN
TR10 9TA
3XE Ltd · Registered in England and Wales · 9356871


On 27 April 2016 at 16:45, Peter Devoy <pe...@3xe.co.uk> wrote:

If you really want to profile this, you should fire it off in a tight loop, 
using wget or ab2 or curl.

Thanks Jeff, that sounds like a smart idea.  I will try later when I
have access to the server.


Hi! What do you want to see in perf stats? Maybe you can explain your problem 
more in details?

Hi Alex )  I am hoping to find out which function calls are taking the
longest, because the code is taking too long to only parse a small
XML file and do a few quick database queries.  The last version of
this software was able to do a lot more in the same period of time.  I
realise this most likely a client software issue but was hoping it may
be obvious whether Mapnik is spending too much time with Postgres.

Unfortunately I may just have to try again to build Mapnik from source
so that I can get debugging/profiling output, etc. -- I was hoping to
avoid it because it is quite painful.
Can you expand the nodes in the perf report with 33.55%, 13.37% and 38.04%, 
38.04%?
PS Also, did you know that in a perf report the second column is the total CPU 
time share, and not the first? That is why the total of the second 
column is 100% but the sum of the first can be != 100%. The first column is the % 
of time the child function (procedure) runs relative to the main procedure's 
time.




--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] Does this perf output seem 'normal'?

2016-04-27 Thread Alex Ignatov



On 27.04.2016 2:27, Peter Devoy wrote:

Hi all,

I am trying to work out why a piece of software, Mapnik, is executing
slowly.  All it is doing is loading a config file which causes about
12 preparation queries (i.e. with LIMIT 0) to be executed.  I can see
from pg_stat_statements these only take ~1ms in their totality.

So next I ran "perf record -a -g sleep 0.4" and quickly loaded my PHP
page to start the process (I am interfacing with mapnik via a PHP
Extension), this is a sample of the result:

https://cloud.githubusercontent.com/assets/4375326/14836015/08b0a8b4-0c05-11e6-96e5-f1836497cdfe.png

The PHP page returns in ~60ms (well within the 400ms used for the perf
recording).  If possible, please can anyone with more familiarity with
perf and Postgres tell me whether the result of the recording looks
healthy?

I am wondering if there is some problem with the way in which Mapnik
is implementing libpq -- I already moved from Mapnik 2.2 because it
was effectively DOSing Postgres by leaving client connections open.

Any help would be greatly appreciated.


Kind regards

Peter


Hi! What do you want to see in perf stats? Maybe you can explain your 
problem more in details?


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] Initdb --data-checksums by default

2016-04-22 Thread Alex Ignatov



On 21.04.2016 20:26, Vick Khera wrote:


On Thu, Apr 21, 2016 at 9:00 AM, Alex Ignatov 
<a.igna...@postgrespro.ru <mailto:a.igna...@postgrespro.ru>> wrote:


Ms Windows doesnt have ZFS support. AIX also doesnt. Z/OS also. 
Any other commercial Linux distros don't have ZFS support. Yes you

can compile it and use on production but...
But PG runs on the above OS, but have check sum off by default.
Thats the deal. And it is not related to ZFS existence or any
other FS with checksums in any way. The question is only in
performance hit when you turn it on and  now I am in the process
of testing it...


I don't care about those platforms, so changing the default is just 
making more work for me. :)
I see ;). By the way, according to my tests, turning this option on costs only 
1-2% of throughput.


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Initdb --data-checksums by default

2016-04-21 Thread Alex Ignatov



On 20.04.2016 23:28, Vick Khera wrote:


On Wed, Apr 20, 2016 at 3:43 AM, Alex Ignatov 
<a.igna...@postgrespro.ru <mailto:a.igna...@postgrespro.ru>> wrote:


What do you think about defaulting --data-checksums in initdb?


I think that ZFS storing my database files already does this and can 
correct for it using replicated copies, so why do I need a second 
layer of checksums?


MS Windows doesn't have ZFS support. AIX doesn't either. Nor does z/OS. 
Commercial Linux distros don't ship ZFS support either. Yes, you can 
compile it and use it in production, but...
Yet PG runs on all of the above OSes, and has checksums off by default. That's 
the deal. And it is not related to the existence of ZFS or any other FS with 
checksums in any way. The only question is the performance hit when you 
turn it on, and I am now in the process of testing that...


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Initdb --data-checksums by default

2016-04-20 Thread Alex Ignatov



On 20.04.2016 16:58, Alvaro Herrera wrote:

Alex Ignatov wrote:

Hello everyone!
Today in Big Data epoch silent data corruption becoming more and more issue
to afraid of. With uncorrectable read error rate ~ 10^-15  on multiterabyte
disk bit rot is the real issue.
I think that today checksumming data  must be mandatory  set by default.
Only if someone doesn't care about his data he can manually turn this option
off.

In principle I support the idea of turning data checksums by default,
but can you provide some numbers on how it affects performance on
various workloads?  That's a critical point in the discussion.

Right now I am working on these tests, on various media - HDD, SSD, RAM - and 
various workloads. As soon as the results are ready I'll provide my numbers.


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] Initdb --data-checksums by default

2016-04-20 Thread Alex Ignatov



On 20.04.2016 12:27, Andreas Joseph Krogh wrote:
På onsdag 20. april 2016 kl. 11:22:33, skrev Alex Ignatov 
<a.igna...@postgrespro.ru <mailto:a.igna...@postgrespro.ru>>:


[snip]

Why do you think that common pg-users doesn't care about their data?

Did I say that?

Also why do we have wal_level=minimal fsync=on and other stuff?

To make certain garantees that data is by default durable.
What I'm saying is that everything is a compromise, cost/benefit. The 
universe might explode tomorrow but the chances are slim, so no use 
preparing for it.

Those caring enough probably use checksums, battery-packed RAID etc.
--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
<https://www.visena.com>


WAL is not durable without a CRC in it. But by default we have a CRC in WAL 
and not in data files.


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Initdb --data-checksums by default

2016-04-20 Thread Alex Ignatov



On 20.04.2016 12:10, Andreas Joseph Krogh wrote:
På onsdag 20. april 2016 kl. 11:02:31, skrev Alex Ignatov 
<a.igna...@postgrespro.ru <mailto:a.igna...@postgrespro.ru>>:


On 20.04.2016 11:40, Andreas Joseph Krogh wrote:

På onsdag 20. april 2016 kl. 10:33:14, skrev Alex Ignatov
<a.igna...@postgrespro.ru <mailto:a.igna...@postgrespro.ru>>:



On 20.04.2016 11:29, Devrim Gündüz wrote:
> Hi,
>
> On Wed, 2016-04-20 at 10:43 +0300, Alex Ignatov wrote:
>> Today in Big Data epoch silent data corruption becoming
more and more
>> issue to afraid of. With uncorrectable read error rate ~
10^-15  on
>> multiterabyte disk bit rot is the real issue.
>> I think that today checksumming data  must be mandatory 
set by default.

>> Only if someone doesn't care about his data he can
manually turn this
>> option off.
>>
>> What do you think about defaulting --data-checksums in initdb?
> I think this should be discussed in -hackers, right?
>
> Regards,
May be you right but i want to know what people think about
it before
i'll write to hackers.

-1 on changing the default.
10^15 ~= 1000 TB, which isn't very common yet. Those having it
probably are aware of the risk and have enabled checksums already.
-- 
*Andreas Joseph Krogh*

CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
<https://www.visena.com>

It is per bit not bytes. So it is ~100 TB. We working with some
enterprise who have WALs creation rate ~ 4GB per min - so it is
only max 100 days before you get bit rotted and have probability
to get silent data corruption.
Also don't forget that it is theoretical limit and Google tells us
that HDD and SSD is not as reliable as manufactures tell. So this
10^-15 can easily be much higher.

Ok, but still - the case you're describing isn't the common-case for 
PG-users. Enterprises like that certainly chould use --data-checksums, 
I'm not arguing against that, just that it shouldn't be the 
default-setting.

--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
<https://www.visena.com>
Why do you think that common PG users don't care about their data? 
Also, why then do we have wal_level=minimal, fsync=on and other such defaults?


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Initdb --data-checksums by default

2016-04-20 Thread Alex Ignatov



On 20.04.2016 11:40, Andreas Joseph Krogh wrote:
På onsdag 20. april 2016 kl. 10:33:14, skrev Alex Ignatov 
<a.igna...@postgrespro.ru <mailto:a.igna...@postgrespro.ru>>:




On 20.04.2016 11:29, Devrim Gündüz wrote:
> Hi,
>
> On Wed, 2016-04-20 at 10:43 +0300, Alex Ignatov wrote:
>> Today in Big Data epoch silent data corruption becoming more
and more
>> issue to afraid of. With uncorrectable read error rate ~ 10^-15  on
>> multiterabyte disk bit rot is the real issue.
>> I think that today checksumming data  must be mandatory  set by
default.
>> Only if someone doesn't care about his data he can manually
turn this
>> option off.
>>
>> What do you think about defaulting --data-checksums in initdb?
> I think this should be discussed in -hackers, right?
>
> Regards,
May be you right but i want to know what people think about it before
i'll write to hackers.

-1 on changing the default.
10^15 ~= 1000 TB, which isn't very common yet. Those having it 
probably are aware of the risk and have enabled checksums already.

--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com <mailto:andr...@visena.com>
www.visena.com <https://www.visena.com>
<https://www.visena.com>
It is per bit, not per byte. So it is ~100 TB. We are working with an 
enterprise whose WAL creation rate is ~4GB per minute - so it is at most 
100 days before you hit bit rot and have a real probability of silent 
data corruption.
Also don't forget that this is a theoretical limit, and Google tells us that 
HDDs and SSDs are not as reliable as the manufacturers claim. So this 10^-15 can 
easily be much higher.


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Initdb --data-checksums by default

2016-04-20 Thread Alex Ignatov



On 20.04.2016 11:29, Devrim Gündüz wrote:

Hi,

On Wed, 2016-04-20 at 10:43 +0300, Alex Ignatov wrote:

Today in Big Data epoch silent data corruption becoming more and more
issue to afraid of. With uncorrectable read error rate ~ 10^-15  on
multiterabyte disk bit rot is the real issue.
I think that today checksumming data  must be mandatory  set by default.
Only if someone doesn't care about his data he can manually turn this
option off.

What do you think about defaulting --data-checksums in initdb?

I think this should be discussed in -hackers, right?

Regards,
Maybe you are right, but I want to know what people think about it before 
I write to -hackers.


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] Initdb --data-checksums by default

2016-04-20 Thread Alex Ignatov



On 20.04.2016 10:58, Michael Paquier wrote:

On Wed, Apr 20, 2016 at 4:43 PM, Alex Ignatov <a.igna...@postgrespro.ru> wrote:

Hello everyone!
Today in Big Data epoch silent data corruption becoming more and more issue
to afraid of. With uncorrectable read error rate ~ 10^-15   on multiterabyte
disk bit rot is the real issue.
I think that today checksumming data  must be mandatory  set by default.
Only if someone doesn't care about his data he can manually turn this option
off.

What do you think about defaulting --data-checksums in initdb?

Not sure that most users deploying Postgres by default are ready to
pay the price of data checksums in their default deployments. People
using it are already using the -k switch, so it may actually be a trap
to switch the default.
WAL also has a performance cost, but we still have it enabled by default at the minimal 
level.

We also have CRCs on WAL, which likewise carry a performance hit.

--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] Initdb --data-checksums by default

2016-04-20 Thread Alex Ignatov



On 20.04.2016 10:47, John R Pierce wrote:

On 4/20/2016 12:43 AM, Alex Ignatov wrote:
Today in Big Data epoch silent data corruption becoming more and more 
issue to afraid of. With uncorrectable read error rate ~ 10^-15on 
multiterabyte disk bit rot is the real issue. 


are not those uncorrectable errors detected by the disk hardware ?   
thats not 'silent'.


whats the rate of uncorrectable AND undetected read errors ?


--
john r pierce, recycling bits in santa cruz
The uncorrectable read error rate is ~10^-15 - 10^-14. These errors stay 
undetected and uncorrectable.
Also, how can you map a disk block to a file block without knowing the 
fs structure? And not all hardware includes this checksum feature 
under the hood.


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



[GENERAL] Initdb --data-checksums by default

2016-04-20 Thread Alex Ignatov

Hello everyone!
Today, in the Big Data epoch, silent data corruption is becoming more and more of an 
issue to be afraid of. With an uncorrectable read error rate of ~10^-15 on a 
multiterabyte disk, bit rot is a real issue.
I think that today data checksumming must be enabled by default. 
Only if someone doesn't care about their data should they manually turn this 
option off.


What do you think about defaulting --data-checksums in initdb?
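
(For reference, whether an existing cluster was initialized with checksums can be 
checked from SQL via the read-only data_checksums parameter, available in recent 
releases; the setting itself can only be chosen at initdb time:)

SHOW data_checksums;   -- reports 'on' only if the cluster was created with initdb --data-checksums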

--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Freezing localtimestamp and other time function on some value

2016-04-13 Thread Alex Ignatov



On 13.04.2016 18:40, Alex Ignatov wrote:



On 13.04.2016 17:26, Adrian Klaver wrote:

On 04/13/2016 04:16 AM, Alex Ignatov wrote:



On 12.04.2016 20:50, Tom Lane wrote:

Alex Ignatov <a.igna...@postgrespro.ru> writes:

You always should keep in mind that your application may run in test
mode (future/past time) and maintain this code. While with my 
proposal

you  can always use some time function(now or localtimestamp or
whatever)  which you can  freeze at anytime on DB level, not 
operation

system(using some 3rd libs) or application(using wrappers and other
hacks).

We're not really in the business of being Oracle --- which in this
particular context means not trying to duplicate tens of thousands of
bizarre little features with very narrow use-cases.  If there's a
reasonable way for users to provide corner-case functionality for
themselves (and I'd say a wrapper function is a perfectly reasonable
way for this) then we don't really want to embed it in Postgres.

This particular feature seems like a seriously-poorly-thought-out
one, too.  Time stops advancing across the whole DB? Really?

1. That would break all manner of stuff, for example the timestamps
in automatically-taken dumps, if you've got background jobs running
pg_dump.  Just about everything except the session running the test
case would be unhappy, AFAICS.

2. Would this extend to, say, preventing autovacuum from running?
Or changing the timestamps of messages in the postmaster log, or
timestamps appearing in places like pg_stat_activity?  Or causing
pg_sleep() to wait forever, because time isn't passing?  If your
answer is "yes" across the board, that makes problem #1 an order
of magnitude worse, while if you want to be selective then you
have a bunch of nitty-gritty (and rather arbitrary) decisions to
make about what's frozen and what's not.  And you've weakened the
argument that your test is actually valid, since potentially the
app would see some of the non-frozen values and misbehave.

3. While I can see the point of wanting to, say, test weekend behavior
on a weekday, I do not see how a value of now() that doesn't advance
between transactions would represent a realistic test environment for
an app with time-dependent behavior.  As an example, you might
accidentally write code that expects two successive transactions to
see identical values of now(), and such a testbed wouldn't detect
the problem.

regards, tom lane
1. Background jobs in PG?? Do you mean cron, or maybe the EnterpriseDB 
version?

2. All I need is to freeze some (or maybe just one) function, for example
now() or something else =). I don't want to freeze time for the whole
postmaster process!


That was not obvious:

http://www.postgresql.org/message-id/570cd2e3.4030...@postgrespro.ru

"In oracle there is alter system set fixed_date command. Have Postgres
this functionality?"

https://appsfromrajiv.wordpress.com/2011/06/14/oracle-fixed_date-parameter-helpful-during-testing/ 



"This parameter is useful primarily for testing. The value can be in 
the format shown above or in the default Oracle date format, without 
a time. Setting this parameter to a specified timestamp will make the 
time constant for the database engine (the clock will not tick) "


http://www.postgresql.org/message-id/570ce996.30...@postgrespro.ru

"Hi!
It is not about localtimestamp in transactions. It is about global
localtimestamp value  for all session  new and existed no matter inside
transaction or outside."



3. In multithreaded applications it is possible that two transactions
from different sessions start at the same time, and to resolve this
issue some sort of unique id (say, a serial) is used while inserting
values into a table ;)




>> "This parameter is useful primarily for testing. The value can be 
in the format shown above or in the default Oracle date format, 
without a time. Setting this parameter to a specified timestamp will 
make the time constant for the database engine (the clock will not tick) "


And if we use TL;DR tag on your link  we'll see ;)
"This parameter did help us in testing future and in past but we had 
our own share of issues also for application testing."


Did help us + issues = Did help us and ≠  issue ;)

Say if we don't need this feature- we dont use it, but if we need it  
but  we have nothing it makes us sad.  I think that have feature > 
have not =)..





--
Alex Ignatov
Postgres Professional:http://www.postgrespro.com
The Russian Postgres Company


Some quick and dirty issue resolution is simple:
set search_path = my_time_schema on db layer. After that you dont need 
to change any code. And can take for example freeze.fixed_date from 
config =)

where my_time_schema contains all time function than I want to freeze.
Nevertheless i dont know how to deal with say localtimestamp with this 
approach %). Where localtimestamp is defined? pg_catalog doesnt have it
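
For illustration, a minimal sketch of this search_path approach (the
schema name my_time_schema and the freeze.fixed_date setting come from
the description above; the function body and sample values are
assumptions, not something from the thread):

CREATE SCHEMA my_time_schema;

-- Override now() with a version that reads the frozen value from a
-- custom setting instead of the real clock.
CREATE FUNCTION my_time_schema.now() RETURNS timestamptz
LANGUAGE sql STABLE AS $$
    SELECT current_setting('freeze.fixed_date')::timestamptz;
$$;

-- In the session (or role/database) under test:
SET freeze.fixed_date = '2016-04-17 00:00:00';
SET search_path = my_time_schema, pg_catalog, public;

SELECT now();  -- resolves to my_time_schema.now() and returns the frozen value

-- pg_catalog must be listed after my_time_schema, otherwise the built-in
-- now() wins. LOCALTIMESTAMP and CURRENT_TIMESTAMP are grammar-level
-- constructs and cannot be shadowed this way, which is exactly the
-- limitation noted above.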


Some thoughts ab

Re: [GENERAL] Freezing localtimestamp and other time function on some value

2016-04-13 Thread Alex Ignatov



On 13.04.2016 17:26, Adrian Klaver wrote:

On 04/13/2016 04:16 AM, Alex Ignatov wrote:



On 12.04.2016 20:50, Tom Lane wrote:

Alex Ignatov <a.igna...@postgrespro.ru> writes:

You always should keep in mind that your application may run in test
mode (future/past time) and maintain this code. While with my proposal
you  can always use some time function(now or localtimestamp or
whatever)  which you can  freeze at anytime on DB level, not operation
system(using some 3rd libs) or application(using wrappers and other
hacks).

We're not really in the business of being Oracle --- which in this
particular context means not trying to duplicate tens of thousands of
bizarre little features with very narrow use-cases.  If there's a
reasonable way for users to provide corner-case functionality for
themselves (and I'd say a wrapper function is a perfectly reasonable
way for this) then we don't really want to embed it in Postgres.

This particular feature seems like a seriously-poorly-thought-out
one, too.  Time stops advancing across the whole DB?  Really?

1. That would break all manner of stuff, for example the timestamps
in automatically-taken dumps, if you've got background jobs running
pg_dump.  Just about everything except the session running the test
case would be unhappy, AFAICS.

2. Would this extend to, say, preventing autovacuum from running?
Or changing the timestamps of messages in the postmaster log, or
timestamps appearing in places like pg_stat_activity?  Or causing
pg_sleep() to wait forever, because time isn't passing?  If your
answer is "yes" across the board, that makes problem #1 an order
of magnitude worse, while if you want to be selective then you
have a bunch of nitty-gritty (and rather arbitrary) decisions to
make about what's frozen and what's not.  And you've weakened the
argument that your test is actually valid, since potentially the
app would see some of the non-frozen values and misbehave.

3. While I can see the point of wanting to, say, test weekend behavior
on a weekday, I do not see how a value of now() that doesn't advance
between transactions would represent a realistic test environment for
an app with time-dependent behavior.  As an example, you might
accidentally write code that expects two successive transactions to
see identical values of now(), and such a testbed wouldn't detect
the problem.

regards, tom lane

1. background jobs in pg?? cron you mean or  may be EnterpriseDB vesion?
2. All i need is to freeze some(or may be one ) function for example
now() or smth else =). I dont want to freeze time for the whole
postmaster process!


That was not obvious:

http://www.postgresql.org/message-id/570cd2e3.4030...@postgrespro.ru

"In oracle there is alter system set fixed_date command. Have Postgres
this functionality?"

https://appsfromrajiv.wordpress.com/2011/06/14/oracle-fixed_date-parameter-helpful-during-testing/ 



"This parameter is useful primarily for testing. The value can be in 
the format shown above or in the default Oracle date format, without a 
time. Setting this parameter to a specified timestamp will make the 
time constant for the database engine (the clock will not tick) "


http://www.postgresql.org/message-id/570ce996.30...@postgrespro.ru

"Hi!
It is not about localtimestamp in transactions. It is about global
localtimestamp value  for all session  new and existed no matter inside
transaction or outside."



3. In multithreaded applications it is possible that two transactions
from different sessions  started at the same time and to resolve this
issue some sort of unique id(say serial) is used while inserting some
value in some table ;)




>> "This parameter is useful primarily for testing. The value can be in 
the format shown above or in the default Oracle date format, without a 
time. Setting this parameter to a specified timestamp will make the time 
constant for the database engine (the clock will not tick) "


And if we use TL;DR tag on your link  we'll see ;)
"This parameter did help us in testing future and in past but we had our 
own share of issues also for application testing."


Did help us + issues = Did help us and ≠  issue ;)

Say if we don't need this feature- we dont use it, but if we need it  
but  we have nothing it makes us sad.  I think that have feature > have 
not =)..





--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Freezing localtimestamp and other time function on some value

2016-04-13 Thread Alex Ignatov



On 12.04.2016 21:05, Tom Lane wrote:

I wrote:

3. While I can see the point of wanting to, say, test weekend behavior
on a weekday, I do not see how a value of now() that doesn't advance
between transactions would represent a realistic test environment for
an app with time-dependent behavior.

BTW, one possible way of meeting that particular requirement is to fool
with your timezone setting.

regression=# select timeofday();
              timeofday
--------------------------------------
 Tue Apr 12 14:01:53.254286 2016 EDT
(1 row)

regression=# set time zone interval '+120 hours';
SET
regression=# select timeofday();
              timeofday
--------------------------------------
 Sun Apr 17 18:01:58.293623 2016 +120
(1 row)

regards, tom lane



Oh!
This is better than nothing =)!

--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Freezing localtimestamp and other time function on some value

2016-04-13 Thread Alex Ignatov



On 12.04.2016 20:50, Tom Lane wrote:

Alex Ignatov <a.igna...@postgrespro.ru> writes:

You always should keep in mind that your application may run in test
mode (future/past time) and maintain this code. While with my proposal
you  can always use some time function(now or localtimestamp or
whatever)  which you can  freeze at anytime on DB level, not operation
system(using some 3rd libs) or application(using wrappers and other hacks).

We're not really in the business of being Oracle --- which in this
particular context means not trying to duplicate tens of thousands of
bizarre little features with very narrow use-cases.  If there's a
reasonable way for users to provide corner-case functionality for
themselves (and I'd say a wrapper function is a perfectly reasonable
way for this) then we don't really want to embed it in Postgres.

This particular feature seems like a seriously-poorly-thought-out
one, too.  Time stops advancing across the whole DB?  Really?

1. That would break all manner of stuff, for example the timestamps
in automatically-taken dumps, if you've got background jobs running
pg_dump.  Just about everything except the session running the test
case would be unhappy, AFAICS.

2. Would this extend to, say, preventing autovacuum from running?
Or changing the timestamps of messages in the postmaster log, or
timestamps appearing in places like pg_stat_activity?  Or causing
pg_sleep() to wait forever, because time isn't passing?  If your
answer is "yes" across the board, that makes problem #1 an order
of magnitude worse, while if you want to be selective then you
have a bunch of nitty-gritty (and rather arbitrary) decisions to
make about what's frozen and what's not.  And you've weakened the
argument that your test is actually valid, since potentially the
app would see some of the non-frozen values and misbehave.

3. While I can see the point of wanting to, say, test weekend behavior
on a weekday, I do not see how a value of now() that doesn't advance
between transactions would represent a realistic test environment for
an app with time-dependent behavior.  As an example, you might
accidentally write code that expects two successive transactions to
see identical values of now(), and such a testbed wouldn't detect
the problem.

regards, tom lane

1. background jobs in pg?? cron you mean or  may be  EnterpriseDB vesion?
2. All i need is to freeze some(or may be one ) function for example 
now() or smth else =). I dont want to freeze time for the whole 
postmaster process!
3. In multithreaded applications it is possible that two transactions 
from different sessions  started at the same time and to resolve this 
issue some sort of unique id(say serial) is used while inserting some 
value in some table ;)


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Freezing localtimestamp and other time function on some value

2016-04-12 Thread Alex Ignatov



On 12.04.2016 19:45, David G. Johnston wrote:
On Tue, Apr 12, 2016 at 8:37 AM, Alex Ignatov 
<a.igna...@postgrespro.ru <mailto:a.igna...@postgrespro.ru>>wrote:



On 12.04.2016 18:01, Adrian Klaver wrote:


>>I do it by having the date be one of the function arguments and
have the default be something like current_date. When I test I
supply a date to override the default. This allows for testing the
various scenarios by changing the supplied date.

With that approach you have to say application programmer - 'Hey
dude, please edit this piece of code for my purpose and after that
rollback it'.  I think that it is unacceptable in large project...


​ CREATE FUNCTION do_some_date_based_stuff(reference_date date, 
other_args) [...]


CREATE FUNCTION production_wrapper_for_above(other_args) [...]
AS $$
SELECT do_some_date_based_stuff(now(), other_args);
$$ ​;

Easy to test do_some_date_based_stuff since it has fewer if any 
external dependencies.  Shouldn't need to test the wrapper that simply 
calls the "do_some..." with a default value of the current date.


You might be able to define an appropriate function signature that 
avoids having to write the wrapper though regardless there is no need 
to have a different environment for testing versus production if 
approached in this manner.  You just need to decide on the most 
desirable way to make it work.


David J.
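
A runnable version of the wrapper sketch quoted above could look like
this; the is_weekend name and its weekend rule are made up purely for
illustration and are not from the thread:

-- Core logic takes the reference date explicitly, so tests can pass any date.
CREATE FUNCTION is_weekend(reference_date date) RETURNS boolean
LANGUAGE sql IMMUTABLE AS $$
    SELECT extract(isodow FROM reference_date) IN (6, 7);
$$;

-- Thin production wrapper that supplies the current date.
CREATE FUNCTION is_weekend() RETURNS boolean
LANGUAGE sql STABLE AS $$
    SELECT is_weekend(current_date);
$$;

SELECT is_weekend(DATE '2016-04-17');  -- a Sunday, returns true in tests
SELECT is_weekend();                   -- production call, uses today's date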



I know that we can always write some wrappers etc, etc.
This approach would failed if your do_some_date_based_stuff have no date 
args and contains calls say to now()(or other time function what 
possible can have fix value ) inside it.


Also wrappers lead to  multiple code base,yours client side code needs 
to know what function  we should use - test or production. Also with 
your approach  application server needs to know its working mode test / prod


You always should keep in mind that your application may run in test 
mode (future/past time) and maintain this code. While with my proposal 
you  can always use some time function(now or localtimestamp or 
whatever)  which you can  freeze at anytime on DB level, not operation 
system(using some 3rd libs) or application(using wrappers and other hacks).



--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Freezing localtimestamp and other time function on some value

2016-04-12 Thread Alex Ignatov



On 12.04.2016 18:01, Adrian Klaver wrote:

On 04/12/2016 07:36 AM, Alex Ignatov wrote:

On 12.04.2016 16:57, George Neuner wrote:

On Tue, 12 Apr 2016 13:50:11 +0300, Alex Ignatov
<a.igna...@postgrespro.ru> wrote:


Is there any method to freeze localtimestamp and other time function
value.
Say after freezing on some value sequential calls to these functions
give you the same value over and over again.
This  is useful primarily for testing.

In oracle there is alter system set fixed_date command. Have Postgres
this functionality?

I'm missing how this is useful.   Even having such a feature there is
not any way to duplicate a test trace: execution time of a request is
not guaranteed even if it's issue time is repeatable wrt some epoch.
And if there are concurrent requests, their completion order is not
guaranteed.

It is also true in Oracle, and in every general purpose DBMS that I
know of.  So what exactly do you "test" using a fixed date/time?

George





This is useful if your application written say on stored function on PG
and it works differently on working days and on vacations or weekends.
How can you test your application without this ability? Changing system


I do it by having the date be one of the function arguments and have 
the default be something like current_date. When I test I supply a 
date to override the default. This allows for testing the various 
scenarios by changing the supplied date.



time and affect all application on server or write your own
localtimestamp implementation keep in mind of test functionality?
Also yesterday we have issue while comparing Pg function output
converted from Oracle and its Oracle equivalent on the same data. You
now what -  we cant do it, because function depends on
localtimestamp(Pg) and sysdate (Ora) =/


Because the Postgres and Oracle servers are on different machines and 
are getting different times, because the time functions return 
different values from the same time. or something else?








>>Because the Postgres and Oracle servers are on different machines and 
are getting different times, because the time functions return different 
values from the same time. or something else?


 Because while test we ran this function on different time. And you 
cant start it in exactly one time even on same server.


>>I do it by having the date be one of the function arguments and have 
the default be something like current_date. When I test I supply a date 
to override the default. This allows for testing the various scenarios 
by changing the supplied date.


With that approach you have to say application programmer - 'Hey dude, 
please edit this piece of code for my purpose and after that rollback 
it'.  I think that it is unacceptable in large project...


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Freezing localtimestamp and other time function on some value

2016-04-12 Thread Alex Ignatov

On 12.04.2016 16:57, George Neuner wrote:

On Tue, 12 Apr 2016 13:50:11 +0300, Alex Ignatov
<a.igna...@postgrespro.ru> wrote:


Is there any method to freeze localtimestamp and other time function value.
Say after freezing on some value sequential calls to these functions
give you the same value over and over again.
This  is useful primarily for testing.

In oracle there is alter system set fixed_date command. Have Postgres
this functionality?

I'm missing how this is useful.   Even having such a feature there is
not any way to duplicate a test trace: execution time of a request is
not guaranteed even if it's issue time is repeatable wrt some epoch.
And if there are concurrent requests, their completion order is not
guaranteed.

It is also true in Oracle, and in every general purpose DBMS that I
know of.  So what exactly do you "test" using a fixed date/time?

George





This is useful if your application written say on stored function on PG 
and it works differently on working days and on vacations or weekends.
How can you test your application without this ability? Changing system 
time and affect all application on server or write your own 
localtimestamp implementation keep in mind of test functionality?
Also yesterday we have issue while comparing Pg function output 
converted from Oracle and its Oracle equivalent on the same data. You 
now what -  we cant do it, because function depends on 
localtimestamp(Pg) and sysdate (Ora) =/



--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Freezing localtimestamp and other time function on some value

2016-04-12 Thread Alex Ignatov



On 12.04.2016 15:13, Rakesh Kumar wrote:

I think PG does fixed time within a tran. check the output of the following sql

begin;
select now() ;
select pg_sleep(10);
select now() ;
commit;
select now() ;
select pg_sleep(10);
select now() ;
~

On Tue, Apr 12, 2016 at 6:50 AM, Alex Ignatov <a.igna...@postgrespro.ru> wrote:

Hello!
Is there any method to freeze localtimestamp and other time function value.
Say after freezing on some value sequential calls to these functions give
you the same value over and over again.
This  is useful primarily for testing.

In oracle there is alter system set fixed_date command. Have Postgres this
functionality?

--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



Hi!
It is not about localtimestamp in transactions. It is about global 
localtimestamp value  for all session  new and existed no matter inside 
transaction or outside.


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Freezing localtimestamp and other time function on some value

2016-04-12 Thread Alex Ignatov

Hello!
Is there any method to freeze localtimestamp and other time function value.
Say after freezing on some value sequential calls to these functions 
give you the same value over and over again.

This  is useful primarily for testing.

In oracle there is alter system set fixed_date command. Have Postgres 
this functionality?


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Non-default postgresql.conf values to log

2016-04-07 Thread Alex Ignatov



On 07.04.2016 17:59, Rob Sargent wrote:



On Apr 7, 2016, at 8:26 AM, Alex Ignatov <a.igna...@postgrespro.ru> wrote:




On 07.04.2016 16:53, Tom Lane wrote:
Alex Ignatov <a.igna...@postgrespro.ru> writes:

My question is: is there any option(s) to log non-default
postgresql.conf values to log file?

No, but you can easily find all the non-default settings by querying
the pg_settings view.

regards, tom lane

Yeah, i know it but pg_settings is available only after pg is ready to accept 
client connections.
Also in some cases log file is placed on another server say, syslog. And if 
your pg server is not available you cant say anything about your pg_settings.

At that point you're looking in a file: log file or config file. Does it matter 
which?

--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


It matter when your pg host is down and all you have is log file. It is 
very usual situation. You need historical info.


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Non-default postgresql.conf values to log

2016-04-07 Thread Alex Ignatov



On 07.04.2016 16:53, Tom Lane wrote:

Alex Ignatov <a.igna...@postgrespro.ru> writes:

My question is: is there any option(s) to log non-default
postgresql.conf values to log file?

No, but you can easily find all the non-default settings by querying
the pg_settings view.

regards, tom lane


Yeah, i know it but pg_settings is available only after pg is ready to 
accept client connections.
Also in some cases log file is placed on another server say, syslog. And 
if your pg server is not available you cant say anything about your 
pg_settings.
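
When the server does accept connections, the pg_settings query mentioned
above can be as simple as the sketch below; filtering on source is one
reasonable way to define "non-default", not something specified in the
thread:

SELECT name, setting, source
FROM pg_settings
WHERE source NOT IN ('default', 'override')
ORDER BY name;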


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Non-default postgresql.conf values to log

2016-04-07 Thread Alex Ignatov

Hello!
For example in oracle alert.log file we have:
...
System parameters with non-default values:
  processes                = 300
  nls_language             = "AMERICAN"
  nls_territory            = "RUSSIA"
  memory_target            = 720M
  control_files            = "/ora/oradata/orcl/control01.ctl"
...
My question is: is there any option(s) to log non-default 
postgresql.conf values to log file?




--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] String literal doesn't autocast to text type

2016-03-04 Thread Alex Ignatov


> On 04 Mar 2016, at 21:08, Tom Lane <t...@sss.pgh.pa.us> wrote:
> 
> Alex Ignatov <a.igna...@postgrespro.ru> writes:
>> Why string literal like 'Hello world!' doesnt automagicaly cast to text 
>> type?
> 
> Because it's not necessarily a string.  It might be meant to be point,
> or json, or any number of other types.
> 
>> Sure we can create our cast:
>> postgres=# create cast (unknown as text) with inout as implicit;
>> CREATE CAST
> 
> That's a seriously bad idea; it will have all sorts of corner-case
> effects that you aren't expecting.
> 
> There has been some talk of forcing unknown to text in the output
> columns of a sub-SELECT, which would fix the example you show with
> a lot less risk of side-effects elsewhere.  But it's not exactly
> trivial because of interactions with INSERT ... SELECT.
> 
>regards, tom lane
Oh! Thank you , Tom!


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] String literal doesn't autocast to text type

2016-03-04 Thread Alex Ignatov


> On 04 Mar 2016, at 20:59, David G. Johnston <david.g.johns...@gmail.com> 
> wrote:
> 
> It would be nice if you'd follow the list convention of bottom-posting.  Not 
> doing that is much more annoying than the omission of version and o/s - 
> especially the later since PostgreSQL purports to be generally o/s agnostic.
> 
>>> On Fri, Mar 4, 2016 at 12:17 PM, Alex Ignatov <a.igna...@postgrespro.ru> 
>>> wrote:
>>> Hello!
>>> Why string literal like 'Hello world!' doesnt automagicaly cast to text 
>>> type?
>>> 
>>> 
>>> But why we don't have this type cast by default in Postgres? Is there any 
>>> fundamental restriction on that or there is some reasons for that?
> 
> 
>> On Friday, March 4, 2016, Melvin Davidson <melvin6...@gmail.com> wrote:
>> Probably because pg_typeof() returns the OID of a COLUMN in a table def.. 
>> Strings literals do not have oid's.
>> 
>> http://www.postgresql.org/docs/9.4/interactive/functions-info.html
>> 
>> "pg_typeof returns the OID of the data type of the value that is passed to 
>> it. This can be helpful for troubleshooting or dynamically constructing SQL 
>> queries. The   function is declared as returning regtype,   which is an OID 
>> alias type (see Section 8.18); this means that it is the same as an OID for 
>> comparison purposes but displays as a type name. For example:"
> 
> While true this doesn't actually answer the question - that being what's the 
> motiviation for not implicitly casting away from unknown.  I suspect that the 
> main reason is that in an extensible system like PostgreSQL it is deemed 
> risky to have too many implicit casts.  Having one from unknown required the 
> system to make decisions that could result in unexpected results that would 
> be hard to catch without careful review of queries and results.  It is an 
> issue of some contention in the community but so far no one has convinced the 
> committees to change how this works.
>  
>> Please in the future, ALWAYS specify your PostgreSQL version and O/S, 
>> regardless of whether or not you thinnk it is pertinent.
> 
> But don't feel too bad if you forget...
>  
> David J.
>  
Oops! Sorry about top posting. My phone from 21th century  doesnt handle this 
good but I got you about it!
Deeply sorry once again!

Re: [GENERAL] String literal doesn't autocast to text type

2016-03-04 Thread Alex Ignatov
Oh! Great answer! Thats what i want to know!!! 
Thank you Pavel about explanation!!!

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

> On 04 Mar 2016, at 20:45, Pavel Stehule <pavel.steh...@gmail.com> wrote:
> 
> Hi
> 
> 2016-03-04 18:29 GMT+01:00 Melvin Davidson <melvin6...@gmail.com>:
>> Probably because pg_typeof() returns the OID of a COLUMN in a table def.. 
>> Strings literals do not have oid's.
> 
> no this is not a reason.
> 
> String literal has fictive "unknown" type. Real type is derivated from 
> context - operators, function parameters. pg_typeof has parameter of type 
> "any", and then no conversions from "unknown" is possible.
> 
> I don't known why "text" type is not default for string literal, but I see 
> some logical relations. If we cast "unknown" to "text" early, then we will be 
> limited by "text" type available conversions. Now, it is possible, but years 
> ago, the cast between "text" type and others was disallowed. Still we require 
> explicit cast, and I see it as benefit. Hidden cast (implicit cast) are query 
> performance killers. So "unknown" type requires less explicit casting, 
> because there are implicit casts from this type to any type.
> 
> I am not sure if comparation with numbers is valid. Numbers are much more 
> consistent class than string literals - more implicit casts over this class 
> is there.
> 
> Probably this design can be enhanced, and more consistent - "text" type can 
> be used as fallback type.
> 
> Regards
> 
> Pavel
>  
>> 
>> http://www.postgresql.org/docs/9.4/interactive/functions-info.html
>> 
>> "pg_typeof returns the OID of the data type of the value that is passed to 
>> it. This can be helpful for troubleshooting or dynamically constructing SQL 
>> queries. The   function is declared as returning regtype,   which is an OID 
>> alias type (see Section 8.18); this means that it is the same as an OID for 
>> comparison purposes but displays as a type name. For example:"
>> 
>> Please in the future, ALWAYS specify your PostgreSQL version and O/S, 
>> regardless of whether or not you thinnk it is pertinent.
>> 
>>> On Fri, Mar 4, 2016 at 12:17 PM, Alex Ignatov <a.igna...@postgrespro.ru> 
>>> wrote:
>>> Hello!
>>> Why string literal like 'Hello world!' doesnt automagicaly cast to text 
>>> type?
>>> 
>>> postgres=# select pg_typeof('Hello world');
>>>  pg_typeof
>>> ---
>>>  unknown
>>> (1 row)
>>> 
>>> But for example literal like 1.1 automagically cast to numeric( not float8, 
>>> float4, whatever)
>>> postgres=# select pg_typeof(1.1);
>>>  pg_typeof
>>> ---
>>>  numeric
>>> (1 row)
>>> 
>>> That why we cant do the following without explicit type casting:
>>> postgres=# select t.c||' world' from (select 'Hello' as c) as t;
>>> ERROR:  failed to find conversion function from unknown to text
>>> 
>>> but that ok:
>>> postgres=# select t.c||' world' from (select 'Hello'::text as c) as t;
>>>   ?column?
>>> -
>>>  Hello world
>>> (1 row)
>>> 
>>> or this is ok too:
>>> postgres=# select t.c::text||' world' from (select 'Hello' as c) as t;
>>>   ?column?
>>> -
>>>  Hello world
>>> (1 row)
>>> 
>>> Sure we can create our cast:
>>> postgres=# create cast (unknown as text) with inout as implicit;
>>> CREATE CAST
>>> and after that we have:
>>> postgres=# select t.c||' world' from (select 'Hello' as c) as t;
>>>   ?column?
>>> -
>>>  Hello world
>>> (1 row)
>>> 
>>> But why we don't have this type cast by default in Postgres? Is there any 
>>> fundamental restriction on that or there is some reasons for that?
>>> 
>>> 
>>> -- 
>>> Alex Ignatov
>>> Postgres Professional: http://www.postgrespro.com
>>> The Russian Postgres Company
>>> 
>>> 
>>> 
>>> -- 
>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-general
>> 
>> 
>> 
>> -- 
>> Melvin Davidson
>> I reserve the right to fantasize.  Whether or not you 
>> wish to share my fantasy is entirely up to you. 
> 


[GENERAL] String literal doesn't autocast to text type

2016-03-04 Thread Alex Ignatov

Hello!
Why string literal like 'Hello world!' doesnt automagicaly cast to text 
type?


postgres=# select pg_typeof('Hello world');
 pg_typeof
-----------
 unknown
(1 row)

But for example literal like 1.1 automagically cast to numeric( not 
float8, float4, whatever)

postgres=# select pg_typeof(1.1);
 pg_typeof
-----------
 numeric
(1 row)

That why we cant do the following without explicit type casting:
postgres=# select t.c||' world' from (select 'Hello' as c) as t;
ERROR:  failed to find conversion function from unknown to text

but that ok:
postgres=# select t.c||' world' from (select 'Hello'::text as c) as t;
  ?column?
-------------
 Hello world
(1 row)

or this is ok too:
postgres=# select t.c::text||' world' from (select 'Hello' as c) as t;
  ?column?
-------------
 Hello world
(1 row)

Sure we can create our cast:
postgres=# create cast (unknown as text) with inout as implicit;
CREATE CAST
and after that we have:
postgres=# select t.c||' world' from (select 'Hello' as c) as t;
  ?column?
-------------
 Hello world
(1 row)

But why we don't have this type cast by default in Postgres? Is there 
any fundamental restriction on that or there is some reasons for that?



--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Log Monitoring with PG Admin

2016-02-17 Thread Alex Magnum
Hi,
I am running an RDS instance on AWS but monitoring logs is a bit
cumbersome.
Is there a way to do that through pgadmin3? Like tailing the logfile?

Does anyone know when 9.5 will be available on aws?

Thanks
Alex


[GENERAL] Cannot install Extention plperl in 9.5

2016-02-06 Thread Alex Magnum
Hi,
I installed 9.5 as described in
http://tecadmin.net/install-postgresql-9-5-on-centos/ on centos7.

Is there a problem with the Repo or do I miss something?


[local]:template1=# \dx
                                 List of installed extensions
     Name      | Version |   Schema   |                          Description
---------------+---------+------------+---------------------------------------------------------------
 cube          | 1.0     | public     | data type for multidimensional cubes
 dblink        | 1.1     | public     | connect to other PostgreSQL databases from within a database
 earthdistance | 1.0     | public     | calculate great-circle distances on the surface of the Earth
 plpgsql       | 1.0     | pg_catalog | PL/pgSQL procedural language
(4 rows)

[local]:template1=# *CREATE EXTENSION plperl;*
ERROR:  could not open extension control file
"/usr/pgsql-9.5/share/extension/plperl.control": No such file or directory


[root@localhost 06:14:37 extension]# *find /usr/pgsql-9.5/|grep perl*
/usr/pgsql-9.5/lib/hstore_plperl.so
/usr/pgsql-9.5/include/server/lib/hyperloglog.h
/usr/pgsql-9.5/include/server/plperl.h
/usr/pgsql-9.5/share/extension/hstore_plperl.control
/usr/pgsql-9.5/share/extension/hstore_plperlu--1.0.sql
/usr/pgsql-9.5/share/extension/hstore_plperlu.control
/usr/pgsql-9.5/share/extension/hstore_plperl--1.0.sql

Thanks for any advice
Alex


[GENERAL] Cannot Create Objects

2016-02-04 Thread Alex Magnum
Hi,
I am having a few problems with access permissions.

When I create a new role with NOCREATEUSER and then create a database for
that role I can connect to the DB but when trying to create a db object I
will get the ERROR:  permission denied for schema public.

Strangely though, if the role is created with CREATEUSERS I don't have any
problems.

Here is what I want to do:

   1. Create a DBO role e.g. dbo_xxx NOCREATEDB NOCREATEUSER
   2. Create a db   mydb WITH OWNER db_xxx
   3. REVOKE all connection rights from public
   4. GRANT only rights to dbo_xxx
   5. GRANT all create rights on mydb TO dbo_xxx ; allowing the user to
   load the db schema

This is what I tried
REVOKE ALL ON SCHEMA public FROM PUBLIC;
CREATE USER dbo_xxx WITH PASSWORD 'mypass' NOCREATEDB NOCREATEUSER;

CREATE DATABASE my_db WITH OWNER dbo_xxx ENCODING 'UTF8';
REVOKE CONNECT ON DATABASE my_db FROM PUBLIC;
GRANT CONNECT ON DATABASE my_db TO dbo_xxx;
GRANT ALL PRIVILEGES ON DATABASE my_db TO dbo_xxx;
-- After schema is loaded
CREATE USER read_only WITH PASSWORD 'mypass' NOCREATEDB NOCREATEUSER;
REVOKE ALL ON ALL TABLES IN SCHEMA public FROM PUBLIC ;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO read_only ;

But i end up with permission denied errors.

Anyone having a suggestion how to get this to work? Did I mess up
permissions in public schema?

Any help and suggestion is greatly appreciated.

Alex


[GENERAL] After configuring remote access,server can't be started

2015-11-22 Thread Alex Luya
My postgresql 9.4 is installed in centos 6.7,and I have followed this:

http://www.cyberciti.biz/faq/postgresql-remote-access-or-connection/


1,cd /var/libpgsql/9.4/data

2,cp  postgresql.conf.sample postgresql.conf

3,sudo vi postgresql.conf and add two lines,and save it:


> listen_addresses = "*"
> tcpip_socket = true


4,cp pg_hba.conf.sample pg_hba.conf

5,sudo vi pg_hba.conf then *commented(maybe this is uneccessary)*

> #host    all             all             127.0.0.1/32            @authmethodhost@
> #host    all             all             ::1/128                 @authmethodhost@


6,add two lines:(I have tried to change *md5 to trust*,neither works)

>
> host    all             all             0.0.0.0/0               md5
> host    all             all             ::0/0                   md5


7,then *save*:pg_hba.conf

8,then *restart* postgresql sever by

*sudo service postgresql-9.4 restart*

9.*close iptables*

*sudo service iptables stop*

*got error:*

> Stopping postgresql-9.4 service:   [  OK  ]
> Starting postgresql-9.4 service:  * [FAILED]*


*tail  /var/lib/pgsql/9.4/pgstartup.log,got*

> < 2015-11-22 11:47:42.691 CST >LOG:  could not create IPv6 socket: Address
> family not supported by protocol
> < 2015-11-22 11:47:42.718 CST >LOG:  redirecting log output to logging
> collector process
> < 2015-11-22 11:47:42.718 CST >HINT:  Future log output will appear in
> directory "pg_log".


Questions are:
  1, Is this "could not create IPv6..." just a warning or the actual reason
of starting failure?
  2, Where is the pg_log directory?


[GENERAL] remote connection error:could not connect to server: Connection refused

2015-11-21 Thread Alex Luya
My postgresql 9.4 is installed in centos 6.7,and I have followed this:

http://www.cyberciti.biz/faq/postgresql-remote-access-or-connection/


1,cd /usr/pgsql-9.4/share/

2,cp  postgresql.conf.sample postgresql.conf

3,sudo vi postgresql.conf and add two lines,and save it:


> listen_addresses = "*"
> tcpip_socket = true


4,cp pg_hba.conf.sample pg_hba.conf

5,sudo vi pg_hba.conf then *commented(maybe this is uneccessary)*

> #host    all             all             127.0.0.1/32            @authmethodhost@
> #host    all             all             ::1/128                 @authmethodhost@


6,add two lines:(I have tried to change *md5 to trust*,neither works)

>
> host    all             all             0.0.0.0/0               md5
> host    all             all             ::0/0                   md5


7,then *save*:pg_hba.conf

8,then *restart* postgresql sever by

*sudo service postgresql-9.4 restart*

9.*close iptables*

*sudo service iptables stop*

10,and ask another guy in *different city* to try to connect by

psql -U postgres -h *135.211.67.23*

He got error

>
>
> psql: could not connect to server: Connection refused
> Is the server running on host "135.211.67.23" and accepting
> TCP/IP connections on port 5432?


and I tried to run same some command in the host(centos),got same error.

12.Running: "netstat -lputn" in the host got
...
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      -


So,How to solve this problem?


Re: [GENERAL]

2015-11-15 Thread Alex Luya
Ubuntu 14.04 64bit + postgresql 9.4 are used here;

What I have done are:

1,Sign In ubuntu by user alex

2,then:sudo -i -u postgres

3,then:psql jump into command client

4,then: create database icare;

5,then: \q jump out command client

6,then run: exit jump out psql

7,then pg_restore --clean --create --exit-on-error --dbname=icare
icare-test.tar


On Fri, Nov 13, 2015 at 10:52 PM, Giuseppe Sacco <
giuse...@eppesuigoccas.homedns.org> wrote:

> Hello,
>
> Il giorno ven, 13/11/2015 alle 13.38 +0800, Alex Luya ha scritto:
> > Hello,
> >I created a new database by
> >create database icare;
> >then quit off psql and  run:
> > pg_restore --clean --create --exit-on-error --dbname=icare
> > icare-test.tar
> >  it complains:
> >   pg_restore: [archiver (db)] Error while PROCESSING TOC:
> >   pg_restore: [archiver (db)] Error from TOC entry 21; 2615
> > 80924 SCHEMA icare icare
> >   pg_restore: [archiver (db)] could not execute query: ERROR:
> >  permission denied for database icare
> >Command was: CREATE SCHEMA icare;
>
> From what I understand, it means that the postgresql user that is
> restoring the dump cannot create a schema on "icare" database. So, is
> that user the same that issued che "create database" earlier? If it's
> not, then you should grant all required priviledges to that user.
>
> Bye,
> Giuseppe
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
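
For instance, giving the restoring role the right to create schemas in
that database could look like this; the role name restore_role is
hypothetical, use whichever user actually runs pg_restore:

GRANT CREATE ON DATABASE icare TO restore_role;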


[GENERAL] pg_restore tells schema “test” already exists but it isn't actually

2015-11-12 Thread Alex Luya
When restoring a dump like this:

pg_restore --clean --create --exit-on-error --dbname=test test.tar

these error messages got printed out:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 21; 2615 80924 SCHEMA test test
pg_restore: [archiver (db)] could not execute query: ERROR:  schema
"test" already exists
Command was: CREATE SCHEMA test;

but when:

select schema_name from information_schema.schemata;

these got printed out

    schema_name
--------------------
 pg_toast
 pg_temp_1
 pg_toast_temp_1
 pg_catalog
 public
 information_schema

It seems like schema "test" doesn't exist yet,why do I got this kind of
error?

