Re: [GENERAL] idle in transaction, why

2017-11-07 Thread Scott Marlowe
On Tue, Nov 7, 2017 at 7:44 AM, Rob Sargent  wrote:
>
>
>> On Nov 7, 2017, at 12:16 AM, Thomas Kellerer  wrote:
>>
>> I would figure values in "minutes" to be more realistic depending on the 
>> workload and characteristics of the application.
>>
>> A transaction that has several seconds of "think time" between individual 
>> statements doesn't seem that unrealistic.
>>
> I started with the default zero and the save went through perfectly. It takes 
> ten minutes so I’ll have a concurrency issue I imagine.

10 minutes is long-ish, but if it's not run all the time it may be
what you're stuck with. Idle in transaction doesn't necessarily mean
concurrency issues, as long as you're not operating on a whole table
other people need to update as well. I guess now's a good time to
profile what your code is doing, see what's taking the longest, and
see if maybe you can split that big transaction up into bite-sized
pieces.
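
If the knob in question is 9.6's idle_in_transaction_session_timeout
(default 0, i.e. disabled), a sketch of setting it cluster-wide:

  ALTER SYSTEM SET idle_in_transaction_session_timeout = '15min';
  SELECT pg_reload_conf();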




Re: [GENERAL] explain analyze output: 0 rows, 1M loops

2017-11-01 Thread Scott Marlowe
On Wed, Nov 1, 2017 at 1:19 PM, David G. Johnston
<david.g.johns...@gmail.com> wrote:
> On Wed, Nov 1, 2017 at 11:59 AM, Scott Marlowe <scott.marl...@gmail.com>
> wrote:
>>
>> So some of my output from an explain analyze here has a line that says
>> this:
>>
>> Index Scan using warranty_order_item_warranty_order_id_idx on
>> warranty_order_item woi_1 (cost=0.57..277.53 rows=6 width=137) (actual
>> time=0.110..0.111 rows=0 loops=1,010,844)
>
>
> Not my strong suit, but I'm pretty sure that reads: "The index was queried
> 1M+ times and none of those queries resulted in a record being found".
> IIUC I'd be wondering why some form of hash join wasn't used...

Thanks, I think you've got it. I wish it would fit in a hash but the
dataset this query works on is so big that it spills to disk with
work_mem=16GB... :(




[GENERAL] explain analyze output: 0 rows, 1M loops

2017-11-01 Thread Scott Marlowe
So some of my output from an explain analyze here has a line that says this:

Index Scan using warranty_order_item_warranty_order_id_idx on
warranty_order_item woi_1 (cost=0.57..277.53 rows=6 width=137) (actual
time=0.110..0.111 rows=0 loops=1,010,844)

How can you have 1M loops over 0 rows?

Running on PostgreSQL 9.5.6.



Re: [GENERAL] Announcing PostgreSQL SLES RPM Repository

2017-10-27 Thread Scott Marlowe
On Thu, Oct 26, 2017 at 4:09 PM, Devrim Gündüz  wrote:
>
> Hi,
>
> I am proud to announce the new and shiny PostgreSQL RPM repository for SLES 
> 12:
> https://zypp.postgresql.org/.
SNIP
> This is a part of EnterpriseDB's contribution to the community: EDB provided
> hardware, and let me use my time for these packages, so I want to thank
> EDB.

Thanks to both you and EDB!




Re: [GENERAL] table corruption

2017-10-23 Thread Scott Marlowe
On Mon, Oct 23, 2017 at 9:35 AM, Peter Geoghegan  wrote:
> On Mon, Oct 23, 2017 at 7:44 AM, Peter Hunčár  wrote:
>> I know that zero_damaged_pages and vacuum (or restore the table from backup)
>> will help, but I want to ask if there is a way to identify affected
>> rows/datafiles, so we can 'fix' only the affected data using the
>> backup/source data, instead of restoring the whole table?
>
> You might find the latest version of amcheck helpful here:
> https://github.com/petergeoghegan/amcheck
>
> It's not really written with repair in mind, since that's such a can
> of worms, but it might still help you.
>
> --
> Peter Geoghegan

Nice to see it included in 10!

https://www.postgresql.org/docs/10/static/amcheck.html




Re: [GENERAL] Monitoring Tool for PostgreSQL

2017-10-18 Thread Scott Marlowe
On Wed, Oct 18, 2017 at 11:37 AM, Fabricio Pedroso Jorge
 wrote:
> Hi all,
>
>is there an "official" monitoring tool for PostgreSQL databases? For
> example, I come from Oracle Database, and there we have Enterprise Manager
> to monitor and administer the product... is there a similar tool for
> PostgreSQL?

The most commonly used monitoring tool is either zabbix or nagios
using the check_postgres perl script. That's as close to official as
it gets, as far as I'm aware (and no, it's not "official" really, just
the most common and best-supported method I know of).
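
For example, a couple of hedged check_postgres invocations (exact flag
spellings vary by version, check its docs):

  check_postgres.pl --action=connection --host=localhost --dbuser=postgres
  check_postgres.pl --action=backends --warning=80 --critical=90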




Re: [GENERAL] Finally upgrading to 9.6!

2017-10-18 Thread Scott Marlowe
On Wed, Oct 18, 2017 at 11:26 AM, Joshua D. Drake  
wrote:
> On 10/18/2017 08:49 AM, Ron Johnson wrote:
>>
>> On 10/18/2017 10:16 AM, Igal @ Lucee.org wrote:
>>>
>>> On 10/18/2017 7:45 AM, Ron Johnson wrote:

 On 10/18/2017 09:34 AM, Igal @ Lucee.org wrote:
>
> A bit off-topic here, but why upgrade to 9.6 when you can upgrade to
> 10.0?


 There's no way we're going to put an x.0.0 version into production.
>>>
>>>
>>> Then think of it as 9.7.0 but with an easier name to pronounce ;)
>>
>>
>> No .0 is going into production...
>>
>
> I am not sure why this is even a question. There are plenty of businesses
> that can risk the deployment of a .0 release but there are also *MANY THAT
> CAN NOT*. The proper way to do this is to have a staging server running the
> .0 release that gets beaten on by the application for a few months and
> reports anything back to the community they find.

In a past job I would routinely set up a slony slave running the new
version to make sure the new version wouldn't choke on the data in the
master etc, then start using it as a read slave after a few months to
make sure the app got along with it as a read-only source, then
finally look at promoting it to master, with the option to unpromote
it should things explode. Minimal downtime for upgrades AND a quick
path back to the old version if needed.

All while having set up dev and stage servers ahead of time to get
beaten on, of course.




Re: [GENERAL] core system is getting unresponsive because over 300 cpu load

2017-10-12 Thread Scott Marlowe
On Tue, Oct 10, 2017 at 4:28 PM, pinker  wrote:
>
> Yes, it would be much easier if it would be just single query from the top,
> but the most cpu is eaten by the system itself and I'm not sure why.

You are experiencing a context switch storm. The OS is spending so
much time trying to switch between 1,000+ processes it doesn't have
any time left to do much else.
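
A quick way to confirm on a Linux box is to watch the "cs" (context
switches per second) column while the load spike is happening:

  vmstat 1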




Re: [GENERAL] core system is getting unresponsive because over 300 cpu load

2017-10-10 Thread Scott Marlowe
On Tue, Oct 10, 2017 at 3:53 PM, pinker  wrote:
> Victor Yegorov wrote
>> Can you provide output of `iostat -myx 10` at the “peak” moments, please?
>
> sure, please find it here:
> https://pastebin.com/f2Pv6hDL

Ouch, unless I'm reading that wrong, your IO subsystem seems to be REALLY slow.




Re: [GENERAL] core system is getting unresponsive because over 300 cpu load

2017-10-10 Thread Scott Marlowe
On Tue, Oct 10, 2017 at 2:40 PM, pinker  wrote:
> Hi to all!
>
> We've got problem with a very serious repetitive incident on our core
> system. Namely, cpu load spikes to 300-400 and the whole db becomes
> unresponsive. From db point of view nothing special is happening, memory
> looks fine, disks io's are ok and the only problem is huge cpu load. Kernel
> parameters that are increasing with load are always the same:

The solution here is to reduce the number of connections, usually via
some kind of connection pooling. Any db server will have its max
throughput at around connections == number of cpu cores (give or take
a factor of 2). Beyond that performance falls off, with a very sharp
knee on the far side as the # of conns goes up.

Reduce connections and the db runs faster. Increase them and it slows
until it eventually falls over.

pgbouncer and pgpool II are useful on the db end, look at pooling
options on the app side as well.
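
A minimal pgbouncer.ini sketch (names and sizes are made up; tune the
pool size to your core count):

  [databases]
  mydb = host=127.0.0.1 port=5432 dbname=mydb

  [pgbouncer]
  listen_addr = *
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = transaction
  max_client_conn = 1000
  default_pool_size = 32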




Re: [GENERAL] time series data

2017-10-02 Thread Scott Marlowe
On Sun, Oct 1, 2017 at 2:17 AM, Khalil Khamlichi
 wrote:
> Hi everyone,
>
> I have a data stream of a call center application coming in  to postgres in
> this format :
>
> user_name, user_status, event_time
>
> 'user1', 'ready', '2017-01-01 10:00:00'
> 'user1', 'talking', '2017-01-01 10:02:00'
> 'user1', 'after_call', '2017-01-01 10:07:00'
> 'user1', 'ready', '2017-01-01 10:08:00'
> 'user1', 'talking', '2017-01-01 10:10:00'
> 'user1', 'after_call', '2017-01-01 10:15:00'
> 'user1', 'paused', '2017-01-01 10:20:00'
> ...
> ...
>
> so as you see, each new insert of an "event" is in fact the start_time of
> that event and also the end_time of the previous one, and so should be used
> to calculate the duration of that previous one.
>
> What is the best way to get user_status statistics like total duration,
> frequency, avg ...etc , does any body have an experience with this sort of
> data streams ?

Have you looked at the temporal_tables extension? It seems custom-made
for what you're trying to do.

http://clarkdave.net/2015/02/historical-records-with-postgresql-and-temporal-tables-and-sql-2011/
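
Even without the extension, a plain window function gets you the
durations; a sketch using the column names from your sample (the table
name is made up):

  SELECT user_name, user_status,
         count(*) AS frequency,
         sum(next_time - event_time) AS total_duration,
         avg(next_time - event_time) AS avg_duration
  FROM (SELECT user_name, user_status, event_time,
               lead(event_time) OVER (PARTITION BY user_name
                                      ORDER BY event_time) AS next_time
        FROM call_events) s
  GROUP BY user_name, user_status;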




Re: [GENERAL] Up to date conventional wisdom re max shared_buffer size?

2017-09-20 Thread Scott Marlowe
On Tue, Sep 19, 2017 at 4:00 PM, Jerry Sievers  wrote:
> Briefly, just curious if legacy max values for shared_buffers have
> scaled up since 8G was like 25% of RAM?
>
> Pg 9.3 on monster 2T/192 CPU Xenial thrashing
>
> Upgrade pending but we recently started having $interesting performance
> issues at times looking like I/O slowness and other times apparently
> causing CPU spins.

Have you looked at things like zone reclaim mode and transparent huge
pages? Both of those can cause odd problems. Also it's usually a good
idea to turn off swap, as the linux kernel, presented with lots of ram
and a small (by comparison) swap file, sometimes makes bad life
choices and starts using swap for things like storing currently unused
shared buffers.
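
A quick way to check and change both on a typical Linux box:

  cat /proc/sys/vm/zone_reclaim_mode                # want 0 on db servers
  cat /sys/kernel/mm/transparent_hugepage/enabled   # want never (or madvise)
  sysctl -w vm.zone_reclaim_mode=0
  echo never > /sys/kernel/mm/transparent_hugepage/enabled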




Re: [GENERAL] Any known issues Pg 9.3 on Ubuntu Xenial kernel 4.4.0?

2017-09-20 Thread Scott Marlowe
On Wed, Sep 20, 2017 at 12:14 PM, Jerry Sievers  wrote:
> Basically as per $subject.
>
> We took a perf hit moving up to newer hardware and OS version which
> might in some cases be OK but admittedly there is some risk running a
> much older app (Pg 9.3) on a kernel/OS version that nowhere near existed
> when 9.3 was current.

Are you sure you're using the same locale etc as you were on the old
db? The most common cause of performance loss when migrating is that
the new db uses a locale like en_US while the old one might have been
in locale=C.
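
A quick way to compare is to run this on both clusters:

  SELECT datname, datcollate, datctype FROM pg_database;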

>
> Be curious to hear of issues encountered and particular to eager to know
> if disabling any kernel 4.x features helped.
>
> Thanks
>
>  PostgreSQL 9.3.19 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
> 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit
>
>
> $ uname -a
> Linux foo.somehost.com 4.4.0-92-generic #115-Ubuntu SMP Thu Aug 10 09:04:33 
> UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
>
> --
> Jerry Sievers
> Postgres DBA/Development Consulting
> e: postgres.consult...@comcast.net
> p: 312.241.7800





Re: [GENERAL] contrecord is requested

2017-09-12 Thread Scott Marlowe
On Tue, Sep 12, 2017 at 10:19 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:
> On Mon, Sep 11, 2017 at 1:27 PM, Scott Marlowe <scott.marl...@gmail.com>
> wrote:
>>
>> So we have a db we're trying to rewind and get synced to the master.
>> pg_rewind says it doesn't need rewinding, and when we try to bring it
>> up, it gets this error:
>>
>> "contrecord is requested by 2E7/4028"
>>
>> And fails to get back up.
>>
>> Is this a known issue? Possible bug in the continuation record code?
>>
>> The only references I can find for it are in the xlogreader code.
>
>
> I've seen this twice lately and both times it was user error.
>
> One time someone who shall remain nameless made a replica of a remote QA
> server using "pg_basebackup -R ...", but then copied the *.conf files
> **including recovery.conf** from the running replica of the the remote
> production server into the new directory for the replica of the remote QA
> server.  So primary_conninfo had been overwritten to point to the wrong
> master server.
>
> The other time someone who shall also remain nameless accidentally fully
> opened up a newly cloned copy (from a cold backup, I think) of a dummy
> benchmarking server, instead of putting it into standby. And then tried to
> shut it down and re-open it as a standby without doing a full refresh. But
> of course it was too late to do that.

Thanks, will check on that.




[GENERAL] contrecord is requested

2017-09-11 Thread Scott Marlowe
So we have a db we're trying to rewind and get synced to the master.
pg_rewind says it doesn't need rewinding, and when we try to bring it
up, it gets this error:

"contrecord is requested by 2E7/4028"

And fails to get back up.

Is this a known issue? Possible bug in the continuation record code?

The only references I can find for it are in the xlogreader code.




Re: [GENERAL] Numeric numbers

2017-09-02 Thread Scott Marlowe
On Sat, Sep 2, 2017 at 10:10 AM, Melvin Davidson  wrote:
>
>
> On Sat, Sep 2, 2017 at 11:54 AM, Francisco Olarte  
> wrote:
>>
>> On Sat, Sep 2, 2017 at 4:16 PM, Олег Самойлов  wrote:
>> > Hi all. I have a silly question. Look at the "numeric" type. According to
>> > docs it must be "up to 131072 digits before the decimal point; up to
>> > 16383 digits after the decimal point". Well, lets see.
>> >
>> > => select 1::numeric/3;
>> >         ?column?
>> > ------------------------
>> >  0.33333333333333333333
>>
>> => select 1::numeric(100,90)/3;
>>                                             ?column?
>> ----------------------------------------------------------------------------------------------
>>  0.333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333
>> (1 row)
>>
>> It's probably doing 1(integer) => double precision => numeric(20) or
>> something similar if you do not specify.
>>
>> Francisco Olarte.
>>
>
>
> Franciso,
>
> Per the docs, it is not "must be", it is "up to 131072 digits before the
> decimal point; up to 16383 digits after the decimal point".
> https://www.postgresql.org/docs/9.6/static/datatype-numeric.html#DATATYPE-NUMERIC-TABLE
>
> YOU have specified a precision of numeric(100,90), which means 90 decimals 
> and that is exactly what you got!
> The result is correct, so what is your question?
>
>

Huh. I'm guessing that the cast is the limit here:

smarlowe=# select 1::numeric(1001,500);
ERROR:  NUMERIC precision 1001 must be between 1 and 1000
LINE 1: select 1::numeric(1001,500);




Re: [GENERAL] PG and database encryption

2017-08-22 Thread Scott Marlowe
On Tue, Aug 22, 2017 at 3:13 PM, PT  wrote:
> On Tue, 22 Aug 2017 12:48:13 -0700 (MST)
> rakeshkumar464  wrote:
>
>> We have a requirement to encrypt the entire database.  What is the best tool
>> to accomplish this. Our primary goal is that it should be transparent to the
>> application, with no change in the application, as compared to un-encrypted
>> database. Reading about pgcrypto module, it seems it is good for few columns
>> only and using it to encrypt entire database is not a good use-case.
>>
>> Is this which can be done best by file level encryption?  What are the good
>> tools on Linux (RHES), preferably open-source.
>
> "encrypt the database" is bullshit wank terminology for "we're a government
> agency and don't know what we're talking about"
>
> On multiple occasions, I demonstrated that an unencrypted database was the
> least likely disclosure vector for sensitive data, and that we shouldn't
> waste any time on it until we had ensured that all other breach vectors had
> been fixed.  Over the course of 4 years at that job, we never managed to get
> all the other (more likely) breach vectors secured.
>
> While it's possible that you've already fixed all other breach
> vectors, I'd be willing to bet actual money that you have not.
> The very fact that you ask for something that "is transparent to the
> application" tells me that you're not going to actually implement it
> effectively anyway.
>
> As a result, my opinion would be that you use filesystem encryption. It's
> very efficient, low management overhead, and proven technology that doesn't
> interfere with anything else you're doing. You can then check that box on
> whatever form you have to fill out and the bureaucrats will leave you alone.
> On top of that, it effectively protects against the possible breach vectors
> that don't require changing the application.
>
> Real security will require changing the application. But take my word for it,
> nobody wants to hear the list of breach vectors that can only be fixed by
> modifying the application. Because people aren't interested in real security,
> they're just interested in checking boxes on a form.

This. Without a much stricter definition of the attack vectors you're
trying to defeat, "encrypt the whole database" is a very hand-wavy
proposition. Are you protecting against people getting into the data
center and stealing your hard drives? Rogue applications getting
access to the db? Someone sniffing the passwords or decrypting them
on the servers? Etc etc.

OP: it's too generic a requirement to take seriously as written. Sit
down, come up with possible attack vectors and possible ways to thwart
them. Security isn't something you do one time and you're done; it's a
constant process of design, review, updates, and education.
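
For the filesystem route, a minimal LUKS sketch on Linux (device and
mount point are made up; verify fsync behavior with a plug-pull test
before trusting it with real data):

  cryptsetup luksFormat /dev/sdb1
  cryptsetup open /dev/sdb1 pgcrypt
  mkfs.xfs /dev/mapper/pgcrypt
  mount /dev/mapper/pgcrypt /var/lib/pgsql/data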




Re: [GENERAL] make postgresql 9.5 default on centos 7

2017-08-22 Thread Scott Marlowe
On Tue, Aug 22, 2017 at 12:44 PM, Devrim Gündüz  wrote:
>
> Hi,
>
> On Fri, 2017-08-18 at 13:50 -0400, Steve Clark wrote:
>> I loaded 9.5 on CentOS 7 but by default every thing wants to use the default
>> 9.2 version that comes with CentOS 7.
>>
>> Is there a simple way to fix this so the 9.5 version of tools and libraries
>> are used.
>
> yum remove postgresql-*
> yum reinstall postgresql95*
>
> will do the trick. Please note that the second one will restart the database,
> so make sure that you run these in a maintenance window.
>
> The first command will remove all OS PostgreSQL packages. This is a required
> step to remove actual binaries from OS. The second command will reinstall the
> packages, and they will create alternatives entries for the binaries, etc.

Centos and Yum have supported multiple versions of PostgreSQL for
quite some time now. Just change the ports in their respective
postgresql.conf files and start them up.




Re: [GENERAL] archive_command fails but works outside of Postgres

2017-08-18 Thread Scott Marlowe
On Fri, Aug 18, 2017 at 12:40 PM, twoflower  wrote:
> I changed my archive_command to the following:
>
> archive_command = 'gsutil cp /storage/postgresql/9.6/main/%p
> gs://my_bucket/pg_xlog/'
>
> and it fails, leaving the following in the log:
>
> 2017-08-18 18:34:25.057 GMT [1436][0]: [104319] LOG:  archive command failed
> with exit code 1
> 2017-08-18 18:34:25.057 GMT [1436][0]: [104320] DETAIL:  The failed archive
> command was: gsutil cp
> /storage/postgresql/9.6/main/0001038B00D8
> gs://my_bucket/pg_xlog/
> 2017-08-18 18:34:25.057 GMT [1436][0]: [104321] WARNING:  archiving
> transaction log file "0001038B00D8" failed too many times, will
> try again later
>
>
> But the command works when executed manually:
>
> root$ su postgres -c "gsutil cp
> /storage/postgresql/9.6/main/0001038B00D8
> gs://my_bucket/pg_xlog/"
> root$ echo $?
> 0
>
> The last command verifies that gsutil indeed exited with 0.
>
> How to best debug this issue?

Sounds like it depends on some envvar it doesn't see when run from the
postmaster. If you sudo -u postgres and run it, does it work?
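
One way to see what's different is to point archive_command at a
wrapper script that logs the environment and any error output (paths
here are made up):

  #!/bin/sh
  # /usr/local/bin/archive_wrap.sh -- log env and stderr, then archive
  env > /tmp/archive_env.log
  exec gsutil cp "$1" gs://my_bucket/pg_xlog/ 2>>/tmp/archive_err.log

and in postgresql.conf:

  archive_command = '/usr/local/bin/archive_wrap.sh /storage/postgresql/9.6/main/%p'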




Re: [GENERAL] make postgresql 9.5 default on centos 7

2017-08-18 Thread Scott Marlowe
On Fri, Aug 18, 2017 at 11:50 AM, Steve Clark  wrote:
> Hi List,
>
> I loaded 9.5 on CentOS 7 but by default every thing wants to use the default
> 9.2 version that comes with CentOS 7.
>
> Is there a simple way to fix this so the 9.5 version of tools and libraries 
> are used.

As Melvin mentions, psql and most clients default to looking on port
5432 for PostgreSQL. You can either set the $PGPORT envvar to the port
that 9.5 is running on, or edit the postgresql.conf for each service
and swap the ports listed, so that 9.2 runs on the new port (likely
5433) and 9.5 runs on the default port of 5432.
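
For example, to point clients at a cluster listening on 5433:

  export PGPORT=5433
  psql -U postgres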




Re: [GENERAL] Queries for Diagramming Schema Keys

2017-08-14 Thread Scott Marlowe
Just to add that running psql with the -E switch is REALLY handy for
seeing the queries psql runs under the hood to find out how tables etc
are put together.
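
For example (query text abbreviated here):

  $ psql -E mydb
  mydb=# \d mytable
  ********* QUERY **********
  SELECT c.oid, ...
  **************************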


Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-08-09 Thread Scott Marlowe
On Wed, Aug 9, 2017 at 6:27 AM, ADSJ (Adam Sjøgren)  wrote:
> On 2017-06-21 Adam Sjøgren  wrote:
>
>> Adam Sjøgren  wrote:
>
>>> Meanwhile, I can report that I have upgraded from 9.3.14 to 9.3.17 and
>>> the errors keep appearing in the log.
>
> Just to close this, for the record: We haven't seen the errors since
> 2017-06-30. We upgraded to 9.3.17 (latest 9.3 point-release at the time
> of writing) on 2017-06-10.
>
> Whether this means that the affected rows gradually got overwritten
> after switching to .17 and thus got fixed, or if something subtle in our
> workflow changed, so we aren't hitting this anymore, or something else
> entirely is the answer, we're not sure.
>
> We didn't get to trying Alvaro Herrera's suggestion of removing
> 6c243f90ab6904f27fa990f1f3261e1d09a11853 before the errors stopped
> appearing "by themselves".

This sounds a lot like bad sectors getting remapped.




Re: [GENERAL] How to ALTER EXTENSION name OWNER TO new_owner ?

2017-08-09 Thread Scott Marlowe
On Wed, Aug 9, 2017 at 10:10 AM, Tom Lane  wrote:
> Melvin Davidson  writes:
>> UPDATE pg_extension SET extowner = {oid_of_new_owner}
>> WHERE extowner = {oid_from_above_statement};
>
> Note you'll also have to modify the rows in pg_shdepend that reflect
> this ownership property.

Seems like something that should be handled by ALTER, doesn't it?
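
A sketch of the pg_shdepend side (OIDs in braces are placeholders;
hand-editing catalogs is risky, try it on a scratch database first):

  UPDATE pg_shdepend
     SET refobjid = {oid_of_new_owner}
   WHERE classid = 'pg_extension'::regclass
     AND objid = {oid_of_extension}
     AND refclassid = 'pg_authid'::regclass
     AND deptype = 'o';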




Re: [GENERAL] Do not INSERT if UPDATE fails

2017-08-02 Thread Scott Marlowe
Does insert's "on conflict" clause not work for this usage?
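
Something like this sketch, with made-up table and columns (9.5+):

  INSERT INTO counters (id, hits)
  VALUES (42, 1)
  ON CONFLICT (id) DO UPDATE SET hits = counters.hits + 1;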




Re: [GENERAL] Question about loading up a table

2017-08-02 Thread Scott Marlowe
On Tue, Aug 1, 2017 at 4:27 PM, Alex Samad  wrote:
> Hi
>
> So just to go over what i have
>
>
> server A (this is the original pgsql server 9.2)
>
> Server X and Server Y ... PGSQL 9.6 in a cluster - streaming replication
> with hot standby.
>
>
> I have 2 tables about 2.5T of diskspace.
>
> I want to get the data from A into X and X will replicate into Y.
>
>
> I am currently on X using this command
>
> pg_dump -U  -h  -t BIGTABLE -a  | sudo -u postgres -i
> psql -q ;
>
> This is taking a long time, it's been 2 days and I have xfered around 2T...
> This is just a test to see how long and to populate my new UAT env. so I
> will have to do it again.
>
> Problem is time. The pg_dump process is single threaded.
> I have 2 routers in between A and X but it's 10G networking - but my network
> graphs don't show much traffic.
>
> Server X is still in use, there are still records being inserted into the
> tables.
>
> How can I make this faster.
>
> I could shut down server A and present the disks to server X, could I load
> this up in PGSQL and do a table to table copy - I presume this would be
> faster ... is this possible ?  how do I get around the same DB name ?
> What other solutions do I have ?

Yes, but if it's taking days to transfer 2TB then you need to
investigate where your performance is tanking.

Have you tried rsyncing / scp'ing files across the network to see how
fast your network connection is?

Have you tried just pg_dumping / restoring locally to get an idea how
fast you can dump / restore without doing it over a network
connection?

Are you IO bound? Network bound? CPU bound?

Is the destination copying data, or building indexes? Do you insert
into a schema that already has indexes in place? If so have you tried
dropping the indexes first and rebuilding them?
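
If the single-threaded dump itself is the bottleneck, a directory
format dump can at least parallelize across tables (it won't split one
big table, so with only two tables the win is limited):

  pg_dump -Fd -j 4 -h serverA -f /path/to/dumpdir mydb
  pg_restore -j 4 -d mydb /path/to/dumpdir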




Re: [GENERAL] Question about loading up a table

2017-08-01 Thread Scott Marlowe
On Mon, Jul 31, 2017 at 11:16 PM, Alex Samad  wrote:
> Hi
>
> I double checked and there is data going over, thought I would correct that.
>
> But it seems to be very slow. Having said that, how do I / what tools do I
> use to check throughput?

Try the pg_current_xlog_location function on the slave?
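
For example (9.6 function names; these become pg_current_wal_lsn etc
in 10):

  -- on the master
  SELECT pg_current_xlog_location();
  -- on the slave
  SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();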




Re: [GENERAL] Manage slot in logical/pglogical replication

2017-07-31 Thread Scott Marlowe
On Mon, Jul 17, 2017 at 7:08 AM, dpat  wrote:
> yes, I have estimated pg_xlog, but the server sometimes writes a lot of WAL
> files. Is there some robust alternative for replicating a partial database
> in Postgresql over a WAN?
> Or, what's the best way to realign a pglogical replica? pg_dump/restore?
> I have to find the best way to replicate only some tables over the internet
> to a remote database.

That's been a job for slony in the past. Now pg_logical seems to be
getting ready / is ready to replace it.




Re: [GENERAL] Schemas and serials

2017-07-31 Thread Scott Marlowe
On Sat, Jul 29, 2017 at 1:17 PM, Melvin Davidson 
wrote:

>
> On Sat, Jul 29, 2017 at 3:38 PM, tel medola  wrote:
>
>> Depends.
>> When you create your tables in new schema, the script was the same from
>> "qa"?
>> Sequences, tables, etc.. belong to the schema where was created.
>>
>> Roberto.
>>
>> On Sat, Jul 29, 2017 at 16:17, marcelo 
>> wrote:
>>
>>> Some days ago I asked regarding tables located in different schemas.
>>> Now, my question is
>>> Suppose I have two schemas (other than public): "qa" and "production".
>>> Initially I create all my tables in "qa". All of them have a primary key
>>> of type serial.
>>> Later, I will copy the table definitions to production.
>>> Will it automatically create the sequences in the new schema, starting
>>> at zero?
>>> TIA
>>> Marcelo
>>>
>>
> Marcelo,
> >Initially I create all my tables in "qa". All of them have a primary key
> >of type serial.
> >Later, I will copy the table definitions to production.
>
> A word of caution: creating tables in a qa "schema" and then transferring
> to production is not the normal/correct (or safe) way to do development.
>
> The standard procedure is to create a separate "qa" database (and/or
> server) with the exact same schema(s) as production. Then, after testing
> is completed, the schemas/tables are copied to production.
>
>
This. Staging should look as much like production as possible for
testing purposes. If you HAVE to use the same server (not a good idea)
then make separate clusters. Separate servers here can save you a LOT
of heartache from someone fat-fingering a script meant for staging and
hitting production.


Re: [GENERAL] partitioning question

2017-07-31 Thread Scott Marlowe
On Sun, Jul 30, 2017 at 7:13 PM, Alex Samad  wrote:
> How expensive is dynamic over static?  I'm looking at storing yearly now, so
> I figure if my if then clause has the latest year at the top it should be
> very quick.

Assuming you're not doing anything particularly crazy it's minimal.
But what is good performance for one application may not be acceptable
for others. Generally the cost of inserting is MUCH higher than the
cost of dynamically setting the target, esp if you stick to plpgsql
and don't try to use rules to accomplish it.
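
A sketch of the plpgsql approach, with made-up names, routing by year:

  CREATE OR REPLACE FUNCTION route_insert() RETURNS trigger AS $$
  BEGIN
    EXECUTE format('INSERT INTO measurements_y%s SELECT ($1).*',
                   to_char(NEW.created_at, 'YYYY')) USING NEW;
    RETURN NULL;  -- suppress the insert into the parent table
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER route_insert_trg BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE route_insert();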




Re: [GENERAL] Question about loading up a table

2017-07-31 Thread Scott Marlowe
On Mon, Jul 31, 2017 at 2:31 AM, vinny  wrote:
> On 2017-07-31 11:02, Alex Samad wrote:
>>
>> Hi
>>
>> I am using pg_dump | psql to transfer data from my old 9.2 psql into a
>> 9.6 psql.

Note that you should be doing pg_dump with 9.6's pg_dump, as it's
possible for 9.2's pg_dump to not know about a 9.6 feature.

>> The new DB server is setup as master replicating to a hot standby
>> server.
>>
>> What I have noticed is that the rows don't get replicated over until
>> the copy from stdin is finished...  hard to test when you have M+ lines
>> of rows.

SNIP

>> Is there a way to tell the master to replicate earlier
>
> I highly doubt it, because the master cannot know what to replicate until
> your transaction is ended with a COMMIT. If you end with ROLLBACK,
> or your last query is DELETE FROM <your_table>;
> then there isn't even anything to replicate at all...

This is actually a visibility issue. All the new changes are
replicated to the slave, but just like on the master, other
connections can't see the change because it's not visible. The slave,
except for some small delay (seconds etc) is an exact replica of the
master. So even a delete at the end gets replicated. You just don't
see anything but possible table bloat to show for it.

To prove this to oneself, start the copy, and get into another session
to the master. You don't see any rows there either until the commit
after the copy.




Re: [GENERAL] Interesting streaming replication issue

2017-07-27 Thread Scott Marlowe
On Wed, Jul 26, 2017 at 11:55 PM, James Sewell 
wrote:

> On Thu, Jul 27, 2017 at 4:41 PM, Gunnar "Nick" Bluth <
> gunnar.bl...@pro-open.de> wrote:
>
>>
>> are you sure you're scp'ing from the archive, not from pg_xlog?
>>
>
> Yes:
>
> restore_command = 'scp -o StrictHostKeyChecking=no 
> 10.154.19.30:/archive/xlog//%f
> %p'
>
> Although you are right - that would almost make sense if I had done that!
>

Sounds a lot like a cleanup process on your archive directory or something
getting in the way. Are the logs pg is asking for in that archive dir?


Re: [GENERAL] autovacuum holds exclusive lock on table preventing it from to be updated

2017-06-19 Thread Scott Marlowe
On Mon, Jun 19, 2017 at 1:53 PM, Dmitry O Litvintsev  wrote:
> yes, we had to restart database 4 days ago (and vacuum has resumed on start).
> I checked the log files and discovered that autovacuum on this table takes
>
> pages: 0 removed, 14072307 remain
> tuples: 43524292 removed, 395006545 remain
> buffer usage: -1493114028 hits, 107664973 misses, 30263658 dirtied
> avg read rate: 1.604 MB/s, avg write rate: 0.451 MB/s
> system usage: CPU 2055.81s/17710.94u sec elapsed 524356.57 sec
>
> 6 days. So it is perpetually being autovacuumed (which I assumed to be a good 
> thing)
>
> Table has 400M entries, 115 GB.
>
> I will try your suggestions in the test environment.
>
> Thank you,
> Dmitry

Once you get this sorted, look into using the check_postgres.pl
script and a monitoring solution like zabbix or nagios to monitor
things like transactions until wraparound etc, so you don't wind up
back here again. Best of luck. Note that if you drop the vacuum
delay to 0ms the vacuum will probably complete in a few hours tops.
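
A quick way to keep an eye on wraparound headroom by hand:

  SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;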




Re: [GENERAL] Postgres Data Encryption Using LUKS with dm-crypt ?

2017-06-19 Thread Scott Marlowe
On Sun, Jun 18, 2017 at 2:20 PM, Condor  wrote:
> Hello ppl,
>
> a few years ago I asked the same question but did not receive useful
> answers, and we took a different approach to the project.
> Today I want to ask: has someone done it, and most importantly for me, can
> someone share their experience?
> What should I expect, and what good and bad things can happen?
>
> I'm thinking problems can occur if the server is restarted and data is not
> synced, but that's what the raid cache battery is for.
> Also, if the hard drive needs to be checked for bad clusters or broken
> index / files on the filesystem, what will happen to the data?
> Because postgresql does not support data-level encryption, I want to do it
> with third-party tools.

The one and only time I set up a server to use LUKS was for a demo
laptop, so that if it was lost our code / data / db etc were not
accessible. In that instance we didn't test for fsync reliability
because it was an easily recreatable system.

Generally speaking PostgreSQL expects "perfect" storage that writes
when it says it writes and doesn't present bad sectors to the database
to handle but rather maps such sectors out of the way silently without
data corruption.

I think the only real test here is to build a LUKS system, initiate
some pgbench-type runs, wait a minute, run a checkpoint and then yank
out the plug. Repeat a dozen or so times, looking for data corruption.
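
A sketch of such a run (scale and durations are arbitrary):

  pgbench -i -s 100 testdb          # build a test data set
  pgbench -c 10 -T 600 testdb &     # generate write load
  sleep 60
  psql -c "CHECKPOINT;" testdb
  # now pull the plug, reboot, and check the data for corruption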




Re: [GENERAL] Rounding Double Precision or Numeric

2017-06-01 Thread Scott Marlowe
On Thu, Jun 1, 2017 at 10:42 AM, Steve Atkins  wrote:
>
>> On Jun 1, 2017, at 9:26 AM, Louis Battuello  
>> wrote:
>>
>> Is the round() function implemented differently for double precision than 
>> for numeric? Forgive me if this exists somewhere in the documentation, but I 
>> can't seem to find it.
>
> https://www.postgresql.org/docs/current/static/datatype-numeric.html#DATATYPE-NUMERIC-DECIMAL
>
> "When rounding values, the numeric type rounds ties away from zero, while (on 
> most machines) the real and double precision types round ties to the nearest 
> even number.".
>
>> Why does the algorithm vary by data type?
>
> Just guessing, but I'd assume because the NUMERIC type behaves as required by 
> the SQL spec, while float and double are vanilla IEEE754 arithmetic and will 
> do whatever the underlying hardware is configured to do, usually round to 
> nearest even.
>

Exactly. It's important to remember that floats and doubles are
imprecise representations meant for speed, while numeric is designed
for accuracy not speed.

If one needs an exact answer, one does not use floats.
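
A quick illustration of the two tie-breaking rules:

  SELECT round(2.5::numeric), round(2.5::double precision);
   round | round
  -------+-------
       3 |     2
  (1 row)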




Re: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?

2017-05-24 Thread Scott Marlowe
On Wed, May 24, 2017 at 6:24 AM, Bill Moran  wrote:
>
> A few years ago, I was working with "big" servers. At least, they were
> big for that age, with *128G* of RAM!!!1 Holy mackerel, right?!!?
>
> Anyway, at that time, I tried allocating 64G to shared buffers and we
> had a bunch of problems with inconsistent performance, including "stall"
> periods where the database would stop responding for 2 or 3 seconds.
> After trying all sorts of tuning options that didn't help, the problem
> finally went away after reducing shared_buffers to 32G. I speculated, at
> the time, that the shared buffer code hit performance issues managing
> that much memory, but I never had the opportunity to really follow up
> on it.
>
> Now, this was back in 2012 or thereabouts. Seems like another lifetime.
> Probably PostgreSQL 9.2 at that time.
>
> Nowadays, 128G is a "medium sized" server. I just got access to one
> with 775G. It would appear that I could order from Dell with 1.5T of
> RAM if I'm willing to sell my house ...
>
> Yet, all the docs and advice I'm able to find online seem to have been
> written pre 2008 and say things like "if your server has more than 1G
> of RAM ..."
>
> I feel like it's time for a documentation update ;) But I, personally
> don't have the experience recently enough to know what sort of
> recommendations to make.
>
> What are people's experience with modern versions of Postgres on hardware
> this size? Do any of the experts have specific recommendations on large
> shared_buffers settings? Any developers care to comment on any work
> that's been done since 2012 to make large values work better?

My most recent employment was working on machines with 512GB to 1TB
memory. We never saw real performance increases past 10GB or so of
shared memory. That was with pg 9.2 and testing on 9.6.  The 512GB
machines were processing something on the order of 500 or so writes
per second and 3k to 5k reads per second. Under testing we were able
to push through 18k writes and reads per second on those machines.
These dbs were in the 4 to 5TB range so could not fit in memory.
Letting the linux kernel (3.11 or 3.13 at the time) handle the caching
seemed to get best, most reliable performance. These machines ran big
RAID-5 arrays (6 to 7 TB) with write caching off and could read from
the IO really fast, so mostly we were bound by IO performance not
memory caching.

If you allocate 50% of memory to shared buffers then you're basically
caching everything twice, once in kernel cache and once in shared
memory. The general consensus is that you're better off going one way
or another, either let linux do the caching work, or crank up the
shared memory to 90% or so and let postgresql do it. My experience has
been that the kernel wins almost every time.

But about 95% of all my testing and 100% of my production experience
is on 3.13 kernels with pgsql 9.2 on top of it. 9.6 and 10 etc may
well be much faster with bigger shared memory.




Re: [GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread Scott Marlowe
On Thu, May 18, 2017 at 3:40 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
> I would say that the best thing to do is to run 9.6 grab pgadmin4 and do all
> the examples in the doc page on partitioning.
>
> https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html
>
> If that works well then the question becomes are there any esoteric cases
> where pgadmin4 won't quite get you there?

Or maybe what can we automate from pgadmin4 that you currently need to
script etc?




Re: [GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread Scott Marlowe
I would say that the best thing to do is to run 9.6 grab pgadmin4 and do
all the examples in the doc page on partitioning.

https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html

If that works well then the question becomes are there any esoteric cases
where pgadmin4 won't quite get you there?


Re: [GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread Scott Marlowe
On Thu, May 18, 2017 at 1:21 PM, Melvin Davidson 
wrote:

> Shirley,
> I am a bit confused. AFAIK, it is PostgreSQL that is responsible for table
> partitioning. PgAdmin4 is just an administrative tool.
> Are you saying that PgAdmin4 now can make partition tables automatically?
>

I think maybe she just means that their GUI is pgadmin4 for reference as
opposed to the partitioning being IN pgadmin4.

If she'd like to set up a discussion thread HERE on partitioning I'm more
than willing to add to it.

More importantly, while I think the postgresql documentation on
partitioning gives you everything you need to roll your own, it doesn't
inform you on all the ways to do partitioning for various jobs. That's
where partitioning gets interesting and requires business requirements and
all that fun stuff.

Do you hand off maintenance jobs to cron or do you check and run it every
time a function's called etc.

Do you put materialized views on top to stop scanning all the partitions
ever?

fun fun.


Re: [GENERAL] Top posting....

2017-05-14 Thread Scott Marlowe
On Sat, May 13, 2017 at 7:48 PM, Bruce Momjian  wrote:
> On Thu, May 11, 2017 at 01:43:52PM -0400, Tom Lane wrote:
>> Absolutely.  The point of quoting previous messages is not to replicate
>> the entire thread in each message; we have archives for that.  The point
>> is to *briefly* remind readers what it is that you're responding to.
>> If you can't be brief, you are disrespecting your readers by wasting their
>> time. They've probably already read the earlier part of the thread anyway.
>
> Totally agree, and I am seeing non-trimmed bottom posts more often on
> the hackers list than I used to.  I am thinking someone needs to start a
> hackers thread about that.

Yeah I'd rather someone quote NOTHING than just top post with no trimming.




Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-08 Thread Scott Marlowe
On Mon, May 1, 2017 at 2:59 PM, Sven R. Kunze  wrote:
> On 30.04.2017 16:25, Steve Atkins wrote:
>
> You can use postgresql for caching, but caches don't require the data
> durability that a database offers, and can be implemented much more
> efficiently.
>
>
> I for one can understand Thomas' need for a single solution.
> Just recently I needed a cache which was supposed to be set up in a
> SERIALIZABLE manner as in
> https://www.postgresql.org/docs/devel/static/transaction-iso.html#xact-serializable
> Available cache mechanisms would have produced erroneous results. So, I
> went for PG.

This brings up another subject, reliability. If PostgreSQL is fast
enough, and on stable hardware, it's often the preferred choice
because of its very good stability. Try running a big production noSQL
cluster and you'll find plenty of sharp corners in most. A lot of
times it's just easier to set up a pair of VMs (on good hardware) and
toss a pg db at the problem, esp if performance is a secondary
consideration, or not likely to tax pgsql's basic architecture.




Re: [GENERAL] PostgreSQL Required Monitoring

2017-04-28 Thread Scott Marlowe
On Fri, Apr 28, 2017 at 8:39 AM, Andrew Kerber  wrote:
> I am a fairly experienced Oracle DBA, and we are starting to move in to the
> PostgreSQL world.  I would expect the standard monitoring items are required
> for mission critical postgres apps, i.e., disk space, wal log space, log
> monitoring, process counts, software running, connections available on the
> correct port, CPU usage.
>
> Are there additional PostgreSQL specific items that need to be monitored?
> if so, what items?

Amid all the other recommendations standard OS level monitoring is a
good idea. You don't wanna run out of space on drives etc.




Re: [GENERAL] Block size recommendation for Red Hat Linux 7.2

2017-04-24 Thread Scott Marlowe
On Mon, Apr 24, 2017 at 12:43 PM, pinker  wrote:
> I've seen very big differences with huge_pages set to on, especially in
> context of CPU usage on multiple socket servers.
>
> You could play as well with storage options, for instance inode size and
> check if there is any advantage for your db from inlining, which is
> supported by xfs. You can find more information here:
> http://beegfs.com/wiki/StorageServerTuning
>
> An interesting option for WAL would be to add the mount option- allocsize -
> and set it to 16MB - so the exact size of WAL segment to reduce the risk of
> fragmentation and optimal streaming write throughput.
>

All good options. Also make sure zone reclaim mode is set to 0 on big
memory machines. It's a great setting for big VM hosts but a terrible
one for file or db servers.
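
For reference, the two knobs together might look like this (device and
mount point are made up):

  sysctl -w vm.zone_reclaim_mode=0
  # /etc/fstab entry for a dedicated WAL volume:
  /dev/sdb1  /pg_xlog  xfs  noatime,allocsize=16m  0 0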




Re: [GENERAL] Block size recommendation for Red Hat Linux 7.2

2017-04-24 Thread Scott Marlowe
Stick to 4k linux block size and you should be OK. I've yet to run
into a situation where changing either has made any measurable
difference.

On Mon, Apr 24, 2017 at 11:58 AM, chiru r <chir...@gmail.com> wrote:
> Thanks Scott.
> Please suggest the OS block size for Linux redhat 7.2, where the default
> Linux block size is 4k.
>
> If we keep an 8k block size at the OS level, will it improve PostgreSQL
> performance? Please suggest a sensible default OS block size for Linux
> systems running PostgreSQL.
>
> Thanks,
> Chiru
>
> On Mon, Apr 24, 2017 at 12:29 PM, Scott Marlowe <scott.marl...@gmail.com>
> wrote:
>>
>> On Mon, Apr 24, 2017 at 9:41 AM, chiru r <chir...@gmail.com> wrote:
>> > Hello,
>> >
>> > I am building a new server to run PostgreSQL 9.5.4 on. Please
>> > provide the recommended Block size for Linux systems.
>> >
>> > We are using PostgreSQL blocks size is 8k default one.
>> >
>> > postgres=# show block_size ;
>> >  block_size
>> > ------------
>> >  8192
>> > (1 row)
>> >
>> > Is there any recommendation for separate block sizes on OS level for
>> > Pg_xlog, pg_log  and  Actual data files to improve the performance for
>> > reads/Writes?.
>>
>> Unless you've done some testing to show some other block size is
>> better, it's best to stick to 8k block size. Keep in mind that while
>> it is configurable at compile time, it doesn't get much testing at
>> other sizes and you could run into corner cases where there are
>> problems and the only acceptable fix is to compile with 8k blocks and
>> reload your whole db etc.
>>
>> tl;dr: Stick to 8k blocks.
>>




Re: [GENERAL] Block size recommendation for Red Hat Linux 7.2

2017-04-24 Thread Scott Marlowe
On Mon, Apr 24, 2017 at 9:41 AM, chiru r  wrote:
> Hello,
>
> I am building a new server to run PostgreSQL 9.5.4 on. Please
> provide the recommended Block size for Linux systems.
>
> We are using PostgreSQL blocks size is 8k default one.
>
> postgres=# show block_size ;
>  block_size
> ------------
>  8192
> (1 row)
>
> Is there any recommendation for separate block sizes on OS level for
> Pg_xlog, pg_log  and  Actual data files to improve the performance for
> reads/Writes?.

Unless you've done some testing to show some other block size is
better, it's best to stick to 8k block size. Keep in mind that while
it is configurable at compile time, it doesn't get much testing at
other sizes and you could run into corner cases where there are
problems and the only acceptable fix is to compile with 8k blocks and
reload your whole db etc.

tl;dr: Stick to 8k blocks.



Re: [GENERAL] pg_basebackup issue

2017-04-23 Thread Scott Marlowe
On Sat, Apr 22, 2017 at 8:03 PM, David G. Johnston
 wrote:
> On Saturday, April 22, 2017, chiru r  wrote:
>>
>> Thank you Adrian.
>>
>> It seems the code allows only users who have the Superuser/Replication
>> role directly.
>>
>> Is there any possibility that future releases will allow both case A & B
>> users to use pg_basebackup?
>>
>
> It does not seem wise to introduce inheritance of such powerful capabilities
> when for many years now we have not done so.  It seems like reality could be
> better documented but the present behavior should stay.  I also find the
> original choice to be quite sane regardless.
>
> David J.

I would point out that these roles are already inheritable elsewhere,
so it's not like adding it to the connection coming from another
machine for basebackup is somehow a new thing really, just making it
act the same in both places.




Re: [GENERAL] Not sure this should be asked here but...

2017-04-23 Thread Scott Marlowe
Well no one is stopping you from starting a forum. But then you've got
the herculean task of getting people to move to it from the comfort of
the mailing list. I'm guessing that's a non-starter.

On Sun, Apr 23, 2017 at 4:31 AM, Ron Ben  wrote:
>
> A simple open source forum system can be enough, something like php-bb
> (example: warez-bb.org).
>
> The installation of such a system is like 1 hour of work.
>
> In my point of view something like Stack Overflow is the best, but I'm not
> sure if it's open source.
>
>
> On Apr 22, 2017 at 22:15, David G. Johnston wrote:
>
> On Sat, Apr 22, 2017 at 12:04 PM, Ron Ben  wrote:
>
> Why use a mailing list rather than a forum?
> A forum gives much more flexibility, is easy to read and respond to, allows
> to search for other similar issues, and topics can be tagged...
>
>
>
> Was this intended to display irony re: "allows to search for other similar
> issues"?
>
> https://www.postgresql.org/message-id/CY4PR13MB1751ED37A959456DA0DE115599310%40CY4PR13MB1751.namprd13.prod.outlook.com
>
> "browser interface to forums please?"
>
> David J.
>
>
>





Re: [GENERAL] Large data and slow queries

2017-04-20 Thread Scott Marlowe
On Thu, Apr 20, 2017 at 6:30 AM, Vick Khera  wrote:
> I'm curious why you have so many partial indexes. Are you trying to make
> custom indexes per query? It seems to me you might want to consider making
> the indexes general, and remove the redundant ones (that have the same
> prefix list of indexed fields).
>
> Secondly your table is 102Gb. Clearly there's a lot of data here. How many
> rows does that take? I would further suggest that you partition this table
> such that there are no more than about 10 million rows per partition (I've
> done this by using a id % 100 computation). Maybe in your case it makes
> sense to partition it based on the "what" field, because it appears you are
> trying to do that with your partial indexes already.

I would think a two-field index might be just as effective and not
require a lot of maintenance etc.
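
I.e., something along these lines, with made-up names:

  CREATE INDEX records_what_when_idx ON records (what, event_time);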



Re: [GENERAL] Advise on primary key for detail tables (OS: Raspberry Pi)

2017-04-06 Thread Scott Marlowe
On Thu, Apr 6, 2017 at 4:27 AM, Ertan Küçükoğlu
 wrote:
> Sorry for top posting.
>
> I have a serial in master table because I need to know data insertion order.
> DateTime on Raspberry Pi is not accurate due to power loss and lack of
> internet access to fetch correct time from.

Note that serial / sequences do NOT guarantee exact order of
insertion. It's quite possible to have a few switched here and there.
It gets you an approximate order of insertion but a few values could
get swapped around depending on timing of your application and how you
do inserts. Just an FYI.




Re: [GENERAL] How does hot_standby_feedback work

2017-04-05 Thread Scott Marlowe
On Wed, Apr 5, 2017 at 3:37 AM, Александр Киселев  wrote:
> Hello!
>
> My name is Alexander. I am a PostgreSQL administrator.
> I am studying the PostgreSQL 9.6 documentation.
> I am interested in the parameter hot_standby_feedback in postgresql.conf.
> Can you explain in more detail than the documentation what this parameter
> is used for?
> In what situations does the parameter show itself?
> How can I test the influence of the parameter on the database's behaviour?

It's possible for a long running query on a slave to live past the
time the tuples it needs exist on the master. This is because the
transaction the read only query is running under cannot "lock" those
tuples against vacuum on the master. The master gets vacuumed, those
tuples go away, the streaming / wal log replication gets relayed to
the read only slave, it loses the tuples and your query fails because
it can no longer access the snapshot of those tuples it started under.

With feedback, the slave can tell the master "I'm using these tuples,
please don't delete them just yet" and the master will skip vacuuming
them up.

It's darned useful for long running queries on a slave.
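
Turning it on is a one-line change on the standby, followed by a reload:

# in the standby's postgresql.conf
hot_standby_feedback = on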


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WAL being written during SELECT * query

2017-04-04 Thread Scott Marlowe
On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo  wrote:
> Hi,
> I have a very big table (10GB).
> I noticed that many WAL segments are being written when elaborating read
> only transactions like this:
> select * from dati256 where id >4300 limit 100;
> I don't understand why there are WAL writes during read-only transactions.
> Regards
> Pupillo



I think this is the db setting hint bits, but I'm no expert in that area.
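
If it is hint bits (with page checksums or wal_log_hints enabled, setting
them gets WAL-logged), one way to pay that cost up front after a bulk load
is to vacuum the table before the read-only queries hit it, e.g.:

VACUUM dati256;  -- sets hint bits ahead of the SELECTs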


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL general discussions list - 2010 Thread: Wikipedia entry - AmigaOS port - error?

2017-03-13 Thread Scott Marlowe
On Mon, Mar 13, 2017 at 11:15 PM, John R Pierce <pie...@hogranch.com> wrote:
> On 3/13/2017 10:06 PM, Scott Marlowe wrote:
>
> I am pretty sure it is:
>
> https://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems#Operating_system_support
>
> PostgreSQL  Yes Yes Yes Yes Yes Yes (MorphOS)[24]
>
> Oh man, 2005's version 6.3.2.  It's more of a historical oddity than a

Correction 1998's.

> version I'd ever wanna run. Isn't there some way to get gnuc etc
> running on an amiga and then work on direct support of the platform if
> that's what you want? I'd think paying a well known amiga hacker to
> port it would be a worthwhile investment.
>
>
>
> indeed, a 12 year old one-time port to an obscure platform which is at best
> a historical oddity does no one any favors.  if its not supported and
> reasonably current, it might as well not exist.
>
>
> --
> john r pierce, recycling bits in santa cruz



-- 
To understand recursion, one must first understand recursion.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL general discussions list - 2010 Thread: Wikipedia entry - AmigaOS port - error?

2017-03-13 Thread Scott Marlowe
On Mon, Mar 13, 2017 at 10:41 PM, Adrian Klaver
 wrote:
> On 03/13/2017 02:13 AM, Raffaele Irlanda wrote:

>> http://aminet.net/package/dev/gg/postgresql632-mos-bin
>>
>> In 2010 it has been sure completely outdated but sure you can see proof
>> it exists, and that Amiga had full dignity of being present into
>> Wikipedia Article.
>
>
> I am pretty sure it is:
>
> https://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems#Operating_system_support
>
> PostgreSQL  Yes Yes Yes Yes Yes Yes (MorphOS)[24]
>

Oh man, 2005's version 6.3.2.  It's more of a historical oddity than a
version I'd ever wanna run. Isn't there some way to get gnuc etc
running on an amiga and then work on direct support of the platform if
that's what you want? I'd think paying a well known amiga hacker to
port it would be a worthwhile investment.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Request to confirm which command is use for exclusive operation

2017-03-07 Thread Scott Marlowe
On Tue, Mar 7, 2017 at 11:55 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
> On Tue, Mar 7, 2017 at 11:21 PM, Yogesh Sharma <yogeshra...@gmail.com> wrote:
>> Dear David,
>>
>> I want to apply an explicit lock mechanism so that once an insert operation
>> is in progress, REINDEX will wait.
>> And vice versa.
>> So, please let me know if this type of handling is possible.
>> Regards,
>> Yogesh
>
> Create two roles and grant / revoke permissions as needed. Maybe watch
> pg_stat_activity for locks etc.

Also you could have a table with a simple token in it: the active
process takes the token, all other processes wait on it.
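
A minimal sketch of the token-table idea (names are hypothetical):

CREATE TABLE op_token (id int PRIMARY KEY);
INSERT INTO op_token VALUES (1);

-- each process wraps its work like this; FOR UPDATE blocks until
-- whoever currently holds the token commits or rolls back:
BEGIN;
SELECT id FROM op_token WHERE id = 1 FOR UPDATE;
-- ... run the INSERTs or the REINDEX here ...
COMMIT;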


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Request to confirm which command is use for exclusive operation

2017-03-07 Thread Scott Marlowe
On Tue, Mar 7, 2017 at 11:21 PM, Yogesh Sharma  wrote:
> Dear David,
>
> I want to apply an explicit lock mechanism so that once an insert operation
> is in progress, REINDEX will wait.
> And vice versa.
> So, please let me know if this type of handling is possible.
> Regards,
> Yogesh

Create two roles and grant / revoke permissions as needed. Maybe watch
pg_stat_activity for locks etc.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG on SSD

2017-03-02 Thread Scott Marlowe
On Thu, Mar 2, 2017 at 12:42 PM, scott ribe  wrote:
> Is it reasonable to run PG on a mirrored pair of something like the Intel SSD 
> DC 3610 series? (For example:
> http://ark.intel.com/products/82935/Intel-SSD-DC-S3610-Series-480GB-2_5in-SATA-6Gbs-20nm-MLC)
>  I'd *hope* that anything Intel classifies as a "Data Center SSD" would be 
> reasonably reliable, have actually-working power loss protection etc, but is 
> that the case?


From the spec sheet they certainly seem to be safe against power loss.
I'd still test by pulling the power cables while running benchmarks to
be sure.

I've used the other Intel enterprise class ssds with good results on
the power plug pull tests.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: GMT FATAL: remaining connection slots are reserved for non-replication superuser connections, but I'm using pgBouncer for connection pooling

2017-02-28 Thread Scott Marlowe
On Tue, Feb 28, 2017 at 10:00 AM, Lisandro  wrote:
> Hi Steve, thanks for your help.
> Your comment made me realise that maybe the problem is my pgBouncer
> configuration, specifically default_pool_size. It took me a while to
> understand pgbouncer, and I still had some doubts when I configured it.  Now
> I undesrtand better.
>
> I connect to all databases with the same user. However, I had set
> default_pool_size=10. So with more than 150 databases, it was very probable
> that postgresql would reach the max_connections=250 limit.
>
> I hadn't set reserve_pool_timeout or max_db_connections, but the docs say
> their default values are reserve_pool_timeout=5 seconds,
> max_db_connections=unlimited.
>
> I've reviewed pgBouncer configuration and did some search. I've found this
> thread where the first person that responds gives a great explanation about
> how pgbouncer does its maths:
> https://github.com/pgbouncer/pgbouncer/issues/174
>
> So, what I did for now was to set this in pgbouncer configuration:
> default_pool_size=1
> min_pool_size = 0
> server_idle_timeout = 30
> reserve_pool_size = 10
> reserve_pool_timeout = 5
> max_db_connections = 5
>
>
> I'll keep monitoring with this setup, but I can already tell you that the
> total number of connections in postgres has significantly reduced (from ~100
> to ~60). So I guess the problem was a bad setup of pgbouncer.

Those seem pretty reasonable. Note that if you need to you CAN set
default pool size and other settings per database etc. So if you have
a more active db that needs more connections etc you can adjust these
as needed per db and it will override the default overall settings.
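
For example, something like this in pgbouncer.ini (database names and
connection details here are hypothetical; pool_size is the per-database
override):

[databases]
busy_db  = host=127.0.0.1 port=5432 pool_size=20
quiet_db = host=127.0.0.1 port=5432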

As for monitoring I'd suggest setting up Nagios or Zabbix. They both
can give you some nice pretty graphs of what your system looks like
over time so you can do simple trend analysis and alerting to avoid
these problems in the future.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum stuck for hours, blocking queries

2017-02-17 Thread Scott Marlowe
On Fri, Feb 17, 2017 at 1:38 PM, Rakesh Kumar
 wrote:
> BEGIN;
> LOCK TABLE yourtable;
> CREATE TEMPORARY TABLE keep AS SELECT * FROM yourtable WHERE ...;
> TRUNCATE yourtable;
> INSERT INTO yourtable SELECT * from keep;
> COMMIT;
> ===
> the above snippet assumes truncate in PG can be in a transaction. In other
> words, while truncate by itself
> is atomic, it can't be rolled back. So in the above case, if "INSERT INTO
> yourtable SELECT * from keep;" fails and
> we roll back, will it roll back yourtable?


Yes it can. Truncate has been rollbackable for a while now.

begin;
create temporary table keep as select * from yourtable where ...;
truncate yourtable;
insert into yourtable select * from keep;
-- ... something goes wrong ...
rollback;  -- the truncate is rolled back along with everything else

Unless I misunderstand your meaning.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum stuck for hours, blocking queries

2017-02-15 Thread Scott Marlowe
On Wed, Feb 15, 2017 at 3:26 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
> On Wed, Feb 15, 2017 at 10:30 AM, Tim Bellis <tim.bel...@metaswitch.com> 
> wrote:
>> I have a postgres 9.3.4 database table which (intermittently but reliably) 
>> gets into a state where queries get blocked indefinitely (at least for many 
>> hours) behind an automatic vacuum. I was under the impression that vacuum 
>> should never take any blocking locks for any significant period of time, and 
>> so would like help resolving the issue.
>>
>> The process blocking the query is:
>> postgres 21985 11304 98 Feb13 ?1-14:20:52 postgres: autovacuum 
>> worker process   
>> which is running the query
>> autovacuum: VACUUM public.
>>
>> The query being blocked is:
>> ALTER TABLE  ALTER COLUMN  DROP DEFAULT
>> (But I have seen this previously with other queries being blocked. I used 
>> the SQL in https://wiki.postgresql.org/wiki/Lock_Monitoring to determine 
>> which queries were blocked)
>
> There are also ways of making a table less likely (or certain not) to
> get vacuumed automatically. If you're willing to schedule ddl
> and vacuum on your own you can then mix the two in relative safety.

Followup: 
https://www.postgresql.org/docs/9.3/static/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum stuck for hours, blocking queries

2017-02-15 Thread Scott Marlowe
On Wed, Feb 15, 2017 at 10:30 AM, Tim Bellis  wrote:
> I have a postgres 9.3.4 database table which (intermittently but reliably) 
> gets into a state where queries get blocked indefinitely (at least for many 
> hours) behind an automatic vacuum. I was under the impression that vacuum 
> should never take any blocking locks for any significant period of time, and 
> so would like help resolving the issue.
>
> The process blocking the query is:
> postgres 21985 11304 98 Feb13 ?1-14:20:52 postgres: autovacuum worker 
> process   
> which is running the query
> autovacuum: VACUUM public.
>
> The query being blocked is:
> ALTER TABLE  ALTER COLUMN  DROP DEFAULT
> (But I have seen this previously with other queries being blocked. I used the 
> SQL in https://wiki.postgresql.org/wiki/Lock_Monitoring to determine which 
> queries were blocked)

Yup, there's a priority inversion between DDL, DML and maintenance
(vacuum). Vacuum runs slow in the background. Normal
update/delete/insert works fine because of the type of lock vacuum has.

Then a DDL statement gets in line. It has to wait on the vacuum, and the
vacuum is set to run super slow. So everybody waits. On vacuum.

Basically it's bad practice to alter tables that are big and being
worked on, because one way or another you're going to pay a price.

I've used partitions for logging and auditing that autocreate and drop
and vacuum, but they never get ddl done on them when they're getting
updated and vice versa.

There are also ways of making a table less likely (or certain not) to
get vacuumed automatically. If you're willing to schedule ddl
and vacuum on your own you can then mix the two in relative safety.
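
For example (a sketch; this turns autovacuum off for one table so you
can run VACUUM and DDL in windows you control; table and column names
are made up):

ALTER TABLE big_table SET (autovacuum_enabled = false);
-- later, in a maintenance window:
VACUUM big_table;
ALTER TABLE big_table ALTER COLUMN some_col DROP DEFAULT;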


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL corruption

2017-02-13 Thread Scott Marlowe
On Mon, Feb 13, 2017 at 9:41 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
> On Mon, Feb 13, 2017 at 9:21 PM, James Sewell <james.sew...@jirotech.com> 
> wrote:
>>
>> Hello All,
>>
>> I am working with a client who is facing issues with database corruption 
>> after a physical hard power off (the machines are at remote sites, this 
>> could be a power outage or user error).
>>
>> They have an environment made up of many of the following consumer grade 
>> stand alone machines:
>>
>> Windows 7 SP1
>> PostgreSQL 9.2.4
>> Integrated Raid Controller
>>
>> Managed by Intel Rapid Storage Technology
>> RAID 1 over two disks
>> Disk caching disabled
>> Not battery backed
>> Disk cache disabled
>
> Some part of your OS or hardware is lying to postgres about fsyncs.
> There are a few test suites out there that can test this independent
> of postgresql btw, but it's been many years since I cranked one up.
> Here's a web page from 2005 describing the problem and using a fsync
> tester written in perl.
>
> Try to see if you can get the same types of fsync errors out of your
> hardware. If you can, stop, figure out how to fix that, and then get back
> in the game etc. Til then try not to lose power under load.

http://brad.livejournal.com/2116715.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL corruption

2017-02-13 Thread Scott Marlowe
On Mon, Feb 13, 2017 at 9:21 PM, James Sewell  wrote:
>
> Hello All,
>
> I am working with a client who is facing issues with database corruption 
> after a physical hard power off (the machines are at remote sites, this could 
> be a power outage or user error).
>
> They have an environment made up of many of the following consumer grade 
> stand alone machines:
>
> Windows 7 SP1
> PostgreSQL 9.2.4
> Integrated Raid Controller
>
> Managed by Intel Rapid Storage Technology
> RAID 1 over two disks
> Disk caching disabled
> Not battery backed
> Disk cache disabled

Some part of your OS or hardware is lying to postgres about fsyncs.
There are a few test suites out there that can test this independent
of postgresql btw, but it's been many years since I cranked one up.
Here's a web page from 2005 describing the problem and using a fsync
tester written in perl.

Try to see if you can get the same types of fsync errors out of your
hardware. If you can, stop, figure out how to fix that, and then get back
in the game etc. Til then try not to lose power under load.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Disabling inheritance with query.

2016-12-21 Thread Scott Marlowe
On Wed, Dec 21, 2016 at 3:36 PM, Edmundo Robles  wrote:
> Hi!
>
> I need to disable inheritance for many tables with a query like
>
> "delete from pg_inherits where inhparent=20473"  instead of alter table ...
>
> but is it safe? What is the risk for the database if I delete it?

You could change the source query to use the only keyword:

delete from only parenttable where ...

OR you could write a rule or trigger that rewrote the query to have
the only keyword in it under certain circumstances.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-12 Thread Scott Marlowe
On Sat, Nov 12, 2016 at 2:31 PM, Adrian Klaver
 wrote:
> On 11/12/2016 01:20 PM, aws backup wrote:
>>
>> Hi,
>>
>> I try to make pg_dumpall backups from a PostgreSQL 9.5 server which is
>> part of the DaVinci Resolve 12.5.3 App on a Mac OS X 10.11.6 system.
>>
>> Unfortunately I get following failure message:
>>
>> pg_dumpall: could not connect to database "template1": FATAL: password
>> authentication failed for user "postgres"
>>
>> Maybe you can help me to solve this problem.
>
>
> Two choices:
>
> 1) Determine what the password is for the postgres user and provide it when
> you connect.
>
> 2) If you have access to the pg_hba.conf file create a access line that uses
> trust as the auth method for user postgres connect that way.
>
> https://www.postgresql.org/docs/9.5/static/auth-methods.html#AUTH-TRUST

OR you could use the -l switch and specify another db.

pg_dumpall -l mydbnamehere


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql errors on Windows with F-Secure AntiVirus

2016-11-09 Thread Scott Marlowe
On Wed, Nov 9, 2016 at 11:56 AM, Scott Marlowe <scott.marl...@gmail.com> wrote:
> On Wed, Nov 9, 2016 at 11:29 AM, Moreno Andreo <moreno.and...@evolu-s.it> 
> wrote:
>> Hi again,
>> our support team is reporting cases where postgres connections are
>> suddenly dropped, but reconnecting again soon everything is OK.
>> Environment is PostgreSQL 9.1 on Windows (various versions)
>> After some research, I found in the postgresql log many of these messages
>> could not reserve shared memory region (addr=02E8) for child 094C:
>> 487
>> Searching the archives I found a bug fixed with 9.3.x, since I had 9.1, I
>> migrated to 9.5.6 (standard for new installations). After restarting, I got
>> the exact same error and response behaviour.
>> After another search, I stopped F-Secure Antivirus services, and everything
>> is now going smoothly. So, with the help of a mate that knows how this
>> antivirus works, we put exceptions on all PostgreSQL directories.
>> After restarting, the error was back. Nothing in the AV log. Tomorrow we'll
>> try with adding exclusions to port 5433
>>
>> Someone else facing this problem?
>>
>> Thanks
>> Moreno.-
>
>
>> Shared memory doesn't come from a file on the hard drive.  You need an
> exception for whatever postgres uses for shared memory in windows. I
> don't know much about pgsql on windows, but that should get you
> started.
>
>
> Post back if you find anything that makes it work.
>

Oh and if you can report it to the antivirus vendor so they can fix it
permanently on their end. pgsql is not a virus.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql errors on Windows with F-Secure AntiVirus

2016-11-09 Thread Scott Marlowe
On Wed, Nov 9, 2016 at 11:29 AM, Moreno Andreo  wrote:
> Hi again,
> our support team is reporting cases where postgres connections are
> suddenly dropped, but reconnecting again soon everything is OK.
> Environment is PostgreSQL 9.1 on Windows (various versions)
> After some research, I found in the postgresql log many of these messages
> could not reserve shared memory region (addr=02E8) for child 094C:
> 487
> Searching the archives I found a bug fixed with 9.3.x, since I had 9.1, I
> migrated to 9.5.6 (standard for new installations). After restarting, I got
> the exact same error and response behaviour.
> After another search, I stopped F-Secure Antivirus services, and everything
> is now going smoothly. So, with the help of a mate that knows how this
> antivirus works, we put exceptions on all PostgreSQL directories.
> After restarting, the error was back. Nothing in the AV log. Tomorrow we'll
> try with adding exclusions to port 5433
>
> Someone else facing this problem?
>
> Thanks
> Moreno.-


Shared memory doesn't come from a file on the hard drive.  You need an
exception for whatever postgres uses for shared memory in windows. I
don't know much about pgsql on windows, but that should get you
started.


Post back if you find anything that makes it work.

-- 
To understand recursion, one must first understand recursion.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hardware recommendations?

2016-11-02 Thread Scott Marlowe
On Wed, Nov 2, 2016 at 4:19 PM, John R Pierce  wrote:
> On 11/2/2016 3:01 PM, Steve Crawford wrote:
>>
>> After much cogitation I eventually went RAID-less. Why? The only option
>> for hardware RAID was SAS SSDs and given that they are not built on
>> electro-mechanical spinning-rust technology it seemed like the RAID card was
>> just another point of solid-state failure. I combined that with the fact
>> that the RAID card limited me to the relatively slow SAS data-transfer rates
>> that are blown away by what you get with something like an Intel NVME SSD
>> plugged into the PCI bus. Raiding those could be done in software plus $$$
>> for the NVME SSDs but I already have data-redundancy through a combination
>> of regular backups and streaming replication to identically equipped
>> machines which rarely lag the master by more than a second.
>
>
> just track the write wear life remaining on those NVMe cards, and maintain a
> realistic estimate of lifetime remaining in months, so you can budget for
> replacements.   the complication with PCI NVMe is how to manage a
> replacement when the card is nearing EOL.   The best solution is probably
> failing over to a replication slave database, then replacing the worn out
> card on the original server, and bringing it up from scratch as a new slave,
> this can be done with minimal service interruptions.   Note your slaves will
> be getting nearly as many writes as the masters so likely will need
> replacing in the same time frame.

Yeah the last thing you want is to start having all your ssds fail at
once due to write cycle end of life etc. Where I used to work we had
pretty hard working machines with something like 500 to 1000 writes/s
and after a year were at ~90% writes left. ymmv depending on the ssd
etc.

A common trick is to overprovision if possible. Need 100G of storage
for a fast transactional db? Use 10% of a bunch of 800GB drives to
make an array and you now have a BUNCH of spare write cycles per
device for extra long life.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hardware recommendations?

2016-11-02 Thread Scott Marlowe
On Wed, Nov 2, 2016 at 11:40 AM, Joshua D. Drake  wrote:
> On 11/02/2016 10:03 AM, Steve Atkins wrote:
>>
>> I'm looking for generic advice on hardware to use for "mid-sized"
>> postgresql servers, $5k or a bit more.
>>
>> There are several good documents from the 9.0 era, but hardware has moved
>> on since then, particularly with changes in SSD pricing.
>>
>> Has anyone seen a more recent discussion of what someone might want for
>> PostreSQL in 2017?
>
>
> The rules haven't changed much, more cores (even if a bit slower) is better
> than less, as much ram as the budget will allow and:
>
> SSD
>
> But make sure you get datacenter/enterprise SSDs. Consider that even a slow
> datacenter/enterprise SSD can do 500MB/s random write and read just as fast
> if not faster. That means for most installations, a RAID1 is more than
> enough.

Just to add that many setups utilizing SSDs are as fast or faster
using kernel level RAID as they are with a hardware RAID controller,
esp if the RAID controller has caching enabled. We went from 3k to 5k
tps to 15 to 18k tps by turning off caching on modern LSI MegaRAID
controllers running RAID5.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Turning slave into a master - PostgreSQL 9.2

2016-11-01 Thread Scott Marlowe
On Tue, Nov 1, 2016 at 1:43 PM, Patrick B <patrickbake...@gmail.com> wrote:
>
>
> 2016-11-02 2:55 GMT+13:00 Scott Marlowe <scott.marl...@gmail.com>:
>>
>> On Mon, Oct 31, 2016 at 8:01 PM, Patrick B <patrickbake...@gmail.com>
>> wrote:
>> > If I change recovery.conf:
>> >
>> > recovery_target_time = '2016-10-30 02:24:40'
>> >
>> >
>> > I get error:
>> >
>> > FATAL:  requested recovery stop point is before consistent recovery
>> > point
>>
>> You can try using pg_basebackup to get the replica setup. In 9.2 you
>> gotta make your own recovery.conf, but you already know how to do
>> that. Way easier than trying to rsync by hand etc.
>
>
>
> I did a pg_basebackup!

Huh, after a pg_basebackup all you should need is a recovery.conf in
place and a trigger file.
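
A minimal 9.2 recovery.conf sketch (hostname, user and trigger path are
placeholders):

standby_mode = 'on'
primary_conninfo = 'host=master01 port=5432 user=replicator'
trigger_file = '/var/lib/pgsql/promote.trigger'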


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Statistics on index usage

2016-11-01 Thread Scott Marlowe
On Tue, Nov 1, 2016 at 8:43 AM, François Battail
 wrote:
> Dear List,
>
> does pgsql maintain statistics on index usage? I mean just a counter
> for each index in the database, incremented each time it is used.
> It would be useful to help clean up almost unused indexes and to avoid
> poisoning the global cache.
>
> I've found nothing so far but may be I've been missing something.

Yup it does keep track of index usage. To see all the various stuff
postgres keeps track of etc, try typing

select * from pg_ then hit tab. The two common ones I look at are:

pg_statio_all_indexes
pg_stat_user_indexes
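
For example, to spot indexes that are never used (run it per database,
and remember the counters only go back to the last stats reset):

SELECT schemaname, relname, indexrelname, idx_scan
  FROM pg_stat_user_indexes
 ORDER BY idx_scan ASC
 LIMIT 20;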


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Turning slave into a master - PostgreSQL 9.2

2016-11-01 Thread Scott Marlowe
On Mon, Oct 31, 2016 at 8:01 PM, Patrick B  wrote:
> If I change recovery.conf:
>
> recovery_target_time = '2016-10-30 02:24:40'
>
>
> I get error:
>
> FATAL:  requested recovery stop point is before consistent recovery point

You can try using pg_basebackup to get the replica setup. In 9.2 you
gotta make your own recovery.conf, but you already know how to do
that. Way easier than trying to rsync by hand etc.
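
Something like this, run on the replica (host and user are placeholders;
-X stream needs 9.2+):

pg_basebackup -h master01 -U repluser -D /var/lib/pgsql/data -X stream -P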


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to hint 2 coulms IS NOT DISTINCT FROM each other

2016-10-29 Thread Scott Marlowe
On Sat, Oct 29, 2016 at 6:55 AM, Kim Rose Carlsen  wrote:
>> try this :-D
>
>> create or replace function indf(anyelement, anyelement) returns boolean
>> as
>> $$
>>   select $1 = $2 or ($1 is null and $2 is null);
>> $$ language sql;
>>
>> CREATE VIEW view_circuit_with_status AS (
>>SELECT r.*,
>>   s.circuit_status,
>>   s.customer_id AS s_customer_id,
>>   p.line_speed,
>>   p.customer_id AS p_customer_id
>>  FROM view_circuit r
>>  JOIN view_circuit_product_main s
>>ON r.circuit_id = s.circuit_id
>>   AND indf(r.customer_id, s.customer_id)
>>  JOIN view_circuit_product p
>>ON r.circuit_id = p.circuit_id
>>   AND indf(r.customer_id, p.customer_id)
>>
>> merlin
>
> This doesn't do much good. This doesn't tell the planner that the 3
> customer_ids are actually of same value, and it therefore can't filter them
> as it sees fit.

You do know you can index on a function, and the planner then keeps
stats on it when you run analyze right?
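
For example (a sketch: it indexes a plain coalesce() expression rather
than indf(), since an index expression can only reference columns of the
one table being indexed, and "circuit_product" stands in for whatever
base table sits under the view):

CREATE INDEX cust_expr_idx ON circuit_product (coalesce(customer_id, -1));
ANALYZE circuit_product;  -- gathers statistics on the indexed expression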


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to hint 2 coulms IS NOT DISTINCT FROM each other

2016-10-28 Thread Scott Marlowe
On Fri, Oct 28, 2016 at 10:29 AM, Kim Rose Carlsen  wrote:
> Hi
>
> I was wondering if there is a way to hint that two columns in two different
> tables IS NOT DISTINCT FROM each other. So that the optimizer may assume if
> table_a.key = 'test' THEN table_b.key = 'test' .
>
> The equals operator already does this but it does not handle NULLS very well
> (or not at all). And it seems IS NOT DISTINCT FROM is not indexable and
> doesn't establish the same inference rules as equals.

The whole idea behind Postgres' query planner is that you don't have
to use any hints. Late model versions of postgres handle nulls fine,
but nulls are never "equal" to anything else. I.e. where xxx is null
works with indexes. Where x = y does not match nulls, since null never
equals null.
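
A quick way to see the difference (table and index names hypothetical):

CREATE INDEX t_key_idx ON t (key);
EXPLAIN SELECT * FROM t WHERE key IS NULL;  -- can use the index
SELECT NULL = NULL;  -- yields NULL, not true, which is why = can't match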

Suggestion for getting help, put a large-ish aka production sized
amount of data into your db, run your queries with explain analyze and
feed them to https://explain.depesz.com/ and post the links here along
with the slow queries. A lot of times the fix is non-obvious if you're
coming from another db with a different set of troubleshooting skills
for slow queries.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Strange? BETWEEN behaviour.

2016-10-20 Thread Scott Marlowe
On Thu, Oct 20, 2016 at 6:27 AM, Bjørn T Johansen  wrote:
> On Thu, 20 Oct 2016 14:04:51 +0200
> vinny  wrote:
>
>> On 2016-10-20 13:51, Bjørn T Johansen wrote:
>> > I have the following SQL:
>> >
>> > SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
>> > 00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
>> > 23:59:59','DD.MM.
>> > HH24:MI:SS')
>> >
>> > date is of type timestamp.
>> >
>> > I was expecting to get all the records that had datepart = 20.10.2016
>> > but I am not getting that..
>> >
>> > What am I missing?
>> >
>> >
>> > Regards,
>> >
>> > BTJ
>> >
>>
>> What are you getting?
>>
>>
>
> The sql returns 5 of the expected 72 rows...

Could you be running into timezone issues?
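
One thing worth trying: to_timestamp() returns timestamptz, so comparing
it to a plain timestamp column goes through a timezone conversion. A
half-open range on plain timestamps sidesteps both that and the 23:59:59
gap, something like (using the original query's names):

SELECT * FROM table
WHERE date >= timestamp '2016-10-20 00:00:00'
  AND date <  timestamp '2016-10-21 00:00:00';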


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple multithreaded insert

2016-10-14 Thread Scott Marlowe
On Fri, Oct 14, 2016 at 7:12 AM, Арсен Арутюнян  wrote:
> Hi, everyone!
>
> I have a table:
>
> create table testpr(id serial,priority integer,unique(priority) DEFERRABLE,
> primary key(id));
>

This:

> and a trigger which, when added to this table, automatically sets priority
> as the maximum value +1

Leads to THIS:

> The result (priority):
>
> Thread 1: (1) (2) (3) (4) (5) (6) (7)
>
> Thread 2: (8) (9) (10) (11) (12) (13) (14)

If you have to have monotonically increasing priorities with no gaps,
that's the price you pay, unless you can pre-allocate them or
something.

Basically max(id)+1 is a db anti-pattern.
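
If gaps in priority are acceptable, the usual fix is to let a sequence
hand out the values instead of the trigger, e.g. (sequence name is made
up):

CREATE SEQUENCE testpr_priority_seq;
ALTER TABLE testpr ALTER COLUMN priority
  SET DEFAULT nextval('testpr_priority_seq');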


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-12 Thread Scott Marlowe
On Sun, Sep 11, 2016 at 3:26 AM, Patrick B  wrote:
>
>
> 2016-09-11 14:09 GMT+12:00 Jim Nasby :
>>
>> On 9/8/16 3:29 PM, David Gibbons wrote:
>>>
>>>
>>> Isn't this heading in the wrong direction?   We need to be more
>>> precise than 0 (since 0 is computed off of rounded/truncated time
>>> stamps), not less precise than 0.
>>>
>>> Cheers,
>>>
>>> Jeff
>>>
>>>
>>>
>>> Hmm, you may be right; reading it 4 more times for comprehension, it
>>> looks like it should be set to -1, not 1.
>>
>>
>> Not according to my man page:
>>
>>--modify-window
>>   When comparing two timestamps, rsync treats the timestamps
>> as being equal if they differ by no more than the modify-window value.  This
>> is normally 0 (for an exact match), but you
>>   may find it useful to set this to a larger value in some
>> situations.  In particular, when transferring to or from an MS Windows FAT
>> filesystem  (which  represents  times  with  a
>>   2-second resolution), --modify-window=1 is useful (allowing
>> times to differ by up to 1 second).
>>
>>
>> --
>> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
>> Experts in Analytics, Data Architecture and PostgreSQL
>> Data in Trouble? Get it in Treble! http://BlueTreble.com
>> 855-TREBLE2 (855-873-2532)   mobile: 512-569-9461
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>
>
>
>
>
> So... what do you guys recommend? Which options should I use?
>
> Patrick

Why not subscribe a new cluster on the same box with pg_basebackup?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-08 Thread Scott Marlowe
On Tue, Sep 6, 2016 at 5:25 PM, John R Pierce  wrote:
> On 9/6/2016 4:20 PM, Melvin Davidson wrote:
>>
>> If you use slony to replicate, you CAN have 9.2 on the master and 9.4 on
>> the slave.
>
>
> does rackspace support slony?  how about amazon dms ?
>
> slony requires configuring replication on each table.  if the database has a
> large complex schema this could take considerable setup effort.
>

Not really. As of slony 2.2 you can use a regular expression to add
tables or sequences.

i.e.:

SET ADD TABLE (
SET ID=1,
TABLES='public\\.*'
);

repeat for sequences.  Two commands. I don't consider that considerable effort.
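
The sequence command looks much the same (a sketch, from memory, of the
slony 2.2 slonik syntax):

SET ADD SEQUENCE (
    SET ID=1,
    SEQUENCES='public\\.*'
);
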
-- 
To understand recursion, one must first understand recursion.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-08 Thread Scott Marlowe
On Wed, Sep 7, 2016 at 5:00 PM, Jim Nasby  wrote:
> On 9/2/16 11:44 AM, David Gibbons wrote:
>>
>> rsync -va /var/lib/pgsql/ /var/lib/pgsql2/
>> service postgres stop
>> rsync -va /var/lib/pgsql/ /var/lib/pgsql2/
>>
>> The second rsync will only copy the deltas from the first, it still has
>> to go in and determine what needs to be copied/what changed but the bulk
>> of it can be prepared/migrated before the actual downtime window.
>
>
> That is NOT safe. The problem is it allows rsync to use mtime alone to
> decide that a file is in sync, and that will fail if Postgres writes to a
> file in the same second that the first rsync reads from it (assuming
> Postgres writes after rsync reads). You need to add the --checksum flag to
> rsync (which means it will still have to read everything that's in
> /var/lib/pgsql).
> --

I'm still wondering why my advice to just subscribe a new cluster on
the master machine was just ignored by OP. Postgresql already has a
pretty reliable method for doing what the OP wants using
pg_basebackup. Using rsync etc is like reinventing the wheel imho.

-- 
To understand recursion, one must first understand recursion.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Database performance

2016-09-06 Thread Scott Marlowe
On Tue, Sep 6, 2016 at 1:18 PM, Steve Atkins <st...@blighty.com> wrote:
>
>> On Sep 6, 2016, at 12:08 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
>>
>> On Fri, Sep 2, 2016 at 9:38 PM, Pradeep <pgund...@avineonindia.com> wrote:
>>>
>>> max_connections = 100
>>> shared_buffers = 512MB
>>> effective_cache_size = 24GB
>>> work_mem = 110100kB
>>
>> This is WAY too high for work_mem. Work_mem is how much memory a
>> single sort can grab at once. Each query may run > 1 sort, and you
>> could have 100 queries running at once.
>>
>> This setting is 110GB. That's about 109.9GB too high for safety. When
>> things go wrong with this too big, they go very wrong, sending the
>> machine into a swap storm from which it may not return.
>
> It's an oddly spelled 110MB, which doesn't seem unreasonable.

Oh yeah, still kind of biggish, but not as big as I had thought.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Database performance

2016-09-06 Thread Scott Marlowe
On Fri, Sep 2, 2016 at 9:38 PM, Pradeep  wrote:
> Dear Team,
>
>
>
> Could you please help me: after changing the below parameters in the PostgreSQL
> configuration file, the change is not reflected at the OS level and also database
> performance is degrading.
>
>
>
> Example: I am using a Windows 2008 R2 server. For PostgreSQL I have allocated
> 24GB RAM out of 32GB.

Actually effective_cache_size allocates nothing. It tells the pgsql
server about how much memory the machine it is running on is using for
OS level caching. On a 32G machine with 1G or so of shared_buffers that
number is about right.

> However, after changing the below parameters, the task bar shows 2.7GB
> utilization even though my utilization is more.

2.7G is ok. Postgresql expects the OS to help out with caching so it
doesn't need to grab all the memory in the machine etc. In fact that
would be counterproductive in most situations.

> So kindly suggest us, whether it will impact or not in Open source
> PostgreSQL database
>
> max_connections = 100
> shared_buffers = 512MB
> effective_cache_size = 24GB
> work_mem = 110100kB

This is WAY too high for work_mem. Work_mem is how much memory a
single sort can grab at once. Each query may run > 1 sort, and you
could have 100 queries running at once.

This setting is 110GB. That's about 109.9GB too high for safety. When
things go wrong with this too big, they go very wrong, sending the
machine into a swap storm from which it may not return.

> maintenance_work_mem = 2GB
>
> checkpoint_segments = 64
>
> checkpoint_completion_target = 0.9

Too high of a checkpoint completion target may cause buffers to get
written out more often than needed, but it varies based on load etc.

> wal_buffers = 16MB
>
> default_statistics_target = 100

It's far more likely that you've just got poorly written queries. I'd
make a post with explain analyze output etc. Here's a good resource
for reporting slow queries:

https://wiki.postgresql.org/wiki/Slow_Query_Questions

-- 
To understand recursion, one must first understand recursion.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-02 Thread Scott Marlowe
On Thu, Sep 1, 2016 at 8:48 PM, Patrick B  wrote:
> Hi guys,
>
> I'll be performing a migration on my production master database server,
> which is running PostgreSQL 9.2 atm, from SATA disks to SSD disks.
> I've got some questions about it, and it would be nice if u guys could share
> your experiences/thoughts:
>
> SCENARIO:
>
> I currently have one MASTER and two Streaming Replication Slaves servers...
>
>> master01
>> slave01 (Streaming replication + wal_files)
>> slave02 (Streaming replication + wal_files)
>
>
> ...Postgres is mounted on: /var/lib/pgsql/... The SSD disks will be
> installed only on my Master server, because my main problem is Writes and
> not reads.
>
> The new SSD volume will be mounted on /var/lib/pgsql2/
>
>
> The slave02 server will lose the streaming replication connection to the
> master; once slave01 becomes the new master, will a new timeline be established?
> Will slave02 be able to connect to the slave01 server for streaming
> replication?
>
>
>
>
> MIGRATION OPTIONS:
>
> Migration Option 1: I know this option will work
>
> Mount the new volume /var/lib/pgsql2/ on the master01 server
> Turn slave01 into a master server
> once I can confirm everything is working fine, I can go to step 4
> Stop postgres on the master01, start copying the DB using pg_basebackup from
> slave02 to master01 (Will have to edit postgres to use /var/lib/pgsql2/
> instead /var/lib/pgsql - Is that possible? Or I'd have to create a symbolic
> link?)
> Start postgres on master01 server and check if all goes well as streaming
> replication server (Will test it for days)
> Turn master01 into a master server and I'll have to re-copy the DB into
> slave01 to make it a streaming replication server again
>
>
> Migration Option 2: I don't know if this is possible - IS THIS POSSIBLE
>
> Mount the new volume /var/lib/pgsql2/ on the master01 server
> Stop postgres on the server (I won't stop postgres on the slave so the users
> will be able to use the server as read-only)
> Copy the data from /var/lib/pgsql/ to /var/lib/pgsql2/
> Configure postgres to start using the new volume(/var/lib/pgsql2/)
>
>
> What do you guys think? Is option 2 possible? If so it would be much easier :)
> Thanks!

Why not just subscribe to another cluster on the master, then sub the
slaves to that, then switch over to the new cluster on the master?

-- 
To understand recursion, one must first understand recursion.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What limits Postgres performance when the whole database lives in cache?

2016-09-02 Thread Scott Marlowe
On Fri, Sep 2, 2016 at 4:49 AM, dandl  wrote:
> Re this talk given by Michael Stonebraker:
>
> http://slideshot.epfl.ch/play/suri_stonebraker
>
>
>
> He makes the claim that in a modern ‘big iron’ RDBMS such as Oracle, DB2, MS
> SQL Server, Postgres, given enough memory that the entire database lives in
> cache, the server will spend 96% of its memory cycles on unproductive
> overhead. This includes buffer management, locking, latching (thread/CPU
> conflicts) and recovery (including log file reads and writes).
>
>
>
> [Enough memory in this case assumes that for just about any business, 1TB is
> enough. The intent of his argument is that a server designed correctly for
> it would run 25x faster.]
>
>
>
> I wondered if there are any figures or measurements on Postgres performance
> in this ‘enough memory’ environment to support or contest this point of
> view?

What limits postgresql when everything fits in memory? The fact that
it's designed to survive a power outage and not lose all your data.

Stonebraker's new stuff is cool, but it is NOT designed to survive
total power failure.

Two totally different design concepts. It's apples and oranges to compare them.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ON CONFLICT does not support deferrable unique constraints

2016-08-24 Thread Scott Marlowe
On Wed, Aug 24, 2016 at 6:05 AM, Andreas Joseph Krogh
 wrote:
>
> Hope some -hackers read this...
>
> Are there any plans to lift this restriction?

I'm trying to figure out a method for making this work in my head.
These two things seem kind of opposed to each other.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-28 Thread Scott Marlowe
On Thu, Jul 28, 2016 at 11:23 AM, Alex Ignatov <a.igna...@postgrespro.ru> wrote:
>
> On 28.07.2016 19:43, Scott Marlowe wrote:
>>
>> On Thu, Jul 28, 2016 at 10:32 AM, Alex Ignatov <a.igna...@postgrespro.ru>
>> wrote:
>>>
>>> Oh, so in contrast to the "Oracle world", "Postgres world" DBAs are within
>>> their right
>>> to do a major upgrade without a complete and tested backup?
>>> Ok, I understand you. In the Postgres world the sky is always blue and the
>>> sun is
>>> shining.
>>
>> Of course we have backups. But we also have slony. So we CAN go back
>> and forth between latest and previous without a restore.
>>
>>
>
> And? Oracle and MySQL don't have it but can downgrade right out of the box.
> Quick and easy.

So you can swap between oracle 11 and 12 back and forth in a live
environment with no downtime? Please pull the other leg.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-28 Thread Scott Marlowe
On Thu, Jul 28, 2016 at 10:32 AM, Alex Ignatov  wrote:
>
> Oh, so in contrast to the "Oracle world", "Postgres world" DBAs are within their right
> to do a major upgrade without a complete and tested backup?
> Ok, I understand you. In the Postgres world the sky is always blue and the sun is
> shining.

Of course we have backups. But we also have slony. So we CAN go back
and forth between latest and previous without a restore.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-28 Thread Scott Marlowe
On Wed, Jul 27, 2016 at 9:51 AM, Geoff Winkless  wrote:
> On 27 July 2016 at 15:22, Scott Mead  wrote:
>>
>>  "The bug we ran into only affected certain releases of Postgres 9.2 and
>> has been fixed for a long time now. However, we still find it worrisome that
>> this class of bug can happen at all. A new version of Postgres could be
>> released at any time that has a bug of this nature, and because of the way
>> replication works, this issue has the potential to spread into all of the
>> databases in a replication hierarchy."
>>
>>
>> ISTM that they needed a tire swing and were using a dump truck.  Hopefully
>> they vectored somewhere in the middle and got themselves a nice sandbox.
>
>
> At least his bug got fixed. The last 2 bugs I reported to MySQL resulted in
> an initial refusal to accept any problem existed, followed by (once that
> particular strategy had run out of steam) the developer simply ignoring the
> bug until it was closed automatically by their bug system. As far as I'm
> aware those bugs still exist in the most recent version.

The best / worst MySQL bug was one introduced and fixed twice. Someone put
in a shortcut that sped up order by by quite a bit. It also meant
that order by desc would actually get order by asc output. It was
inserted into the code due to poor oversight / code review practices,
then fixed about 9 months later, then introduced again, and again
took about a year to fix.

The fact that it was introduced into a General Release mid stream with
no testing or real reviews speaks volumes about MySQL and its
developers. The fact that it took months to years to fix each time
does as well.

As someone who has gotten more than one bug fix from pgsql in less
than 48 hours, I feel sorry for anyone who finds a bug in a MySQL
version they are running in production.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MediaWiki + PostgreSQL is not ready for production?

2016-07-20 Thread Scott Marlowe
On Mon, Jul 18, 2016 at 10:14 PM, Tatsuo Ishii  wrote:
> I found following comment for using PostgreSQL with MediaWiki:
>
> https://www.mediawiki.org/wiki/Compatibility#Database
>
> "Anything other than MySQL or MariaDB is not recommended for
> production use at this point."
>
> This is a sad and disappointed statement for us. Should we help
> MediaWiki community to enhance this?

A few years back I worked at a company that put mediawiki into our
school content management system with postgresql. We had zero issues
with postgresql support, it mostly just worked.  Took us about 4 weeks
to get it working and tested and deployed.

The cool bit was that by creating a few triggers and views, we made
mediawiki think it was just sitting on top of the default database
when it fact it was sitting on top of our custom db. Each teacher /
classroom had its own wiki, and we had literally 10s of thousands of
independent wikis running, and they were plenty fast.

-- 
To understand recursion, one must first understand recursion.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql connection option: statement_timeout

2016-07-03 Thread Scott Marlowe
correction:

alter user reporting set statement_timeout=60 is handy for users that
should never take a long time to connect.

should read

alter user reporting set statement_timeout=60 is handy for users that
should never take a long time to run a statement.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql connection option: statement_timeout

2016-07-03 Thread Scott Marlowe
On Sun, Jul 3, 2016 at 5:15 PM, Melvin Davidson 
wrote:

>
>
> On Sun, Jul 3, 2016 at 6:54 PM, Craig Boyd  wrote:
>
>> Hello All,
>>
>> I am something of a newbie and I am trying to understand how to pass
>> connection options using the psql client.  My understanding is that it is
>> possible to do this as part of the psql connection event.
>> I am on Mint and my PostgreSQL Server version = 9.3.13.
>>
>> I am trying to connect to an instance on a different machine (also
>> 9.3.13).
>> The following works:
>> psql -U username -h 192.x.x.x 
>>
>> But when I try to set the statement like this it fails:
>> psql -U username -h 192.x.x.x statement_timeout=1000
>>
>> I get the following "invalid connection option"  I am less concerned with
>> actually setting this parameter than I am learning how to pass or set
>> connection options when I log in.  If it is a case where this particular
>> option cannot be set as part of the connection string that is fine.  But
>> that leads me to ask what options can I set as part of the connection
>> string?
>> I have looked here:
>> https://www.postgresql.org/docs/9.3/static/app-psql.html
>> and here:
>> https://www.postgresql.org/docs/9.3/static/runtime-config-client.html
>>
>> I suspect I am close, but I can't seem to figure out where I am going
>> awry.
>> Any thoughts?
>>
>> Thanks in advance.
>
> I suspect what you want is connect_timeout
>
> ie: psql -U username -h 192.x.x.x connect_timeout=1000
> see examples at https://www.postgresql.org/docs/9.3/static/app-psql.html
>
> However, if you truly want to set statement_timeout, that cannot be set at
> the command line. You must execute that AFTER you connect.
> ie: # SET statement_timeout = 1000;
>
> You can also place multiple commands inside a file and then execute that
> after you connect.
> eg: # \i your_filename.sql
>
>
Also you can set such things as statement_timeout in the postgresql.conf,
or set them by database or by user / role.

alter user reporting set statement_timeout=60 is handy for users that
should never take a long time to connect.

Note that connect_timeout is a connection-time setting, for how long to
keep attempting to make a connection, not for how long to hold it while idle.


Re: [GENERAL] Replication with non-read-only standby.

2016-06-30 Thread Scott Marlowe
On Thu, Jun 30, 2016 at 7:15 AM, Nick Babadzhanian  wrote:
> Setup:
> 2 PostgreSQL servers are geographically spread. The first one is used for an 
> application that gathers data. It is connected to the second database that is 
> used to process the said data. Connection is not very stable nor is it fast, 
> so using Bidirectional replication is not an option. It is OK if data is 
> shipped in batches rather than streamed.
>
> Question:
> Is there a way to make the standby server non-read-only, so that it can keep 
> getting updates (mostly inserts) from the 'master', but users are able to 
> edit the data stored on 'slave'? Is there some alternative solution to this?

I'd probably solve this with slony.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Stored procedure version control

2016-06-29 Thread Scott Marlowe
On Wed, Jun 29, 2016 at 12:00 PM, Adrian Klaver
 wrote:
> On 06/29/2016 09:37 AM, Mark Morgan Lloyd wrote:
>>
>> Elsewhere, somebody was asking how people implemented version control
>> for stored procedures on (MS) SQL Server.
>>
>> The consensus was that this is probably best managed by using scripts or
>> command files to generate stored procedures etc., but does anybody have
>> any comment on that from the POV of PostgreSQL?
>
>
> There is no mechanism internal to Postgres that will version control the
> procedures, so the answer will be the same as above. In other words some
> external mechanism to version control. A more complete answer will depend on
> the workflow you are currently using.

I like this: https://github.com/depesz/Versioning very simple and easy to use.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum full: alternatives?

2016-06-20 Thread Scott Marlowe
On Mon, Jun 20, 2016 at 3:18 AM, Job  wrote:
>
> Hello,
>
> we have a table with heavy traffic of pg_bulkload and deletes of records.
> The size goes, in only one day, for example from 1GB to 4GB and then back to 1GB.
>
> We have important problems with size and the only way to regain free space is
> issuing a vacuum full.
> But the operation is very slow, sometimes 2/4 hours, and the table is not
> available for services as it is locked.
>
> We do not delete everything at once (in that case a truncate would resolve
> the problem).
>
> The autovacuum is not able (same for normal vacuum) to free the space.
>
> Are there some suggestions or another way to manage this?

First off, from your comments I'm not sure you really get postgresql's
way of freeing space and reusing it via autovacuum. Basically
postgresql, autovacuum process marks space as free, and the backend
writes new data (inserts or updates) into the free space. You
eventually reach equilibrium of a sort when the vacuum is freeing up
space as quickly as it's being consumed, or faster. The problem occurs
when vacuum can't keep up with your delete / write and update rate
combined. If this is happening you need to:

A: Make sure your IO Subsystem is fast enough to handle BOTH your
update rate AND your vacuuming needed to keep up, You're better off
with a machine that can do 15,000 transactions per second running a
load of 1,000 than trying to handle it with a machine that can do
1,500 tps etc. Sizing the hardware is a whole other conversation.

AND

B: Make your autovacuum aggressive enough to NOT fall behind.

It's important to remember that autovacuum was built and designed in a
time when most databases lived on spinning media. It's designed to not
overload spinning discs with too much random IO. A super fast RAID-10
array from that time period could do 200 to 1,000 transactions per
second and that only with a top notch RAID controller etc. Regular
spinning discs have a maximum random write rate that measures
in the 100 ops per second range.

My laptop, with no pgsql optimizations, can do 850 tps on its SSD. A
server with 10 SSDs in RAID-5 can do 15,000 tps.  If you have a fast
IO subsystem and wish to utilize it with pgsql you're going to have to
examine whether or not autovacuum with default settings is fast enough
to keep up. Remmeber, Auto-vacuum, by design, is slow and tries not to
get in the way. It's fast enough for most mundane uses, but can't keep
up with a fast machine running hard. The default settings for
autovacuum to look at here are first these two.:

autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = 200

They govern how hard autovac works. By default autovac doesn't work
hard. Making it work too hard for a given machine can cause system
performance issues. I.e. it "gets in the way".

Lowering cost_delay is usually enough. As you approach 1ms autovac
starts to use a lot more bandwidth. I find that even on pretty fast
machines that are routinely doing 1,000 writes per second or more, 3ms
is fast enough to keep up with a cost limit of 200. 5ms is a good
compromise without getting too aggressive.

In contrast to autovacuum, REGULAR vacuum, by default, runs at full
throttle. It hits your db server hard, performance-wise. It has  zero
cost delay, so it works very hard. If you run it midday on a hard
working server you will almost certainly see the performance drop. The
difference between regular vacuum with a delay time of 0 and autovac
with a delay of 20ms is huge.

These settings become important if you have a LOT of tables or dbs.
Otherwise they're probably fine.

autovacuum_max_workers =3 # Adjust this last, unless you have
thousands of tables or dbs.
autovacuum_naptime = 1 min # How long to wait before checking the next
db. Default is usually fine unless you have a lot of dbs.

These settings tell autovacuum when to kick in. Keeping these low
enough to keep autovac busy is a good idea too:

autovacuum_vacuum_scale_factor
autovacuum_vacuum_threshold
autovacuum_analyze_scale_factor
autovacuum_analyze_threshold

I tend to go for threshold, which is an absolute number of rows
changed before autovac kicks off. Scale factor can be dangerous
because what seems small at the beginning gets big fast. If it's 0.1
then that's 10%. 10% of 1,000 is 100 rows, but 10% of 1,000,000 is
100,000, which is a LOT of rows to ignore until you have more than
that many needing vacuuming. Setting the threshold to something like
100 or 1,000 will keep your db from accumulating hundreds of thousands
of dead tuples in a big table.
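
A sketch of threshold-leaning settings (again, the numbers are illustrative
and assume a handful of large, busy tables):

  autovacuum_vacuum_threshold = 1000      # absolute dead-row trigger
  autovacuum_vacuum_scale_factor = 0.01   # plus 1% of the table (default 20%)
  autovacuum_analyze_threshold = 1000
  autovacuum_analyze_scale_factor = 0.005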

Either way you need to make sure your autovacuum is aggressive enough
to keep up with your db's throughput.

Checking for bloat: you can see what parts of your db are getting too
big. First, go here:
https://www.keithf4.com/checking-for-postgresql-bloat/

The script there will let you check all your tables AND indexes for
bloat. This will let you know if you've got a simple space problem or
a vacuuming problem.
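
For a quick look before reaching for the full script, a minimal sketch
against the standard statistics views (assumes the stats collector is on,
which it is by default):

  SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC
  LIMIT 10;

Tables whose n_dead_tup stays high and whose last_autovacuum is old are the
ones autovacuum isn't keeping up with.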

Re: [GENERAL] OT hardware recommend

2016-06-17 Thread Scott Marlowe
On Fri, Jun 17, 2016 at 2:36 PM, Andy Colson  wrote:

> Hi all.
>
> I have access to quite a few laptop HD's (10 to 15 of them at least), and
> thought that might make a neat test box that might have some good IO speed.
>
> Needs to be cheap though, so linux with software raid, rack mount
> preferred but not required.
>
> Anyone have any experience with anything like that?  $2K might be
> possible, painful, but possible.
>
> Suggestions?


Sell them all and buy a couple of 800G SSDs? :)


Re: [GENERAL] Silent data loss in its pure form

2016-05-30 Thread Scott Marlowe
On Mon, May 30, 2016 at 10:57 AM, Alex Ignatov  wrote:
> Following this bug report from Red Hat
> https://bugzilla.redhat.com/show_bug.cgi?id=845233
>
> it raises a dangerous issue:
>
> If for any reason your data file is zeroed after a power loss (it was the
> best-known issue on XFS in the past), then when you do
> SELECT count(*) FROM your_table you get zero if your table fit in one
> 1GB (default) file, or some other number != the count(*) from before the
> power loss.
> No errors, nothing suspicious in the logs. No checksum errors. Nothing.
>
> Silent data loss in its pure form.
>
> And thank all the gods if you notice it before the backups that still
> contain good data are recycled.
> Keep this in mind while checking your "backups" in any form (pg_dump or
> the more dangerous and quieter PITR file backup).
>
> Your data is always in danger under the "a zeroed data file is a normal
> file" paradigm.

That bug shows as having been fixed in 2012. Are there any modern,
supported distros that would still have it? It sounds really bad btw.
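
One coarse canary for this class of silent loss, as a minimal sketch
(my_table is a placeholder, and reltuples is only the planner's estimate,
so expect some noise):

  -- what the planner last thought the table held
  SELECT reltuples::bigint AS estimated_rows
  FROM pg_class WHERE relname = 'my_table';

  -- what it actually holds; a collapse to zero after a crash deserves a look
  SELECT count(*) FROM my_table;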


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scaling Database for heavy load

2016-05-11 Thread Scott Marlowe
On Wed, May 11, 2016 at 4:09 AM, Digit Penguin  wrote:
> Hello,
>
>
> we use PostgreSQL 9.x in conjunction with BIND/DNS for some companies, at
> about 1,000 queries per second.
> Now we have to scale the system up to roughly 100,000 queries per second.
>
> BIND/DNS is very light and I don't think it will be our bottleneck.
> The question is how to dimension the backend database.
>
> The queries are SELECTs (only a few INSERTs or UPDATEs); the 100,000
> queries per second are SELECT-only.
>
> How can I calculate/dimension this?
> We are thinking of putting more than one BIND server (each with a backend
> database) behind a router with load-balancing capabilities.
>
> The problem is knowing what requirements and limits a 64-bit PostgreSQL
> 9.x installation has.
> Furthermore, we tried Rubyrep (it is quite old!); can you suggest other
> replication tools that keep working even if the link to the database
> server goes down?

Definitely looks like multiple read slaves are the answer. How many
depends on a few things.

How big is your data set? How many clients need to have an open
connection at a time? How many updates/inserts per second are we
talking when you say "a few"? One per second? Ten, a hundred, a thousand?

How often and for how long will your connection link be going down?
Slony is quite robust. PostgreSQL's built-in streaming replication
works well enough if you use something like WAL-E or OmniPITR to
archive xlogs and make them available in case of loss of connection.
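
A minimal streaming-replication sketch for the 9.x series (hostnames, the
replication user, and the restore wrapper are placeholders; wire WAL-E or
OmniPITR in behind them however suits your site):

  # primary: postgresql.conf
  wal_level = hot_standby
  max_wal_senders = 5
  wal_keep_segments = 256    # cushion for slaves that fall behind

  # primary: pg_hba.conf
  host  replication  repluser  192.168.0.0/24  md5

  # each slave: recovery.conf
  standby_mode = 'on'
  primary_conninfo = 'host=primary.example.com user=repluser'
  restore_command = '/usr/local/bin/fetch-wal.sh %f %p'  # hypothetical archive wrapper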

-- 
To understand recursion, one must first understand recursion.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] I/O - Increase RAM

2016-04-13 Thread Scott Marlowe
On Wed, Apr 13, 2016 at 2:43 PM, drum.lu...@gmail.com 
wrote:

>
> Hi all,
>
> At the moment I'm having 100% I/O during the day. My server has SATA HDs,
> and it can't be changed now.
> So, to solve the problem (or at least try), I was thinking about doubling
> the RAM and, by doing that, increasing the cache.
>
> The server has 128GB today:
>
> shared_buffers = 51605MB (I'll change it to 32GB)
> effective_cache_size = 96760MB
>
> Question:
>
> I know that might not be the best option, but increasing the RAM and
> thereby the cache would help, right?
>
>
We're gonna need better stats. iostat, iotop, vmstat etc will all break
down your io between reads and writes, random vs sequential etc.
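
For example (standard Linux tools; the 10-second interval is arbitrary):

  iostat -xd 10    # extended per-device stats every 10 seconds
  vmstat 10        # memory, swap, io, cpu at a glance
  iotop -o         # only the processes actually doing I/O right now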

If you're at 100% IO util, and iostat says your writing is taking up 20
or 30% of the time, then no, adding cache probably won't help.

Start looking into adding SSDs. They are literally 20 to 1000 times faster
at a lot of io stuff than spinning drives. And they're relatively cheap for
what they do.

Note that a software RAID-5 array of SSDs can stomp a hardware controller
running RAID-10 with spinning disks easily, and RAID-5 is pretty much as
slow as RAID gets.

Here's a few minutes of "iostat -xd 10 /dev/sdb" on one of my big servers
at work. These machines have a RAID-5 of 10x750GB SSDs under LSI MegaRAIDs
with caching turned off (much faster that way). The array created thus is
6.5TB and it's 83% full. Note that archiving and pg_xlog are on separate
volumes as well.

Device:  rrqm/s  wrqm/s      r/s      w/s     rkB/s     wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
sdb        0.00  236.30  1769.10  5907.30  20366.80  69360.80     23.38     36.38   4.74     0.34     6.06   0.09  71.00

So we're seeing 1769 reads/s, 5907 writes/s and we're reading ~20MB/s and
writing ~70MB/s. In the past this kind of performance from spinning disks
required massive caching and cabinets full of hard drives. When first
testing these boxes we got literally a fraction of this performance with 20
spinning disks in RAID-10, and they had 512GB of RAM. Management at first
wanted to throw more memory at it (these machines go to 1TB of RAM), but
we tested with 1TB and the difference was literally a few percentage
points going from 512GB to 1TB.

If your iostat output looks anything like mine, with lots of wkB/s and w/s
then adding memory isn't going to do much.


Re: [GENERAL] PostgreSQL advocacy

2016-03-22 Thread Scott Marlowe
On Tue, Mar 22, 2016 at 9:15 AM, Thomas Kellerer  wrote:
> Bruce Momjian schrieb am 22.03.2016 um 16:07:
>>
>> However, I do think database upgrades are easier with Oracle RAC
>
> I think you can do a rolling upgrade with a standby, but I'm not entirely 
> sure.

I find Slony good for upgrading versions with minimal downtime,
including major version changes. Its very nature lets you migrate
pieces and parts for testing, in ways that any kind of byte-level
streaming just can't match.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL advocacy

2016-03-21 Thread Scott Marlowe
On Mon, Mar 21, 2016 at 7:44 AM, Mark Morgan Lloyd
 wrote:
> If anybody puts together a "just the facts" document after Oracle's attack
> on PostgreSQL in Russia, please make sure it's drawn to the attention of
> this mailing list for the benefit of those who aren't in -advocacy.
>
> I was discussing this sort of thing elsewhere in the context of MS's
> apparent challenge to Oracle and IBM, and the dominant feeling appeared to
> be that actual use of things like Oracle RAC was vanishingly uncommon. Which
> surprised me, and which I'm treating with caution since the fact that
> facilities aren't used (in a certain population of developers etc.) can in
> no way be interpreted as meaning that the technology is unavailable or
> unreliable.


I've submitted three different bug reports and had a patch within 48
hours each time. The responsiveness of this list, and of the folks who
code PostgreSQL, is far above any level of support I've ever gotten
from Oracle.

I once asked Oracle to please package the newest connection libs into
an RPM for RHEL5 and their response was "do it yourself."

Yeah, I know which database has REAL, USEFUL support for a DBA and it
isn't Oracle.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Regarding connection drops for every few minutes

2016-03-11 Thread Scott Marlowe
On Fri, Mar 11, 2016 at 3:22 AM, Durgamahesh Manne
 wrote:
> Hi Sir
>
> As per the above discussion: good response from PostgreSQL.
>
> I am very happy to work on PostgreSQL. Super fast responses from the
> Postgres team whenever I ask any question related to Postgres.
>
> regards
> mahesh
>
> On Fri, Mar 11, 2016 at 3:28 PM, John R Pierce  wrote:
>>
>> On 3/11/2016 1:50 AM, Durgamahesh Manne wrote:
>>
>> Thanks for the quick response. As per the above conversation, which
>> parameter can I set to resolve the issue, and please specify the value
>> of the parameter, sir.
>>
>> LOG:  could not receive data from client: Connection reset by peer
>>
>>
>> Your client is disconnecting without closing; that's all that error says.
>> It doesn't say why this is happening, and without knowing why, there's no
>> way to suggest a fix.
>>
>> I suppose you could try setting tcp_keepalives_idle, if your connections
>> are staying idle for a long time and your OS doesn't default to using
>> tcp_keepalive, this could help.
>>
>> I have no idea what a suitable value is, you didn't specify an operating
>> system, a postgres version, what API your client is using, or if this is a
>> localhost vs a LAN vs an internet connection, or really much of anything
>> else..
>>
>>
>>
>> btw, please reply to the list,  not to me privately, thank you.

You could also be getting bitten by a network timeout here. If a
connection sits idle for a while, a firewall could be dropping the TCP
connection. You can often work around this with lower tcp_keepalive
timeout values.
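
A hedged starting point in postgresql.conf (values in seconds; 0 means use
the OS default, and OS-level settings can still override these):

  tcp_keepalives_idle = 60       # start probing after 60s of idle
  tcp_keepalives_interval = 10   # probe every 10s after that
  tcp_keepalives_count = 5       # drop the connection after 5 failed probes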


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Regarding connection drops for every few minutes

2016-03-11 Thread Scott Marlowe
On Fri, Mar 11, 2016 at 2:22 AM, Durgamahesh Manne
 wrote:
> hi
>
> the following info is being generated in pg_log:
>
> LOG:  could not receive data from client: Connection reset by peer
> LOG:  could not receive data from client: Connection reset by peer
> LOG:  could not receive data from client: Connection reset by peer
> LOG:  could not receive data from client: Connection reset by peer
>
>
> Can you please advise how to stop these connection drops every few minutes?
>
> Do I need to change the value of a TCP-related parameter, or else the
> ssl_renegotiation parameter?
> Please let me know, sir.

The latest versions of PostgreSQL turn off SSL renegotiation by
default, so it might be easier to just update to the latest release.
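
On older versions where the setting still exists, the same effect is a
one-line postgresql.conf change (0 disables renegotiation):

  ssl_renegotiation_limit = 0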


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot create role, no default superuser role exists

2016-03-10 Thread Scott Marlowe
For future reference, you can start the postmaster in single-user mode
and create a new postgres account, or grant the current one superuser
access, if this happens again. Just google "PostgreSQL single user
mode".
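
Roughly like this (stop the server first; the data directory and role
names are placeholders):

  $ postgres --single -D /var/lib/pgsql/data postgres
  backend> ALTER ROLE myuser SUPERUSER;
  backend> CREATE ROLE rescue SUPERUSER LOGIN PASSWORD 'change-me';

Exit with Ctrl-D, then restart the server normally.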


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


  1   2   3   4   5   6   7   8   9   10   >