On 2015-12-01 20:55:02 +0100, Peter J. Holzer wrote:
> On 2015-12-01 18:58:31 +0100, Peter J. Holzer wrote:
> > I suspect such an interaction because I cannot reproduce the problem
> > outside of a stored procedure. A standalone Perl script doing the same
> > requests doesn't get a timeout.
[...]
On Tue, Dec 1, 2015 at 1:48 PM, Roxanne Reid-Bennett wrote:
> On 11/30/2015 9:58 AM, David Kensiski wrote:
>
> I am working with a client who has a 9.1 database rapidly approaching XID
> wraparound. They also have an exceedingly large pg_largeobject table (4217
> GB) that has
Hi, I installed postgresql-x64-9.4 (installed from
postgresql-9.4.5-2-windows-x64.exe) on a Windows 7 64-bit PC, and
I'm getting the error "Could not connect to server: No buffer space
available (0x2747/10055)" when I only have 4 or 5 connections.
Can anyone help me fix this?
On Tue, Dec 1, 2015 at 9:12 AM, Jeff Janes wrote:
> On Mon, Nov 30, 2015 at 9:58 AM, David Kensiski wrote:
> > I am working with a client who has a 9.1 database rapidly approaching XID
> > wraparound.
>
> The hard limit at 2 billion, or the soft limit
On 12/02/2015 07:53 AM, Leonardo M. Ramé wrote:
Hi, I installed postgresql-x64-9.4 (installed from
postgresql-9.4.5-2-windows-x64.exe) on a Windows Seven 64bits PC, and
I'm getting the error "Could not connect to server: No buffer space
available (0x2747/10055)" when I only have 4 or 5
On 12/02/2015 09:36 AM, David Kensiski wrote:
On Tue, Dec 1, 2015 at 1:48 PM, Roxanne Reid-Bennett wrote:
On 11/30/2015 9:58 AM, David Kensiski wrote:
I am working with a client who has a 9.1 database rapidly
approaching XID
On Wed, Dec 2, 2015 at 8:25 AM, David Kensiski wrote:
>
>
> On Tue, Dec 1, 2015 at 9:12 AM, Jeff Janes wrote:
>>
>> On Mon, Nov 30, 2015 at 9:58 AM, David Kensiski wrote:
>> > I am working with a client who has a 9.1 database
On Wed, 2 Dec 2015 09:31:44 -0800
Christophe Pettus wrote:
>
> On Dec 2, 2015, at 9:25 AM, Bill Moran wrote:
>
> > No. See the section on row level locks here:
> > http://www.postgresql.org/docs/9.4/static/explicit-locking.html
>
> That wasn't
On 02/12/15 at 13:20, Adrian Klaver wrote:
On 12/02/2015 07:53 AM, Leonardo M. Ramé wrote:
Hi, I installed postgresql-x64-9.4 (installed from
postgresql-9.4.5-2-windows-x64.exe) on a Windows Seven 64bits PC, and
I'm getting the error "Could not connect to server: No buffer space
available
Christophe Pettus writes:
> Note that it's waiting for a ShareLock, not an AccessExclusiveLock, thus my
> question.
> Just to clarify, my very specific question is about "AccessExclusiveLock".
I believe it'll depend on which PG version you're testing. Alvaro whacked
that
On Wed, 2 Dec 2015 09:01:37 -0800
Christophe Pettus wrote:
> On 9.4, I've encountered a locking message I've not seen before:
>
> process 5293 still waiting for AccessExclusiveLock on tuple (88636,15)
> of relation 18238 of database 16415 after 5000.045 ms
>
> What
On Dec 2, 2015, at 9:25 AM, Bill Moran wrote:
> No. See the section on row level locks here:
> http://www.postgresql.org/docs/9.4/static/explicit-locking.html
That wasn't quite my question. I'm familiar with the row-level locking and the
locking messages in general,
On Dec 2, 2015, at 9:50 AM, Bill Moran wrote:
> Then wait for a little while and the message will be logged.
Well, yes and no :) :
2015-12-02 10:07:40.281 PST,"xof","xof",8465,"[local]",565f3365.2111,4,"UPDATE
waiting",2015-12-02 10:07:33
On 9.4, I've encountered a locking message I've not seen before:
process 5293 still waiting for AccessExclusiveLock on tuple (88636,15)
of relation 18238 of database 16415 after 5000.045 ms
What conditions produce an "AccessExclusiveLock on tuple"? Attempting to lock
a tuple when
On Dec 2, 2015, at 10:29 AM, Tom Lane wrote:
> The short answer is that heavyweight tuple locks can be taken internally
> by UPDATE, DELETE, and other row-level operations, and the specifics of
> which type of lock any given action takes are implementation details that
> can
On Wed, 2015-12-02 at 10:47 -0800, Christophe Pettus wrote:
> On Dec 2, 2015, at 10:29 AM, Tom Lane wrote:
> > The short answer is that heavyweight tuple locks can be taken
> > internally
> > by UPDATE, DELETE, and other row-level operations, and the
> > specifics of
> > which
Thanks for the info Scott.
Can I set up a primary server called A that uses the Postgres installation
on a nfs mounted filesystem and then A does a daily backup of database A
and restores to database B on same filesystem.
Then I have server B acting as cold standby and if server A goes down, then
On 11/30/15 6:28 AM, Yelai, Ramkumar IN BLR STS wrote:
> Hi All,
>
> I need some help in postgresql base backup.
>
> We are currently using multiple DBMS in our project and postgresql is
> one of them. Our private DBMS keeps the online data and postgresql keeps
> online as well as historical
On 11/25/15 7:40 AM, Stephen Frost wrote:
> It seems easy conceptually, RLS just adds a WHERE clause to queries if I'm
> not mistaken, and conceptually a view is just a query. The CURRENT_USER
> issue is valid, but personally it's not too big for me as most auth is done
> through database parameters.
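That "WHERE clause" intuition can be sketched in SQL. The table, column, and
policy names below are hypothetical, purely for illustration, not anything
from the thread:

```sql
-- Hypothetical table used only for illustration.
CREATE TABLE documents (id serial PRIMARY KEY, owner name, body text);

-- RLS is opt-in per table.
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

-- The USING expression behaves like a WHERE clause appended to every
-- query from roles subject to RLS: only rows whose owner matches the
-- current user are visible.
CREATE POLICY owner_only ON documents
    USING (owner = current_user);
```

The CURRENT_USER caveat mentioned above applies here: the policy only does
something useful if connections actually run as distinct database roles.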
Current Architecture:
Primary:     PostgreSQL 9.3, DB size 1.4 TB
Hot Standby: PostgreSQL 9.3, DB size 1.4 TB
Want to upgrade both primary and hot standby to 9.4 at the same time. I'm not
able to figure out how to upgrade the hot
On Thu, Dec 3, 2015 at 12:33:25AM +, Sheena, Prabhjot wrote:
> Current Architecture
>
> Primary:     PostgreSQL 9.3, DB size 1.4 TB
> Hot Standby: PostgreSQL 9.3, DB size 1.4 TB
>
>
> Want to upgrade both primary
On 12/2/15 9:26 AM, Peter J. Holzer wrote:
As explained in backend/utils/misc/timeout.c, the timers are never
cancelled: If a timeout is cancelled, postgres just sees that it has
nothing to do and resumes whatever it is doing.
Hrm, if those timers are really just for auth purposes then perhaps
On 12/2/15 7:06 PM, Merlin Moncure wrote:
> The basics is, that I have a column with what is essentially json data; a
> number of data structures of different depths. Perhaps 10 - 30 top levels,
> and probably no more than 3, max 4 levels deep. In total there are some
> hundred thousands of
On 12/1/15 10:26 AM, Scott Mead wrote:
The data directory will cause you many problems. You will need one data
directory that is accessed by one AND ONLY one host for each node
connected. You can't run an instance on multiple machines pointing to
the same 'data' directory simultaneously. Data
On Wed, Dec 2, 2015 at 12:03 AM, Kaare Rasmussen wrote:
> Hi
>
> As json essentially only has three basic data types, string, int, and
> boolean, I wonder how much of this - to index, search, and sort on
> unstructured data - is possible. I guess part of the
Unfortunately, you cannot do what you have proposed, because making B a slave
of A requires taking a binary copy of A to B and then turning on replication
from A to B, which will break the B to C replication.
What you can do with your current configuration is do a pg_dumpall or pg_dump
of A and
On 2015-12-03 01:04, Jim Nasby wrote:
We have a client that has a similar (though also a bit different)
need. Specifically, they get an XML document that has element
attributes that tell you what data type the element should contain. We
convert the XML to JSON (easy thanks to plpython), which
Hi,
Postgres allows client-side SSL requests to use secret keys on hardware
tokens via OpenSSL engine support. Is there an equivalent way to store the
server key on a hardware token?
Similarly, is it possible to specify private keys on a hardware token for
replication connections? Does the
Hi,
Are there any scripts which will diff two pg_dump files for the t1 and t2 time
periods? For example, a pg_dump taken at t1 -> 01/11/2015 and then at t2 ->
30/11/2015.
backup_01112015.dump (dump taken on 01/11/2015)
backup_30112015.dump (dump taken on 30/11/2015)
Any help will be highly appreciated.
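There is no built-in dump differ, but if the dumps are (or are converted to)
plain SQL text, an ordinary textual diff works as a rough first cut. A sketch,
assuming the dumps above are in pg_dump's custom format:

```shell
# If the dumps are custom-format, convert them to plain SQL first
# (pg_restore -f writes the restore script to a file instead of a server):
pg_restore -f backup_01112015.sql backup_01112015.dump
pg_restore -f backup_30112015.sql backup_30112015.dump

# A plain textual diff then shows schema and data changes between t1 and t2:
diff -u backup_01112015.sql backup_30112015.sql > changes_t1_t2.diff
```

Note this yields a textual diff for human review, not replayable SQL, and
row-ordering differences between the two dumps can produce noise in the data
sections.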
On 2015-12-03 02:06, Merlin Moncure wrote:
I feel your pain. jsquery is superb for subdocument searching on
*specific* subdocuments but range searching is really limited. Value
searching is there for numerics but dates and text range searching are
not present. We also have to understand
Merlin Moncure writes:
> On Wed, Dec 2, 2015 at 12:03 AM, Kaare Rasmussen wrote:
>> As json essentially only has three basic data types, string, int, and
>> boolean, I wonder how much of this - to index, search, and sort on
>> unstructured data - is
On 12/2/15 11:18 AM, Jeff Janes wrote:
Is there a lot of free space in the pg_largeobject table (i.e. did you
recently run vacuumlo)? I wonder if it was doing a very slow backwards
scan over the table in order to truncate away unused space. The
problem is that the backwards scan might not trigger the
On 12/2/15 1:56 PM, David Steele wrote:
> Also, I don't want to enable archive_mode = on as it needs to maintain
> archive files.
As it turns out, archiving would be the solution to your problem. If
you were archiving you could restore a *previous* backup and then replay
WAL to exactly T3. There
On 12/2/15 12:03 AM, Kaare Rasmussen wrote:
The hard part is that some of the data items really have another type.
There are dates and floating points, as the most important ones. And the
really hard part is that sorting and range searches are important,
especially for these two types. Having
On 12/2/15 4:59 PM, Caleb Meredith wrote:
What is an SRF?
Set returning function.
CREATE FUNCTION srf() RETURNS SETOF pg_class LANGUAGE sql AS 'SELECT *
FROM pg_class';
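A function declared RETURNS SETOF can then be called in the FROM clause as if
it were a table, for example:

```sql
SELECT relname FROM srf() LIMIT 5;
```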
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
On 2 December 2015 at 20:51, shili wrote:
> I saw this sentence: SQL statements that use the EXISTS condition in
> PostgreSQL are very inefficient since the sub-query is RE-RUN for EVERY row
> in the outer query's table. There are more efficient ways to write most
>
On 11/25/15 5:38 AM, Tomas Vondra wrote:
But be generally wary of turning off FPWs (full_page_writes) if you use
replication. Not having them often turns an asynchronously batched write
workload into one containing a lot of synchronous, single-threaded reads.
Even with SSDs that can very quickly lead to not being
I have an old production instance, let's call it A, that I need to
decommission soon.
I've created a pair of new instances, B and C, with B replicating to C,
following this procedure:
https://wiki.postgresql.org/wiki/Streaming_Replication
But B and C have no data yet. I need to transfer all