On Tue, Nov 3, 2015 at 11:29 PM, Albe Laurenz wrote:
> Michael Paquier wrote:
>>> So, as Albe posted pg_archivecleanup is only cleaning up the WAL files, not
>>> the auxiliary files. The WAL files would be the ones with no extension and a
size of 16 MB (unless someone changed the compile settin
Hi,
Thank you for the reply.
I think I need to do some more research into ways to implement searching of
JSON data in the database.
I'll look at the module.
Thanks
2015-11-03 18:43 GMT+02:00 Merlin Moncure :
> On Tue, Nov 3, 2015 at 9:57 AM, Vick Khera wrote:
> >
> > On Tue, Nov 3, 2015 at 10:07 AM, Sami Piet
"=?UTF-8?Q?Leonardo_M._Ram=c3=a9?=" writes:
> Hi, I'm trying to build the client library of PostgreSql 9.3.x using
> this version of MinGW's gcc:
> ...
> g++ -DFRONTEND -I../../src/include -I./src/include/port/win32
> -DEXEC_BACKEND "-I../../src/include/port/win32" -DBUILDING_DLL -c -o
> rel
On 11/3/15 7:44 AM, Michael Paquier wrote:
I doubt there is anything involving Postgres here. It seems that some
process is still holding a lock on a relation that is being dropped,
caused by a race condition in the pg_repack code.
>PS: I was trying a mailing list of pg_repack
>(http://lists.pgfoun
On 11/03/2015 04:23 PM, Dane Foster wrote:
On Tue, Nov 3, 2015 at 7:09 PM, David G. Johnston
<david.g.johns...@gmail.com> wrote:
On Tue, Nov 3, 2015 at 4:55 PM, Dane Foster <studdu...@gmail.com> wrote:
Hello,
I have a design/modelling puzzle/problem. I'm trying
On Tue, Nov 3, 2015 at 7:09 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Tue, Nov 3, 2015 at 4:55 PM, Dane Foster wrote:
>
>> Hello,
>>
>> I have a design/modelling puzzle/problem. I'm trying to model a series of
>> events. So I have two tables w/ a parent child relationship. T
Nevermind, this was fixed with:
ssl_renegotiation_limit = 0
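For reference, a hedged way to apply that setting cluster-wide on 9.4 (where the
parameter still exists) without editing postgresql.conf by hand:
ALTER SYSTEM SET ssl_renegotiation_limit = 0;  -- assumes 9.4+ and a superuser session
SELECT pg_reload_conf();                       -- pick up the change without a restart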
--
Florin Andrei
http://florin.myip.org/
On Tue, Nov 3, 2015 at 4:55 PM, Dane Foster wrote:
> Hello,
>
> I have a design/modelling puzzle/problem. I'm trying to model a series of
> events. So I have two tables w/ a parent child relationship. The child
> table has the rule/constraint/etc that for every row in the parent table
> there mus
Hello,
I have a design/modelling puzzle/problem. I'm trying to model a series of
events. So I have two tables w/ a parent child relationship. The child
table has the rule/constraint/etc that for every row in the parent table
there must be at least 2 rows in the child because a series must have at
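A hypothetical sketch of the schema being described; table and column names are
illustrative only, not taken from the post:
CREATE TABLE series (
    series_id serial PRIMARY KEY,
    name      text NOT NULL
);
CREATE TABLE series_event (
    event_id  serial PRIMARY KEY,
    series_id int NOT NULL REFERENCES series,
    starts_at timestamptz NOT NULL
);
-- The "at least 2 child rows per parent" rule cannot be written as a plain CHECK
-- constraint; it is typically enforced with a deferred constraint trigger or in
-- the application.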
Hi, I'm trying to build the client library of PostgreSql 9.3.x using
this version of MinGW's gcc:
gcc.exe (i686-posix-dwarf-rev0, Built by MinGW-W64 project) 5.2.0
I did a ./configure --without-zlib
then make (mingw32-make.exe), and got this:
$ /e/MinGW/mingw32/bin/mingw32-make.exe
E:/MinGW/m
On 11/3/2015 6:42 AM, Ramesh T wrote:
I have a query that is taking a lot of time to fetch results,
so EXPLAIN on the query gave:
"Hash Join (cost=55078.00..202405.95 rows=728275 width=418)"
" Hash Cond: (itd.tran_id = iad._adj__id)"
" -> Seq Scan on inv_detail itd (cost=0.00..40784.18 rows=731029
width
And in addition to the actual query, the structure (and indexes)
of all tables involved are needed.
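A hedged sketch of how that information is usually gathered; 'inv_detail' is taken
from the posted plan, the other table name is unknown and left as a placeholder:
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;   -- run the actual slow query here

SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE tablename IN ('inv_detail');       -- add the other table(s) from the query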
On Tue, Nov 3, 2015 at 5:59 PM, Rob Sargent wrote:
> On 11/03/2015 07:42 AM, Ramesh T wrote:
>
> I have a query that is taking a lot of time to fetch results,
> so EXPLAIN on the query gave:
>
> "Has
On 11/03/2015 07:42 AM, Ramesh T wrote:
I have a query that is taking a lot of time to fetch results,
so EXPLAIN on the query gave:
"Hash Join (cost=55078.00..202405.95 rows=728275 width=418)"
" Hash Cond: (itd.tran_id = iad._adj__id)"
" -> Seq Scan on inv_detail itd (cost=0.00..40784.18 rows=731029
wid
On 11/03/2015 06:42 AM, Ramesh T wrote:
I have a query that is taking a lot of time to fetch results,
so EXPLAIN on the query gave:
"Hash Join (cost=55078.00..202405.95 rows=728275 width=418)"
" Hash Cond: (itd.tran_id = iad._adj__id)"
" -> Seq Scan on inv_detail itd (cost=0.00..40784.18 rows=731029
widt
I have a query that is taking a lot of time to fetch results,
so EXPLAIN on the query gave:
"Hash Join (cost=55078.00..202405.95 rows=728275 width=418)"
" Hash Cond: (itd.tran_id = iad._adj__id)"
" -> Seq Scan on inv_detail itd (cost=0.00..40784.18 rows=731029
width=95)"
"Filter: (event_type = ANY
BDR-0.9.3 and PG-9.4.4 on Ubuntu 14.04
Two nodes, BDR replication. Cluster is newly created, no nodes have been
removed from it. Creating/deleting small tables works well across the
cluster.
Now I'm trying to pg_restore a larger database from another system
(pg_dump output file is 3.1 GB compr
Still having issues with this with BDR-0.9.3
This is how I join a new node to the cluster:
su - postgres
psql pgmirror
-- fire up BDR extensions
CREATE EXTENSION btree_gist;
CREATE EXTENSION bdr;
-- join BDR group via an existing node there
SELECT b
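For reference, a hedged sketch of what that join call usually looks like in BDR 0.9;
the node names and connection strings below are placeholders, not the actual values:
SELECT bdr.bdr_group_join(
    local_node_name   := 'node2',                       -- placeholder name for the new node
    node_external_dsn := 'host=node2 dbname=pgmirror',  -- how other nodes reach this node
    join_using_dsn    := 'host=node1 dbname=pgmirror'   -- an existing member of the group
);
-- optionally block until the join has finished:
SELECT bdr.bdr_node_join_wait_for_ready();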
Victor Blomqvist writes:
> In case any of you are interested in recreating this problem, today I had
> the time to create a short example that reproduces the error every time I
> try.
Hmm. If you just do that serially:
regression=# select * from select_a() ;
 id | x
----+---
(0 rows)
regressio
On Tue, Nov 3, 2015 at 9:57 AM, Vick Khera wrote:
>
> On Tue, Nov 3, 2015 at 10:07 AM, Sami Pietilä
> wrote:
>>
>> Unfortunately I could not figure out how to select rows which, for
>> example, contain the following JSON: '{"a":"world","c":{"b":"helloworld"}}' by
>> searching with the string "hello".
>
> ca
On Mon, Nov 2, 2015 at 4:14 PM, droberts wrote:
> Hi, I have a table that contains call records. I'm looking for an efficient
> way to get only the records for the users who made the most calls over a
> particular time period.
>
> calls()
>
> time, duration, caller_number, dialed_number
>
>
>
> -- qu
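A hedged starting point using the column names given above; the date range and
LIMIT are assumptions, and "time" is quoted since it is also a type keyword:
SELECT caller_number, count(*) AS call_count
FROM calls
WHERE "time" >= '2015-10-01' AND "time" < '2015-11-01'  -- assumed time window
GROUP BY caller_number
ORDER BY call_count DESC
LIMIT 10;                                                -- assumed "top N"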
On Tue, Nov 3, 2015 at 10:07 AM, Sami Pietilä
wrote:
> Unfortunately I could not figure out how to select rows which, for
> example, contain the following JSON: '{"a":"world","c":{"b":"helloworld"}}' by
> searching with the string "hello".
>
cast the field to text:
select * from t where myfield::text li
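Spelled out in full, a minimal sketch of that approach; the table and column names
here are assumptions:
SELECT *
FROM t
WHERE myfield::text ILIKE '%hello%';  -- matches the substring anywhere, in keys or values
Note that this scans the whole table as written; if it is run often, a pg_trgm index
on the myfield::text expression might help.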
Hi,
Thank you for the reply!
I am using version 9.4.5.
Unfortunately I could not figure out how to select rows which, for example,
contain the following JSON: '{"a":"world","c":{"b":"helloworld"}}' by searching
with the string "hello".
I am trying to create a query which looks for values in any field of the JSON, "a"
Michael Paquier wrote:
>> So, as Albe posted pg_archivecleanup is only cleaning up the WAL files, not
>> the auxiliary files. The WAL files would be the ones with no extension and a
>> size of 16 MB (unless someone changed the compile settings).
>
> The docs mention that "all WAL files" preceding a
On Tue, Nov 3, 2015 at 1:33 AM, Adrian Klaver wrote:
> On 11/02/2015 08:17 AM, Paul Jungwirth wrote:
>>>
>>> Is there anything else besides *.backup files in the directory?
>>
>>
>> There were a few *.history files, and a few files with no extension,
>> like this: 000600BE0040.
>
>
> So
On Tue, Nov 3, 2015 at 9:51 PM, Jiří Hlinka wrote:
> After the 10 min timeout, the OS sends SIGINT to the pg_repack process, so
> pg_repack calls:
> SELECT repack.repack_drop($1, $2)
> and it causes a deadlock with another process which is INSERTing into
> frequently_updated_table that has a pg_repac
I'm running pg_repack from a bash script with a timeout of 10 minutes, like
so (simplified version):
timeout -s SIGINT 10m pg_repack --table=frequently_updated_table
After the 10 min timeout, the OS sends SIGINT to the pg_repack process, so
pg_repack calls:
SELECT repack.repack_drop($1, $2)
and it c
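When it deadlocks like this, a query along the following lines (a sketch covering
relation-level locks only, not pg_repack's own tooling) can show which backend is
holding the lock that repack_drop is waiting on:
SELECT waiting.pid        AS waiting_pid,
       waiting_act.query  AS waiting_query,
       holder.pid         AS holding_pid,
       holder_act.query   AS holding_query
FROM pg_locks waiting
JOIN pg_locks holder
  ON waiting.locktype = 'relation'
 AND holder.locktype  = 'relation'
 AND holder.relation  = waiting.relation
 AND holder.granted
 AND holder.pid <> waiting.pid
JOIN pg_stat_activity waiting_act ON waiting_act.pid = waiting.pid
JOIN pg_stat_activity holder_act  ON holder_act.pid  = holder.pid
WHERE NOT waiting.granted;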